From yong.hu at intel.com Wed Aug 1 00:42:08 2018 From: yong.hu at intel.com (Hu, Yong) Date: Wed, 1 Aug 2018 00:42:08 +0000 Subject: [Starlingx-discuss] build-pkg --parallel In-Reply-To: References: Message-ID: <50CFC055-9033-4710-BA22-5B2008B77535@intel.com> It is awesome!! Running “create_dependancy_cache.py” against the mirror (of RPMs) is to generate a new “dependancy-cache”, isn’t it? From: Scott Little Date: Wednesday, 1 August 2018 at 3:03 AM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] build-pkg --parallel I had a successful parallel build (aka build-pkgs --parallel) inside the docker container. ~1h45m on 24 core, 64G ram The prerequisite was a populated $MY_REPO/cgcs-tis-repo/dependancy-cache. Currently we only generate the cache after the build in the 'generate-cgcs-tis-repo' step. I'd like to see the cache stored in git and updated regularly by 'official' builds. Note: The cache doesn't have to be perfect, so a cache that is out of date by a day or a week is still very useful. build-pkgs/mockchain just needs a rough guide on build dependencies and potential dependency loops. Scott -------------- next part -------------- An HTML attachment was scrubbed... URL: From huang.shuquan at 99cloud.net Wed Aug 1 01:29:34 2018 From: huang.shuquan at 99cloud.net (Shuquan Huang) Date: Wed, 01 Aug 2018 09:29:34 +0800 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> Message-ID: <89632245-2881-4351-9CC8-A296F7C12DD3@99cloud.net> Concept 3, +1 From: on behalf of James Cole Date: Tuesday, 31 July 2018 at 4:51 AM To: Subject: [Starlingx-discuss] StarlingX Logo Concepts Hello StarlingX Team, I’m James, a graphic designer with the OpenStack Foundation. We’ve been working on a few logo concepts for StarlingX and wanted to get your thoughts. There were quite a few possibilities for this, but we narrowed it down three concepts, all with the same overarching theme—using multiple component pieces to create one unifying symbol as a nod to edge network components. The first features an X icon that is partially created using a bird silhouette. The second is also an X, but it is formed in part by a number of small dots. The third uses a bird as the central icon, with a texture created from a photo of a Starling murmuration. We have not explored color yet since we’re hoping to get your thoughts about form and concept before moving to the smaller details, but feel free to share any ideas on color you think make sense. I’ve attached a PDF showing all three concepts. It is also available on Dropbox in case you can’t view the attachment. Looking forward to hearing what you think! James Cole Graphic Designer OpenStack Foundation _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Wed Aug 1 01:32:40 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 1 Aug 2018 01:32:40 +0000 Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide In-Reply-To: <210898B96CA058408C55992CCAD98676B9E04F04@ALA-MBD.corp.ad.wrs.com> References: <93814834B4855241994F290E959305C752F52B57@SHSMSX104.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676B9E049F3@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C752F52D91@SHSMSX104.ccr.corp.intel.com> <93814834B4855241994F290E959305C752F52DDC@SHSMSX104.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676B9E04AF2@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C752F53540@SHSMSX104.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676B9E04F04@ALA-MBD.corp.ad.wrs.com> Message-ID: <93814834B4855241994F290E959305C752F56180@SHSMSX104.ccr.corp.intel.com> Hi Eric, Sorry for missing your email. Actually, I tried serial times to download this package through download_mirror.sh But could not got it, then I got one through google and download, it seems a little small Than the right one. (Right one is 683K) zhipengl at zhipengl-nuc:~/stx-tools/centos-mirror-tools$ rpm -K collectd-5.7.1-2.el7.x86_64.rpm collectd-5.7.1-2.el7.x86_64.rpm: sha1 md5 OK zhipengl at zhipengl-nuc:~/stx-tools/centos-mirror-tools$ ll collectd-5.7.1-2.el7.x86_64.rpm -rw-rw-r-- 1 zhipengl zhipengl 613644 8月 1 18:24 collectd-5.7.1-2.el7.x86_64.rpm Thanks again! BRs zhipeng -----Original Message----- From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] Sent: 2018年7月27日 21:08 To: Liu, ZhipengS ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Subject: RE: [RFC] StarlingX Developer Guide Ok, interesting. Glad your unblocked. However, I have to ask. Have you or will you be root causing what caused the build system to choose and include the wrong package ? even one of the same name and version but without the correct fundamental content. Weird. Wonder if that is hitting use elsewhere not or will again in the future ? Eric. > -----Original Message----- > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > Sent: Thursday, July 26, 2018 10:54 PM > To: MacDonald, Eric; Arce Moreno, Abraham; > starlingx-discuss at lists.starlingx.io > Subject: RE: [RFC] StarlingX Developer Guide > Importance: High > > Thanks Eric!! > Use below 2 commands, no python.so found. > > However, RC found that a wrong collectd-5.7.1-2.el7.x86_64.rpm used. > Its size is not the same as the one used by Abraham. > After used the right collectd-5.7.1-2.el7.x86_64.rpm, no issue anymore > The iso can be deloyed successfully on multimode. > > Zhipeng > > -----Original Message----- > From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] > Sent: 2018年7月26日 23:28 > To: Liu, ZhipengS ; Arce Moreno, Abraham > ; starlingx-discuss at lists.starlingx.io > Subject: RE: [RFC] StarlingX Developer Guide > > I've been able to reproduce the log failure signature with a change to > the collectd.conf file > > The issue is definitely related to the python library but it is packed in the collectd rpm. > The one you call out is not in my working env. > To further this investigation please provide the output of the > following commands > > ls -lrt /usr/lib64/collectd/python.so > ldd /usr/lib64/collectd/python.so > > Eric. > > > -----Original Message----- > > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > > Sent: Thursday, July 26, 2018 10:45 AM > > To: Liu, ZhipengS; MacDonald, Eric; Arce Moreno, Abraham; > > starlingx-discuss at lists.starlingx.io > > Subject: RE: [RFC] StarlingX Developer Guide > > Importance: High > > > > Do we need collected-python package in mirror? > > > > -----Original Message----- > > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > > Sent: 2018年7月26日 22:25 > > To: MacDonald, Eric ; Arce Moreno, > > Abraham ; > > starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] [RFC] StarlingX Developer Guide > > > > Hi Eric, > > > > Thanks for your comment!! Could you help further check the error I saw below? > > ==================================================================== > > == ======= controller-0:/var/log/puppet# grep -R Error* > > latest/puppet.log:2018-07-26T16:36:33.125 Error: 2018-07-26 16:36:33 > > +0000 Systemd start for collectd failed! > > latest/puppet.log:2018-07-26T16:36:33.236 Error: 2018-07-26 16:36:33 > > +0000 > > /Stage[main]/Platform::Collectd/Service[collectd]/ensure: change from stopped to running failed: > > Systemd start for collectd failed! > > 2018-07-26-16-34-02_controller/puppet.log:2018-07-26T16:36:33.125 > > Error: 2018-07-26 16:36:33 > > +0000 Systemd start for collectd failed! > > 2018-07-26-16-34-02_controller/puppet.log:2018-07-26T16:36:33.236 > > Error: 2018-07-26 16:36:33 > > +0000 /Stage[main]/Platform::Collectd/Service[collectd]/ensure: change from stopped to running failed: > > Systemd start for collectd failed! > > ==================================================================== > > == ========= ontroller-0:~$ cat /var/log/daemon.log | grep collectd > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: > > plugin "network" successfully loaded. > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: > > Could not find plugin "python" in /usr/lib64/collectd > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: > > Could not find plugin "python" in /usr/lib64/collectd > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info > > Automatically loading plugin "python" failed with status 1. > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: > > plugin "threshold" successfully loaded. > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: plugin "df" successfully loaded. > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info plugin_load: > > Could not find plugin "python" in /usr/lib64/collectd > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info plugin_load: > > Could not find plugin "python" in /usr/lib64/collectd > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info > > Automatically loading plugin "python" failed with status 1. > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info Error: Reading the config file failed! > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info Read the syslog for details. > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info = > > 2018-07-26T16:36:33.100 localhost systemd[1]: notice collectd.service: > > main process exited, code=exited, status=1/FAILURE > > 2018-07-26T16:36:33.110 localhost systemd[1]: notice Unit collectd.service entered failed state. > > 2018-07-26T16:36:33.110 localhost systemd[1]: warning collectd.service failed. > > ==================================================================== > > == > > > > Thanks! > > Zhipeng > > -----Original Message----- > > From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] > > Sent: 2018年7月26日 21:19 > > To: Liu, ZhipengS ; Arce Moreno, Abraham > > ; > > starlingx-discuss at lists.starlingx.io > > Subject: RE: [RFC] StarlingX Developer Guide > > > > Looks like the manifest that configures and starts the collectd process is failing. > > > > Can you please execute the following commands on the host that shows > > collectd failing and publish the errors you see that might reveal the cause. > > > > # Get the config error logs ... > > sudo -i > > cd /var/log/puppet ; fgrep -R Error * > > > > # get the collectd startup failure logs cat /var/log/daemon.log | > > grep collectd > > > > one startup block is fine ; start to exit should show the error. > > > > Eric MacDonald > > > > > -----Original Message----- > > > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > > > Sent: Thursday, July 26, 2018 4:53 AM > > > To: Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io > > > Subject: Re: [Starlingx-discuss] [RFC] StarlingX Developer Guide > > > > > > Hi Abraham and Hayde > > > > > > I rebuilt and deployed my iso again, but still fail at step 6 with the same cause. > > > Start collectd failed. > > > Could you help deploy ISO I built to see if it can work in your environment, thanks! > > > > > > Zhipeng > > > > > > > > > -----Original Message----- > > > From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] > > > Sent: 2018年7月26日 10:13 > > > To: starlingx-discuss at lists.starlingx.io > > > Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide > > > > > > Hi again, > > > > > > Can someone please help test the process to build an ISO based on master using Developer Guide [0]? > > > Please use Github page [1] as a documentation support. > > > > > > Requirements: > > > > > > - Repo status checked 7/25/2018 19:12 PST > > > - stx-tools master branch > > > - latest change: b65fa0a0ec6297199843b1455615d0126bb7e7c7 Update > > > RPM macros > > > - Temporal! Changes, already in Developer Guide > > > - RPM: selinux-policy-devel required > > > https://review.openstack.org/#/c/585915 > > > > > > [0] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide > > > [1] > > > https://github.com/xe1gyq/starlingx/blob/master/DeveloperGuide.md > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discu > > > ss _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discu > > > ss > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Wed Aug 1 01:41:41 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 1 Aug 2018 01:41:41 +0000 Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide In-Reply-To: References: <93814834B4855241994F290E959305C752F5547F@SHSMSX104.ccr.corp.intel.com> Message-ID: <93814834B4855241994F290E959305C752F561AB@SHSMSX104.ccr.corp.intel.com> Hi Abraham, Another thing here is. We'd better reminder user to set right pip source. For example, in China, we need change default source like below in Dockerfile.centos73 #RUN pip install python-subunit junitxml --upgrade && \ # pip install tox --upgrade RUN pip install -i https://pypi.tuna.tsinghua.edu.cn/simple python-subunit junitxml --upgrade && \ pip install -i https://pypi.tuna.tsinghua.edu.cn/simple tox --upgrade Zhipeng -----Original Message----- From: Arce Moreno, Abraham Sent: 2018年8月1日 5:36 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [RFC] StarlingX Developer Guide Thanks Zhipeng! > Some place need to be changed in GUIDE. > 1) Setup Building Docker Container > > Need change to below. > ENV http_proxy " http://your.actual_http_proxy.com:your_port " > ENV https_proxy " https://your.actual_https_proxy.com:your_port " > ENV ftp_proxy " http://your.actual_ftp_proxy.com:your_port " > RUN echo " proxy=http://your-proxy.com:port " >> /etc/yum.conf > > Story raised: https://storyboard.openstack.org/#!/story/2003169 Done > 2) Update the symbolic links > Update the symbolic links > $ generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/ > Need change to > $ generate-cgcs-centos-repo /import/mirrors/CentOS/stx-r1/CentOS/pike/ Done From yan.chen at intel.com Wed Aug 1 02:09:26 2018 From: yan.chen at intel.com (Chen, Yan) Date: Wed, 1 Aug 2018 02:09:26 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <89632245-2881-4351-9CC8-A296F7C12DD3@99cloud.net> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <89632245-2881-4351-9CC8-A296F7C12DD3@99cloud.net> Message-ID: <72AD03D27224C74982BE13246D75B39739926AA3@SHSMSX103.ccr.corp.intel.com> Concept 1 + 1 Yan From: > on behalf of James Cole > Date: Tuesday, 31 July 2018 at 4:51 AM To: > Subject: [Starlingx-discuss] StarlingX Logo Concepts Hello StarlingX Team, I’m James, a graphic designer with the OpenStack Foundation. We’ve been working on a few logo concepts for StarlingX and wanted to get your thoughts. There were quite a few possibilities for this, but we narrowed it down three concepts, all with the same overarching theme—using multiple component pieces to create one unifying symbol as a nod to edge network components. The first features an X icon that is partially created using a bird silhouette. The second is also an X, but it is formed in part by a number of small dots. The third uses a bird as the central icon, with a texture created from a photo of a Starling murmuration. We have not explored color yet since we’re hoping to get your thoughts about form and concept before moving to the smaller details, but feel free to share any ideas on color you think make sense. I’ve attached a PDF showing all three concepts. It is also available on Dropbox in case you can’t view the attachment. Looking forward to hearing what you think! James Cole Graphic Designer OpenStack Foundation _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From yan.chen at intel.com Wed Aug 1 02:12:06 2018 From: yan.chen at intel.com (Chen, Yan) Date: Wed, 1 Aug 2018 02:12:06 +0000 Subject: [Starlingx-discuss] Python 2to3 code porting In-Reply-To: <9A85D2917C58154C960D95352B22818BAB5670B4@fmsmsx117.amr.corp.intel.com> References: <72AD03D27224C74982BE13246D75B397399260EA@SHSMSX103.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB0E5AA4@ALA-MBD.corp.ad.wrs.com> <233d9f7d-e660-efbc-5f1a-11e17c916325@windriver.com> <75202A5D-E336-47DB-B1BA-72119BEEBB87@intel.com> <8B9B7BE5-D938-46E6-B0CB-5B410B762F5A@intel.com> <9A85D2917C58154C960D95352B22818BAB5670B4@fmsmsx117.amr.corp.intel.com> Message-ID: <72AD03D27224C74982BE13246D75B39739926ACE@SHSMSX103.ccr.corp.intel.com> Sorry for late reply, I’m just back from some family affairs. Thanks for your suggestion, I will update the wiki and etherpad. Yan From: Jones, Bruce E Sent: Wednesday, July 25, 2018 06:32 To: Ramirez, Eddie ; Chen, Yan Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Python 2to3 code porting Thanks for getting started on this. I’m setting up formal sub-projects and giving each its own wiki page. I’ve started one for this sub-project: https://wiki.openstack.org/wiki/StarlingX/Pyton2. It links to the Etherpad below. Eddie and Yan – can you please update that wiki page so it captures the info in this email thread? And meanwhile, while we thank you both for looking at this, this is not the highest priority work right now, and there are some very big work items hiding under the covers of this one. Please focus on tasks higher on the project's priority list. brucej From: Ramirez, Eddie [mailto:eddie.ramirez at intel.com] Sent: Tuesday, July 24, 2018 10:16 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Python 2to3 code porting Yan, I created this list https://etherpad.openstack.org/p/stx-python-2-to-3, I hope it can help us to track those packages as we make more discoveries. From: "Ramirez, Eddie" > Date: Tuesday, July 24, 2018 at 9:52 AM To: Scott Little >, "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] Python 2to3 code porting +1 We would need to add cgtsclient to the list. I will confirm with other TiS packages today as Horizon uses packages like: cgtsclient, cgcs_patch, sysinv, tsconfig and mfclient… From: Scott Little > Date: Tuesday, July 24, 2018 at 7:23 AM To: "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] Python 2to3 code porting Agreed. We need the code restructuring and a working build before starting this task. Scott On 18-07-24 07:22 AM, Rowsell, Brent wrote: The priority of this needs to be discussed at the next architecture meeting. With all current churn and lack of a working build, this activity in my opinion needs to wait. Brent From: Chen, Yan [mailto:yan.chen at intel.com] Sent: Tuesday, July 24, 2018 12:38 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Python 2to3 code porting Hi, there, I just wonder if anyone is already working on this task now? I created a story days ago, and studied most of our Python code and the python rpms in the system. Here’s the conclusions: 1. Most of the Tis dependency python packages are already Python 2/3 compatible, but still some exceptions: * The following packages are Python2 only and same for the latest code tree: i. createrepo-0.9.9 (used by cgcs-patch-controller) ii. net-snmp-5.7.2 (used by snmp-audittrail) iii. pyparted-3.9 (used by sysinv) * The following packages are Python 2 but the latest version has Python 3 support, need upgrade: i. python-daemon-1.6 (used by logmgmt) * This analysis is still on-going, more packages may be found. 1. The Python modules for Python 2/3 compatibility (python-futures and python-six) are already included. 2. Some of the openstack packages/dependencies are Python 2 only, we may need to find a good version of Openstack to upgrade. We can start to clean our code first, I think we follow the guideline here, one topic each time, how do you think? http://python-future.org/compatible_idioms.html Here’s the link for the story: https://storyboard.openstack.org/#!/story/2002909 Yan _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.MacDonald at windriver.com Wed Aug 1 13:37:47 2018 From: Eric.MacDonald at windriver.com (MacDonald, Eric) Date: Wed, 1 Aug 2018 13:37:47 +0000 Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide In-Reply-To: <93814834B4855241994F290E959305C752F56180@SHSMSX104.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C752F52B57@SHSMSX104.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676B9E049F3@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C752F52D91@SHSMSX104.ccr.corp.intel.com> <93814834B4855241994F290E959305C752F52DDC@SHSMSX104.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676B9E04AF2@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C752F53540@SHSMSX104.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676B9E04F04@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C752F56180@SHSMSX104.ccr.corp.intel.com> Message-ID: <210898B96CA058408C55992CCAD98676B9E13296@ALA-MBD.corp.ad.wrs.com> Hi Zhipeng, OK, thanks for the follow-up. So, the issue you reported was self-inflicted but only after you experienced a real issue with collectd in terms of your build process. Has that initial issue been resolved ? In retrospect (generally to the discussion group), is that if you are reporting an issue for debug it would be useful to include in that request anything that was done outside of 'standard process' (like in this case) as that would provide an initial perspective and potential starting point for the investigation. Cheers, Eric MacDonald > -----Original Message----- > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > Sent: Tuesday, July 31, 2018 9:33 PM > To: MacDonald, Eric; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io > Subject: RE: [RFC] StarlingX Developer Guide > Importance: High > > Hi Eric, > > Sorry for missing your email. > Actually, I tried serial times to download this package through download_mirror.sh > But could not got it, then I got one through google and download, it seems a little small > Than the right one. (Right one is 683K) > > zhipengl at zhipengl-nuc:~/stx-tools/centos-mirror-tools$ rpm -K collectd-5.7.1-2.el7.x86_64.rpm > collectd-5.7.1-2.el7.x86_64.rpm: sha1 md5 OK > zhipengl at zhipengl-nuc:~/stx-tools/centos-mirror-tools$ ll collectd-5.7.1-2.el7.x86_64.rpm > -rw-rw-r-- 1 zhipengl zhipengl 613644 8月 1 18:24 collectd-5.7.1-2.el7.x86_64.rpm > > Thanks again! > BRs > zhipeng > > -----Original Message----- > From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] > Sent: 2018年7月27日 21:08 > To: Liu, ZhipengS ; Arce Moreno, Abraham ; > starlingx-discuss at lists.starlingx.io > Subject: RE: [RFC] StarlingX Developer Guide > > Ok, interesting. Glad your unblocked. > > However, I have to ask. Have you or will you be root causing what caused the build system to choose and > include the wrong package ? even one of the same name and version but without the correct fundamental > content. Weird. Wonder if that is hitting use elsewhere not or will again in the future ? > > Eric. > > > -----Original Message----- > > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > > Sent: Thursday, July 26, 2018 10:54 PM > > To: MacDonald, Eric; Arce Moreno, Abraham; > > starlingx-discuss at lists.starlingx.io > > Subject: RE: [RFC] StarlingX Developer Guide > > Importance: High > > > > Thanks Eric!! > > Use below 2 commands, no python.so found. > > > > However, RC found that a wrong collectd-5.7.1-2.el7.x86_64.rpm used. > > Its size is not the same as the one used by Abraham. > > After used the right collectd-5.7.1-2.el7.x86_64.rpm, no issue anymore > > The iso can be deloyed successfully on multimode. > > > > Zhipeng > > > > -----Original Message----- > > From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] > > Sent: 2018年7月26日 23:28 > > To: Liu, ZhipengS ; Arce Moreno, Abraham > > ; starlingx-discuss at lists.starlingx.io > > Subject: RE: [RFC] StarlingX Developer Guide > > > > I've been able to reproduce the log failure signature with a change to > > the collectd.conf file > > > > The issue is definitely related to the python library but it is packed in the collectd rpm. > > The one you call out is not in my working env. > > To further this investigation please provide the output of the > > following commands > > > > ls -lrt /usr/lib64/collectd/python.so > > ldd /usr/lib64/collectd/python.so > > > > Eric. > > > > > -----Original Message----- > > > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > > > Sent: Thursday, July 26, 2018 10:45 AM > > > To: Liu, ZhipengS; MacDonald, Eric; Arce Moreno, Abraham; > > > starlingx-discuss at lists.starlingx.io > > > Subject: RE: [RFC] StarlingX Developer Guide > > > Importance: High > > > > > > Do we need collected-python package in mirror? > > > > > > -----Original Message----- > > > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > > > Sent: 2018年7月26日 22:25 > > > To: MacDonald, Eric ; Arce Moreno, > > > Abraham ; > > > starlingx-discuss at lists.starlingx.io > > > Subject: Re: [Starlingx-discuss] [RFC] StarlingX Developer Guide > > > > > > Hi Eric, > > > > > > Thanks for your comment!! Could you help further check the error I saw below? > > > ==================================================================== > > > == ======= controller-0:/var/log/puppet# grep -R Error* > > > latest/puppet.log:2018-07-26T16:36:33.125 Error: 2018-07-26 16:36:33 > > > +0000 Systemd start for collectd failed! > > > latest/puppet.log:2018-07-26T16:36:33.236 Error: 2018-07-26 16:36:33 > > > +0000 > > > /Stage[main]/Platform::Collectd/Service[collectd]/ensure: change from stopped to running failed: > > > Systemd start for collectd failed! > > > 2018-07-26-16-34-02_controller/puppet.log:2018-07-26T16:36:33.125 > > > Error: 2018-07-26 16:36:33 > > > +0000 Systemd start for collectd failed! > > > 2018-07-26-16-34-02_controller/puppet.log:2018-07-26T16:36:33.236 > > > Error: 2018-07-26 16:36:33 > > > +0000 /Stage[main]/Platform::Collectd/Service[collectd]/ensure: change from stopped to running > failed: > > > Systemd start for collectd failed! > > > ==================================================================== > > > == ========= ontroller-0:~$ cat /var/log/daemon.log | grep collectd > > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: > > > plugin "network" successfully loaded. > > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: > > > Could not find plugin "python" in /usr/lib64/collectd > > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: > > > Could not find plugin "python" in /usr/lib64/collectd > > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info > > > Automatically loading plugin "python" failed with status 1. > > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: > > > plugin "threshold" successfully loaded. > > > 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: plugin "df" successfully loaded. > > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info plugin_load: > > > Could not find plugin "python" in /usr/lib64/collectd > > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info plugin_load: > > > Could not find plugin "python" in /usr/lib64/collectd > > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info > > > Automatically loading plugin "python" failed with status 1. > > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info Error: Reading the config file failed! > > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info Read the syslog for details. > > > 2018-07-26T16:36:33.073 localhost collectd[25253]: info = > > > 2018-07-26T16:36:33.100 localhost systemd[1]: notice collectd.service: > > > main process exited, code=exited, status=1/FAILURE > > > 2018-07-26T16:36:33.110 localhost systemd[1]: notice Unit collectd.service entered failed state. > > > 2018-07-26T16:36:33.110 localhost systemd[1]: warning collectd.service failed. > > > ==================================================================== > > > == > > > > > > Thanks! > > > Zhipeng > > > -----Original Message----- > > > From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] > > > Sent: 2018年7月26日 21:19 > > > To: Liu, ZhipengS ; Arce Moreno, Abraham > > > ; > > > starlingx-discuss at lists.starlingx.io > > > Subject: RE: [RFC] StarlingX Developer Guide > > > > > > Looks like the manifest that configures and starts the collectd process is failing. > > > > > > Can you please execute the following commands on the host that shows > > > collectd failing and publish the errors you see that might reveal the cause. > > > > > > # Get the config error logs ... > > > sudo -i > > > cd /var/log/puppet ; fgrep -R Error * > > > > > > # get the collectd startup failure logs cat /var/log/daemon.log | > > > grep collectd > > > > > > one startup block is fine ; start to exit should show the error. > > > > > > Eric MacDonald > > > > > > > -----Original Message----- > > > > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > > > > Sent: Thursday, July 26, 2018 4:53 AM > > > > To: Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io > > > > Subject: Re: [Starlingx-discuss] [RFC] StarlingX Developer Guide > > > > > > > > Hi Abraham and Hayde > > > > > > > > I rebuilt and deployed my iso again, but still fail at step 6 with the same cause. > > > > Start collectd failed. > > > > Could you help deploy ISO I built to see if it can work in your environment, thanks! > > > > > > > > Zhipeng > > > > > > > > > > > > -----Original Message----- > > > > From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] > > > > Sent: 2018年7月26日 10:13 > > > > To: starlingx-discuss at lists.starlingx.io > > > > Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide > > > > > > > > Hi again, > > > > > > > > Can someone please help test the process to build an ISO based on master using Developer Guide > [0]? > > > > Please use Github page [1] as a documentation support. > > > > > > > > Requirements: > > > > > > > > - Repo status checked 7/25/2018 19:12 PST > > > > - stx-tools master branch > > > > - latest change: b65fa0a0ec6297199843b1455615d0126bb7e7c7 Update > > > > RPM macros > > > > - Temporal! Changes, already in Developer Guide > > > > - RPM: selinux-policy-devel required > > > > https://review.openstack.org/#/c/585915 > > > > > > > > [0] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide > > > > [1] > > > > https://github.com/xe1gyq/starlingx/blob/master/DeveloperGuide.md > > > > > > > > _______________________________________________ > > > > Starlingx-discuss mailing list > > > > Starlingx-discuss at lists.starlingx.io > > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discu > > > > ss _______________________________________________ > > > > Starlingx-discuss mailing list > > > > Starlingx-discuss at lists.starlingx.io > > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discu > > > > ss > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Wed Aug 1 13:57:30 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 1 Aug 2018 13:57:30 +0000 Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide In-Reply-To: <210898B96CA058408C55992CCAD98676B9E13296@ALA-MBD.corp.ad.wrs.com> References: <93814834B4855241994F290E959305C752F52B57@SHSMSX104.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676B9E049F3@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C752F52D91@SHSMSX104.ccr.corp.intel.com> <93814834B4855241994F290E959305C752F52DDC@SHSMSX104.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676B9E04AF2@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C752F53540@SHSMSX104.ccr.corp.intel.com> <210898B96CA058408C55992CCAD98676B9E04F04@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C752F56180@SHSMSX104.ccr.corp.intel.com>, <210898B96CA058408C55992CCAD98676B9E13296@ALA-MBD.corp.ad.wrs.com> Message-ID: <4FEB0F66-FA56-44FD-AB4E-669E6937F0D7@intel.com> Hi Eric, My initial issue has already been resolved. We should take care when download packages from 3rd party. I will use storyboard to track issue next time. Thanks! Zhipeng 发自我的 iPhone > 在 2018年8月1日,21:38,MacDonald, Eric 写道: > > Hi Zhipeng, > > OK, thanks for the follow-up. > So, the issue you reported was self-inflicted but only after you experienced a real issue with collectd in terms of your build process. > Has that initial issue been resolved ? > > In retrospect (generally to the discussion group), is that if you are reporting an issue for debug it would be useful to include in that request anything that was done outside of 'standard process' (like in this case) as that would provide an initial perspective and potential starting point for the investigation. > > Cheers, > > Eric MacDonald > >> -----Original Message----- >> From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] >> Sent: Tuesday, July 31, 2018 9:33 PM >> To: MacDonald, Eric; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io >> Subject: RE: [RFC] StarlingX Developer Guide >> Importance: High >> >> Hi Eric, >> >> Sorry for missing your email. >> Actually, I tried serial times to download this package through download_mirror.sh >> But could not got it, then I got one through google and download, it seems a little small >> Than the right one. (Right one is 683K) >> >> zhipengl at zhipengl-nuc:~/stx-tools/centos-mirror-tools$ rpm -K collectd-5.7.1-2.el7.x86_64.rpm >> collectd-5.7.1-2.el7.x86_64.rpm: sha1 md5 OK >> zhipengl at zhipengl-nuc:~/stx-tools/centos-mirror-tools$ ll collectd-5.7.1-2.el7.x86_64.rpm >> -rw-rw-r-- 1 zhipengl zhipengl 613644 8月 1 18:24 collectd-5.7.1-2.el7.x86_64.rpm >> >> Thanks again! >> BRs >> zhipeng >> >> -----Original Message----- >> From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] >> Sent: 2018年7月27日 21:08 >> To: Liu, ZhipengS ; Arce Moreno, Abraham ; >> starlingx-discuss at lists.starlingx.io >> Subject: RE: [RFC] StarlingX Developer Guide >> >> Ok, interesting. Glad your unblocked. >> >> However, I have to ask. Have you or will you be root causing what caused the build system to choose and >> include the wrong package ? even one of the same name and version but without the correct fundamental >> content. Weird. Wonder if that is hitting use elsewhere not or will again in the future ? >> >> Eric. >> >>> -----Original Message----- >>> From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] >>> Sent: Thursday, July 26, 2018 10:54 PM >>> To: MacDonald, Eric; Arce Moreno, Abraham; >>> starlingx-discuss at lists.starlingx.io >>> Subject: RE: [RFC] StarlingX Developer Guide >>> Importance: High >>> >>> Thanks Eric!! >>> Use below 2 commands, no python.so found. >>> >>> However, RC found that a wrong collectd-5.7.1-2.el7.x86_64.rpm used. >>> Its size is not the same as the one used by Abraham. >>> After used the right collectd-5.7.1-2.el7.x86_64.rpm, no issue anymore >>> The iso can be deloyed successfully on multimode. >>> >>> Zhipeng >>> >>> -----Original Message----- >>> From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] >>> Sent: 2018年7月26日 23:28 >>> To: Liu, ZhipengS ; Arce Moreno, Abraham >>> ; starlingx-discuss at lists.starlingx.io >>> Subject: RE: [RFC] StarlingX Developer Guide >>> >>> I've been able to reproduce the log failure signature with a change to >>> the collectd.conf file >>> >>> The issue is definitely related to the python library but it is packed in the collectd rpm. >>> The one you call out is not in my working env. >>> To further this investigation please provide the output of the >>> following commands >>> >>> ls -lrt /usr/lib64/collectd/python.so >>> ldd /usr/lib64/collectd/python.so >>> >>> Eric. >>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] >>>> Sent: Thursday, July 26, 2018 10:45 AM >>>> To: Liu, ZhipengS; MacDonald, Eric; Arce Moreno, Abraham; >>>> starlingx-discuss at lists.starlingx.io >>>> Subject: RE: [RFC] StarlingX Developer Guide >>>> Importance: High >>>> >>>> Do we need collected-python package in mirror? >>>> >>>> -----Original Message----- >>>> From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] >>>> Sent: 2018年7月26日 22:25 >>>> To: MacDonald, Eric ; Arce Moreno, >>>> Abraham ; >>>> starlingx-discuss at lists.starlingx.io >>>> Subject: Re: [Starlingx-discuss] [RFC] StarlingX Developer Guide >>>> >>>> Hi Eric, >>>> >>>> Thanks for your comment!! Could you help further check the error I saw below? >>>> ==================================================================== >>>> == ======= controller-0:/var/log/puppet# grep -R Error* >>>> latest/puppet.log:2018-07-26T16:36:33.125 Error: 2018-07-26 16:36:33 >>>> +0000 Systemd start for collectd failed! >>>> latest/puppet.log:2018-07-26T16:36:33.236 Error: 2018-07-26 16:36:33 >>>> +0000 >>>> /Stage[main]/Platform::Collectd/Service[collectd]/ensure: change from stopped to running failed: >>>> Systemd start for collectd failed! >>>> 2018-07-26-16-34-02_controller/puppet.log:2018-07-26T16:36:33.125 >>>> Error: 2018-07-26 16:36:33 >>>> +0000 Systemd start for collectd failed! >>>> 2018-07-26-16-34-02_controller/puppet.log:2018-07-26T16:36:33.236 >>>> Error: 2018-07-26 16:36:33 >>>> +0000 /Stage[main]/Platform::Collectd/Service[collectd]/ensure: change from stopped to running >> failed: >>>> Systemd start for collectd failed! >>>> ==================================================================== >>>> == ========= ontroller-0:~$ cat /var/log/daemon.log | grep collectd >>>> 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: >>>> plugin "network" successfully loaded. >>>> 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: >>>> Could not find plugin "python" in /usr/lib64/collectd >>>> 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: >>>> Could not find plugin "python" in /usr/lib64/collectd >>>> 2018-07-26T16:36:33.072 localhost collectd[25253]: info >>>> Automatically loading plugin "python" failed with status 1. >>>> 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: >>>> plugin "threshold" successfully loaded. >>>> 2018-07-26T16:36:33.072 localhost collectd[25253]: info plugin_load: plugin "df" successfully loaded. >>>> 2018-07-26T16:36:33.073 localhost collectd[25253]: info plugin_load: >>>> Could not find plugin "python" in /usr/lib64/collectd >>>> 2018-07-26T16:36:33.073 localhost collectd[25253]: info plugin_load: >>>> Could not find plugin "python" in /usr/lib64/collectd >>>> 2018-07-26T16:36:33.073 localhost collectd[25253]: info >>>> Automatically loading plugin "python" failed with status 1. >>>> 2018-07-26T16:36:33.073 localhost collectd[25253]: info Error: Reading the config file failed! >>>> 2018-07-26T16:36:33.073 localhost collectd[25253]: info Read the syslog for details. >>>> 2018-07-26T16:36:33.073 localhost collectd[25253]: info = >>>> 2018-07-26T16:36:33.100 localhost systemd[1]: notice collectd.service: >>>> main process exited, code=exited, status=1/FAILURE >>>> 2018-07-26T16:36:33.110 localhost systemd[1]: notice Unit collectd.service entered failed state. >>>> 2018-07-26T16:36:33.110 localhost systemd[1]: warning collectd.service failed. >>>> ==================================================================== >>>> == >>>> >>>> Thanks! >>>> Zhipeng >>>> -----Original Message----- >>>> From: MacDonald, Eric [mailto:Eric.MacDonald at windriver.com] >>>> Sent: 2018年7月26日 21:19 >>>> To: Liu, ZhipengS ; Arce Moreno, Abraham >>>> ; >>>> starlingx-discuss at lists.starlingx.io >>>> Subject: RE: [RFC] StarlingX Developer Guide >>>> >>>> Looks like the manifest that configures and starts the collectd process is failing. >>>> >>>> Can you please execute the following commands on the host that shows >>>> collectd failing and publish the errors you see that might reveal the cause. >>>> >>>> # Get the config error logs ... >>>> sudo -i >>>> cd /var/log/puppet ; fgrep -R Error * >>>> >>>> # get the collectd startup failure logs cat /var/log/daemon.log | >>>> grep collectd >>>> >>>> one startup block is fine ; start to exit should show the error. >>>> >>>> Eric MacDonald >>>> >>>>> -----Original Message----- >>>>> From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] >>>>> Sent: Thursday, July 26, 2018 4:53 AM >>>>> To: Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io >>>>> Subject: Re: [Starlingx-discuss] [RFC] StarlingX Developer Guide >>>>> >>>>> Hi Abraham and Hayde >>>>> >>>>> I rebuilt and deployed my iso again, but still fail at step 6 with the same cause. >>>>> Start collectd failed. >>>>> Could you help deploy ISO I built to see if it can work in your environment, thanks! >>>>> >>>>> Zhipeng >>>>> >>>>> >>>>> -----Original Message----- >>>>> From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] >>>>> Sent: 2018年7月26日 10:13 >>>>> To: starlingx-discuss at lists.starlingx.io >>>>> Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide >>>>> >>>>> Hi again, >>>>> >>>>> Can someone please help test the process to build an ISO based on master using Developer Guide >> [0]? >>>>> Please use Github page [1] as a documentation support. >>>>> >>>>> Requirements: >>>>> >>>>> - Repo status checked 7/25/2018 19:12 PST >>>>> - stx-tools master branch >>>>> - latest change: b65fa0a0ec6297199843b1455615d0126bb7e7c7 Update >>>>> RPM macros >>>>> - Temporal! Changes, already in Developer Guide >>>>> - RPM: selinux-policy-devel required >>>>> https://review.openstack.org/#/c/585915 >>>>> >>>>> [0] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide >>>>> [1] >>>>> https://github.com/xe1gyq/starlingx/blob/master/DeveloperGuide.md >>>>> >>>>> _______________________________________________ >>>>> Starlingx-discuss mailing list >>>>> Starlingx-discuss at lists.starlingx.io >>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discu >>>>> ss _______________________________________________ >>>>> Starlingx-discuss mailing list >>>>> Starlingx-discuss at lists.starlingx.io >>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discu >>>>> ss >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From abraham.arce.moreno at intel.com Wed Aug 1 15:54:14 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Wed, 1 Aug 2018 15:54:14 +0000 Subject: [Starlingx-discuss] StarlingX Documentation Initial Template Message-ID: Dean, Here you have the high level overview of tasks to get started with our " StarlingX Documentation". @Hayde has raised her hand to help in this short term not time consuming assignment. Objective: Create a first "Gold Initial Commit" based in "Stx-Docs" project including high level requirements from OpenStack Documentation Guidelines so it can ported into the rest of our StarlingX projects. Phase 1: 1. Learning Resources 1.1 Read "OpenStack Documentation Contributor Guide" https://docs.openstack.org/doc-contrib-guide/index.html 2. Initial Code 2.1 Understand existing "Stx-Docs" repository and "docs/" implementation https://review.openstack.org/#/q/project:openstack/stx-docs 3. Translate important topics from "OpenStack Documentation Contributor Guide" into "Stx-Docs" commits: 3.1 Project guide setup 3.2 Writing documentation 3.3 Writing style 3.4 Building documentation 3.5 Landing pages on docs.openstack.org 4. Get Final Gerrit Reviews on commits and make changes 5. Have our "Gold Initial Commit" ready Phase 2: Once the first interaction is done we can take another repository to test our "Gold Initial Commit" having only modifications at the content level. Phase 3: With 2 interactions we are ready to easily move what we have learned and implemented in 2 projects to the rest of our StarlingX projects. Happy to hear your thoughts. From bruce.e.jones at intel.com Wed Aug 1 20:59:50 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 1 Aug 2018 20:59:50 +0000 Subject: [Starlingx-discuss] Release plan update Message-ID: <9A85D2917C58154C960D95352B22818BAB56AAEB@fmsmsx117.amr.corp.intel.com> Here is an update from our F2F meeting today. We are changing the release cadence from 4 releases per year to 3 per year. We are targeting releases for 2019 in March, July and November. We are canceling the August code freeze and the September release for Q3. Instead we will code freeze in September for a release in October, to support the November OpenStack Summit. We will continue monthly branch and test releases, but our shared goal is to invest in test automation (in Zuul and otherwise) to ensure that master remains stable. Once we reach that goal we will revisit the need for monthly branches. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Aug 1 21:03:14 2018 From: scott.little at windriver.com (Scott Little) Date: Wed, 1 Aug 2018 17:03:14 -0400 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> Message-ID: <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> 99% of the reviews are now available. I've held back the manifest changes for tomorrow. The relocation updates come in sets for each package, that attempt to preserve the update history found at the original location. One update removes a package from stx-utils, stx-gplv2, stx-gplv3.  A second adds it to stx-integ or stx-updates in it's StarlingX day zero form (author changes from Dean to Me).  Then there may be 0-N updates replaying the subsequent commit history of that package (author and commit text preserved).  Finally there might be a follow up commit by me to fix a build path.  The final result is a glorified 'mv' operation.  The content should be unchanged, So all the code has been reviewed before. Reviews should focus one subject only, was the move executed correctly? Please do not workflow +1!    I couldn't get the scripts to manage Depends-On relationships satisfactorily, so I'll hand manage it tomorrow. Scott On 18-07-31 11:26 AM, Scott Little wrote: > Revised timeline is August 1 or 2. > > Scott > > > On 18-07-17 11:07 AM, Scott Little wrote: >> >> Story: https://storyboard.openstack.org/#!/story/2002801 >> >> *Goals:* >> >> 1) Consolidate the following repo’s under stx-integ. >> • stx-gplv2 >> • stx-gplv3 >> • stx-utils >> >> 2) Restructure the directories under which packages are to be found. >> >> Currently stx-gplv2/3 are largely without structure. Parts of the >> stx-integ structure were inherited from WRLinux and make little >> sense.  stx-utils is just i mess of stuff that never found a home >> when StarlingX was first set up. >> >> Directories should descriptive of the class of packages to be found >> within. >> >> Intent is to preserve update history as best is is possible. >> >> >> *Timeline: * >> >> Probably around July 23 unless there are strong objections. We should >> probably have a freeze on submissions to the affected repos until it >> is all completed. >> >> >> *Code Reviews: * >> >> Most of this is just moving code around.  A few path corrections, but >> no new code.  The number and size of the reviews will be huge, and >> the code should all have been inspected once before.  Is there a way >> to fast track this? Would there be strong objections to me just doing >> a +2/+1 without waiting for independent review? >> >> >> *Details of directories/groups ...* >> >> >> Create new directories under stx-integ (logical groupings for files): >>    ceph >>    config >>    config-files >>    database >>    filesystem >>    filesystem/drbd >>    grub >>    kernel >>    kernel/kernel-modules >>    ldap >>    logging >>    strorage-drivers >>    tools >>    utilities >>    virt >> >> Retained directories under stx-integ (additional logical groupings >> for files): >>    base >>    mellanox >>    monitoring >>    networking >>    python >>    restapi-doc >>    security >> >> Retire directories under stx-integ (non-descriptive or ambiguous >> grouping we will retire): >>    connectivity >>    core >>    devtools >>    extended >>    support >> >> >> *Details of packages ...* >> >> Relocated packages (internal to stx-integ): >>    base/ >>       dhcp >>       initscripts >>       libevent >>       lighttpd >>       memcached >>       net-snmp >>       novnc >>       ntp >>       openssh >>       pam >>       procps >>       sanlock >>       shadow >>       sudo >>       systemd >>       util-linux >>       vim >>       watchdog >> >>    ceph/ >>       python-cephclient >> >>    config/ >>       e2fsprogs >>       facter >>       nfs-utils >>       nfscheck >>       puppet-4.8.2 >>       puppet-modules >> >>    kernel/ >>       kernel-std >>       kernel-rt >> >>    kernel/kernel-modules/ >>       mlnx-ofa_kernel >> >>    ldap/ >>       nss-pam-ldapd >>       openldap >> >>    logging/ >>       syslog-ng >>       logrotate >> >>    networking/ >>       lldpd >>       iproute >>       mellanox >>       python-ryu >>       mlx4-config >> >>    python/ >>       python-2.7.5 >>       python-django >>       python-gunicorn >>       python-setuptools >>       python-smartpm >> >>    security/ >>       shim-signed >>       shim-unsigned >>       tboot >> >>    strorage-drivers/ >>       python-3parclient >>       python-lefthandclient >> >>    virt/ >>       cloud-init >>       libvirt >>       libvirt-python >>       qemu >> >>    tools/ >>       storage-topology >>       vm-topology >> >>    utilities/ >>       tis-extensions >>       namespace-utils >>       nova-utils >>       update-motd >> >> >> >> Relocated packages (stx-utils to stx-update): >>     enable-dev-patch >> >> >> >> Relocated packages (stx-utils to stx-integ): >> >>     config-files/ >>         io-scheduler >> >>     filesystem/ >>         filesystem-scripts >> >>     grub/ >>         grubby >> >>     logging/ >>         logmgmt >> >>     tools/ >>         collector >>         monitor-tools >> >>     tools/engtools/ >>         hostdata-collectors >>         parsers >> >>     utilities/ >>         build-info >>         branding   (formerly wrs-branding) >>         platform-util >> >> >> >> Relocated packages (stx-gpl2 to stx-integ): >>     base/ >>         bash >>         cgcs-users >>         cluster-resource-agents >>         dpkg >>         haproxy >>         libfdt >>         netpbm >>         rpm >> >>     database/ >>         mariadb >> >>     filesystem/ >>         iscsi-initiator-utils >> >>     filesystem/drbd/ >>         drbd-tools >> >>     kernel/kernel-modules/ >>         drbd >>         integrity >>         intel-e1000e >>         intel-i40e >>         intel-i40evf >>         intel-ixgbe >>         intel-ixgbevf >>         qat17 >>         tpmdd >> >>     ldap/ >>         ldapscripts >> >>     networking/ >>         iptables >>         net-tools >> >> >> >> Relocated packages (stx-gpl3 to stx-integ): >>     base/ >>         anaconda >>         crontabs >>         dnsmasq >>         rsync >> >>     database/ >>         python-psycopg2 >> >>     filesystem/ >>         parted >> >>     grub/ >>         grub2 >> >>     security/ >>         python-keyring >> >> >> >> Delete two packages from stx-integ: >>    tgt >>    irqbalance >> >> Delete two packages from stx-gplv3: >>    seabios >>    sysvinit >> >> Delete one package from stx-utils: >>    io-monitor >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Aug 2 13:04:20 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 2 Aug 2018 13:04:20 +0000 Subject: [Starlingx-discuss] No Core team call today Message-ID: <9A85D2917C58154C960D95352B22818BAB56ADBB@fmsmsx117.amr.corp.intel.com> The Core team call is cancelled for today. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Aug 2 15:42:59 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 2 Aug 2018 15:42:59 +0000 Subject: [Starlingx-discuss] New build story Message-ID: <9A85D2917C58154C960D95352B22818BAB56AEE3@fmsmsx117.amr.corp.intel.com> I just created a new story for the Build team to change the mirror download and build scripts to enable us to create and manage per-company, shared import mirrors. This should help insulate most developers from changes in upstream packages. We recommend building one shared mirror per company per geography. This is a short term band-aid while we figure out our long term build strategy. The story is https://storyboard.openstack.org/#!/story/2003288. Build team, please review and start work on this. Thanks! brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Aug 2 17:50:27 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 2 Aug 2018 17:50:27 +0000 Subject: [Starlingx-discuss] New build story Message-ID: <9A85D2917C58154C960D95352B22818BAB56AFC4@fmsmsx117.amr.corp.intel.com> Update from Ottawa. Please hold off on this. The team has come up with what might be a better idea. From: Jones, Bruce E Sent: Thursday, August 2, 2018 8:00 AM To: starlingx-discuss at lists.starlingx.io Subject: New build story I just created a new story for the Build team to change the mirror download and build scripts to enable us to create and manage per-company, shared import mirrors. This should help insulate most developers from changes in upstream packages. We recommend building one shared mirror per company per geography. This is a short term band-aid while we figure out our long term build strategy. The story is https://storyboard.openstack.org/#!/story/2003288. Build team, please review and start work on this. Thanks! brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Aug 2 22:33:04 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 2 Aug 2018 22:33:04 +0000 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects Message-ID: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> In the F2F meeting today, we worked jointly to define sub-project teams to assist with bottom-up planning. https://ethercalc.openstack.org/ctjc7vlbphm1 (also linked from the main StarlingX wiki page) Note: We have started filling out the team members; this is still work in progress. Can I ask the Team Leads for each sub-project to help fill out the names of their team members? I will be the team lead for the Release team. I will help coordinate release schedule, content and planning. I will be working with the Team Leads of the sub-projects to pull together the bottom-up plans. Looking forward to working with all of you. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Aug 3 00:45:40 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 3 Aug 2018 00:45:40 +0000 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B2ED6C6@SHSMSX104.ccr.corp.intel.com> I can lead "distro non openstack" subproject. I will fill-up the names for the team members soon. I also want to lead the effort of "Python3 support" as well if no leader has been identified so far. Let me know if you are OK with this. Thanks. - cindy From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Friday, August 3, 2018 6:33 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In the F2F meeting today, we worked jointly to define sub-project teams to assist with bottom-up planning. https://ethercalc.openstack.org/ctjc7vlbphm1 (also linked from the main StarlingX wiki page) Note: We have started filling out the team members; this is still work in progress. Can I ask the Team Leads for each sub-project to help fill out the names of their team members? I will be the team lead for the Release team. I will help coordinate release schedule, content and planning. I will be working with the Team Leads of the sub-projects to pull together the bottom-up plans. Looking forward to working with all of you. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Aug 3 00:53:52 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 3 Aug 2018 00:53:52 +0000 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B2ED6C6@SHSMSX104.ccr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F2B2ED6C6@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B2ED6F0@SHSMSX104.ccr.corp.intel.com> For "distro non openstack" subproject, while it's forming, please send your name if you are interested to be part of the subproject. @Ken, can you add me and Haitao into "security" subproject? Also, I'd like to have Intel engineers part of the subproject of Flocks (config, fault, HA, metal, NFV, update, distributed cloud, etc) as well - I can send out the names later. Thanks. - cindy From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Friday, August 3, 2018 8:46 AM To: Khalil, Ghada ; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] StarlingX Release Sub-Projects I can lead "distro non openstack" subproject. I will fill-up the names for the team members soon. I also want to lead the effort of "Python3 support" as well if no leader has been identified so far. Let me know if you are OK with this. Thanks. - cindy From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Friday, August 3, 2018 6:33 AM To: 'starlingx-discuss at lists.starlingx.io' > Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In the F2F meeting today, we worked jointly to define sub-project teams to assist with bottom-up planning. https://ethercalc.openstack.org/ctjc7vlbphm1 (also linked from the main StarlingX wiki page) Note: We have started filling out the team members; this is still work in progress. Can I ask the Team Leads for each sub-project to help fill out the names of their team members? I will be the team lead for the Release team. I will help coordinate release schedule, content and planning. I will be working with the Team Leads of the sub-projects to pull together the bottom-up plans. Looking forward to working with all of you. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From erich.cordoba.malibran at intel.com Fri Aug 3 05:43:01 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Fri, 3 Aug 2018 05:43:01 +0000 Subject: [Starlingx-discuss] Broken build due to removal of irqbalance Message-ID: Hi Today's build failed In the build-iso stage due to irqbalance is missing. The package was removed here[0]. The fix is easy, just to remove the irqbalance package from the image.inc file, however before that I just want to confirm that this removal is ok. Thanks -Erich [0] https://review.openstack.org/#/c/587832/ From scott.little at windriver.com Fri Aug 3 13:31:52 2018 From: scott.little at windriver.com (Scott Little) Date: Fri, 3 Aug 2018 09:31:52 -0400 Subject: [Starlingx-discuss] Broken build due to removal of irqbalance In-Reply-To: References: Message-ID: <53ab1a0f-b670-08bd-bb16-1769fabda8c5@windriver.com> It will be restored by https://review.openstack.org/588043. It seems to be hung up in zuul due to a faulty verification script.  Working it now ... Scott On 18-08-03 01:43 AM, Cordoba Malibran, Erich wrote: > Hi > > Today's build failed In the build-iso stage due to irqbalance is missing. The package was removed here[0]. > The fix is easy, just to remove the irqbalance package from the image.inc file, however before that I just > want to confirm that this removal is ok. > > Thanks > > -Erich > [0] https://review.openstack.org/#/c/587832/ > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Fri Aug 3 13:51:03 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 3 Aug 2018 08:51:03 -0500 Subject: [Starlingx-discuss] Broken build due to removal of irqbalance In-Reply-To: <53ab1a0f-b670-08bd-bb16-1769fabda8c5@windriver.com> References: <53ab1a0f-b670-08bd-bb16-1769fabda8c5@windriver.com> Message-ID: On Fri, Aug 3, 2018 at 8:31 AM, Scott Little wrote: > It will be restored by https://review.openstack.org/588043. That appears to be at the bottom of the stack while your fix https://review.openstack.org/588565 is at the top. Either 588565 (rebased on master) or https://review.openstack.org/588534 (making the Zuul job non-voting) needs to merge first, then the rest stack that is blocked need to be rebased. Also, https://review.openstack.org/588566 should not be necessary as that job is already non-voting and everything in that queue merged. dt -- Dean Troyer dtroyer at gmail.com From scott.little at windriver.com Fri Aug 3 13:57:58 2018 From: scott.little at windriver.com (Scott Little) Date: Fri, 3 Aug 2018 09:57:58 -0400 Subject: [Starlingx-discuss] Broken build due to removal of irqbalance In-Reply-To: References: <53ab1a0f-b670-08bd-bb16-1769fabda8c5@windriver.com> Message-ID: <5b978078-249f-610e-3977-7d70558c7337@windriver.com> Ok, lets go with that. I'm still not clear on how to debug these blockages.  Is there a centralized place to view the work queue and what each job is waiting on ? Scott On 18-08-03 09:51 AM, Dean Troyer wrote: > On Fri, Aug 3, 2018 at 8:31 AM, Scott Little wrote: >> It will be restored by https://review.openstack.org/588043. > That appears to be at the bottom of the stack while your fix > https://review.openstack.org/588565 is at the top. Either 588565 > (rebased on master) or https://review.openstack.org/588534 (making the > Zuul job non-voting) needs to merge first, then the rest stack that is > blocked need to be rebased. > > Also, https://review.openstack.org/588566 should not be necessary as > that job is already non-voting and everything in that queue merged. > > dt > From dtroyer at gmail.com Fri Aug 3 14:25:05 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 3 Aug 2018 09:25:05 -0500 Subject: [Starlingx-discuss] Broken build due to removal of irqbalance In-Reply-To: <5b978078-249f-610e-3977-7d70558c7337@windriver.com> References: <53ab1a0f-b670-08bd-bb16-1769fabda8c5@windriver.com> <5b978078-249f-610e-3977-7d70558c7337@windriver.com> Message-ID: On Fri, Aug 3, 2018 at 8:57 AM, Scott Little wrote: > I'm still not clear on how to debug these blockages. Is there a centralized > place to view the work queue and what each job is waiting on ? http://zuul.openstack.org/ is the starting point, that shows _everything_ that Zuul is doing. So first thing is to put 'stx' or some other sub-string to search for repo names in the filter. There is where you see the specific jobs. For example, right now you can see 588565 in the check queue with the 5 reviews below it (from a stack perspective like Gerrit displays, that graph shows oldest at the top). That was my first clue to look at the review orders in Gerrit. Then it was just following both the right-most pane in the Gerrit review screen and looking at parent commits to confirm. I've found, more often than not, blockage in the queues is due to things not being in the order you expect. dt -- Dean Troyer dtroyer at gmail.com From dtroyer at gmail.com Fri Aug 3 14:30:11 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 3 Aug 2018 09:30:11 -0500 Subject: [Starlingx-discuss] Broken build due to removal of irqbalance In-Reply-To: References: <53ab1a0f-b670-08bd-bb16-1769fabda8c5@windriver.com> <5b978078-249f-610e-3977-7d70558c7337@windriver.com> Message-ID: On Fri, Aug 3, 2018 at 9:25 AM, Dean Troyer wrote: > On Fri, Aug 3, 2018 at 8:57 AM, Scott Little wrote: >> I'm still not clear on how to debug these blockages. Is there a centralized >> place to view the work queue and what each job is waiting on ? > > http://zuul.openstack.org/ is the starting point, that shows > _everything_ that Zuul is doing. So first thing is to put 'stx' or > some other sub-string to search for repo names in the filter. There > is where you see the specific jobs. One other bit about the Zuul status screen, click on the review box and it expands to the list of jobs being run. Clicking on those once they have started will take you to the live log screen for a running job (like watching paint dry sometimes!) or to the same log directory you get from the Gerrit review screen. Our jobs are very uninteresting right now, to get more of a feel for this put 'neutron' or another project you are familiar with into the filter and click around a bit. dt -- Dean Troyer dtroyer at gmail.com From scott.little at windriver.com Fri Aug 3 14:31:17 2018 From: scott.little at windriver.com (Scott Little) Date: Fri, 3 Aug 2018 10:31:17 -0400 Subject: [Starlingx-discuss] Broken build due to removal of irqbalance In-Reply-To: <5b978078-249f-610e-3977-7d70558c7337@windriver.com> References: <53ab1a0f-b670-08bd-bb16-1769fabda8c5@windriver.com> <5b978078-249f-610e-3977-7d70558c7337@windriver.com> Message-ID: <926fb52f-548f-7da1-3257-482a26773615@windriver.com> http://zuul.openstack.org/ So it seems there can be multiple queues per git.  Withing a queue there are dependencies, but different queues can make progress independently?  Or is there a queue of queues for the git? On top of that, it seems like there are a finite number of execution engines.  If no execution engines are available, none of your queues will progress.  Is that about right? Scott On 18-08-03 09:57 AM, Scott Little wrote: > Ok, lets go with that. > > I'm still not clear on how to debug these blockages.  Is there a > centralized place to view the work queue and what each job is waiting > on ? > > Scott > > > On 18-08-03 09:51 AM, Dean Troyer wrote: >> On Fri, Aug 3, 2018 at 8:31 AM, Scott Little >> wrote: >>> It will be restored by https://review.openstack.org/588043. >> That appears to be at the bottom of the stack while your fix >> https://review.openstack.org/588565 is at the top.  Either 588565 >> (rebased on master) or https://review.openstack.org/588534 (making the >> Zuul job non-voting) needs to merge first, then the rest stack that is >> blocked need to be rebased. >> >> Also, https://review.openstack.org/588566 should not be necessary as >> that job is already non-voting and everything in that queue merged. >> >> dt >> > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Fri Aug 3 14:45:16 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 3 Aug 2018 09:45:16 -0500 Subject: [Starlingx-discuss] Broken build due to removal of irqbalance In-Reply-To: <926fb52f-548f-7da1-3257-482a26773615@windriver.com> References: <53ab1a0f-b670-08bd-bb16-1769fabda8c5@windriver.com> <5b978078-249f-610e-3977-7d70558c7337@windriver.com> <926fb52f-548f-7da1-3257-482a26773615@windriver.com> Message-ID: On Fri, Aug 3, 2018 at 9:31 AM, Scott Little wrote: > http://zuul.openstack.org/ > > So it seems there can be multiple queues per git. Withing a queue there are > dependencies, but different queues can make progress independently? Or is > there a queue of queues for the git? [0] is the Zuul concept page, but basically it has a set of pipelines for different job types, the two we care the most about are check and gate. Check jobs run on every Gerrit submission, gate jobs run after Workflow +1 is set. There is overlap between those job sets (check may have run last week) but some things like non-voting jobs don't run in the gate. The other pipeline we'll be using is experimental, for on-demand runs, that's where I plan to put the initial py3 jobs for example, so we can see where we are but not waste resource running them all the time. > On top of that, it seems like there are a finite number of execution > engines. If no execution engines are available, none of your queues will > progress. Is that about right? Not just about, that is it exactly. An older Zuul status showed the VM allocation graphs at the bottom, I'm not sure where those went after the Zuul v3 upgrade... All of OpenStack CI is run on donated cloud resources (mostly single-use VMs) from places like Rackspace, Vexxhost, OVH, Dreamcloud and about 7 more that I don't remember offhand. We have a quota on each cloud and the load during North American working hours usually puts us way over. This is a big part of why Zuul was born, to dynamically manage that ever-changing pool of test resources (VMs). It turns out that hosting OpenStack CI is an excellent cloud load test, we've found more than a few scaling problems this way. dt [0] https://zuul-ci.org/docs/zuul/user/concepts.html -- Dean Troyer dtroyer at gmail.com From dtroyer at gmail.com Fri Aug 3 14:50:47 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 3 Aug 2018 09:50:47 -0500 Subject: [Starlingx-discuss] Broken build due to removal of irqbalance In-Reply-To: <5b978078-249f-610e-3977-7d70558c7337@windriver.com> References: <53ab1a0f-b670-08bd-bb16-1769fabda8c5@windriver.com> <5b978078-249f-610e-3977-7d70558c7337@windriver.com> Message-ID: On Fri, Aug 3, 2018 at 8:57 AM, Scott Little wrote: > Ok, lets go with that. OK, https://review.openstack.org/588534has merged, so starting with https://review.openstack.org/#/c/588043/ you should be able to rebase each of the stuck reviews directly in Gerrit on that and they should merge (if 588534 actually fixes the problem). There is currently a job for 588043 in the check queue, doing a rebase will kill that and start it over. Also, another way to kick a review to run the jobs again is to put 'recheck' as a review comment. Anyone can do this, not just those who can W+1 a review to try it again. dt -- Dean Troyer dtroyer at gmail.com From dtroyer at gmail.com Fri Aug 3 15:13:55 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 3 Aug 2018 10:13:55 -0500 Subject: [Starlingx-discuss] Broken build due to removal of irqbalance In-Reply-To: References: <53ab1a0f-b670-08bd-bb16-1769fabda8c5@windriver.com> <5b978078-249f-610e-3977-7d70558c7337@windriver.com> Message-ID: On Fri, Aug 3, 2018 at 9:50 AM, Dean Troyer wrote: > OK, https://review.openstack.org/588534has merged, so starting with > https://review.openstack.org/#/c/588043/ you should be able to rebase > each of the stuck reviews directly in Gerrit on that and they should > merge (if 588534 actually fixes the problem). There is currently a > job for 588043 in the check queue, doing a rebase will kill that and > start it over. I'm pleading Friday post-flight brain fog... I need to correct myself here that the rebases are not necessary as there is no conflicting code with the fix review. Just a recheck, or as in the case of 588043, as long as the job starts _after_ the fix merges, life is good... dt -- Dean Troyer dtroyer at gmail.com From abraham.arce.moreno at intel.com Fri Aug 3 16:39:24 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Fri, 3 Aug 2018 16:39:24 +0000 Subject: [Starlingx-discuss] StarlingX API Documentation Message-ID: A new goal in collaboration with our Tech Writing team is to document StarlingX APIs, so we did an initial research on what it means for StarlingX so your feedback is highly appreciated. [ OpenStack :: API ] For this activity we are initially be considering from API Documentation 2 separate efforts for each project: - API Guide .. the concepts in the API - API Ref .. a reference for the API Can we prioritize one over the other? [ StarlingX :: API ] It seems we can categorize the StarlingX APIs in 2: - Brand New APIs from StarlingX projects - Existing APIs from OpenStack projects [ StarlingX :: API :: Brand New ] The projects falling into this category are the following: - [0] NFVI Orchestration - [1] High Availability/Process Monitoring/Service Management - [2] StarlingX System Configuration Management - [3] Horizon plugins for new StarlingX services - [4] Installation/Update/Patching/Backup/Restore Can we considered all the above to be included in this API documentation effort? Are we missing any other? [ StarlingX :: API :: Existing ] All projects living under our starlingx-staging github organization [5] with upstream contributions [6] e.g. horizon, ceilometer, etc. We have not gone through a deeper review if we are modifying/adding new calls into the OpenStack projects however if we are and we need to document them: - There is official OpenStack API documentation, we can make references to them for the existing calls - What about the modifications/additions? Should we document them? What is the best place for this? We were talking in our weekly call about stx-docs is a good place for things without a repo, is this a good example? - Any easy way besides "find + grep" to get where those API modifications are happening? [ StarlingX :: API :: Unit Tests] OpenStack projects includes Unit Tests. Is this something we also need to consider for our StarlingX Bran New APIs? [0] http://git.openstack.org/cgit/openstack/stx-nfv/ [1] http://git.openstack.org/cgit/openstack/stx-ha/ [2] http://git.openstack.org/cgit/openstack/stx-config/ [3] http://git.openstack.org/cgit/openstack/stx-gui/ [4] http://git.openstack.org/cgit/openstack/stx-update/ [5] https://github.com/starlingx-staging [6] http://git.openstack.org/cgit/openstack/stx-upstream/tree/openstack From scott.little at windriver.com Fri Aug 3 18:18:28 2018 From: scott.little at windriver.com (Scott Little) Date: Fri, 3 Aug 2018 14:18:28 -0400 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> Message-ID: Remaining reviews are now available: stx-manifests:    https://review.openstack.org/588633 Remove empty repos stx-gplv2 and stx-gplv3 stx-manifests:    https://review.openstack.org/588634 Remove empty repo stx-utils stx-root:         https://review.openstack.org/588635 Remove empty repo stx-utils On 18-08-01 05:03 PM, Scott Little wrote: > 99% of the reviews are now available.  I've held back the manifest > changes for tomorrow. > > The relocation updates come in sets for each package, that attempt to > preserve the update history found at the original location.  One > update removes a package from stx-utils, stx-gplv2, stx-gplv3.  A > second adds it to stx-integ or stx-updates in it's StarlingX day zero > form (author changes from Dean to Me).  Then there may be 0-N updates > replaying the subsequent commit history of that package (author and > commit text preserved).  Finally there might be a follow up commit by > me to fix a build path.  The final result is a glorified 'mv' > operation.  The content should be unchanged, So all the code has been > reviewed before. > > Reviews should focus one subject only, was the move executed correctly? > > Please do not workflow +1!    I couldn't get the scripts to manage > Depends-On relationships satisfactorily, so I'll hand manage it tomorrow. > > Scott > > > > > On 18-07-31 11:26 AM, Scott Little wrote: >> Revised timeline is August 1 or 2. >> >> Scott >> >> >> On 18-07-17 11:07 AM, Scott Little wrote: >>> >>> Story: https://storyboard.openstack.org/#!/story/2002801 >>> >>> *Goals:* >>> >>> 1) Consolidate the following repo’s under stx-integ. >>> • stx-gplv2 >>> • stx-gplv3 >>> • stx-utils >>> >>> 2) Restructure the directories under which packages are to be found. >>> >>> Currently stx-gplv2/3 are largely without structure. Parts of the >>> stx-integ structure were inherited from WRLinux and make little >>> sense.  stx-utils is just i mess of stuff that never found a home >>> when StarlingX was first set up. >>> >>> Directories should descriptive of the class of packages to be found >>> within. >>> >>> Intent is to preserve update history as best is is possible. >>> >>> >>> *Timeline: * >>> >>> Probably around July 23 unless there are strong objections.  We >>> should probably have a freeze on submissions to the affected repos >>> until it is all completed. >>> >>> >>> *Code Reviews: * >>> >>> Most of this is just moving code around.  A few path corrections, >>> but no new code.  The number and size of the reviews will be huge, >>> and the code should all have been inspected once before.  Is there a >>> way to fast track this? Would there be strong objections to me just >>> doing a +2/+1 without waiting for independent review? >>> >>> >>> *Details of directories/groups ...* >>> >>> >>> Create new directories under stx-integ (logical groupings for files): >>>    ceph >>>    config >>>    config-files >>>    database >>>    filesystem >>>    filesystem/drbd >>>    grub >>>    kernel >>>    kernel/kernel-modules >>>    ldap >>>    logging >>>    strorage-drivers >>>    tools >>>    utilities >>>    virt >>> >>> Retained directories under stx-integ (additional logical groupings >>> for files): >>>    base >>>    mellanox >>>    monitoring >>>    networking >>>    python >>>    restapi-doc >>>    security >>> >>> Retire directories under stx-integ (non-descriptive or ambiguous >>> grouping we will retire): >>>    connectivity >>>    core >>>    devtools >>>    extended >>>    support >>> >>> >>> *Details of packages ...* >>> >>> Relocated packages (internal to stx-integ): >>>    base/ >>>       dhcp >>>       initscripts >>>       libevent >>>       lighttpd >>>       memcached >>>       net-snmp >>>       novnc >>>       ntp >>>       openssh >>>       pam >>>       procps >>>       sanlock >>>       shadow >>>       sudo >>>       systemd >>>       util-linux >>>       vim >>>       watchdog >>> >>>    ceph/ >>>       python-cephclient >>> >>>    config/ >>>       e2fsprogs >>>       facter >>>       nfs-utils >>>       nfscheck >>>       puppet-4.8.2 >>>       puppet-modules >>> >>>    kernel/ >>>       kernel-std >>>       kernel-rt >>> >>>    kernel/kernel-modules/ >>>       mlnx-ofa_kernel >>> >>>    ldap/ >>>       nss-pam-ldapd >>>       openldap >>> >>>    logging/ >>>       syslog-ng >>>       logrotate >>> >>>    networking/ >>>       lldpd >>>       iproute >>>       mellanox >>>       python-ryu >>>       mlx4-config >>> >>>    python/ >>>       python-2.7.5 >>>       python-django >>>       python-gunicorn >>>       python-setuptools >>>       python-smartpm >>> >>>    security/ >>>       shim-signed >>>       shim-unsigned >>>       tboot >>> >>>    strorage-drivers/ >>>       python-3parclient >>>       python-lefthandclient >>> >>>    virt/ >>>       cloud-init >>>       libvirt >>>       libvirt-python >>>       qemu >>> >>>    tools/ >>>       storage-topology >>>       vm-topology >>> >>>    utilities/ >>>       tis-extensions >>>       namespace-utils >>>       nova-utils >>>       update-motd >>> >>> >>> >>> Relocated packages (stx-utils to stx-update): >>>     enable-dev-patch >>> >>> >>> >>> Relocated packages (stx-utils to stx-integ): >>> >>>     config-files/ >>>         io-scheduler >>> >>>     filesystem/ >>>         filesystem-scripts >>> >>>     grub/ >>>         grubby >>> >>>     logging/ >>>         logmgmt >>> >>>     tools/ >>>         collector >>>         monitor-tools >>> >>>     tools/engtools/ >>>         hostdata-collectors >>>         parsers >>> >>>     utilities/ >>>         build-info >>>         branding   (formerly wrs-branding) >>>         platform-util >>> >>> >>> >>> Relocated packages (stx-gpl2 to stx-integ): >>>     base/ >>>         bash >>>         cgcs-users >>>         cluster-resource-agents >>>         dpkg >>>         haproxy >>>         libfdt >>>         netpbm >>>         rpm >>> >>>     database/ >>>         mariadb >>> >>>     filesystem/ >>>         iscsi-initiator-utils >>> >>>     filesystem/drbd/ >>>         drbd-tools >>> >>>     kernel/kernel-modules/ >>>         drbd >>>         integrity >>>         intel-e1000e >>>         intel-i40e >>>         intel-i40evf >>>         intel-ixgbe >>>         intel-ixgbevf >>>         qat17 >>>         tpmdd >>> >>>     ldap/ >>>         ldapscripts >>> >>>     networking/ >>>         iptables >>>         net-tools >>> >>> >>> >>> Relocated packages (stx-gpl3 to stx-integ): >>>     base/ >>>         anaconda >>>         crontabs >>>         dnsmasq >>>         rsync >>> >>>     database/ >>>         python-psycopg2 >>> >>>     filesystem/ >>>         parted >>> >>>     grub/ >>>         grub2 >>> >>>     security/ >>>         python-keyring >>> >>> >>> >>> Delete two packages from stx-integ: >>>    tgt >>>    irqbalance >>> >>> Delete two packages from stx-gplv3: >>>    seabios >>>    sysvinit >>> >>> Delete one package from stx-utils: >>>    io-monitor >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Fri Aug 3 18:33:18 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 3 Aug 2018 13:33:18 -0500 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> Message-ID: On Fri, Aug 3, 2018 at 1:18 PM, Scott Little wrote: > Remaining reviews are now available: > > stx-manifests: https://review.openstack.org/588633 Remove empty repos > stx-gplv2 and stx-gplv3 > stx-manifests: https://review.openstack.org/588634 Remove empty repo > stx-utils > stx-root: https://review.openstack.org/588635 Remove empty repo > stx-utils Looks good, thanks Scott! Is there a plan to change the cgcs-root directory name yet? dt -- Dean Troyer dtroyer at gmail.com From scott.little at windriver.com Fri Aug 3 18:36:43 2018 From: scott.little at windriver.com (Scott Little) Date: Fri, 3 Aug 2018 14:36:43 -0400 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> Message-ID: <7f59d4f3-6a59-ea90-51d0-d16dbf796038@windriver.com> I'm looking at it for sure. That one will be really messy.  Affecting not just git managed code, but wikis, in house jenkins jobs, and some prototype code to improve the rpm download issues that's likely more urgent. I'll call that round 3. On 18-08-03 02:33 PM, Dean Troyer wrote: > On Fri, Aug 3, 2018 at 1:18 PM, Scott Little wrote: >> Remaining reviews are now available: >> >> stx-manifests: https://review.openstack.org/588633 Remove empty repos >> stx-gplv2 and stx-gplv3 >> stx-manifests: https://review.openstack.org/588634 Remove empty repo >> stx-utils >> stx-root: https://review.openstack.org/588635 Remove empty repo >> stx-utils > Looks good, thanks Scott! > > Is there a plan to change the cgcs-root directory name yet? > > dt From dtroyer at gmail.com Fri Aug 3 18:52:38 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 3 Aug 2018 13:52:38 -0500 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: <7f59d4f3-6a59-ea90-51d0-d16dbf796038@windriver.com> References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> <7f59d4f3-6a59-ea90-51d0-d16dbf796038@windriver.com> Message-ID: On Fri, Aug 3, 2018 at 1:36 PM, Scott Little wrote: > That one will be really messy. Affecting not just git managed code, but > wikis, in house jenkins jobs, and some prototype code to improve the rpm > download issues that's likely more urgent. > > I'll call that round 3. Sure, I didn't know if it was still in the queue. This looks great and is much easier to grok, thanks for the ton of effort that went in to it! dt -- Dean Troyer dtroyer at gmail.com From Brent.Rowsell at windriver.com Fri Aug 3 18:54:21 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Fri, 3 Aug 2018 18:54:21 +0000 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: <7f59d4f3-6a59-ea90-51d0-d16dbf796038@windriver.com> References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> <7f59d4f3-6a59-ea90-51d0-d16dbf796038@windriver.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB1DCA5D@ALA-MBD.corp.ad.wrs.com> This is something that needs to be done but was not planned for this round. We can schedule later. Brent -----Original Message----- From: Scott Little [mailto:scott.little at windriver.com] Sent: Friday, August 3, 2018 2:37 PM To: Dean Troyer Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Restructuring round 2 I'm looking at it for sure. That one will be really messy.  Affecting not just git managed code, but wikis, in house jenkins jobs, and some prototype code to improve the rpm download issues that's likely more urgent. I'll call that round 3. On 18-08-03 02:33 PM, Dean Troyer wrote: > On Fri, Aug 3, 2018 at 1:18 PM, Scott Little wrote: >> Remaining reviews are now available: >> >> stx-manifests: https://review.openstack.org/588633 Remove empty repos >> stx-gplv2 and stx-gplv3 >> stx-manifests: https://review.openstack.org/588634 Remove empty repo >> stx-utils >> stx-root: https://review.openstack.org/588635 Remove empty repo >> stx-utils > Looks good, thanks Scott! > > Is there a plan to change the cgcs-root directory name yet? > > dt _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ian.Jolliffe at windriver.com Fri Aug 3 19:38:23 2018 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Fri, 3 Aug 2018 19:38:23 +0000 Subject: [Starlingx-discuss] StarlingX API Documentation Message-ID: <9D4CA82D-7ACB-45D3-88B1-062C5F1A2F17@windriver.com> Hi Abraham; Thanks for kicking this off. On 2018-08-03, 12:40 PM, "Arce Moreno, Abraham" wrote: A new goal in collaboration with our Tech Writing team is to document StarlingX APIs, so we did an initial research on what it means for StarlingX so your feedback is highly appreciated. [ OpenStack :: API ] For this activity we are initially be considering from API Documentation 2 separate efforts for each project: - API Guide .. the concepts in the API - API Ref .. a reference for the API Can we prioritize one over the other? We should do the concepts and the ref at the same time. The new OpenStack approach allows for tags to go in the code. Let's start with this work. [ StarlingX :: API ] It seems we can categorize the StarlingX APIs in 2: - Brand New APIs from StarlingX projects - Existing APIs from OpenStack projects StarlingX should not document other OpenStack API's, would their documentation not the source of truth? [ StarlingX :: API :: Brand New ] The projects falling into this category are the following: - [0] NFVI Orchestration - [1] High Availability/Process Monitoring/Service Management - [2] StarlingX System Configuration Management - [3] Horizon plugins for new StarlingX services - [4] Installation/Update/Patching/Backup/Restore Can we considered all the above to be included in this API documentation effort? Are we missing any other? All projects in the Flock should be included. I think there is a dependency on some of the code restructuring activities that are underway, we need to make sure these activities don't collide. Ian [ StarlingX :: API :: Existing ] All projects living under our starlingx-staging github organization [5] with upstream contributions [6] e.g. horizon, ceilometer, etc. We have not gone through a deeper review if we are modifying/adding new calls into the OpenStack projects however if we are and we need to document them: - There is official OpenStack API documentation, we can make references to them for the existing calls - What about the modifications/additions? Should we document them? What is the best place for this? We were talking in our weekly call about stx-docs is a good place for things without a repo, is this a good example? - Any easy way besides "find + grep" to get where those API modifications are happening? [ StarlingX :: API :: Unit Tests] OpenStack projects includes Unit Tests. Is this something we also need to consider for our StarlingX Bran New APIs? [0] http://git.openstack.org/cgit/openstack/stx-nfv/ [1] http://git.openstack.org/cgit/openstack/stx-ha/ [2] http://git.openstack.org/cgit/openstack/stx-config/ [3] http://git.openstack.org/cgit/openstack/stx-gui/ [4] http://git.openstack.org/cgit/openstack/stx-update/ [5] https://github.com/starlingx-staging [6] http://git.openstack.org/cgit/openstack/stx-upstream/tree/openstack _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Fri Aug 3 20:40:02 2018 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 3 Aug 2018 13:40:02 -0700 Subject: [Starlingx-discuss] Next round of failed packages Message-ID: I reset my environment this morning, maybe I got caught in the middle of things, here is the list of missing packages from today: cppcheck-1.80-1.el7.x86_64.rpm ima-evm-utils-1.0-1.el7.x86_64.rpm ima-evm-utils-devel-1.0-1.el7.x86_64.rpm scapy-2.3.3-1.el7.src.rpm I think I have seen these fail before, I am not behind a proxy or firewall. Sau! From scott.little at windriver.com Fri Aug 3 20:44:31 2018 From: scott.little at windriver.com (Scott Little) Date: Fri, 3 Aug 2018 16:44:31 -0400 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> Message-ID: build-iso failure.  Looks like we still have a unexpected dependency on seabios from qemu.   Trying to figure out if we missed a qemu update, or need to restore seabios temporarily ... Scott On 18-08-03 02:18 PM, Scott Little wrote: > Remaining reviews are now available: > > stx-manifests: https://review.openstack.org/588633 Remove empty repos > stx-gplv2 and stx-gplv3 > stx-manifests: https://review.openstack.org/588634 Remove empty repo > stx-utils > stx-root: https://review.openstack.org/588635 Remove empty repo stx-utils > > > On 18-08-01 05:03 PM, Scott Little wrote: >> 99% of the reviews are now available.  I've held back the manifest >> changes for tomorrow. >> >> The relocation updates come in sets for each package, that attempt to >> preserve the update history found at the original location.  One >> update removes a package from stx-utils, stx-gplv2, stx-gplv3.  A >> second adds it to stx-integ or stx-updates in it's StarlingX day zero >> form (author changes from Dean to Me).  Then there may be 0-N updates >> replaying the subsequent commit history of that package (author and >> commit text preserved).  Finally there might be a follow up commit by >> me to fix a build path.  The final result is a glorified 'mv' >> operation.  The content should be unchanged, So all the code has been >> reviewed before. >> >> Reviews should focus one subject only, was the move executed correctly? >> >> Please do not workflow +1!    I couldn't get the scripts to manage >> Depends-On relationships satisfactorily, so I'll hand manage it tomorrow. >> >> Scott >> >> >> >> >> On 18-07-31 11:26 AM, Scott Little wrote: >>> Revised timeline is August 1 or 2. >>> >>> Scott >>> >>> >>> On 18-07-17 11:07 AM, Scott Little wrote: >>>> >>>> Story: https://storyboard.openstack.org/#!/story/2002801 >>>> >>>> *Goals:* >>>> >>>> 1) Consolidate the following repo’s under stx-integ. >>>> • stx-gplv2 >>>> • stx-gplv3 >>>> • stx-utils >>>> >>>> 2) Restructure the directories under which packages are to be found. >>>> >>>> Currently stx-gplv2/3 are largely without structure. Parts of the >>>> stx-integ structure were inherited from WRLinux and make little >>>> sense.  stx-utils is just i mess of stuff that never found a home >>>> when StarlingX was first set up. >>>> >>>> Directories should descriptive of the class of packages to be found >>>> within. >>>> >>>> Intent is to preserve update history as best is is possible. >>>> >>>> >>>> *Timeline: * >>>> >>>> Probably around July 23 unless there are strong objections.  We >>>> should probably have a freeze on submissions to the affected repos >>>> until it is all completed. >>>> >>>> >>>> *Code Reviews: * >>>> >>>> Most of this is just moving code around.  A few path corrections, >>>> but no new code.  The number and size of the reviews will be huge, >>>> and the code should all have been inspected once before.  Is there >>>> a way to fast track this?  Would there be strong objections to me >>>> just doing a +2/+1 without waiting for independent review? >>>> >>>> >>>> *Details of directories/groups ...* >>>> >>>> >>>> Create new directories under stx-integ (logical groupings for files): >>>>    ceph >>>>    config >>>>    config-files >>>>    database >>>>    filesystem >>>>    filesystem/drbd >>>>    grub >>>>    kernel >>>>    kernel/kernel-modules >>>>    ldap >>>>    logging >>>>    strorage-drivers >>>>    tools >>>>    utilities >>>>    virt >>>> >>>> Retained directories under stx-integ (additional logical groupings >>>> for files): >>>>    base >>>>    mellanox >>>>    monitoring >>>>    networking >>>>    python >>>>    restapi-doc >>>>    security >>>> >>>> Retire directories under stx-integ (non-descriptive or ambiguous >>>> grouping we will retire): >>>>    connectivity >>>>    core >>>>    devtools >>>>    extended >>>>    support >>>> >>>> >>>> *Details of packages ...* >>>> >>>> Relocated packages (internal to stx-integ): >>>>    base/ >>>>       dhcp >>>>       initscripts >>>>       libevent >>>>       lighttpd >>>>       memcached >>>>       net-snmp >>>>       novnc >>>>       ntp >>>>       openssh >>>>       pam >>>>       procps >>>>       sanlock >>>>       shadow >>>>       sudo >>>>       systemd >>>>       util-linux >>>>       vim >>>>       watchdog >>>> >>>>    ceph/ >>>>       python-cephclient >>>> >>>>    config/ >>>>       e2fsprogs >>>>       facter >>>>       nfs-utils >>>>       nfscheck >>>>       puppet-4.8.2 >>>>       puppet-modules >>>> >>>>    kernel/ >>>>       kernel-std >>>>       kernel-rt >>>> >>>>    kernel/kernel-modules/ >>>>       mlnx-ofa_kernel >>>> >>>>    ldap/ >>>>       nss-pam-ldapd >>>>       openldap >>>> >>>>    logging/ >>>>       syslog-ng >>>>       logrotate >>>> >>>>    networking/ >>>>       lldpd >>>>       iproute >>>>       mellanox >>>>       python-ryu >>>>       mlx4-config >>>> >>>>    python/ >>>>       python-2.7.5 >>>>       python-django >>>>       python-gunicorn >>>>       python-setuptools >>>>       python-smartpm >>>> >>>>    security/ >>>>       shim-signed >>>>       shim-unsigned >>>>       tboot >>>> >>>>    strorage-drivers/ >>>>       python-3parclient >>>>       python-lefthandclient >>>> >>>>    virt/ >>>>       cloud-init >>>>       libvirt >>>>       libvirt-python >>>>       qemu >>>> >>>>    tools/ >>>>       storage-topology >>>>       vm-topology >>>> >>>>    utilities/ >>>>       tis-extensions >>>>       namespace-utils >>>>       nova-utils >>>>       update-motd >>>> >>>> >>>> >>>> Relocated packages (stx-utils to stx-update): >>>>     enable-dev-patch >>>> >>>> >>>> >>>> Relocated packages (stx-utils to stx-integ): >>>> >>>>     config-files/ >>>>         io-scheduler >>>> >>>>     filesystem/ >>>>         filesystem-scripts >>>> >>>>     grub/ >>>>         grubby >>>> >>>>     logging/ >>>>         logmgmt >>>> >>>>     tools/ >>>>         collector >>>>         monitor-tools >>>> >>>>     tools/engtools/ >>>>         hostdata-collectors >>>>         parsers >>>> >>>>     utilities/ >>>>         build-info >>>>         branding   (formerly wrs-branding) >>>>         platform-util >>>> >>>> >>>> >>>> Relocated packages (stx-gpl2 to stx-integ): >>>>     base/ >>>>         bash >>>>         cgcs-users >>>>         cluster-resource-agents >>>>         dpkg >>>>         haproxy >>>>         libfdt >>>>         netpbm >>>>         rpm >>>> >>>>     database/ >>>>         mariadb >>>> >>>>     filesystem/ >>>>         iscsi-initiator-utils >>>> >>>>     filesystem/drbd/ >>>>         drbd-tools >>>> >>>>     kernel/kernel-modules/ >>>>         drbd >>>>         integrity >>>>         intel-e1000e >>>>         intel-i40e >>>>         intel-i40evf >>>>         intel-ixgbe >>>>         intel-ixgbevf >>>>         qat17 >>>>         tpmdd >>>> >>>>     ldap/ >>>>         ldapscripts >>>> >>>>     networking/ >>>>         iptables >>>>         net-tools >>>> >>>> >>>> >>>> Relocated packages (stx-gpl3 to stx-integ): >>>>     base/ >>>>         anaconda >>>>         crontabs >>>>         dnsmasq >>>>         rsync >>>> >>>>     database/ >>>>         python-psycopg2 >>>> >>>>     filesystem/ >>>>         parted >>>> >>>>     grub/ >>>>         grub2 >>>> >>>>     security/ >>>>         python-keyring >>>> >>>> >>>> >>>> Delete two packages from stx-integ: >>>>    tgt >>>>    irqbalance >>>> >>>> Delete two packages from stx-gplv3: >>>>    seabios >>>>    sysvinit >>>> >>>> Delete one package from stx-utils: >>>>    io-monitor >>>> >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Fri Aug 3 20:50:24 2018 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 3 Aug 2018 20:50:24 +0000 Subject: [Starlingx-discuss] Next round of failed packages In-Reply-To: References: Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA3332DB@ALA-MBD.corp.ad.wrs.com> Yeah, I hit those earlier in the week. Cindy created stories to deal with them, but I haven't seen any updates: https://storyboard.openstack.org/#!/story/2003173 https://storyboard.openstack.org/#!/story/2003174 -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Friday, August 03, 2018 4:40 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Next round of failed packages I reset my environment this morning, maybe I got caught in the middle of things, here is the list of missing packages from today: cppcheck-1.80-1.el7.x86_64.rpm ima-evm-utils-1.0-1.el7.x86_64.rpm ima-evm-utils-devel-1.0-1.el7.x86_64.rpm scapy-2.3.3-1.el7.src.rpm I think I have seen these fail before, I am not behind a proxy or firewall. Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Fri Aug 3 20:55:17 2018 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 3 Aug 2018 20:55:17 +0000 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA3332EB@ALA-MBD.corp.ad.wrs.com> My repoquery isn’t showing anything that requires seabios: [dpenney at yow-dpenney-lx-vm1 /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath tis,$MY_WORKSPACE/std/rpmbuild/RPMS --whatrequires seabios [dpenney at yow-dpenney-lx-vm1 /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath tis,$MY_WORKSPACE/std/rpmbuild/RPMS --whatrequires seabios [dpenney at yow-dpenney-lx-vm1 /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath tis,$MY_WORKSPACE/std/rpmbuild/RPMS --provides seabios seabios = 1.11.0-2.el7 seabios(x86-64) = 1.11.0-2.el7 From: Scott Little [mailto:scott.little at windriver.com] Sent: Friday, August 03, 2018 4:45 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Restructuring round 2 build-iso failure. Looks like we still have a unexpected dependency on seabios from qemu. Trying to figure out if we missed a qemu update, or need to restore seabios temporarily ... Scott On 18-08-03 02:18 PM, Scott Little wrote: Remaining reviews are now available: stx-manifests: https://review.openstack.org/588633 Remove empty repos stx-gplv2 and stx-gplv3 stx-manifests: https://review.openstack.org/588634 Remove empty repo stx-utils stx-root: https://review.openstack.org/588635 Remove empty repo stx-utils On 18-08-01 05:03 PM, Scott Little wrote: 99% of the reviews are now available. I've held back the manifest changes for tomorrow. The relocation updates come in sets for each package, that attempt to preserve the update history found at the original location. One update removes a package from stx-utils, stx-gplv2, stx-gplv3. A second adds it to stx-integ or stx-updates in it's StarlingX day zero form (author changes from Dean to Me). Then there may be 0-N updates replaying the subsequent commit history of that package (author and commit text preserved). Finally there might be a follow up commit by me to fix a build path. The final result is a glorified 'mv' operation. The content should be unchanged, So all the code has been reviewed before. Reviews should focus one subject only, was the move executed correctly? Please do not workflow +1! I couldn't get the scripts to manage Depends-On relationships satisfactorily, so I'll hand manage it tomorrow. Scott On 18-07-31 11:26 AM, Scott Little wrote: Revised timeline is August 1 or 2. Scott On 18-07-17 11:07 AM, Scott Little wrote: Story: https://storyboard.openstack.org/#!/story/2002801 Goals: 1) Consolidate the following repo’s under stx-integ. • stx-gplv2 • stx-gplv3 • stx-utils 2) Restructure the directories under which packages are to be found. Currently stx-gplv2/3 are largely without structure. Parts of the stx-integ structure were inherited from WRLinux and make little sense. stx-utils is just i mess of stuff that never found a home when StarlingX was first set up. Directories should descriptive of the class of packages to be found within. Intent is to preserve update history as best is is possible. Timeline: Probably around July 23 unless there are strong objections. We should probably have a freeze on submissions to the affected repos until it is all completed. Code Reviews: Most of this is just moving code around. A few path corrections, but no new code. The number and size of the reviews will be huge, and the code should all have been inspected once before. Is there a way to fast track this? Would there be strong objections to me just doing a +2/+1 without waiting for independent review? Details of directories/groups ... Create new directories under stx-integ (logical groupings for files): ceph config config-files database filesystem filesystem/drbd grub kernel kernel/kernel-modules ldap logging strorage-drivers tools utilities virt Retained directories under stx-integ (additional logical groupings for files): base mellanox monitoring networking python restapi-doc security Retire directories under stx-integ (non-descriptive or ambiguous grouping we will retire): connectivity core devtools extended support Details of packages ... Relocated packages (internal to stx-integ): base/ dhcp initscripts libevent lighttpd memcached net-snmp novnc ntp openssh pam procps sanlock shadow sudo systemd util-linux vim watchdog ceph/ python-cephclient config/ e2fsprogs facter nfs-utils nfscheck puppet-4.8.2 puppet-modules kernel/ kernel-std kernel-rt kernel/kernel-modules/ mlnx-ofa_kernel ldap/ nss-pam-ldapd openldap logging/ syslog-ng logrotate networking/ lldpd iproute mellanox python-ryu mlx4-config python/ python-2.7.5 python-django python-gunicorn python-setuptools python-smartpm security/ shim-signed shim-unsigned tboot strorage-drivers/ python-3parclient python-lefthandclient virt/ cloud-init libvirt libvirt-python qemu tools/ storage-topology vm-topology utilities/ tis-extensions namespace-utils nova-utils update-motd Relocated packages (stx-utils to stx-update): enable-dev-patch Relocated packages (stx-utils to stx-integ): config-files/ io-scheduler filesystem/ filesystem-scripts grub/ grubby logging/ logmgmt tools/ collector monitor-tools tools/engtools/ hostdata-collectors parsers utilities/ build-info branding (formerly wrs-branding) platform-util Relocated packages (stx-gpl2 to stx-integ): base/ bash cgcs-users cluster-resource-agents dpkg haproxy libfdt netpbm rpm database/ mariadb filesystem/ iscsi-initiator-utils filesystem/drbd/ drbd-tools kernel/kernel-modules/ drbd integrity intel-e1000e intel-i40e intel-i40evf intel-ixgbe intel-ixgbevf qat17 tpmdd ldap/ ldapscripts networking/ iptables net-tools Relocated packages (stx-gpl3 to stx-integ): base/ anaconda crontabs dnsmasq rsync database/ python-psycopg2 filesystem/ parted grub/ grub2 security/ python-keyring Delete two packages from stx-integ: tgt irqbalance Delete two packages from stx-gplv3: seabios sysvinit Delete one package from stx-utils: io-monitor _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Fri Aug 3 21:06:07 2018 From: scott.little at windriver.com (Scott Little) Date: Fri, 3 Aug 2018 17:06:07 -0400 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA333306@ALA-MBD.corp.ad.wrs.com> References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3332EB@ALA-MBD.corp.ad.wrs.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA333306@ALA-MBD.corp.ad.wrs.com> Message-ID: <11acab4e-0887-41e4-e801-da8edebc6691@windriver.com> I think adding seavgabios-bin-1.11.0-2.el7.noarch.rpm and seabios-bin-1.11.0-2.el7.noarch.rpm to the .lst files will resolve it. Scott On 18-08-03 05:04 PM, Penney, Don wrote: > > seabios-bin, however… > > qemu-kvm-ev-10:2.10.0-0.tis.0.x86_64 > > seabios-0:1.10.2-3.el7_4.1.tis.2.x86_64 > > swtpm-0:0.1.0-2.tis.0.x86_64 > > *From:*Penney, Don [mailto:Don.Penney at windriver.com] > *Sent:* Friday, August 03, 2018 4:55 PM > *To:* Little, Scott; starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Restructuring round 2 > > My repoquery isn’t showing anything that requires seabios: > > [dpenney at yow-dpenney-lx-vm1 > /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery > --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath > tis,$MY_WORKSPACE/std/rpmbuild/RPMS --whatrequires seabios > > [dpenney at yow-dpenney-lx-vm1 > /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery > --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath > tis,$MY_WORKSPACE/std/rpmbuild/RPMS --whatrequires seabios > > [dpenney at yow-dpenney-lx-vm1 > /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery > --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath > tis,$MY_WORKSPACE/std/rpmbuild/RPMS --provides seabios > > seabios = 1.11.0-2.el7 > > seabios(x86-64) = 1.11.0-2.el7 > > *From:*Scott Little [mailto:scott.little at windriver.com] > *Sent:* Friday, August 03, 2018 4:45 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Restructuring round 2 > > build-iso failure.  Looks like we still have a unexpected dependency > on seabios from qemu.   Trying to figure out if we missed a qemu > update, or need to restore seabios temporarily ... > > Scott > > > On 18-08-03 02:18 PM, Scott Little wrote: > > Remaining reviews are now available: > > stx-manifests: https://review.openstack.org/588633 Remove empty > repos stx-gplv2 and stx-gplv3 > stx-manifests: https://review.openstack.org/588634 Remove empty > repo stx-utils > stx-root: https://review.openstack.org/588635 Remove empty repo > stx-utils > > > On 18-08-01 05:03 PM, Scott Little wrote: > > 99% of the reviews are now available.  I've held back the > manifest changes for tomorrow. > > The relocation updates come in sets for each package, that > attempt to preserve the update history found at the original > location.  One update removes a package from stx-utils, > stx-gplv2, stx-gplv3.  A second adds it to stx-integ or > stx-updates in it's StarlingX day zero form (author changes > from Dean to Me).  Then there may be 0-N updates replaying the > subsequent commit history of that package (author and commit > text preserved).  Finally there might be a follow up commit by > me to fix a build path.  The final result is a glorified 'mv' > operation. The content should be unchanged, So all the code > has been reviewed before. > > Reviews should focus one subject only, was the move executed > correctly? > > Please do not workflow +1!    I couldn't get the scripts to > manage Depends-On relationships satisfactorily, so I'll hand > manage it tomorrow. > > Scott > > > > > On 18-07-31 11:26 AM, Scott Little wrote: > > Revised timeline is August 1 or 2. > > Scott > > > On 18-07-17 11:07 AM, Scott Little wrote: > > Story: > https://storyboard.openstack.org/#!/story/2002801 > > > *Goals:* > > 1) Consolidate the following repo’s under stx-integ. > • stx-gplv2 > • stx-gplv3 > • stx-utils > > 2) Restructure the directories under which packages > are to be found. > > Currently stx-gplv2/3 are largely without structure. > Parts of the stx-integ structure were inherited from > WRLinux and make little sense.  stx-utils is just i > mess of stuff that never found a home when StarlingX > was first set up. > > Directories should descriptive of the class of > packages to be found within. > > Intent is to preserve update history as best is is > possible. > > *Timeline: * > > Probably around July 23 unless there are strong > objections.  We should probably have a freeze on > submissions to the affected repos until it is all > completed. > > *Code Reviews: * > > Most of this is just moving code around.  A few path > corrections, but no new code.  The number and size of > the reviews will be huge, and the code should all have > been inspected once before.  Is there a way to fast > track this?  Would there be strong objections to me > just doing a +2/+1 without waiting for independent review? > > *Details of directories/groups ...* > > Create new directories under stx-integ (logical > groupings for files): >    ceph >    config >    config-files >    database >    filesystem >    filesystem/drbd >    grub >    kernel >    kernel/kernel-modules >    ldap >    logging >    strorage-drivers >    tools >    utilities >    virt > > Retained directories under stx-integ (additional > logical groupings for files): >    base >    mellanox >    monitoring >    networking >    python >    restapi-doc >    security > > Retire directories under stx-integ (non-descriptive or > ambiguous grouping we will retire): >    connectivity >    core >    devtools >    extended >    support > > *Details of packages ...* > > Relocated packages (internal to stx-integ): >    base/ >       dhcp >       initscripts >       libevent >       lighttpd >       memcached >       net-snmp >       novnc >       ntp >       openssh >       pam >       procps >       sanlock >       shadow >       sudo >       systemd >       util-linux >       vim >       watchdog > >    ceph/ >       python-cephclient > >    config/ >       e2fsprogs >       facter >       nfs-utils >       nfscheck >       puppet-4.8.2 >       puppet-modules > >    kernel/ >       kernel-std >       kernel-rt > >    kernel/kernel-modules/ >       mlnx-ofa_kernel > >    ldap/ >       nss-pam-ldapd >       openldap > >    logging/ >       syslog-ng >       logrotate > >    networking/ >       lldpd >       iproute >       mellanox >       python-ryu >       mlx4-config > >    python/ >       python-2.7.5 >       python-django >       python-gunicorn >       python-setuptools >       python-smartpm > >    security/ >       shim-signed >       shim-unsigned >       tboot > >    strorage-drivers/ >       python-3parclient >       python-lefthandclient > >    virt/ >       cloud-init >       libvirt >       libvirt-python >       qemu > >    tools/ >       storage-topology >       vm-topology > >    utilities/ >       tis-extensions >       namespace-utils >       nova-utils >       update-motd > > > > Relocated packages (stx-utils to stx-update): >     enable-dev-patch > > > Relocated packages (stx-utils to stx-integ): > >     config-files/ >         io-scheduler > >     filesystem/ >         filesystem-scripts > >     grub/ >         grubby > >     logging/ >         logmgmt > >     tools/ >         collector >         monitor-tools > >     tools/engtools/ >         hostdata-collectors >         parsers > >     utilities/ >         build-info >         branding   (formerly wrs-branding) >         platform-util > > > Relocated packages (stx-gpl2 to stx-integ): >     base/ >         bash >         cgcs-users >         cluster-resource-agents >         dpkg >         haproxy >         libfdt >         netpbm >         rpm > >     database/ >         mariadb > >     filesystem/ >         iscsi-initiator-utils > >     filesystem/drbd/ >         drbd-tools > >     kernel/kernel-modules/ >         drbd >         integrity >         intel-e1000e >         intel-i40e >         intel-i40evf >         intel-ixgbe >         intel-ixgbevf >         qat17 >         tpmdd > >     ldap/ >         ldapscripts > >     networking/ >         iptables >         net-tools > > > Relocated packages (stx-gpl3 to stx-integ): >     base/ >         anaconda >         crontabs >         dnsmasq >         rsync > >     database/ >         python-psycopg2 > >     filesystem/ >         parted > >     grub/ >         grub2 > >     security/ >         python-keyring > > > Delete two packages from stx-integ: >    tgt >    irqbalance > > Delete two packages from stx-gplv3: >    seabios >    sysvinit > > Delete one package from stx-utils: >    io-monitor > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Fri Aug 3 21:04:49 2018 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 3 Aug 2018 21:04:49 +0000 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA3332EB@ALA-MBD.corp.ad.wrs.com> References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3332EB@ALA-MBD.corp.ad.wrs.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA333306@ALA-MBD.corp.ad.wrs.com> seabios-bin, however… qemu-kvm-ev-10:2.10.0-0.tis.0.x86_64 seabios-0:1.10.2-3.el7_4.1.tis.2.x86_64 swtpm-0:0.1.0-2.tis.0.x86_64 From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Friday, August 03, 2018 4:55 PM To: Little, Scott; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Restructuring round 2 My repoquery isn’t showing anything that requires seabios: [dpenney at yow-dpenney-lx-vm1 /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath tis,$MY_WORKSPACE/std/rpmbuild/RPMS --whatrequires seabios [dpenney at yow-dpenney-lx-vm1 /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath tis,$MY_WORKSPACE/std/rpmbuild/RPMS --whatrequires seabios [dpenney at yow-dpenney-lx-vm1 /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath tis,$MY_WORKSPACE/std/rpmbuild/RPMS --provides seabios seabios = 1.11.0-2.el7 seabios(x86-64) = 1.11.0-2.el7 From: Scott Little [mailto:scott.little at windriver.com] Sent: Friday, August 03, 2018 4:45 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Restructuring round 2 build-iso failure. Looks like we still have a unexpected dependency on seabios from qemu. Trying to figure out if we missed a qemu update, or need to restore seabios temporarily ... Scott On 18-08-03 02:18 PM, Scott Little wrote: Remaining reviews are now available: stx-manifests: https://review.openstack.org/588633 Remove empty repos stx-gplv2 and stx-gplv3 stx-manifests: https://review.openstack.org/588634 Remove empty repo stx-utils stx-root: https://review.openstack.org/588635 Remove empty repo stx-utils On 18-08-01 05:03 PM, Scott Little wrote: 99% of the reviews are now available. I've held back the manifest changes for tomorrow. The relocation updates come in sets for each package, that attempt to preserve the update history found at the original location. One update removes a package from stx-utils, stx-gplv2, stx-gplv3. A second adds it to stx-integ or stx-updates in it's StarlingX day zero form (author changes from Dean to Me). Then there may be 0-N updates replaying the subsequent commit history of that package (author and commit text preserved). Finally there might be a follow up commit by me to fix a build path. The final result is a glorified 'mv' operation. The content should be unchanged, So all the code has been reviewed before. Reviews should focus one subject only, was the move executed correctly? Please do not workflow +1! I couldn't get the scripts to manage Depends-On relationships satisfactorily, so I'll hand manage it tomorrow. Scott On 18-07-31 11:26 AM, Scott Little wrote: Revised timeline is August 1 or 2. Scott On 18-07-17 11:07 AM, Scott Little wrote: Story: https://storyboard.openstack.org/#!/story/2002801 Goals: 1) Consolidate the following repo’s under stx-integ. • stx-gplv2 • stx-gplv3 • stx-utils 2) Restructure the directories under which packages are to be found. Currently stx-gplv2/3 are largely without structure. Parts of the stx-integ structure were inherited from WRLinux and make little sense. stx-utils is just i mess of stuff that never found a home when StarlingX was first set up. Directories should descriptive of the class of packages to be found within. Intent is to preserve update history as best is is possible. Timeline: Probably around July 23 unless there are strong objections. We should probably have a freeze on submissions to the affected repos until it is all completed. Code Reviews: Most of this is just moving code around. A few path corrections, but no new code. The number and size of the reviews will be huge, and the code should all have been inspected once before. Is there a way to fast track this? Would there be strong objections to me just doing a +2/+1 without waiting for independent review? Details of directories/groups ... Create new directories under stx-integ (logical groupings for files): ceph config config-files database filesystem filesystem/drbd grub kernel kernel/kernel-modules ldap logging strorage-drivers tools utilities virt Retained directories under stx-integ (additional logical groupings for files): base mellanox monitoring networking python restapi-doc security Retire directories under stx-integ (non-descriptive or ambiguous grouping we will retire): connectivity core devtools extended support Details of packages ... Relocated packages (internal to stx-integ): base/ dhcp initscripts libevent lighttpd memcached net-snmp novnc ntp openssh pam procps sanlock shadow sudo systemd util-linux vim watchdog ceph/ python-cephclient config/ e2fsprogs facter nfs-utils nfscheck puppet-4.8.2 puppet-modules kernel/ kernel-std kernel-rt kernel/kernel-modules/ mlnx-ofa_kernel ldap/ nss-pam-ldapd openldap logging/ syslog-ng logrotate networking/ lldpd iproute mellanox python-ryu mlx4-config python/ python-2.7.5 python-django python-gunicorn python-setuptools python-smartpm security/ shim-signed shim-unsigned tboot strorage-drivers/ python-3parclient python-lefthandclient virt/ cloud-init libvirt libvirt-python qemu tools/ storage-topology vm-topology utilities/ tis-extensions namespace-utils nova-utils update-motd Relocated packages (stx-utils to stx-update): enable-dev-patch Relocated packages (stx-utils to stx-integ): config-files/ io-scheduler filesystem/ filesystem-scripts grub/ grubby logging/ logmgmt tools/ collector monitor-tools tools/engtools/ hostdata-collectors parsers utilities/ build-info branding (formerly wrs-branding) platform-util Relocated packages (stx-gpl2 to stx-integ): base/ bash cgcs-users cluster-resource-agents dpkg haproxy libfdt netpbm rpm database/ mariadb filesystem/ iscsi-initiator-utils filesystem/drbd/ drbd-tools kernel/kernel-modules/ drbd integrity intel-e1000e intel-i40e intel-i40evf intel-ixgbe intel-ixgbevf qat17 tpmdd ldap/ ldapscripts networking/ iptables net-tools Relocated packages (stx-gpl3 to stx-integ): base/ anaconda crontabs dnsmasq rsync database/ python-psycopg2 filesystem/ parted grub/ grub2 security/ python-keyring Delete two packages from stx-integ: tgt irqbalance Delete two packages from stx-gplv3: seabios sysvinit Delete one package from stx-utils: io-monitor _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Fri Aug 3 21:18:37 2018 From: scott.little at windriver.com (Scott Little) Date: Fri, 3 Aug 2018 17:18:37 -0400 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: <11acab4e-0887-41e4-e801-da8edebc6691@windriver.com> References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3332EB@ALA-MBD.corp.ad.wrs.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA333306@ALA-MBD.corp.ad.wrs.com> <11acab4e-0887-41e4-e801-da8edebc6691@windriver.com> Message-ID: <4c6551ed-7132-e468-a522-7bd6d39ad876@windriver.com> Yep, that's got it.   Review is up. Scott On 18-08-03 05:06 PM, Scott Little wrote: > I think adding seavgabios-bin-1.11.0-2.el7.noarch.rpm > > and seabios-bin-1.11.0-2.el7.noarch.rpm > > to the .lst files will resolve it. > > Scott > > > > On 18-08-03 05:04 PM, Penney, Don wrote: >> >> seabios-bin, however… >> >> qemu-kvm-ev-10:2.10.0-0.tis.0.x86_64 >> >> seabios-0:1.10.2-3.el7_4.1.tis.2.x86_64 >> >> swtpm-0:0.1.0-2.tis.0.x86_64 >> >> *From:*Penney, Don [mailto:Don.Penney at windriver.com] >> *Sent:* Friday, August 03, 2018 4:55 PM >> *To:* Little, Scott; starlingx-discuss at lists.starlingx.io >> *Subject:* Re: [Starlingx-discuss] Restructuring round 2 >> >> My repoquery isn’t showing anything that requires seabios: >> >> [dpenney at yow-dpenney-lx-vm1 >> /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery >> --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath >> tis,$MY_WORKSPACE/std/rpmbuild/RPMS --whatrequires seabios >> >> [dpenney at yow-dpenney-lx-vm1 >> /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery >> --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath >> tis,$MY_WORKSPACE/std/rpmbuild/RPMS --whatrequires seabios >> >> [dpenney at yow-dpenney-lx-vm1 >> /localdisk/designer/dpenney/starlingx-3/cgcs-root]$ repoquery >> --repofrompath cgcs,$MY_REPO/cgcs-centos-repo/Binary --repofrompath >> tis,$MY_WORKSPACE/std/rpmbuild/RPMS --provides seabios >> >> seabios = 1.11.0-2.el7 >> >> seabios(x86-64) = 1.11.0-2.el7 >> >> *From:*Scott Little [mailto:scott.little at windriver.com] >> *Sent:* Friday, August 03, 2018 4:45 PM >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* Re: [Starlingx-discuss] Restructuring round 2 >> >> build-iso failure.  Looks like we still have a unexpected dependency >> on seabios from qemu. Trying to figure out if we missed a qemu >> update, or need to restore seabios temporarily ... >> >> Scott >> >> >> On 18-08-03 02:18 PM, Scott Little wrote: >> >> Remaining reviews are now available: >> >> stx-manifests: https://review.openstack.org/588633 Remove empty >> repos stx-gplv2 and stx-gplv3 >> stx-manifests: https://review.openstack.org/588634 Remove empty >> repo stx-utils >> stx-root: https://review.openstack.org/588635 Remove empty repo >> stx-utils >> >> >> On 18-08-01 05:03 PM, Scott Little wrote: >> >> 99% of the reviews are now available.  I've held back the >> manifest changes for tomorrow. >> >> The relocation updates come in sets for each package, that >> attempt to preserve the update history found at the original >> location.  One update removes a package from stx-utils, >> stx-gplv2, stx-gplv3.  A second adds it to stx-integ or >> stx-updates in it's StarlingX day zero form (author changes >> from Dean to Me).  Then there may be 0-N updates replaying >> the subsequent commit history of that package (author and >> commit text preserved).  Finally there might be a follow up >> commit by me to fix a build path.  The final result is a >> glorified 'mv' operation.  The content should be unchanged, >> So all the code has been reviewed before. >> >> Reviews should focus one subject only, was the move executed >> correctly? >> >> Please do not workflow +1!    I couldn't get the scripts to >> manage Depends-On relationships satisfactorily, so I'll hand >> manage it tomorrow. >> >> Scott >> >> >> >> >> On 18-07-31 11:26 AM, Scott Little wrote: >> >> Revised timeline is August 1 or 2. >> >> Scott >> >> >> On 18-07-17 11:07 AM, Scott Little wrote: >> >> Story: >> https://storyboard.openstack.org/#!/story/2002801 >> >> >> *Goals:* >> >> 1) Consolidate the following repo’s under stx-integ. >> • stx-gplv2 >> • stx-gplv3 >> • stx-utils >> >> 2) Restructure the directories under which packages >> are to be found. >> >> Currently stx-gplv2/3 are largely without structure. >> Parts of the stx-integ structure were inherited from >> WRLinux and make little sense. stx-utils is just i >> mess of stuff that never found a home when StarlingX >> was first set up. >> >> Directories should descriptive of the class of >> packages to be found within. >> >> Intent is to preserve update history as best is is >> possible. >> >> *Timeline: * >> >> Probably around July 23 unless there are strong >> objections.  We should probably have a freeze on >> submissions to the affected repos until it is all >> completed. >> >> *Code Reviews: * >> >> Most of this is just moving code around.  A few path >> corrections, but no new code.  The number and size of >> the reviews will be huge, and the code should all >> have been inspected once before.  Is there a way to >> fast track this?  Would there be strong objections to >> me just doing a +2/+1 without waiting for independent >> review? >> >> *Details of directories/groups ...* >> >> Create new directories under stx-integ (logical >> groupings for files): >>    ceph >>    config >>    config-files >>    database >>    filesystem >>    filesystem/drbd >>    grub >>    kernel >>    kernel/kernel-modules >>    ldap >>    logging >>    strorage-drivers >>    tools >>    utilities >>    virt >> >> Retained directories under stx-integ (additional >> logical groupings for files): >>    base >>    mellanox >>    monitoring >>    networking >>    python >>    restapi-doc >>    security >> >> Retire directories under stx-integ (non-descriptive >> or ambiguous grouping we will retire): >>    connectivity >>    core >>    devtools >>    extended >>    support >> >> *Details of packages ...* >> >> Relocated packages (internal to stx-integ): >>    base/ >>       dhcp >>       initscripts >>       libevent >>       lighttpd >>       memcached >>       net-snmp >>       novnc >>       ntp >>       openssh >>       pam >>       procps >>       sanlock >>       shadow >>       sudo >>       systemd >>       util-linux >>       vim >>       watchdog >> >>    ceph/ >>       python-cephclient >> >>    config/ >>       e2fsprogs >>       facter >>       nfs-utils >>       nfscheck >>       puppet-4.8.2 >>       puppet-modules >> >>    kernel/ >>       kernel-std >>       kernel-rt >> >>    kernel/kernel-modules/ >>       mlnx-ofa_kernel >> >>    ldap/ >>       nss-pam-ldapd >>       openldap >> >>    logging/ >>       syslog-ng >>       logrotate >> >>    networking/ >>       lldpd >>       iproute >>       mellanox >>       python-ryu >>       mlx4-config >> >>    python/ >>       python-2.7.5 >>       python-django >>       python-gunicorn >>       python-setuptools >>       python-smartpm >> >>    security/ >>       shim-signed >>       shim-unsigned >>       tboot >> >>    strorage-drivers/ >>       python-3parclient >>       python-lefthandclient >> >>    virt/ >>       cloud-init >>       libvirt >>       libvirt-python >>       qemu >> >>    tools/ >>       storage-topology >>       vm-topology >> >>    utilities/ >>       tis-extensions >>       namespace-utils >>       nova-utils >>       update-motd >> >> >> >> Relocated packages (stx-utils to stx-update): >>     enable-dev-patch >> >> >> Relocated packages (stx-utils to stx-integ): >> >>     config-files/ >>         io-scheduler >> >>     filesystem/ >>         filesystem-scripts >> >>     grub/ >>         grubby >> >>     logging/ >>         logmgmt >> >>     tools/ >>         collector >>         monitor-tools >> >>     tools/engtools/ >>         hostdata-collectors >>         parsers >> >>     utilities/ >>         build-info >>         branding   (formerly wrs-branding) >>         platform-util >> >> >> Relocated packages (stx-gpl2 to stx-integ): >>     base/ >>         bash >>         cgcs-users >>         cluster-resource-agents >>         dpkg >>         haproxy >>         libfdt >>         netpbm >>         rpm >> >>     database/ >>         mariadb >> >>     filesystem/ >>         iscsi-initiator-utils >> >>     filesystem/drbd/ >>         drbd-tools >> >>     kernel/kernel-modules/ >>         drbd >>         integrity >>         intel-e1000e >>         intel-i40e >>         intel-i40evf >>         intel-ixgbe >>         intel-ixgbevf >>         qat17 >>         tpmdd >> >>     ldap/ >>         ldapscripts >> >>     networking/ >>         iptables >>         net-tools >> >> >> Relocated packages (stx-gpl3 to stx-integ): >>     base/ >>         anaconda >>         crontabs >>         dnsmasq >>         rsync >> >>     database/ >>         python-psycopg2 >> >>     filesystem/ >>         parted >> >>     grub/ >>         grub2 >> >>     security/ >>         python-keyring >> >> >> Delete two packages from stx-integ: >>    tgt >>    irqbalance >> >> Delete two packages from stx-gplv3: >>    seabios >>    sysvinit >> >> Delete one package from stx-utils: >>    io-monitor >> >> >> >> _______________________________________________ >> >> Starlingx-discuss mailing list >> >> Starlingx-discuss at lists.starlingx.io >> >> >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> _______________________________________________ >> >> Starlingx-discuss mailing list >> >> Starlingx-discuss at lists.starlingx.io >> >> >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> _______________________________________________ >> >> Starlingx-discuss mailing list >> >> Starlingx-discuss at lists.starlingx.io >> >> >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> _______________________________________________ >> >> Starlingx-discuss mailing list >> >> Starlingx-discuss at lists.starlingx.io >> >> >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Aug 3 22:41:41 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 3 Aug 2018 22:41:41 +0000 Subject: [Starlingx-discuss] StarlingX API Documentation In-Reply-To: <9D4CA82D-7ACB-45D3-88B1-062C5F1A2F17@windriver.com> References: <9D4CA82D-7ACB-45D3-88B1-062C5F1A2F17@windriver.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA40420C@ALA-MBD.corp.ad.wrs.com> Hi Abraham, (You may know this already) The StarlingX APIs (especially for sysinv) are currently documented at: https://git.openstack.org/cgit/openstack/stx-integ/tree/restapi-doc/restapi-doc You can use the content as a starting point. However, the mechanism used is outdated using maven and wadl files. So you need to use the more current approach. Greg Waines did some research on this. I strongly recommend you review with him when he's back from vacation (Tues Aug 7). Is this the story you are working on: https://storyboard.openstack.org/#!/story/2002712 ? If so, I'll add some of the details Greg has captured to the story. Regards, Ghada -----Original Message----- From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Friday, August 03, 2018 3:38 PM To: Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX API Documentation Hi Abraham; Thanks for kicking this off. On 2018-08-03, 12:40 PM, "Arce Moreno, Abraham" wrote: A new goal in collaboration with our Tech Writing team is to document StarlingX APIs, so we did an initial research on what it means for StarlingX so your feedback is highly appreciated. [ OpenStack :: API ] For this activity we are initially be considering from API Documentation 2 separate efforts for each project: - API Guide .. the concepts in the API - API Ref .. a reference for the API Can we prioritize one over the other? We should do the concepts and the ref at the same time. The new OpenStack approach allows for tags to go in the code. Let's start with this work. [ StarlingX :: API ] It seems we can categorize the StarlingX APIs in 2: - Brand New APIs from StarlingX projects - Existing APIs from OpenStack projects StarlingX should not document other OpenStack API's, would their documentation not the source of truth? [ StarlingX :: API :: Brand New ] The projects falling into this category are the following: - [0] NFVI Orchestration - [1] High Availability/Process Monitoring/Service Management - [2] StarlingX System Configuration Management - [3] Horizon plugins for new StarlingX services - [4] Installation/Update/Patching/Backup/Restore Can we considered all the above to be included in this API documentation effort? Are we missing any other? All projects in the Flock should be included. I think there is a dependency on some of the code restructuring activities that are underway, we need to make sure these activities don't collide. Ian [ StarlingX :: API :: Existing ] All projects living under our starlingx-staging github organization [5] with upstream contributions [6] e.g. horizon, ceilometer, etc. We have not gone through a deeper review if we are modifying/adding new calls into the OpenStack projects however if we are and we need to document them: - There is official OpenStack API documentation, we can make references to them for the existing calls - What about the modifications/additions? Should we document them? What is the best place for this? We were talking in our weekly call about stx-docs is a good place for things without a repo, is this a good example? - Any easy way besides "find + grep" to get where those API modifications are happening? [ StarlingX :: API :: Unit Tests] OpenStack projects includes Unit Tests. Is this something we also need to consider for our StarlingX Bran New APIs? [0] http://git.openstack.org/cgit/openstack/stx-nfv/ [1] http://git.openstack.org/cgit/openstack/stx-ha/ [2] http://git.openstack.org/cgit/openstack/stx-config/ [3] http://git.openstack.org/cgit/openstack/stx-gui/ [4] http://git.openstack.org/cgit/openstack/stx-update/ [5] https://github.com/starlingx-staging [6] http://git.openstack.org/cgit/openstack/stx-upstream/tree/openstack _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Fri Aug 3 23:23:05 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 3 Aug 2018 23:23:05 +0000 Subject: [Starlingx-discuss] New build story In-Reply-To: <9A85D2917C58154C960D95352B22818BAB56AFC4@fmsmsx117.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB56AFC4@fmsmsx117.amr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB56BBCB@fmsmsx117.amr.corp.intel.com> Here is yet another update on this topic. At our F2F meeting yesterday we agreed on a 3 step plan for addressing the build issues. Our short term solution is the story defined below. We want to change the build to allow each company (and maybe different geos within companies) to create a shared mirror. And change the build scripts to allow them to pick up and use that mirror. As soon as possible. Our medium term solution is to follow up on the idea suggested by Brent yesterday to create a build system where most of the RPM versions float. Only those RPMs we patch or modify need to be tied to specific versions. Long term, we are continuing to investigate options for hosting build artifacts for the project. There are many options that we will be reviewing with our executives for feedback and approval. Meanwhile, I'm very happy to hear that the Build team is already at work on the medium term solution, and I ask that they re-focus a bit on the short term solution so we can all get unblocked. brucej From: Jones, Bruce E Sent: Thursday, August 2, 2018 10:50 AM To: starlingx-discuss at lists.starlingx.io Subject: RE: New build story Update from Ottawa. Please hold off on this. The team has come up with what might be a better idea. From: Jones, Bruce E Sent: Thursday, August 2, 2018 8:00 AM To: starlingx-discuss at lists.starlingx.io Subject: New build story I just created a new story for the Build team to change the mirror download and build scripts to enable us to create and manage per-company, shared import mirrors. This should help insulate most developers from changes in upstream packages. We recommend building one shared mirror per company per geography. This is a short term band-aid while we figure out our long term build strategy. The story is https://storyboard.openstack.org/#!/story/2003288. Build team, please review and start work on this. Thanks! brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Fri Aug 3 23:29:04 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 3 Aug 2018 23:29:04 +0000 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B2ED6F0@SHSMSX104.ccr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F2B2ED6C6@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B2ED6F0@SHSMSX104.ccr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB56BBEE@fmsmsx117.amr.corp.intel.com> Ghada, thank you for sending out the mail I wanted to send while we were flying through the storms last night. The intent of the subteams is to make them cross-org, cross-geo and cross-company. The intent is to also provide clear ownership and responsibility. We discussed whether things like Zuul test support, Python3 support and similar efforts should treated as separate sub-projects with dedicated teams, or if instead each team should own addressing those issues in their own components. The clear consensus was that each team should own their own Zuul test content, Python3 compatibility and (of course) bugs. Intel team - please feel free to start signing up for the team(s) you are interested in. You can participate in multiple teams if you like, and in fact most of us probably should. We'll provide more clarity on the team's themselves at the Wednesday project call. But I'm happy to answer any questions folks have over email too. The list is here: https://ethercalc.openstack.org/ctjc7vlbphm1. I have a ToDo to clean up the sub-project part of the wiki, although help from the teams would of course be great. :) brucej From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, August 2, 2018 5:54 PM To: Xie, Cindy ; Khalil, Ghada ; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] StarlingX Release Sub-Projects For "distro non openstack" subproject, while it's forming, please send your name if you are interested to be part of the subproject. @Ken, can you add me and Haitao into "security" subproject? Also, I'd like to have Intel engineers part of the subproject of Flocks (config, fault, HA, metal, NFV, update, distributed cloud, etc) as well - I can send out the names later. Thanks. - cindy From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Friday, August 3, 2018 8:46 AM To: Khalil, Ghada >; 'starlingx-discuss at lists.starlingx.io' > Subject: Re: [Starlingx-discuss] StarlingX Release Sub-Projects I can lead "distro non openstack" subproject. I will fill-up the names for the team members soon. I also want to lead the effort of "Python3 support" as well if no leader has been identified so far. Let me know if you are OK with this. Thanks. - cindy From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Friday, August 3, 2018 6:33 AM To: 'starlingx-discuss at lists.starlingx.io' > Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In the F2F meeting today, we worked jointly to define sub-project teams to assist with bottom-up planning. https://ethercalc.openstack.org/ctjc7vlbphm1 (also linked from the main StarlingX wiki page) Note: We have started filling out the team members; this is still work in progress. Can I ask the Team Leads for each sub-project to help fill out the names of their team members? I will be the team lead for the Release team. I will help coordinate release schedule, content and planning. I will be working with the Team Leads of the sub-projects to pull together the bottom-up plans. Looking forward to working with all of you. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Sat Aug 4 01:07:17 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Sat, 4 Aug 2018 01:07:17 +0000 Subject: [Starlingx-discuss] Next round of failed packages In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA3332DB@ALA-MBD.corp.ad.wrs.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA3332DB@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B2EE5DB@SHSMSX104.ccr.corp.intel.com> Yeah, Shuicheng is working on how to enable a newer version with dependencies updated. But I was assuming that Erich and Marcela will have workaround to find alternative links and get the mirror-check passed. Thanks. - cindy -----Original Message----- From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Saturday, August 4, 2018 4:50 AM To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Next round of failed packages Yeah, I hit those earlier in the week. Cindy created stories to deal with them, but I haven't seen any updates: https://storyboard.openstack.org/#!/story/2003173 https://storyboard.openstack.org/#!/story/2003174 -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Friday, August 03, 2018 4:40 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Next round of failed packages I reset my environment this morning, maybe I got caught in the middle of things, here is the list of missing packages from today: cppcheck-1.80-1.el7.x86_64.rpm ima-evm-utils-1.0-1.el7.x86_64.rpm ima-evm-utils-devel-1.0-1.el7.x86_64.rpm scapy-2.3.3-1.el7.src.rpm I think I have seen these fail before, I am not behind a proxy or firewall. Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Mon Aug 6 03:00:54 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Mon, 6 Aug 2018 03:00:54 +0000 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In-Reply-To: <9A85D2917C58154C960D95352B22818BAB56BBEE@fmsmsx117.amr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F2B2ED6C6@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B2ED6F0@SHSMSX104.ccr.corp.intel.com> <9A85D2917C58154C960D95352B22818BAB56BBEE@fmsmsx117.amr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B2EFC3C@SHSMSX104.ccr.corp.intel.com> Bruce, I am added couple of engineer names into the Ethercalc, it's not finalized, and we still want to clarify for what the working process of sub-project is. Thx. - cindy From: Jones, Bruce E Sent: Saturday, August 4, 2018 7:29 AM To: Xie, Cindy ; Xie, Cindy ; Khalil, Ghada ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: StarlingX Release Sub-Projects Ghada, thank you for sending out the mail I wanted to send while we were flying through the storms last night. The intent of the subteams is to make them cross-org, cross-geo and cross-company. The intent is to also provide clear ownership and responsibility. We discussed whether things like Zuul test support, Python3 support and similar efforts should treated as separate sub-projects with dedicated teams, or if instead each team should own addressing those issues in their own components. The clear consensus was that each team should own their own Zuul test content, Python3 compatibility and (of course) bugs. Intel team - please feel free to start signing up for the team(s) you are interested in. You can participate in multiple teams if you like, and in fact most of us probably should. We'll provide more clarity on the team's themselves at the Wednesday project call. But I'm happy to answer any questions folks have over email too. The list is here: https://ethercalc.openstack.org/ctjc7vlbphm1. I have a ToDo to clean up the sub-project part of the wiki, although help from the teams would of course be great. :) brucej From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, August 2, 2018 5:54 PM To: Xie, Cindy >; Khalil, Ghada >; 'starlingx-discuss at lists.starlingx.io' > Subject: Re: [Starlingx-discuss] StarlingX Release Sub-Projects For "distro non openstack" subproject, while it's forming, please send your name if you are interested to be part of the subproject. @Ken, can you add me and Haitao into "security" subproject? Also, I'd like to have Intel engineers part of the subproject of Flocks (config, fault, HA, metal, NFV, update, distributed cloud, etc) as well - I can send out the names later. Thanks. - cindy From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Friday, August 3, 2018 8:46 AM To: Khalil, Ghada >; 'starlingx-discuss at lists.starlingx.io' > Subject: Re: [Starlingx-discuss] StarlingX Release Sub-Projects I can lead "distro non openstack" subproject. I will fill-up the names for the team members soon. I also want to lead the effort of "Python3 support" as well if no leader has been identified so far. Let me know if you are OK with this. Thanks. - cindy From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Friday, August 3, 2018 6:33 AM To: 'starlingx-discuss at lists.starlingx.io' > Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In the F2F meeting today, we worked jointly to define sub-project teams to assist with bottom-up planning. https://ethercalc.openstack.org/ctjc7vlbphm1 (also linked from the main StarlingX wiki page) Note: We have started filling out the team members; this is still work in progress. Can I ask the Team Leads for each sub-project to help fill out the names of their team members? I will be the team lead for the Release team. I will help coordinate release schedule, content and planning. I will be working with the Team Leads of the sub-projects to pull together the bottom-up plans. Looking forward to working with all of you. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Aug 6 03:02:09 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 6 Aug 2018 03:02:09 +0000 Subject: [Starlingx-discuss] Next round of failed packages In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B2EE5DB@SHSMSX104.ccr.corp.intel.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA3332DB@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F2B2EE5DB@SHSMSX104.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C765533DC24@SHSMSX101.ccr.corp.intel.com> I am checking it now. For scapy, I could find scapy-2.4.0-2.el7.src.rpm from below repo: [Starlingx-epel-7-source] name=Starlingx-Epel-7-source baseurl=http://linux-ftp.jf.intel.com/pub/mirrors/fedora-epel/7/SRPMS/ Here is the new rpm/srpm for these 4 packages: cppcheck-1.83-3.el7.x86_64.rpm ima-evm-utils-1.1-2.el7.x86_64.rpm ima-evm-utils-devel-1.1-2.el7.x86_64.rpm scapy-2.4.0-2.el7.src.rpm I am doing build now. Will have a basic deploy test before submit the change. Best Regards Shuicheng -----Original Message----- From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Saturday, August 4, 2018 9:07 AM To: Penney, Don ; Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Next round of failed packages Yeah, Shuicheng is working on how to enable a newer version with dependencies updated. But I was assuming that Erich and Marcela will have workaround to find alternative links and get the mirror-check passed. Thanks. - cindy -----Original Message----- From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Saturday, August 4, 2018 4:50 AM To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Next round of failed packages Yeah, I hit those earlier in the week. Cindy created stories to deal with them, but I haven't seen any updates: https://storyboard.openstack.org/#!/story/2003173 https://storyboard.openstack.org/#!/story/2003174 -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Friday, August 03, 2018 4:40 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Next round of failed packages I reset my environment this morning, maybe I got caught in the middle of things, here is the list of missing packages from today: cppcheck-1.80-1.el7.x86_64.rpm ima-evm-utils-1.0-1.el7.x86_64.rpm ima-evm-utils-devel-1.0-1.el7.x86_64.rpm scapy-2.3.3-1.el7.src.rpm I think I have seen these fail before, I am not behind a proxy or firewall. Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From shuicheng.lin at intel.com Mon Aug 6 11:58:54 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 6 Aug 2018 11:58:54 +0000 Subject: [Starlingx-discuss] Next round of failed packages In-Reply-To: <9700A18779F35F49AF027300A49E7C765533DC24@SHSMSX101.ccr.corp.intel.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA3332DB@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F2B2EE5DB@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765533DC24@SHSMSX101.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C765533DD3A@SHSMSX101.ccr.corp.intel.com> Hi, Here is the review: https://review.openstack.org/#/c/589122/ https://review.openstack.org/#/c/589126/ Best Regards Shuicheng -----Original Message----- From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Monday, August 6, 2018 11:02 AM To: Xie, Cindy ; Penney, Don ; Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Next round of failed packages I am checking it now. For scapy, I could find scapy-2.4.0-2.el7.src.rpm from below repo: [Starlingx-epel-7-source] name=Starlingx-Epel-7-source baseurl=http://linux-ftp.jf.intel.com/pub/mirrors/fedora-epel/7/SRPMS/ Here is the new rpm/srpm for these 4 packages: cppcheck-1.83-3.el7.x86_64.rpm ima-evm-utils-1.1-2.el7.x86_64.rpm ima-evm-utils-devel-1.1-2.el7.x86_64.rpm scapy-2.4.0-2.el7.src.rpm I am doing build now. Will have a basic deploy test before submit the change. Best Regards Shuicheng -----Original Message----- From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Saturday, August 4, 2018 9:07 AM To: Penney, Don ; Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Next round of failed packages Yeah, Shuicheng is working on how to enable a newer version with dependencies updated. But I was assuming that Erich and Marcela will have workaround to find alternative links and get the mirror-check passed. Thanks. - cindy -----Original Message----- From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Saturday, August 4, 2018 4:50 AM To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Next round of failed packages Yeah, I hit those earlier in the week. Cindy created stories to deal with them, but I haven't seen any updates: https://storyboard.openstack.org/#!/story/2003173 https://storyboard.openstack.org/#!/story/2003174 -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Friday, August 03, 2018 4:40 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Next round of failed packages I reset my environment this morning, maybe I got caught in the middle of things, here is the list of missing packages from today: cppcheck-1.80-1.el7.x86_64.rpm ima-evm-utils-1.0-1.el7.x86_64.rpm ima-evm-utils-devel-1.0-1.el7.x86_64.rpm scapy-2.3.3-1.el7.src.rpm I think I have seen these fail before, I am not behind a proxy or firewall. Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Mon Aug 6 14:35:08 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 6 Aug 2018 14:35:08 +0000 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In-Reply-To: <9A85D2917C58154C960D95352B22818BAB56BBEE@fmsmsx117.amr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F2B2ED6C6@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B2ED6F0@SHSMSX104.ccr.corp.intel.com> <9A85D2917C58154C960D95352B22818BAB56BBEE@fmsmsx117.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA404391@ALA-MBD.corp.ad.wrs.com> Welcome back online Bruce :) I have some cycles today to help with the wiki updates, so I'll start on them this afternoon EDT. I was thinking to have one page for the flock for now with subsections for each component. If anyone has issues with this approach, please let me know. Regards, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, August 03, 2018 7:29 PM To: Xie, Cindy; Xie, Cindy; Khalil, Ghada; 'starlingx-discuss at lists.starlingx.io' Subject: RE: StarlingX Release Sub-Projects Ghada, thank you for sending out the mail I wanted to send while we were flying through the storms last night. The intent of the subteams is to make them cross-org, cross-geo and cross-company. The intent is to also provide clear ownership and responsibility. We discussed whether things like Zuul test support, Python3 support and similar efforts should treated as separate sub-projects with dedicated teams, or if instead each team should own addressing those issues in their own components. The clear consensus was that each team should own their own Zuul test content, Python3 compatibility and (of course) bugs. Intel team - please feel free to start signing up for the team(s) you are interested in. You can participate in multiple teams if you like, and in fact most of us probably should. We'll provide more clarity on the team's themselves at the Wednesday project call. But I'm happy to answer any questions folks have over email too. The list is here: https://ethercalc.openstack.org/ctjc7vlbphm1. I have a ToDo to clean up the sub-project part of the wiki, although help from the teams would of course be great. :) brucej From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, August 2, 2018 5:54 PM To: Xie, Cindy >; Khalil, Ghada >; 'starlingx-discuss at lists.starlingx.io' > Subject: Re: [Starlingx-discuss] StarlingX Release Sub-Projects For "distro non openstack" subproject, while it's forming, please send your name if you are interested to be part of the subproject. @Ken, can you add me and Haitao into "security" subproject? Also, I'd like to have Intel engineers part of the subproject of Flocks (config, fault, HA, metal, NFV, update, distributed cloud, etc) as well - I can send out the names later. Thanks. - cindy From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Friday, August 3, 2018 8:46 AM To: Khalil, Ghada >; 'starlingx-discuss at lists.starlingx.io' > Subject: Re: [Starlingx-discuss] StarlingX Release Sub-Projects I can lead "distro non openstack" subproject. I will fill-up the names for the team members soon. I also want to lead the effort of "Python3 support" as well if no leader has been identified so far. Let me know if you are OK with this. Thanks. - cindy From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Friday, August 3, 2018 6:33 AM To: 'starlingx-discuss at lists.starlingx.io' > Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In the F2F meeting today, we worked jointly to define sub-project teams to assist with bottom-up planning. https://ethercalc.openstack.org/ctjc7vlbphm1 (also linked from the main StarlingX wiki page) Note: We have started filling out the team members; this is still work in progress. Can I ask the Team Leads for each sub-project to help fill out the names of their team members? I will be the team lead for the Release team. I will help coordinate release schedule, content and planning. I will be working with the Team Leads of the sub-projects to pull together the bottom-up plans. Looking forward to working with all of you. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Mon Aug 6 16:15:26 2018 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 6 Aug 2018 09:15:26 -0700 Subject: [Starlingx-discuss] Next round of failed packages In-Reply-To: <9700A18779F35F49AF027300A49E7C765533DD3A@SHSMSX101.ccr.corp.intel.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA3332DB@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F2B2EE5DB@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765533DC24@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765533DD3A@SHSMSX101.ccr.corp.intel.com> Message-ID: <66f79a73-f859-b2a3-939d-fe22a42471a1@linux.intel.com> On 08/06/2018 04:58 AM, Lin, Shuicheng wrote: > Hi, > Here is the review: > https://review.openstack.org/#/c/589122/ > https://review.openstack.org/#/c/589126/ > See my review comments, I tried this over the weekend and was able to use the upstream noarch package for python-scapy-2.4.0-2.el7.noarch.rpm This removes another patched source package from the list. Sau! > > Best Regards > Shuicheng > > > -----Original Message----- > From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] > Sent: Monday, August 6, 2018 11:02 AM > To: Xie, Cindy ; Penney, Don ; Saul Wold ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Next round of failed packages > > I am checking it now. > For scapy, I could find scapy-2.4.0-2.el7.src.rpm from below repo: > [Starlingx-epel-7-source] > name=Starlingx-Epel-7-source > baseurl=http://linux-ftp.jf.intel.com/pub/mirrors/fedora-epel/7/SRPMS/ > > Here is the new rpm/srpm for these 4 packages: > cppcheck-1.83-3.el7.x86_64.rpm > ima-evm-utils-1.1-2.el7.x86_64.rpm > ima-evm-utils-devel-1.1-2.el7.x86_64.rpm > scapy-2.4.0-2.el7.src.rpm > > I am doing build now. Will have a basic deploy test before submit the change. > > Best Regards > Shuicheng > > -----Original Message----- > From: Xie, Cindy [mailto:cindy.xie at intel.com] > Sent: Saturday, August 4, 2018 9:07 AM > To: Penney, Don ; Saul Wold ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Next round of failed packages > > Yeah, Shuicheng is working on how to enable a newer version with dependencies updated. But I was assuming that Erich and Marcela will have workaround to find alternative links and get the mirror-check passed. > > Thanks. - cindy > > -----Original Message----- > From: Penney, Don [mailto:Don.Penney at windriver.com] > Sent: Saturday, August 4, 2018 4:50 AM > To: Saul Wold ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Next round of failed packages > > Yeah, I hit those earlier in the week. Cindy created stories to deal with them, but I haven't seen any updates: > https://storyboard.openstack.org/#!/story/2003173 > https://storyboard.openstack.org/#!/story/2003174 > > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Friday, August 03, 2018 4:40 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Next round of failed packages > > > I reset my environment this morning, maybe I got caught in the middle of > things, here is the list of missing packages from today: > > cppcheck-1.80-1.el7.x86_64.rpm > ima-evm-utils-1.0-1.el7.x86_64.rpm > ima-evm-utils-devel-1.0-1.el7.x86_64.rpm > scapy-2.3.3-1.el7.src.rpm > > > I think I have seen these fail before, I am not behind a proxy or firewall. > > Sau! > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From sgw at linux.intel.com Mon Aug 6 17:38:12 2018 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 6 Aug 2018 10:38:12 -0700 Subject: [Starlingx-discuss] Using short form SPDX style license headers Message-ID: <4e123057-101d-0dbd-78d8-542418a0fbe7@linux.intel.com> Has the project got any firm direction on using the full license text vs the SPDX "short identifiers" [0] for license headers? I am not proposing we change things wholesale, I am just looking to establish a direction moving forward. Any new files written would include the short identifier [1]. Any modified files could switch to SPDX identifiers when edited. At some point we could do a mass replacement, but not recommending that now. Many OpenSource projects are starting to use the one-line SPDX license identifier, including the Linux kernel project [2]. Example instead of including about 15 lines of Apache 2.0 License, the single line would be used: SPDX-License-Identifier: Apache-2.0 Thoughts, flames? Sau! [0] https://spdx.org [1] https://spdx.org/licenses [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/license-rules.rst From scottx.rifenbark at intel.com Mon Aug 6 17:42:04 2018 From: scottx.rifenbark at intel.com (Rifenbark, ScottX) Date: Mon, 6 Aug 2018 17:42:04 +0000 Subject: [Starlingx-discuss] StarlingX API Documentation In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA40420C@ALA-MBD.corp.ad.wrs.com> References: <9D4CA82D-7ACB-45D3-88B1-062C5F1A2F17@windriver.com> <151EE31B9FCCA54397A757BC674650F0BA40420C@ALA-MBD.corp.ad.wrs.com> Message-ID: Abraham, How are these built into documents? I was playing around with the instructions in the README.mvn_cache file from my Ubuntu box and can't seem to create a mvn.repo.tgz following those steps. Scott -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Friday, August 3, 2018 3:42 PM To: Jolliffe, Ian ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX API Documentation Hi Abraham, (You may know this already) The StarlingX APIs (especially for sysinv) are currently documented at: https://git.openstack.org/cgit/openstack/stx-integ/tree/restapi-doc/restapi-doc You can use the content as a starting point. However, the mechanism used is outdated using maven and wadl files. So you need to use the more current approach. Greg Waines did some research on this. I strongly recommend you review with him when he's back from vacation (Tues Aug 7). Is this the story you are working on: https://storyboard.openstack.org/#!/story/2002712 ? If so, I'll add some of the details Greg has captured to the story. Regards, Ghada -----Original Message----- From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Friday, August 03, 2018 3:38 PM To: Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX API Documentation Hi Abraham; Thanks for kicking this off. On 2018-08-03, 12:40 PM, "Arce Moreno, Abraham" wrote: A new goal in collaboration with our Tech Writing team is to document StarlingX APIs, so we did an initial research on what it means for StarlingX so your feedback is highly appreciated. [ OpenStack :: API ] For this activity we are initially be considering from API Documentation 2 separate efforts for each project: - API Guide .. the concepts in the API - API Ref .. a reference for the API Can we prioritize one over the other? We should do the concepts and the ref at the same time. The new OpenStack approach allows for tags to go in the code. Let's start with this work. [ StarlingX :: API ] It seems we can categorize the StarlingX APIs in 2: - Brand New APIs from StarlingX projects - Existing APIs from OpenStack projects StarlingX should not document other OpenStack API's, would their documentation not the source of truth? [ StarlingX :: API :: Brand New ] The projects falling into this category are the following: - [0] NFVI Orchestration - [1] High Availability/Process Monitoring/Service Management - [2] StarlingX System Configuration Management - [3] Horizon plugins for new StarlingX services - [4] Installation/Update/Patching/Backup/Restore Can we considered all the above to be included in this API documentation effort? Are we missing any other? All projects in the Flock should be included. I think there is a dependency on some of the code restructuring activities that are underway, we need to make sure these activities don't collide. Ian [ StarlingX :: API :: Existing ] All projects living under our starlingx-staging github organization [5] with upstream contributions [6] e.g. horizon, ceilometer, etc. We have not gone through a deeper review if we are modifying/adding new calls into the OpenStack projects however if we are and we need to document them: - There is official OpenStack API documentation, we can make references to them for the existing calls - What about the modifications/additions? Should we document them? What is the best place for this? We were talking in our weekly call about stx-docs is a good place for things without a repo, is this a good example? - Any easy way besides "find + grep" to get where those API modifications are happening? [ StarlingX :: API :: Unit Tests] OpenStack projects includes Unit Tests. Is this something we also need to consider for our StarlingX Bran New APIs? [0] http://git.openstack.org/cgit/openstack/stx-nfv/ [1] http://git.openstack.org/cgit/openstack/stx-ha/ [2] http://git.openstack.org/cgit/openstack/stx-config/ [3] http://git.openstack.org/cgit/openstack/stx-gui/ [4] http://git.openstack.org/cgit/openstack/stx-update/ [5] https://github.com/starlingx-staging [6] http://git.openstack.org/cgit/openstack/stx-upstream/tree/openstack _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Mon Aug 6 18:04:50 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 6 Aug 2018 18:04:50 +0000 Subject: [Starlingx-discuss] Using short form SPDX style license headers In-Reply-To: <4e123057-101d-0dbd-78d8-542418a0fbe7@linux.intel.com> References: <4e123057-101d-0dbd-78d8-542418a0fbe7@linux.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB5748A2@fmsmsx115.amr.corp.intel.com> No objection at all to using the SPDX license identifiers. I thought we already were. :) brucej -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Monday, August 6, 2018 10:38 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Using short form SPDX style license headers Has the project got any firm direction on using the full license text vs the SPDX "short identifiers" [0] for license headers? I am not proposing we change things wholesale, I am just looking to establish a direction moving forward. Any new files written would include the short identifier [1]. Any modified files could switch to SPDX identifiers when edited. At some point we could do a mass replacement, but not recommending that now. Many OpenSource projects are starting to use the one-line SPDX license identifier, including the Linux kernel project [2]. Example instead of including about 15 lines of Apache 2.0 License, the single line would be used: SPDX-License-Identifier: Apache-2.0 Thoughts, flames? Sau! [0] https://spdx.org [1] https://spdx.org/licenses [2] https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git/tree/Documentation/process/license-rules.rst _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Mon Aug 6 18:11:01 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 6 Aug 2018 13:11:01 -0500 Subject: [Starlingx-discuss] Using short form SPDX style license headers In-Reply-To: <4e123057-101d-0dbd-78d8-542418a0fbe7@linux.intel.com> References: <4e123057-101d-0dbd-78d8-542418a0fbe7@linux.intel.com> Message-ID: On Mon, Aug 6, 2018 at 12:38 PM, Saul Wold wrote: > Has the project got any firm direction on using the full license text vs the > SPDX "short identifiers" [0] for license headers? > > I am not proposing we change things wholesale, I am just looking to > establish a direction moving forward. Any new files written would include > the short identifier [1]. Any modified files could switch to SPDX > identifiers when edited. At some point we could do a mass replacement, but > not recommending that now. We specifically did adopt the policy for this in the source cleanup before release. It was a mostly mechanical change so there may still be plenty of opportunity for straightening things up a bit. I'll spend a minute reviewing the internal wiki docs we wrote and re-organize it a bit to move to the public wiki, there's a good bit that doesn't apply any longer... dt -- Dean Troyer dtroyer at gmail.com From bruce.e.jones at intel.com Mon Aug 6 21:03:10 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 6 Aug 2018 21:03:10 +0000 Subject: [Starlingx-discuss] [Release] Proposed future release dates Message-ID: <9A85D2917C58154C960D95352B22818BAB574D84@fmsmsx115.amr.corp.intel.com> Release team: At the Ottawa meeting last week, we agreed to change the release cadence from 4/year to 3/year. Instead of releases in Q1, Q2, Q3, Q4 we'd move to March, July and November. We also agreed to change the plan from doing two releases in 2018 (August and November) to doing one release in October. Assuming we continue with our current practice of a code freeze in the 2nd week of the month before the release, and target the release to the 2nd week of the month, we'd end up with the dates below. Is this OK with everyone? brucej Milestones Date Release stx.2018.10 code freeze Sep 12 Release stx.2018.10 Oct 10 OpenStack Berlin Nov 7 Release stx.2019.03 code freeze Feb 13 Release stx.2019.03 Mar 13 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Mon Aug 6 21:15:16 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 6 Aug 2018 21:15:16 +0000 Subject: [Starlingx-discuss] [Release] Proposed future release dates In-Reply-To: <9A85D2917C58154C960D95352B22818BAB574D84@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB574D84@fmsmsx115.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4045F3@ALA-MBD.corp.ad.wrs.com> I propose that we make a exception for the 2018 release as follows: Release stx.2018.10 code freeze - Sept 26 Release stx.2018.10 - Oct 24 This makes the release available close to the Berlin Summit where it can be announced/socialized (but still we have a 2wk buffer) and gives the team a couple of extra weeks to get into a working model with the proposed sub-project structure and get more bugs and content into the release. Regards, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Monday, August 06, 2018 5:03 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Release] Proposed future release dates Release team: At the Ottawa meeting last week, we agreed to change the release cadence from 4/year to 3/year. Instead of releases in Q1, Q2, Q3, Q4 we'd move to March, July and November. We also agreed to change the plan from doing two releases in 2018 (August and November) to doing one release in October. Assuming we continue with our current practice of a code freeze in the 2nd week of the month before the release, and target the release to the 2nd week of the month, we'd end up with the dates below. Is this OK with everyone? brucej Milestones Date Release stx.2018.10 code freeze Sep 12 Release stx.2018.10 Oct 10 OpenStack Berlin Nov 7 Release stx.2019.03 code freeze Feb 13 Release stx.2019.03 Mar 13 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Mon Aug 6 21:52:13 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 6 Aug 2018 23:52:13 +0200 Subject: [Starlingx-discuss] Keystone edge architectures meeting this week Message-ID: <6FE8F059-4EC5-4AB8-AF95-A871A7B73295@gmail.com> Hi, With the Edge Computing Group we agreed to start discussing the different architecture options in details on how to setup Keystone for edge scenarios. We have information captured from pervious discussions here: https://wiki.openstack.org/wiki/Keystone_edge_architectures In an attempt to find a time slot that is doable in most time zones I created the following poll to have the first meeting this week: https://doodle.com/poll/ke39uz49znqh4xci Please fill out the form if you are interested in participating. We can agree on another slot or alternating slots onwards to make attending the meeting more convenient. Please let me know if you have any questions. Thanks and Best Regards, Ildikó From bruce.e.jones at intel.com Mon Aug 6 21:33:02 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 6 Aug 2018 21:33:02 +0000 Subject: [Starlingx-discuss] [Release] Proposed future release dates In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4045F3@ALA-MBD.corp.ad.wrs.com> References: <9A85D2917C58154C960D95352B22818BAB574D84@fmsmsx115.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4045F3@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB574E06@fmsmsx115.amr.corp.intel.com> OK, that works too. LGTM. Release team, please update the Release Plan and Release sub-team wiki pages. brucej From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Monday, August 6, 2018 2:15 PM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: RE: [Release] Proposed future release dates I propose that we make a exception for the 2018 release as follows: Release stx.2018.10 code freeze - Sept 26 Release stx.2018.10 - Oct 24 This makes the release available close to the Berlin Summit where it can be announced/socialized (but still we have a 2wk buffer) and gives the team a couple of extra weeks to get into a working model with the proposed sub-project structure and get more bugs and content into the release. Regards, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Monday, August 06, 2018 5:03 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Release] Proposed future release dates Release team: At the Ottawa meeting last week, we agreed to change the release cadence from 4/year to 3/year. Instead of releases in Q1, Q2, Q3, Q4 we'd move to March, July and November. We also agreed to change the plan from doing two releases in 2018 (August and November) to doing one release in October. Assuming we continue with our current practice of a code freeze in the 2nd week of the month before the release, and target the release to the 2nd week of the month, we'd end up with the dates below. Is this OK with everyone? brucej Milestones Date Release stx.2018.10 code freeze Sep 12 Release stx.2018.10 Oct 10 OpenStack Berlin Nov 7 Release stx.2019.03 code freeze Feb 13 Release stx.2019.03 Mar 13 -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Mon Aug 6 22:11:15 2018 From: claire at openstack.org (Claire Massey) Date: Mon, 6 Aug 2018 17:11:15 -0500 Subject: [Starlingx-discuss] [Release] Proposed future release dates In-Reply-To: <9A85D2917C58154C960D95352B22818BAB574E06@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB574D84@fmsmsx115.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4045F3@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB574E06@fmsmsx115.amr.corp.intel.com> Message-ID: <95DCC93B-3A60-4D91-9577-24CD314F2A8C@openstack.org> Hi all, Just want to flag that the Berlin Summit starts on Tuesday, November 13. Claire > On Aug 6, 2018, at 4:33 PM, Jones, Bruce E wrote: > > OK, that works too. LGTM. > > Release team, please update the Release Plan and Release sub-team wiki pages. > > brucej > > From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] > Sent: Monday, August 6, 2018 2:15 PM > To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Release] Proposed future release dates > > I propose that we make a exception for the 2018 release as follows: > Release stx.2018.10 code freeze – Sept 26 > Release stx.2018.10 – Oct 24 > > This makes the release available close to the Berlin Summit where it can be announced/socialized (but still we have a 2wk buffer) and gives the team a couple of extra weeks to get into a working model with the proposed sub-project structure and get more bugs and content into the release. > > Regards, > Ghada > > From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] > Sent: Monday, August 06, 2018 5:03 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [Release] Proposed future release dates > > Release team: > > At the Ottawa meeting last week, we agreed to change the release cadence from 4/year to 3/year. Instead of releases in Q1, Q2, Q3, Q4 we’d move to March, July and November. We also agreed to change the plan from doing two releases in 2018 (August and November) to doing one release in October. > > Assuming we continue with our current practice of a code freeze in the 2nd week of the month before the release, and target the release to the 2nd week of the month, we’d end up with the dates below. > > Is this OK with everyone? > > brucej > > > Milestones > Date > Release stx.2018.10 code freeze > Sep 12 > Release stx.2018.10 > Oct 10 > OpenStack Berlin > Nov 7 > Release stx.2019.03 code freeze > Feb 13 > Release stx.2019.03 > Mar 13 > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Tue Aug 7 01:54:41 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 7 Aug 2018 01:54:41 +0000 Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide In-Reply-To: References: Message-ID: <93814834B4855241994F290E959305C752F603E0@SHSMSX103.ccr.corp.intel.com> Hi Abraham and all, I have some proposals as below. In current developer guide, we have 2 containers for mirror download and build. Can we change it to only use 1 container for both? We will have 2 benefits at least 1) Simplify developer guide and remove some steps, such as mirror copy. 2) Developer can do mirror integrity check before start building. We always encounter this kind of issue that when build has been started for a while, then it told us some package missing. If we can do integrity check before building, it will be friendly to Developer. For mirror integrity check, just need to check if local mirror is complete compare to the latest Download list. If not, need download missing package first before start building. Zhipeng -----Original Message----- From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] Sent: 2018年7月26日 10:13 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide Hi again, Can someone please help test the process to build an ISO based on master using Developer Guide [0]? Please use Github page [1] as a documentation support. Requirements: - Repo status checked 7/25/2018 19:12 PST - stx-tools master branch - latest change: b65fa0a0ec6297199843b1455615d0126bb7e7c7 Update RPM macros - Temporal! Changes, already in Developer Guide - RPM: selinux-policy-devel required https://review.openstack.org/#/c/585915 [0] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide [1] https://github.com/xe1gyq/starlingx/blob/master/DeveloperGuide.md _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Tue Aug 7 02:58:05 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 7 Aug 2018 02:58:05 +0000 Subject: [Starlingx-discuss] build-pkg --parallel In-Reply-To: References: Message-ID: <93814834B4855241994F290E959305C752F62C88@SHSMSX103.ccr.corp.intel.com> Hi Scott and all, I have an issue when I did parallel build and need your help It seems b1/b2/b3 could not mount to tmpfs. Only b0 which not mount to tmpfs can work. 00:09:08 ERROR: Command failed: 00:09:08 # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=5g mock_chroot_tmpfs /localdisk/loadbuild/zhipengl/starlingx/std/mock/b1/root Root cause seems to be nr_inode=0, as I saw dmesg log as below. However, I could not find where or how I can change this nr_inode. [22719.688732] tmpfs: Bad value '0' for mount option 'nr_inodes' [22719.710907] tmpfs: Bad value '0' for mount option 'nr_inodes' [22726.037303] tmpfs: Bad value '0' for mount option 'nr_inodes' [22740.384578] tmpfs: Bad value '0' for mount option 'nr_inodes' [22740.385174] tmpfs: Bad value '0' for mount option 'nr_inodes' Thanks! Zhipeng From: Scott Little [mailto:scott.little at windriver.com] Sent: 2018年8月1日 3:01 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] build-pkg --parallel I had a successful parallel build (aka build-pkgs --parallel) inside the docker container. ~1h45m on 24 core, 64G ram The prerequisite was a populated $MY_REPO/cgcs-tis-repo/dependancy-cache. Currently we only generate the cache after the build in the 'generate-cgcs-tis-repo' step. I'd like to see the cache stored in git and updated regularly by 'official' builds. Note: The cache doesn't have to be perfect, so a cache that is out of date by a day or a week is still very useful. build-pkgs/mockchain just needs a rough guide on build dependencies and potential dependency loops. Scott -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Tue Aug 7 06:34:01 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 7 Aug 2018 06:34:01 +0000 Subject: [Starlingx-discuss] [Mirror]About mvn.repo.tgz Message-ID: <93814834B4855241994F290E959305C752F62DFF@SHSMSX103.ccr.corp.intel.com> Hi Abraham, I have a question about this tgz. As we know, we did not need download this mvn.repo.tgz several months ago. Do you or anyone know the story that why we add it as a part of download mirror now. It usually take us around 1.5H to download the whole of this mvn folder. Thanks! Zhipeng From Ghada.Khalil at windriver.com Tue Aug 7 11:38:51 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Tue, 7 Aug 2018 11:38:51 +0000 Subject: [Starlingx-discuss] [Release] Proposed future release dates In-Reply-To: <9A85D2917C58154C960D95352B22818BAB574E06@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB574D84@fmsmsx115.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4045F3@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB574E06@fmsmsx115.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA40471E@ALA-MBD.corp.ad.wrs.com> I have updated the Release Plan wiki: https://wiki.openstack.org/wiki/StarlingX/Release_Plan Feedback is welcome. I also added Hazzim and Mario to the release team in ethercalc. Can the release and test teams share the status of the milestone branch taken in July? In the F2F meeting, we agreed to use the monthly milestone branches as a practice run prior to the formal October release. Thanks, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Monday, August 06, 2018 5:33 PM To: Khalil, Ghada; starlingx-discuss at lists.starlingx.io Subject: RE: [Release] Proposed future release dates OK, that works too. LGTM. Release team, please update the Release Plan and Release sub-team wiki pages. brucej From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Monday, August 6, 2018 2:15 PM To: Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: RE: [Release] Proposed future release dates I propose that we make a exception for the 2018 release as follows: Release stx.2018.10 code freeze - Sept 26 Release stx.2018.10 - Oct 24 This makes the release available close to the Berlin Summit where it can be announced/socialized (but still we have a 2wk buffer) and gives the team a couple of extra weeks to get into a working model with the proposed sub-project structure and get more bugs and content into the release. Regards, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Monday, August 06, 2018 5:03 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Release] Proposed future release dates Release team: At the Ottawa meeting last week, we agreed to change the release cadence from 4/year to 3/year. Instead of releases in Q1, Q2, Q3, Q4 we'd move to March, July and November. We also agreed to change the plan from doing two releases in 2018 (August and November) to doing one release in October. Assuming we continue with our current practice of a code freeze in the 2nd week of the month before the release, and target the release to the 2nd week of the month, we'd end up with the dates below. Is this OK with everyone? brucej Milestones Date Release stx.2018.10 code freeze Sep 12 Release stx.2018.10 Oct 10 OpenStack Berlin Nov 7 Release stx.2019.03 code freeze Feb 13 Release stx.2019.03 Mar 13 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Aug 7 16:28:08 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 7 Aug 2018 16:28:08 +0000 Subject: [Starlingx-discuss] Call for agenda items for this week's project calls Message-ID: <9A85D2917C58154C960D95352B22818BAB57534A@fmsmsx115.amr.corp.intel.com> We have our project call on Wednesday [0] and the Cores call on Thursday [0]. Agendas for both calls are below and are open for input for any topics you'd like to discuss with the team. Please edit the etherpads directly. Brucej [0] https://etherpad.openstack.org/p/stx-status [1] https://etherpad.openstack.org/p/stx-cores -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Tue Aug 7 16:55:53 2018 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 7 Aug 2018 09:55:53 -0700 Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide In-Reply-To: <93814834B4855241994F290E959305C752F603E0@SHSMSX103.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C752F603E0@SHSMSX103.ccr.corp.intel.com> Message-ID: <9139bb35-4c93-c9cf-77d4-b6821f415986@linux.intel.com> On 08/06/2018 06:54 PM, Liu, ZhipengS wrote: > Hi Abraham and all, > > I have some proposals as below. > > In current developer guide, we have 2 containers for mirror download and build. > Can we change it to only use 1 container for both? We will have 2 benefits at least > 1) Simplify developer guide and remove some steps, such as mirror copy. I am not sure this is a good idea, remember we are trying to move to a direction where the developers are not individually creating mirrors, but instead using a yum download or better yet a Koji Instance to to build the packages. I have talked with folks about this, but I actually think we need to break the build down to re-package the Patched SRPMs as a seperate step so it can be done on the mirror so each developer does not need to deal with patching SRPM or creating the SRPM from the tarballs. Ultimately the developers should be consuming prebuilt RPMs (noarch or x86_64) unless they are specifically changing code for a certain feature or functionality in the Flock or Openstack. Sau! > 2) Developer can do mirror integrity check before start building. > We always encounter this kind of issue that when build has been started for a while, then it told us some package missing. If we can do integrity check before building, it will be friendly to > Developer. > > For mirror integrity check, just need to check if local mirror is complete compare to the latest > Download list. If not, need download missing package first before start building. > > Zhipeng > > > > > > > > -----Original Message----- > From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] > Sent: 2018年7月26日 10:13 > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide > > Hi again, > > Can someone please help test the process to build an ISO based on master using Developer Guide [0]? Please use Github page [1] as a documentation support. > > Requirements: > > - Repo status checked 7/25/2018 19:12 PST > - stx-tools master branch > - latest change: b65fa0a0ec6297199843b1455615d0126bb7e7c7 Update RPM macros > - Temporal! Changes, already in Developer Guide > - RPM: selinux-policy-devel required > https://review.openstack.org/#/c/585915 > > [0] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide > [1] https://github.com/xe1gyq/starlingx/blob/master/DeveloperGuide.md > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From abraham.arce.moreno at intel.com Tue Aug 7 17:42:31 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Tue, 7 Aug 2018 17:42:31 +0000 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> Message-ID: > In the F2F meeting today, we worked jointly to define sub-project teams to > assist with bottom-up planning. > > https://ethercalc.openstack.org/ctjc7vlbphm1 > > (also linked from the main StarlingX wiki page) Ghada, Staff, Is it possible we can create a sub-project team called "Use Cases"? I understand this is a low priority task however bringing some proof of concepts early will allow our community to bring another level of validation. From abraham.arce.moreno at intel.com Tue Aug 7 17:49:44 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Tue, 7 Aug 2018 17:49:44 +0000 Subject: [Starlingx-discuss] StarlingX API Documentation In-Reply-To: <9D4CA82D-7ACB-45D3-88B1-062C5F1A2F17@windriver.com> References: <9D4CA82D-7ACB-45D3-88B1-062C5F1A2F17@windriver.com> Message-ID: Thanks Ian! > [ OpenStack :: API ] > > - API Guide .. the concepts in the API > - API Ref .. a reference for the API > Can we prioritize one over the other? > > > We should do the concepts and the ref at the same time. The new OpenStack > > approach allows for tags to go in the code. Let's start with this work. Understood. > [ StarlingX :: API ] > > It seems we can categorize the StarlingX APIs in 2: > - Brand New APIs from StarlingX projects > - Existing APIs from OpenStack projects > > > StarlingX should not document other OpenStack API's, would their > > documentation not the source of truth? They are :) let's prioritize Flock. > [ StarlingX :: API :: Brand New ] > > The projects falling into this category are the following: > > - [0] NFVI Orchestration > - [1] High Availability/Process Monitoring/Service Management > - [2] StarlingX System Configuration Management > - [3] Horizon plugins for new StarlingX services > - [4] Installation/Update/Patching/Backup/Restore > > Can we considered all the above to be included in this API documentation > effort? > Are we missing any other? > > All projects in the Flock should be included. I think there is a dependency on > some of the code restructuring activities that are underway, we need to make > sure these activities don't collide. Yep! As discussed in the thread [Starlingx-discuss] Restructuring round 2 [0] [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-August/000499.html From abraham.arce.moreno at intel.com Tue Aug 7 19:22:44 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Tue, 7 Aug 2018 19:22:44 +0000 Subject: [Starlingx-discuss] StarlingX API Documentation In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA40420C@ALA-MBD.corp.ad.wrs.com> References: <9D4CA82D-7ACB-45D3-88B1-062C5F1A2F17@windriver.com> <151EE31B9FCCA54397A757BC674650F0BA40420C@ALA-MBD.corp.ad.wrs.com> Message-ID: Thanks Ghada! > (You may know this already) The StarlingX APIs (especially for sysinv) are > currently documented at: > https://git.openstack.org/cgit/openstack/stx-integ/tree/restapi-doc/restapi- > doc > You can use the content as a starting point. However, the mechanism used is > outdated using maven and wadl files. So you need to use the more current > approach. I was not aware of :) understood so we will take a look. > Greg Waines did some research on this. I strongly recommend you review > with him when he's back from vacation (Tues Aug 7). Ok > Is this the story you are working on: > https://storyboard.openstack.org/#!/story/2002712 ? If so, I'll add some of > the details Greg has captured to the story. Yes, this is the story. From abraham.arce.moreno at intel.com Tue Aug 7 19:58:44 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Tue, 7 Aug 2018 19:58:44 +0000 Subject: [Starlingx-discuss] StarlingX API Documentation In-Reply-To: References: <9D4CA82D-7ACB-45D3-88B1-062C5F1A2F17@windriver.com> <151EE31B9FCCA54397A757BC674650F0BA40420C@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Scott! > How are these built into documents? I was playing around with the > instructions in the README.mvn_cache file from my Ubuntu box and can't > seem to create a mvn.repo.tgz following those steps. To create our StarlingX ISO there are 2 phases: - Build of the CentOS Mirror repository - Build of StarlingX Packages and ISO mvn.repo.tgz is created in the first phase, during the build of the CentOS Mirror repository, specifically in the download and packaging of the Tarball Compressed files [0] and then taken as an input of the Build Packages based on the Spec File: stx-integ/restapi-doc/centos/restapi-doc.spec We depend on a RPM based Linux distro for its generation, I am not sure how tightly coupled it is to our StarlingX Build System to take it out to build individually and based on the comments from Ghada [1] : " However, the mechanism used is outdated using maven and wadl files. So you need to use the more current approach. " So main effort might be to migrate from wadl to OpenStack Doc. I do not know if there is a way to translate but a specific OpenStack related documentation talks about this format [2] For now, I have created a StarlingX Wiki page to document all our StarlingX API journey [3] This is the output of where mvn.repo.tgz is reference across our StarlingX code. [user at 0756d97288e1 starlingx]$ repo grep mvn.repo.tgz cgcs-root/stx/stx-integ/restapi-doc/centos/build_srpm.data:\ $CGCS_BASE/downloads/mvn.repo.tgz \ cgcs-root/stx/stx-integ/restapi-doc/centos/restapi-doc.spec:\ Source1: mvn.repo.tgz cgcs-root/stx/stx-integ/restapi-doc/restapi-doc/Makefile:\ if [ ! -e mvn.repo.tgz ]; then \ cgcs-root/stx/stx-integ/restapi-doc/restapi-doc/Makefile:\ tar -xvzf ./mvn.repo.tgz -C ./mvn.repo/ cgcs-root/stx/stx-integ/restapi-doc/restapi-doc/Makefile.cache:\ cd mvn.repo && tar -czvf ../mvn.repo.tgz . && cd .. cgcs-root/stx/stx-integ/restapi-doc/restapi-doc/README.mvn_cache:\ Steps to produce mvn.repo.tgz [Maven cache] cgcs-root/stx/stx-integ/restapi-doc/restapi-doc/README.mvn_cache:\ mock -r $MY_BUILD_CFG_STD --copyout /builddir/build/BUILD/restapi-doc-1.6.0/mvn.repo.tgz ~/ cgcs-root/stx/stx-integ/restapi-doc/restapi-doc/README.mvn_cache:\ cp ~/mvn.repo.tgz $MY_REPO/stx/downloads/ cgcs-root/stx/stx-integ/restapi-doc/restapi-doc/README.mvn_cache:# \ ln -s ../../../downloads/mvn.repo.tgz mvn.repo.tgz stx-tools/centos-mirror-tools/tarball-dl.lst:\ !mvn.repo.tgz#mvn#https://repo.maven.apache.org/maven2 stx-tools/centos-mirror-tools/tarball-dl.sh: \ # The mvn.repo.tgz tarball will be created downloading a serie of stx-tools/centos-mirror-tools/tarball-dl.sh: \ elif [ "$tarball_name" = "mvn.repo.tgz" ]; then [0] https://git.openstack.org/cgit/openstack/stx-tools/tree/centos-mirror-tools/tarball-dl.sh [1] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-August/000530.html [2] https://wiki.openstack.org/wiki/Documentation/APISite/DocumentingWadls [3] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation From ildiko.vancsa at gmail.com Tue Aug 7 20:05:13 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 7 Aug 2018 22:05:13 +0200 Subject: [Starlingx-discuss] Nova - StarlingX diff analysis Message-ID: Hi, In case you haven’t seen it Matt Riedemann from the Nova core team made an analysis on the StarlingX additions to Nova. You can find the thread and links to the detailed information here: http://lists.openstack.org/pipermail/openstack-dev/2018-August/132936.html Besides discussing the items on the mailing list we can also plan to have discussion on some of the items at upcoming PTG in Denver in September. Thanks and Best Regards, Ildikó From abraham.arce.moreno at intel.com Tue Aug 7 20:19:37 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Tue, 7 Aug 2018 20:19:37 +0000 Subject: [Starlingx-discuss] [Mirror]About mvn.repo.tgz In-Reply-To: <93814834B4855241994F290E959305C752F62DFF@SHSMSX103.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C752F62DFF@SHSMSX103.ccr.corp.intel.com> Message-ID: Hi Zhipeng! > I have a question about this tgz. > > As we know, we did not need download this mvn.repo.tgz several months > ago. > Do you or anyone know the story that why we add it as a part of download > mirror now. > It usually take us around 1.5H to download the whole of this mvn folder. Yes, mvn.repo.tgz (Maven) version won't change and based in another thread related to API documentation [0] we learned from Ghada this file is use to generate the StarlingX API documentation. We are in the process to adopt OpenStack API documentation guidelines to generate ours so more to come. Let's wait if someone else know if Maven is used in another place, if not then we can probably remove mvn.repo.tgz Immediate solution is the next revision of tarball compressed file download process taking your feedback as input regarding the skip of download if file is present [1] [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-August/000557.html [1] https://review.openstack.org/#/c/589333/ From abraham.arce.moreno at intel.com Tue Aug 7 20:26:38 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Tue, 7 Aug 2018 20:26:38 +0000 Subject: [Starlingx-discuss] [RFC] StarlingX Developer Guide In-Reply-To: <93814834B4855241994F290E959305C752F603E0@SHSMSX103.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C752F603E0@SHSMSX103.ccr.corp.intel.com> Message-ID: Hi Zhipeng, > I have some proposals as below. Thanks for bringing this up. > In current developer guide, we have 2 containers for mirror download and > build. > Can we change it to only use 1 container for both? We will have 2 benefits at > least > 1) Simplify developer guide and remove some steps, such as mirror copy. > 2) Developer can do mirror integrity check before start building. > We always encounter this kind of issue that when build has been started > for a while, then it told us some package missing. If we can do integrity check > before building, it will be friendly to Developer. Yes, we are using 2 containers due to the way we approach to replicate Wind River build environment and I also agree it is time to get new improvements into the build process. Let's wait for some feedback from the rest of the team about pros and cons, if any. > For mirror integrity check, just need to check if local mirror is complete > compare to the latest Download list. If not, need download missing package > first before start building. Yes From bruce.e.jones at intel.com Tue Aug 7 22:47:47 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 7 Aug 2018 22:47:47 +0000 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB575A46@fmsmsx115.amr.corp.intel.com> I don’t think we need a sub-project called Use Cases but I'd sure like to see a document on that topic. I think it makes sense to add this to the work list for the Documentation team. In fact, I've already done so. :) https://storyboard.openstack.org/#!/story/2003331 brucej -----Original Message----- From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] Sent: Tuesday, August 7, 2018 10:43 AM To: Khalil, Ghada ; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] StarlingX Release Sub-Projects > In the F2F meeting today, we worked jointly to define sub-project > teams to assist with bottom-up planning. > > https://ethercalc.openstack.org/ctjc7vlbphm1 > > (also linked from the main StarlingX wiki page) Ghada, Staff, Is it possible we can create a sub-project team called "Use Cases"? I understand this is a low priority task however bringing some proof of concepts early will allow our community to bring another level of validation. _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From zhipengs.liu at intel.com Wed Aug 8 01:46:58 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 8 Aug 2018 01:46:58 +0000 Subject: [Starlingx-discuss] Some test cases may be added to StarlingX test suit Message-ID: <93814834B4855241994F290E959305C752F698FE@SHSMSX104.ccr.corp.intel.com> Hi Ada, As we know, Titanium Cloud is OPNFV verified, so it should pass related test. I see this test spec below, not sure if we can add them to our test suit. Just FYI. https://docs.opnfv.org/en/stable-danube/submodules/dovetail/docs/testing/user/testspecification/ The OPNFV OVP provides a series or test areas aimed to evaluate the operation of an NFV system in accordance with carrier networking needs. Each test area contains a number of associated test cases which are described in detail in the associated test specification. All tests in the OVP are required to fulfill a specific set of criteria in order that the OVP is able to provide a fair assessment of the system under test. Test requirements are described in the 'Test Case Requirements'_ document. Thanks! Zhipeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ian.Jolliffe at windriver.com Wed Aug 8 01:47:09 2018 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Wed, 8 Aug 2018 01:47:09 +0000 Subject: [Starlingx-discuss] August 1/2: Face to Face notes Message-ID: <750A3D68-DC9B-440F-9607-FE522E2DDF69@windriver.com> The StarlingX core team had a F2F in Ottawa, at the Wind River site on August 1 and 2nd. We had a productive two days, below are the key outcomes: We established a list of identified sub-projects, along with the roles, can be found here: [0] [0] https://ethercalc.openstack.org/ctjc7vlbphm1 Project priorities have been reviewed and can be found here: [1] [1] https://wiki.openstack.org/wiki/StarlingX/Project_Priorities Project tracking: Release tracking will be done in StoryBoard Bugs will be tracked in Bugzilla - this needs to be set up. Until then StoryBoard will continue to be used. Project communication: Sub-projects will communicate by using one mailing list, with tags As Bruce sent out in an earlier email we modified the release cadence/model: Alpha release (Naming TBD - but sticking to calendar related, made sense to the team): Oct. 2018 Starting 2019: Three releases / year Will ship approximately: Mid March, Mid July, Mid Nov LTS: will be defined based on content, not time based Defect workflow: Screen for Priority and Criticality Assign to sub-teams Team lead would assign them to team members Socialize at weekly meeting Record and publish Escalations go to the Release Team Communicate with TSC Documentation: Align with OpenStack conventions Leverage the wiki for developer guides Create CLI and API documentation Iterate based on feedback Test and test infrastructure: ======================= Backlog - Need to prioritize to ensure we focus on high leverage test cases Infrastructure - Unit Test will reside in STX - Zuul - Will get a baseline once the repo re-structuring is ready - Each sub-team is responsible for running the jobs We will create a test team - The team delivering a feature will develop new test cases as well, and work with test team to ensure test level is met Sanity - Currently we have 30 test cases, based on Robot Goal is to have: - Daily sanity runs - Weekly regression - Auto install - Log capture Project baseline: ============== OpenStack rebase: - No rebase in 2018 - Planning in Q4, execute in Q1 /2019 for next rebase Component Upgrade plans: - CentOS 7.5: Q4 / 2018 - Ceph: Q4 / 2018 - Qemu: bleeding edge, Q4 2018 - Libvirt: bleeding edge, Q4 2018 - Drivers: Q4 2018 Pyhton 3 plan: - Sub-team created - New commits once backlog is complete - Ensure changes are tested Multi OS Ian, Brent, Saul, Dean to work jointly on a design proposal to determine a framework. Build - most critical item right now - this is/will be covered in other threads. Actions: ======= Action Owner Bug tracking: Start a Bugzilla instance and populate with the existing data Bruce Release Tracking: Define a set of requirements to improve the search and reporting capabilities in StoryBoard TBD Release: Hook up WR Tox in Zuul TBD Test: Intel to provide the list of 30TC developed Bruce - Done Test: WR to assign a prime to review the test cases developed to date Ian From Ghada.Khalil at windriver.com Wed Aug 8 01:55:58 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 8 Aug 2018 01:55:58 +0000 Subject: [Starlingx-discuss] [Release] Sub-Project Wiki Page Format & Tags Message-ID: <151EE31B9FCCA54397A757BC674650F0BA404E89@ALA-MBD.corp.ad.wrs.com> Hello all, In order to facilitate release planning and tracking across the sub-projects, I would like to propose that we align on a few common sections for the wikis of the sub-projects. Here are two examples I made: https://wiki.openstack.org/wiki/StarlingX/DistroOpenStack https://wiki.openstack.org/wiki/StarlingX/Config Please also review the proposed sub-project tags and update as needed. These will help every sub-project/team create their Story Board list of work items. This is just the minimal set only to help route the items. Sub-teams can have additional tags as well. https://wiki.openstack.org/wiki/StarlingX/Tags_and_Prefixes Feedback is welcome. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 8 04:28:27 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 8 Aug 2018 04:28:27 +0000 Subject: [Starlingx-discuss] [Release] Sub-Project Wiki Page Format & Tags In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA404E89@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA404E89@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB575BA1@fmsmsx115.amr.corp.intel.com> Ghada, these pages look good. Nice. I suggest we pull the team members names out of the ethercalc, since we've had reports of people loosing data in it. Let's just put them on the wiki directly. Otherwise we should set up all the team pages this way. I'll do some tomorrow. brucej From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, August 7, 2018 6:56 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Release] Sub-Project Wiki Page Format & Tags Hello all, In order to facilitate release planning and tracking across the sub-projects, I would like to propose that we align on a few common sections for the wikis of the sub-projects. Here are two examples I made: https://wiki.openstack.org/wiki/StarlingX/DistroOpenStack https://wiki.openstack.org/wiki/StarlingX/Config Please also review the proposed sub-project tags and update as needed. These will help every sub-project/team create their Story Board list of work items. This is just the minimal set only to help route the items. Sub-teams can have additional tags as well. https://wiki.openstack.org/wiki/StarlingX/Tags_and_Prefixes Feedback is welcome. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Wed Aug 8 11:13:47 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 8 Aug 2018 11:13:47 +0000 Subject: [Starlingx-discuss] StarlingX Documentation Initial Template Message-ID: <9B07CC56-51EF-4701-8837-9B97131D7B92@windriver.com> Hi Abraham, Will the documentation at https://docs.openstack.org/starlingx be both Developer Documentation and User Documentation ? ( I’m assuming that it will NOT be API documentation, I see discussions on that elsewhere. And would be located at https://developer.openstack.org/api-ref/starlingx/ ??? ) I am the main reviewer for all existing User Documentation for Titanium Cloud, so I’ll volunteer on being a core reviewer for this StarlingX Documentation ... please add me to the appropriate email lists. Thanks, Greg. From: "Arce Moreno, Abraham" Date: Wednesday, August 1, 2018 at 11:54 AM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] StarlingX Documentation Initial Template Dean, Here you have the high level overview of tasks to get started with our " StarlingX Documentation". @Hayde has raised her hand to help in this short term not time consuming assignment. Objective: Create a first "Gold Initial Commit" based in "Stx-Docs" project including high level requirements from OpenStack Documentation Guidelines so it can ported into the rest of our StarlingX projects. Phase 1: 1. Learning Resources 1.1 Read "OpenStack Documentation Contributor Guide" https://docs.openstack.org/doc-contrib-guide/index.html 2. Initial Code 2.1 Understand existing "Stx-Docs" repository and "docs/" implementation https://review.openstack.org/#/q/project:openstack/stx-docs 3. Translate important topics from "OpenStack Documentation Contributor Guide" into "Stx-Docs" commits: 3.1 Project guide setup 3.2 Writing documentation 3.3 Writing style 3.4 Building documentation 3.5 Landing pages on docs.openstack.org 4. Get Final Gerrit Reviews on commits and make changes 5. Have our "Gold Initial Commit" ready Phase 2: Once the first interaction is done we can take another repository to test our "Gold Initial Commit" having only modifications at the content level. Phase 3: With 2 interactions we are ready to easily move what we have learned and implemented in 2 projects to the rest of our StarlingX projects. Happy to hear your thoughts. _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Wed Aug 8 11:28:05 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 8 Aug 2018 11:28:05 +0000 Subject: [Starlingx-discuss] StarlingX API Documentation Message-ID: <95F2DB92-CF49-4B88-B4A4-F41C1E69467D@windriver.com> See in-lined comments below, Greg. From: "Arce Moreno, Abraham" Date: Friday, August 3, 2018 at 12:39 PM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] StarlingX API Documentation A new goal in collaboration with our Tech Writing team is to document StarlingX APIs, so we did an initial research on what it means for StarlingX so your feedback is highly appreciated. [ OpenStack :: API ] For this activity we are initially be considering from API Documentation 2 separate efforts for each project: - API Guide .. the concepts in the API - API Ref .. a reference for the API Can we prioritize one over the other? [Greg] The concepts in our APIs are pretty standard ... so I would prioritize the API Reference work higher. [ StarlingX :: API ] It seems we can categorize the StarlingX APIs in 2: - Brand New APIs from StarlingX projects - Existing APIs from OpenStack projects [Greg] Actually what we currently do ( https://git.openstack.org/cgit/openstack/stx-integ/tree/restapi-doc/restapi-doc ) is - Fully document New / StarlingX –specific APIs (i.e. sysinv APIs), and - Document ONLY (non-upstreamed) extensions to existing OpenStack APIs o i.e. we did NOT want to duplicate documentation for existing OpenStack APIs [ StarlingX :: API :: Brand New ] The projects falling into this category are the following: - [0] NFVI Orchestration - [1] High Availability/Process Monitoring/Service Management - [2] StarlingX System Configuration Management - [3] Horizon plugins for new StarlingX services - [4] Installation/Update/Patching/Backup/Restore Can we considered all the above to be included in this API documentation effort? Are we missing any other? [Greg] Yeah basically we would organize it based on the re-partitioned areas of sysinv .... Actually initially grouped by any new API Endpoints we are introducing due to the sysinv re-partitioning, and Secondly grouped by functional areas. [ StarlingX :: API :: Existing ] All projects living under our starlingx-staging github organization [5] with upstream contributions [6] e.g. horizon, ceilometer, etc. We have not gone through a deeper review if we are modifying/adding new calls into the OpenStack projects however if we are and we need to document them: - There is official OpenStack API documentation, we can make references to them for the existing calls - What about the modifications/additions? Should we document them? What is the best place for this? We were talking in our weekly call about stx-docs is a good place for things without a repo, is this a good example? - Any easy way besides "find + grep" to get where those API modifications are happening? [Greg] As mentioned ... our previous strategy has been to have a separate document chapter that describes ONLY the extensions that we have done to existing OpenStack APIs. With the intent being that we did NOT want to duplicate existing OpenStack API documentation [ StarlingX :: API :: Unit Tests] OpenStack projects includes Unit Tests. Is this something we also need to consider for our StarlingX Bran New APIs? [Greg] Not sure what you’re suggesting here ... Unit Tests for our APIs ? Greg. [0] http://git.openstack.org/cgit/openstack/stx-nfv/ [1] http://git.openstack.org/cgit/openstack/stx-ha/ [2] http://git.openstack.org/cgit/openstack/stx-config/ [3] http://git.openstack.org/cgit/openstack/stx-gui/ [4] http://git.openstack.org/cgit/openstack/stx-update/ [5] https://github.com/starlingx-staging [6] http://git.openstack.org/cgit/openstack/stx-upstream/tree/openstack _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Wed Aug 8 11:32:49 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 8 Aug 2018 11:32:49 +0000 Subject: [Starlingx-discuss] StarlingX API Documentation In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA40420C@ALA-MBD.corp.ad.wrs.com> References: <9D4CA82D-7ACB-45D3-88B1-062C5F1A2F17@windriver.com> <151EE31B9FCCA54397A757BC674650F0BA40420C@ALA-MBD.corp.ad.wrs.com> Message-ID: ... yeah forgot to mention the point that Ghada makes below, we currently use a very very out-dated approach to API Documentation ... i.e. Grizzly timeframe ... which uses maven and wadl files ... very ugly. This approach also had the API documentation centralized in one spot ... whereas now the API documentation seems to live (correctly) in the same git as the code. So we additionally need to convert our API Documentation to the current format being used for OpenStack API Doc and should distribute the API documentation appropriately to the appropriate StarlingX sub-projects. Greg. From: "Khalil, Ghada" Date: Friday, August 3, 2018 at 6:41 PM To: "Jolliffe, Ian" , "Arce Moreno, Abraham" , "starlingx-discuss at lists.starlingx.io" Cc: Greg Waines Subject: RE: [Starlingx-discuss] StarlingX API Documentation Hi Abraham, (You may know this already) The StarlingX APIs (especially for sysinv) are currently documented at: https://git.openstack.org/cgit/openstack/stx-integ/tree/restapi-doc/restapi-doc You can use the content as a starting point. However, the mechanism used is outdated using maven and wadl files. So you need to use the more current approach. Greg Waines did some research on this. I strongly recommend you review with him when he's back from vacation (Tues Aug 7). Is this the story you are working on: https://storyboard.openstack.org/#!/story/2002712 ? If so, I'll add some of the details Greg has captured to the story. Regards, Ghada -----Original Message----- From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Friday, August 03, 2018 3:38 PM To: Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX API Documentation Hi Abraham; Thanks for kicking this off. On 2018-08-03, 12:40 PM, "Arce Moreno, Abraham" > wrote: A new goal in collaboration with our Tech Writing team is to document StarlingX APIs, so we did an initial research on what it means for StarlingX so your feedback is highly appreciated. [ OpenStack :: API ] For this activity we are initially be considering from API Documentation 2 separate efforts for each project: - API Guide .. the concepts in the API - API Ref .. a reference for the API Can we prioritize one over the other? We should do the concepts and the ref at the same time. The new OpenStack approach allows for tags to go in the code. Let's start with this work. [ StarlingX :: API ] It seems we can categorize the StarlingX APIs in 2: - Brand New APIs from StarlingX projects - Existing APIs from OpenStack projects StarlingX should not document other OpenStack API's, would their documentation not the source of truth? [ StarlingX :: API :: Brand New ] The projects falling into this category are the following: - [0] NFVI Orchestration - [1] High Availability/Process Monitoring/Service Management - [2] StarlingX System Configuration Management - [3] Horizon plugins for new StarlingX services - [4] Installation/Update/Patching/Backup/Restore Can we considered all the above to be included in this API documentation effort? Are we missing any other? All projects in the Flock should be included. I think there is a dependency on some of the code restructuring activities that are underway, we need to make sure these activities don't collide. Ian [ StarlingX :: API :: Existing ] All projects living under our starlingx-staging github organization [5] with upstream contributions [6] e.g. horizon, ceilometer, etc. We have not gone through a deeper review if we are modifying/adding new calls into the OpenStack projects however if we are and we need to document them: - There is official OpenStack API documentation, we can make references to them for the existing calls - What about the modifications/additions? Should we document them? What is the best place for this? We were talking in our weekly call about stx-docs is a good place for things without a repo, is this a good example? - Any easy way besides "find + grep" to get where those API modifications are happening? [ StarlingX :: API :: Unit Tests] OpenStack projects includes Unit Tests. Is this something we also need to consider for our StarlingX Bran New APIs? [0] http://git.openstack.org/cgit/openstack/stx-nfv/ [1] http://git.openstack.org/cgit/openstack/stx-ha/ [2] http://git.openstack.org/cgit/openstack/stx-config/ [3] http://git.openstack.org/cgit/openstack/stx-gui/ [4] http://git.openstack.org/cgit/openstack/stx-update/ [5] https://github.com/starlingx-staging [6] http://git.openstack.org/cgit/openstack/stx-upstream/tree/openstack _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Wed Aug 8 11:57:31 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 8 Aug 2018 11:57:31 +0000 Subject: [Starlingx-discuss] [Mirror]About mvn.repo.tgz In-Reply-To: References: <93814834B4855241994F290E959305C752F62DFF@SHSMSX103.ccr.corp.intel.com> Message-ID: <29069F3C-FA3C-4A38-AAF0-F1A44DFC6D58@windriver.com> I am pretty sure that maven is only being used for generating our API docs ... which like I mentioned is NOT the current OpenStack approach for API documentation ... its what they used back in Grizzly era. Greg. From: "Arce Moreno, Abraham" Date: Tuesday, August 7, 2018 at 4:19 PM To: "Liu, ZhipengS" , "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] [Mirror]About mvn.repo.tgz Hi Zhipeng! I have a question about this tgz. As we know, we did not need download this mvn.repo.tgz several months ago. Do you or anyone know the story that why we add it as a part of download mirror now. It usually take us around 1.5H to download the whole of this mvn folder. Yes, mvn.repo.tgz (Maven) version won't change and based in another thread related to API documentation [0] we learned from Ghada this file is use to generate the StarlingX API documentation. We are in the process to adopt OpenStack API documentation guidelines to generate ours so more to come. Let's wait if someone else know if Maven is used in another place, if not then we can probably remove mvn.repo.tgz Immediate solution is the next revision of tarball compressed file download process taking your feedback as input regarding the skip of download if file is present [1] [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-August/000557.html [1] https://review.openstack.org/#/c/589333/ _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Wed Aug 8 12:06:14 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 8 Aug 2018 12:06:14 +0000 Subject: [Starlingx-discuss] Some test cases may be added to StarlingX test suit In-Reply-To: <93814834B4855241994F290E959305C752F698FE@SHSMSX104.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C752F698FE@SHSMSX104.ccr.corp.intel.com> Message-ID: <7CD95FFE-917B-4137-B23A-787113E37E06@windriver.com> As you mentioned, Titanium Cloud is OPNFV verified ... which means it passes all the mandatory dovetail tests. Internally we are in the process of adding this dovetail test suite to the automated tests that we internally run on TitaniumCloud / StarlingX. The gotcha about running Dovetail is that there is some specific networking requirements for the node running the dovetail test tool, it requires some modification of the standard Titanium configuration to allow root ssh access to nodes from dovetail tool and there are still some manual changes to dovetail itself required due to dovetail not being flexible enough yet to handle different types of implementations (specifically in the yardstick HA tests). Greg. From: "Liu, ZhipengS" Date: Tuesday, August 7, 2018 at 9:46 PM To: "Cabrales, Ada" , "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Some test cases may be added to StarlingX test suit Hi Ada, As we know, Titanium Cloud is OPNFV verified, so it should pass related test. I see this test spec below, not sure if we can add them to our test suit. Just FYI. https://docs.opnfv.org/en/stable-danube/submodules/dovetail/docs/testing/user/testspecification/ The OPNFV OVP provides a series or test areas aimed to evaluate the operation of an NFV system in accordance with carrier networking needs. Each test area contains a number of associated test cases which are described in detail in the associated test specification. All tests in the OVP are required to fulfill a specific set of criteria in order that the OVP is able to provide a fair assessment of the system under test. Test requirements are described in the ‘Test Case Requirements’_ document. Thanks! Zhipeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 8 14:18:43 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 8 Aug 2018 14:18:43 +0000 Subject: [Starlingx-discuss] Questions about VXLAN Provider Network feature for StarlingX upstreaming In-Reply-To: References: Message-ID: <9A85D2917C58154C960D95352B22818BAB575C9B@fmsmsx115.amr.corp.intel.com> + List From: Qin, Kailun Sent: Monday, August 6, 2018 12:02 AM To: Jolliffe, Ian Cc: Troyer, Dean ; Jones, Bruce E ; Chilcote Bacco, Derek A ; Le, Huifeng ; Qin, Kailun ; Xu, Chenjie ; Zhao, Forrest ; Guo, Ruijing Subject: Questions about VXLAN Provider Network feature for StarlingX upstreaming Hi Ian, We are analyzing the VXLAN provider network feature for StarlingX upstreaming, in which case the patch 021ae1a firstly introduced VXLAN provider network and 509ea54, 1e368a3 added with the VXLAN dynamic/static mode. Different from StarlingX, the upstream neutron VXLAN provider networks do not support to be associated with physical networks. They assume that VXLAN creates overlay networks where they do not require the VNI space to be accessible by a particular interface on a node. Would you please kindly share some business use cases or user stories with us about the physical-network-constrained VXLAN provider network introduced? Let me know if any question. Thanks a lot! BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 8 14:18:27 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 8 Aug 2018 14:18:27 +0000 Subject: [Starlingx-discuss] Questions about custom setting and mac filter In-Reply-To: <2EE296D083DF2940BF4EBB91D39BB89F3BBCE4F1@shsmsx102.ccr.corp.intel.com> References: <2EE296D083DF2940BF4EBB91D39BB89F3BBCE4F1@shsmsx102.ccr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB575C8D@fmsmsx115.amr.corp.intel.com> + StarlingX list From: Guo, Ruijing Sent: Monday, August 6, 2018 10:05 AM To: Jolliffe, Ian Cc: Troyer, Dean ; Jones, Bruce E ; Chilcote Bacco, Derek A ; Le, Huifeng ; Xu, Chenjie ; Zhao, Forrest ; Qin, Kailun Subject: Questions about custom setting and mac filter Hi, Ian, We are investigating tenant based custom setting and mac filter (d875491, c647127, b189392e, 28d6f56). The custom settings extension feature(d875491) is to allow the admin to manage settings on a per tenant basis. Currently only mac filtering is available as a settable value. Mac filter is alternative implementation of neutron port security. 28d6f56 is to enable the Neutron port security extension for ML2 plugin. by default, it overrides the existing mac filtering functionality. We can drop tenant based customer setting and mac filter features and ONLY support neutron port security with OVSDPDK. What do you think? Thanks, -Ruijing -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 8 14:19:00 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 8 Aug 2018 14:19:00 +0000 Subject: [Starlingx-discuss] Questions about Provider MTU feature for StarlingX upstreaming In-Reply-To: References: Message-ID: <9A85D2917C58154C960D95352B22818BAB575CAF@fmsmsx115.amr.corp.intel.com> + list From: Qin, Kailun Sent: Monday, August 6, 2018 12:01 AM To: Jolliffe, Ian Cc: Troyer, Dean ; Jones, Bruce E ; Chilcote Bacco, Derek A ; Le, Huifeng ; Qin, Kailun ; Xu, Chenjie ; Zhao, Forrest ; Guo, Ruijing Subject: Questions about Provider MTU feature for StarlingX upstreaming Hi Ian, We are analyzing the provider MTU feature for StarlingX upstreaming, in which case the patch 021ae1a introduced providernet MTU and c647127 introduced the port granularity bindings for MTU. Since the upstream neutron already has the network granularity MTU implemented [1] and made it available to be created or updated [2], would you please kindly help check whether we need to upstream this feature? If so, would you please share some business use cases or user stories related with us? [1] https://review.openstack.org/#/c/480738/ [2] https://review.openstack.org/#/c/483091/ Let me know if any question. Thanks a lot! BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Aug 8 14:20:04 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 8 Aug 2018 16:20:04 +0200 Subject: [Starlingx-discuss] Launchpad (and other) Sandbox projects in the OSF toolset Message-ID: Hi, During the weekly call it came up to do one more round of evaluation with Launchpad due to advantages from hosting and possible later StoryBoard migration perspective. You can find information on the Sandbox project in Launchpad and other tools we host here: https://docs.openstack.org/contributors/code-and-documentation/sandbox-house-rules.html Please let me know if you have any questions. Thanks and Best Regards, Ildikó From cindy.xie at intel.com Wed Aug 8 14:38:04 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 8 Aug 2018 14:38:04 +0000 Subject: [Starlingx-discuss] Brent Rowsell commented on "upgrade libvirt-python to newer version support... References: <2FD5DDB5A04D264C80D42CA35194914F2B30BBA9@SHSMSX104.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E3243@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B30CD96@SHSMSX104.ccr.corp.intel.com> + mailing list @chuy, I think this is the libvirt-python build issue you were reported out in the community meeting just now. Thx. - cindy -----Original Message----- From: Xie, Cindy Sent: Wednesday, August 8, 2018 9:54 PM To: 'Rowsell, Brent' Cc: Jones, Bruce E ; Wold, Saul ; Troyer, Dean Subject: RE: Brent Rowsell commented on "upgrade libvirt-python to newer version support... Brent, Just had side conversation w/ Dean, he also agrees w/ you that libvirt and libvirt-python version doesn't need to be exact version in CentOS 7.5 dist. So it's nice that we can have a choice of upgrading to something newer. There is a 4.1 version in Fedora 28 dist, please see if you want to check and decide which version is the one we'd like to upgrade to. Thanks. - cindy -----Original Message----- From: Xie, Cindy Sent: Wednesday, August 8, 2018 9:27 PM To: 'Rowsell, Brent' Cc: Jones, Bruce E ; Wold, Saul Subject: RE: Brent Rowsell commented on "upgrade libvirt-python to newer version support... Brent, Agree that we shall go w/ end-to-end solution, and actually this is already started by Shuicheng Lin, who is doing all .src.rpm package review and found out its dependencies and do upgrade to what CentOS 7.5 required. As for libvirt and libvirt-python, we shall follow the same approach, but this is more complicated as we have a forked libvirt repo, and we did analysis for the patches on top of it. They needs to be reviewed and rebased to a new libvirt version. If there are patches we cherry-picked from a newer libvirt, then we can drop them. I can create related stories to make sure the procedures are more transparent. Thx. - cindy -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Wednesday, August 8, 2018 9:16 PM To: Xie, Cindy Cc: Jones, Bruce E ; Wold, Saul Subject: RE: Brent Rowsell commented on "upgrade libvirt-python to newer version support... We need an end to end plan to upgrade to 7.5 not a piece meal approach which is what we seem to be on. This is a recipe for breaking the load. We are likely to go to a newer version of libvirt compared to what is in centos 7.5 as 3.9 is quite old. The upversion of python-libvirt needs to be tied to libvirt upversion Brent -----Original Message----- From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, August 8, 2018 8:44 AM To: Rowsell, Brent Cc: Jones, Bruce E ; Wold, Saul Subject: RE: Brent Rowsell commented on "upgrade libvirt-python to newer version support... Hi, Brent, Understand the complexity. What if we upgrade libvirt-python and all its dependencies, this is part of CentOS 7.5 upgrade. I know we have many patches on libvirt because we forked a repo, I am thinking to rebase those patches to 3.9 as well. Let me know if there is any work going on already, but in my opinion this is something we shall include in Q4 release. Thx. - cindy -----Original Message----- From: storyboard at storyboard.openstack.org [mailto:storyboard at storyboard.openstack.org] Sent: Wednesday, August 8, 2018 8:04 PM To: Xie, Cindy Subject: Brent Rowsell commented on "upgrade libvirt-python to newer version support... Brent Rowsell added a comment to the story "upgrade libvirt-python to newer version supported by CentOS 7": "You cannot do this, This is tied to the version of libvirt that is used which is 3.5. Please do not make any changes here" URL: https://storyboard.openstack.org/#!/story/2003339 at 2018-08-08 12:04:02+00:00 From cindy.xie at intel.com Wed Aug 8 14:43:14 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 8 Aug 2018 14:43:14 +0000 Subject: [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the 3rd party list Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B30CDD0@SHSMSX104.ccr.corp.intel.com> All, We talked about the 9 RPM/sRPM in: https://git.openstack.org/cgit/openstack/stx-tools/tree/centos-mirror-tools/rpms_from_3rd_parties.lst I submitted 5 stories and please comments on the way to handle them: - https://storyboard.openstack.org/#!/story/2003339 - https://storyboard.openstack.org/#!/story/2003340 - https://storyboard.openstack.org/#!/story/2003341 - https://storyboard.openstack.org/#!/story/2003342 - https://storyboard.openstack.org/#!/story/2003357 @chuy is building the RPMs from Intel Koji, please add the status into the stories as well. So we can make decision if we want to upgrade or to re-build them. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From forrest.zhao at intel.com Wed Aug 8 14:26:42 2018 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Wed, 8 Aug 2018 14:26:42 +0000 Subject: [Starlingx-discuss] Request for review of several StartlingX neutron upstream features In-Reply-To: References: Message-ID: <6345119E91D5C843A93D64F498ACFA13699B01E3@SHSMSX101.ccr.corp.intel.com> + StarlingX list From: Qin, Kailun Sent: Friday, July 27, 2018 10:39 AM To: Jolliffe, Ian Cc: Le, Huifeng ; Qin, Kailun ; Xu, Chenjie ; Zhao, Forrest ; Guo, Ruijing ; Troyer, Dean ; Jones, Bruce E ; Rowsell, Brent ; Peters, Matt Subject: RE: Request for review of several StartlingX neutron upstream features Hi Ian, Thanks a lot for the feedbacks! The commit IDs related w/ the features cited previously are listed below. [1] system host management: 566b640, 4aa521e, baa8264, c6849f, e6669ec [2] router & DHCP rescheduling: 00dd7cb, 05aed14 [3] provider network management: 021ae1a To be specific for the dependencies, we have: 1. 021ae1a[3] depends on 566b640[1], i.e. providernet management depends on system host management. 2. 566b640 etc.[1] is related w/ 00dd7cb[2], i.e. system host management is related w/ router & dhcp rescheduling and fault management (stx component). For provider network management, the feature is great and it would be nice if we can get the community’s support to stand alone. However, based on the following facts: 1) It brings about quite a lot new resources, DBs and new type drivers etc. which need to functionally align with the current neutron core components; 2) It depends on system host management, while host is NOT a network attribute and host management may have some other db level and deployment level alternatives; 3) It also relates to (re)scheduling and fault management etc.; which might raise some community discussions. Thus, we have also made some plan-Bs in order to better accommodate the features to the upstream in case of slow socialization progress. As mentioned in my last mail, we’ve prepared: 1) Host bindings of provider networks: https://review.openstack.org/579410/. 2) Dynamic segment range management of self-service networks: https://review.openstack.org/579411. 3) Managed provider networks: a full-functionality providernet management which should support dynamic providernet add/edit + segmentation edit (based on proposal 2). 4) Rescheduling of routers and DHCP servers: please kindly refer to the attached. 5) Fault management: the drafted idea is to only expose interfaces in the upstream neutron, but move the reference implementation to another project like stx-fault. 6) Host management: the drafted idea is to separate it to another standalone project if we do need that. 7) Router & DHCP (re)scheduling will be part of 4). The host-based scheduler will be covered in 1) w/ those 2 community bugs addressed. Let me know if any further question or anything unclear. Great thanks! BR, Kailun From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Friday, July 27, 2018 9:36 AM To: Qin, Kailun > Cc: OTC NST Edge Staff >; Guo, Ruijing >; Troyer, Dean >; Jones, Bruce E >; Rowsell, Brent >; Peters, Matt > Subject: Re: Request for review of several StartlingX neutron upstream features Hi Kailun; I don’t have access to the Intel GitHub any longer – can you provide the commit ID. I agree we need to unravel the dependencies on the StarlingX code base in this case. Let’s see which bits can be separated. Can you please clarify the dependency you are identifying? Which of these commits have dependencies on StarlingX code? I think the Provider network management will need some socialization with the community and be able to stand alone. This simplifies network configuration, which is a very nice usability feature. [4] and [5] should proceed as they are bugs. Regards; Ian From: "Qin, Kailun" > Date: Wednesday, July 25, 2018 at 5:54 AM To: Ian Jolliffe > Cc: OTC NST Edge Staff >, "Guo, Ruijing" >, "Troyer, Dean" >, "Jones, Bruce E" > Subject: Request for review of several StartlingX neutron upstream features Hi Ian, This is Kailun from NST/OTC/SSG working on the StarlingX feature upstream to OpenStack neutron. Based on a previous analysis on the existing patches [1][2][3], we found that “provider network management”[3] has dependency on “system host management”[1]. And “system host management”[1] is also related w/ “router & DHCP rescheduling”[2] along with the StarlingX fault management. In this context, we’d like to propose the following features. 1. For provider network management, there would be: 1.1 Host bindings of provider networks 1.2 Dynamic segment range management of self-service networks 1.3 Managed provider networks 2. For system host management, we’ll have: 2.1 Rescheduling of routers and DHCP servers 2.2 Fault management 3. For router & DHCP rescheduling, they will be part of (2.1). As for the host-based scheduler, we’ll make sure it is covered in (1.1), in which case the upstream DHCP/L3 scheduler can work correctly based on the host information and w/ multiple mechanism drivers by having the related bugs [4][5] fixed. [1] system host management: 0004-host-system-host-management-extension, etc. [2] router & DHCP rescheduling: 0070-CGTS-8120-host-use-interface-bindings-for-dhcp-sched, 0019-US75599-Introduce-rescheduling-of-routers-and-DHCP-s, etc. [3] provider network management: 0005-pnet-extension-to-expose-providernet-management-at-a [4] https://bugs.launchpad.net/neutron/+bug/1732448 [5] https://bugs.launchpad.net/neutron/+bug/1732445 Would you please kindly help review the features proposed and provide your suggestions on their priorities? Let me know if you have any question. Thanks a lot! BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Wed Aug 8 14:45:21 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 8 Aug 2018 14:45:21 +0000 Subject: [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the 3rd party list In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B30CDD0@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B30CDD0@SHSMSX104.ccr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB1E36C6@ALA-MBD.corp.ad.wrs.com> Cindy, What is the objective of this initiative ? Thanks, Brent From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, August 8, 2018 10:43 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the 3rd party list All, We talked about the 9 RPM/sRPM in: https://git.openstack.org/cgit/openstack/stx-tools/tree/centos-mirror-tools/rpms_from_3rd_parties.lst I submitted 5 stories and please comments on the way to handle them: - https://storyboard.openstack.org/#!/story/2003339 - https://storyboard.openstack.org/#!/story/2003340 - https://storyboard.openstack.org/#!/story/2003341 - https://storyboard.openstack.org/#!/story/2003342 - https://storyboard.openstack.org/#!/story/2003357 @chuy is building the RPMs from Intel Koji, please add the status into the stories as well. So we can make decision if we want to upgrade or to re-build them. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Wed Aug 8 15:08:49 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Wed, 8 Aug 2018 15:08:49 +0000 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In-Reply-To: <9A85D2917C58154C960D95352B22818BAB575A46@fmsmsx115.amr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB575A46@fmsmsx115.amr.corp.intel.com> Message-ID: > I don’t think we need a sub-project called Use Cases but I'd sure like to see a > document on that topic. I think it makes sense to add this to the work list for > the Documentation team. In fact, I've already done so. :) > > https://storyboard.openstack.org/#!/story/2003331 Thanks Bruce: StarlingX Use Cases [0] Initial idea behind this page is to create proposal drafts for use cases under StarlingX and once there is a proof of concept it can be taken to OpenStack Edge Computing Group [1] [0] https://wiki.openstack.org/wiki/StarlingX/Use_Cases [1] https://wiki.openstack.org/wiki/Edge_Computing_Group From sgw at linux.intel.com Wed Aug 8 15:16:22 2018 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 8 Aug 2018 08:16:22 -0700 Subject: [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the 3rd party list In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB1E36C6@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B30CDD0@SHSMSX104.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E36C6@ALA-MBD.corp.ad.wrs.com> Message-ID: On 08/08/2018 07:45 AM, Rowsell, Brent wrote: > Cindy, > > What is the objective of this initiative ? > Brent, The idea as mentioned in the past is to remove this set of packages that are coming from non-CentOS repos. This way we can reduce where we are finding (or not finding) some set of RPMs and make the build more consistent. 6 of 9 are python based. Kubernetes has a proposed fix. influxdb is the most problematic as it's a binary being served from s3, yes signed, but it would be better if either it was built during the build process (short term) or StarlingX project built it (longer term). Sau! > Thanks, > > Brent > > *From:* Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Wednesday, August 8, 2018 10:43 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the > 3rd party list > > All, > > We talked about the 9 RPM/sRPM in: > https://git.openstack.org/cgit/openstack/stx-tools/tree/centos-mirror-tools/rpms_from_3rd_parties.lst > > I submitted 5 stories and please comments on the way to handle them: > > -https://storyboard.openstack.org/#!/story/2003339 > > -https://storyboard.openstack.org/#!/story/2003340 > > -https://storyboard.openstack.org/#!/story/2003341 > > -https://storyboard.openstack.org/#!/story/2003342 > > -https://storyboard.openstack.org/#!/story/2003357 > > @chuy is building the RPMs from Intel Koji, please add the status into > the stories as well. So we can make decision if we want to upgrade or to > re-build them. > > Thx. - cindy > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From cindy.xie at intel.com Wed Aug 8 15:19:16 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 8 Aug 2018 15:19:16 +0000 Subject: [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the 3rd party list In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB1E36C6@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B30CDD0@SHSMSX104.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E36C6@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B30CEBE@SHSMSX104.ccr.corp.intel.com> Brent, Let me explain the thinking process: - Initially, this 3rd party list was used to put the RPMs which are not yum downloadable. - There are actually two reasons why they are not downloadable using yum from CentOS: o They are no longer supported by CentOS LTS (like those 7.4 packages are disappearing and moving to 7.5) o They are actually not included in CentOS dist. - As Bruce said in the community meeting, they are the initial 9 RPMs that our build team is trying to build from source. - If there is a trustable version that we can download and rely on, then we really do not need to build. Actually, we'd like to limit our build service to only limit to those patches we have patches on. - Thus we did one-by-one analysis for those 9 RPM/sRPM to see if the list can be shorten. So please review the story and add your comments. I know that they may not all upgradable but let limit the problem to a smaller scope. Thx. - cindy From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Wednesday, August 8, 2018 10:45 PM To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the 3rd party list Cindy, What is the objective of this initiative ? Thanks, Brent From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, August 8, 2018 10:43 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the 3rd party list All, We talked about the 9 RPM/sRPM in: https://git.openstack.org/cgit/openstack/stx-tools/tree/centos-mirror-tools/rpms_from_3rd_parties.lst I submitted 5 stories and please comments on the way to handle them: - https://storyboard.openstack.org/#!/story/2003339 - https://storyboard.openstack.org/#!/story/2003340 - https://storyboard.openstack.org/#!/story/2003341 - https://storyboard.openstack.org/#!/story/2003342 - https://storyboard.openstack.org/#!/story/2003357 @chuy is building the RPMs from Intel Koji, please add the status into the stories as well. So we can make decision if we want to upgrade or to re-build them. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Wed Aug 8 15:45:58 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 8 Aug 2018 15:45:58 +0000 Subject: [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the 3rd party list In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B30CEBE@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B30CDD0@SHSMSX104.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E36C6@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F2B30CEBE@SHSMSX104.ccr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB1E38CD@ALA-MBD.corp.ad.wrs.com> Cindy, Thanks for the background. Will add comments to the stories Brent From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, August 8, 2018 11:19 AM To: Rowsell, Brent ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the 3rd party list Brent, Let me explain the thinking process: - Initially, this 3rd party list was used to put the RPMs which are not yum downloadable. - There are actually two reasons why they are not downloadable using yum from CentOS: o They are no longer supported by CentOS LTS (like those 7.4 packages are disappearing and moving to 7.5) o They are actually not included in CentOS dist. - As Bruce said in the community meeting, they are the initial 9 RPMs that our build team is trying to build from source. - If there is a trustable version that we can download and rely on, then we really do not need to build. Actually, we'd like to limit our build service to only limit to those patches we have patches on. - Thus we did one-by-one analysis for those 9 RPM/sRPM to see if the list can be shorten. So please review the story and add your comments. I know that they may not all upgradable but let limit the problem to a smaller scope. Thx. - cindy From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Wednesday, August 8, 2018 10:45 PM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the 3rd party list Cindy, What is the objective of this initiative ? Thanks, Brent From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, August 8, 2018 10:43 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] rebuild or upgrade the 9 RPM/sRPM in the 3rd party list All, We talked about the 9 RPM/sRPM in: https://git.openstack.org/cgit/openstack/stx-tools/tree/centos-mirror-tools/rpms_from_3rd_parties.lst I submitted 5 stories and please comments on the way to handle them: - https://storyboard.openstack.org/#!/story/2003339 - https://storyboard.openstack.org/#!/story/2003340 - https://storyboard.openstack.org/#!/story/2003341 - https://storyboard.openstack.org/#!/story/2003342 - https://storyboard.openstack.org/#!/story/2003357 @chuy is building the RPMs from Intel Koji, please add the status into the stories as well. So we can make decision if we want to upgrade or to re-build them. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From kailun.qin at intel.com Wed Aug 8 16:01:29 2018 From: kailun.qin at intel.com (Qin, Kailun) Date: Wed, 8 Aug 2018 16:01:29 +0000 Subject: [Starlingx-discuss] Questions about Provider MTU feature for StarlingX upstreaming In-Reply-To: References: Message-ID: Hi Matt, Thanks for the explanation. One thing that brings a little confusion is the terminology “provider network”: · In StarlingX, the so-called “provider network” is more related w/ physical network, like a superset of “upstream provider networks”, which addresses the values that are currently stored in configuration file parameters only; · while in upstream neutron, what distinguishes provider networks from tenant networks is who (admin/user) actually creates them and how. Just correct me if I’m wrong. If my understanding is correct, yes, I agree with Ian’s response. We’ll work on removing the Neutron configuration file based approach to MTU management and exposing it via a RESTful API as the benefits are clear to us. Thanks again! BR, Kailun From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Wednesday, August 8, 2018 9:05 PM To: Qin, Kailun ; Jolliffe, Ian Cc: Troyer, Dean ; Jones, Bruce E ; Chilcote Bacco, Derek A ; Le, Huifeng ; Xu, Chenjie ; Zhao, Forrest ; Guo, Ruijing ; Rowsell, Brent Subject: RE: Questions about Provider MTU feature for StarlingX upstreaming HI Kailun, I’m not sure from your reply if you are agreeing or disagreeing with Ian’s response. The intent was to show that the business case for the provider network MTU configuration is to remove the Neutron configuration file based approach to MTU management and expose it via a RESTful API. This is a similar business case for the managed provider networks as a whole. It is understood that upstream neutron already supports MTU configuration at the tenant network level, but was trying to show that the provider network MTU configuration addresses the values that are current stored in configuration file parameters only. Regards, Matt From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Wednesday, August 08, 2018 1:34 AM To: Jolliffe, Ian Cc: Troyer, Dean; Jones, Bruce E; Chilcote Bacco, Derek A; Le, Huifeng; Xu, Chenjie; Zhao, Forrest; Guo, Ruijing; Rowsell, Brent; Peters, Matt Subject: RE: Questions about Provider MTU feature for StarlingX upstreaming Hi Ian, Thanks for the feedback. The upstream neutron supports: • Set/modify a specific MTU on a (provider/tenant) network via REST API. This was introduced via “net-mtu-writable” API extension [1][2]. The requested MTU will work together w/ the MTU configuration options (global_physnet_mtu, physical_network_mtus and path_mtu) to configure the network MTU [3]. However, it does *NOT* support: • Dynamic MTU configuration options (global_physnet_mtu, physical_network_mtus and path_mtu) set/modify via REST API. [1] https://bugs.launchpad.net/neutron/+bug/1671634 [2] https://review.openstack.org/#/c/483518/ [3] https://docs.openstack.org/neutron/latest/admin/config-mtu.html So with the current upstream neutron implementation, the MTU configurations options still play a part at the deployment level. They serve as global maximum permissible and default values for network MTUs (of different network types). Meanwhile, neutron does support a dynamic alternative for users to set/modify specific MTUs across their networks. What do you think? Any use case that the current neutron is not able to cover? Let me know if anything unclear. Great thanks! BR, Kailun From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Wednesday, August 8, 2018 9:52 AM To: Qin, Kailun > Cc: Troyer, Dean >; Jones, Bruce E >; Chilcote Bacco, Derek A >; Le, Huifeng >; Xu, Chenjie >; Zhao, Forrest >; Guo, Ruijing >; Rowsell, Brent >; Peters, Matt > Subject: Re: Questions about Provider MTU feature for StarlingX upstreaming Hi Kailun; The provider network MTU goes along with the business case for managed provider networks. The feature comparison should not be between provider network MTUs and tenant network MTUs, but more about whether we support MTU values via provider network configuration (REST API). The comparable feature in OpenStack is the support for the ML2 global_physnet_mtu, physical_network_mtus and path_mtu configuration options. https://blueprints.launchpad.net/neutron/+spec/mtu-selection-and-advertisement Regards; Ian From: "Qin, Kailun" > Date: Monday, August 6, 2018 at 3:01 AM To: Ian Jolliffe > Cc: "Troyer, Dean" >, "Jones, Bruce E" >, "Chilcote Bacco, Derek A" >, "Le, Huifeng" >, "Qin, Kailun" >, "Xu, Chenjie" >, "Zhao, Forrest" >, "Guo, Ruijing" > Subject: Questions about Provider MTU feature for StarlingX upstreaming Hi Ian, We are analyzing the provider MTU feature for StarlingX upstreaming, in which case the patch 021ae1a introduced providernet MTU and c647127 introduced the port granularity bindings for MTU. Since the upstream neutron already has the network granularity MTU implemented [1] and made it available to be created or updated [2], would you please kindly help check whether we need to upstream this feature? If so, would you please share some business use cases or user stories related with us? [1] https://review.openstack.org/#/c/480738/ [2] https://review.openstack.org/#/c/483091/ Let me know if any question. Thanks a lot! BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Wed Aug 8 16:13:00 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 8 Aug 2018 11:13:00 -0500 Subject: [Starlingx-discuss] StarlingX Documentation Initial Template In-Reply-To: <9B07CC56-51EF-4701-8837-9B97131D7B92@windriver.com> References: <9B07CC56-51EF-4701-8837-9B97131D7B92@windriver.com> Message-ID: On Wed, Aug 8, 2018 at 6:13 AM, Waines, Greg wrote: > ( I’m assuming that it will NOT be API documentation, I see discussions on > that elsewhere. And would be located at > https://developer.openstack.org/api-ref/starlingx/ ??? ) The actual location has not been finalized yet. I imagine it to be something like docs.starlingx.io. > so I’ll volunteer on being a core reviewer for this StarlingX Documentation > ... please add me to the appropriate email lists. You're on it! We are trying really hard to stay on a single list and use subject tags for filtering. dt -- Dean Troyer dtroyer at gmail.com From dtroyer at gmail.com Wed Aug 8 16:16:32 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 8 Aug 2018 11:16:32 -0500 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB575A46@fmsmsx115.amr.corp.intel.com> Message-ID: On Wed, Aug 8, 2018 at 10:08 AM, Arce Moreno, Abraham wrote: > Initial idea behind this page is to create proposal drafts for use cases under StarlingX and > once there is a proof of concept it can be taken to OpenStack Edge Computing Group [1] Are you starting with the use cases that ECG has already developed? Do we have any that vary a lot from those? dt -- Dean Troyer dtroyer at gmail.com From scottx.rifenbark at intel.com Wed Aug 8 16:23:32 2018 From: scottx.rifenbark at intel.com (Rifenbark, ScottX) Date: Wed, 8 Aug 2018 16:23:32 +0000 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In-Reply-To: <9A85D2917C58154C960D95352B22818BAB575A46@fmsmsx115.amr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB575A46@fmsmsx115.amr.corp.intel.com> Message-ID: Hi, Is there a trick to displaying the storyboard links? Still unable to get these links to resolve. Scott -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Tuesday, August 7, 2018 3:48 PM To: Arce Moreno, Abraham ; Khalil, Ghada ; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] StarlingX Release Sub-Projects I don’t think we need a sub-project called Use Cases but I'd sure like to see a document on that topic. I think it makes sense to add this to the work list for the Documentation team. In fact, I've already done so. :) https://storyboard.openstack.org/#!/story/2003331 brucej -----Original Message----- From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] Sent: Tuesday, August 7, 2018 10:43 AM To: Khalil, Ghada ; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] StarlingX Release Sub-Projects > In the F2F meeting today, we worked jointly to define sub-project > teams to assist with bottom-up planning. > > https://ethercalc.openstack.org/ctjc7vlbphm1 > > (also linked from the main StarlingX wiki page) Ghada, Staff, Is it possible we can create a sub-project team called "Use Cases"? I understand this is a low priority task however bringing some proof of concepts early will allow our community to bring another level of validation. _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Matt.Peters at windriver.com Wed Aug 8 16:32:50 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Wed, 8 Aug 2018 16:32:50 +0000 Subject: [Starlingx-discuss] Questions about Provider MTU feature for StarlingX upstreaming In-Reply-To: References: Message-ID: Hi Kailun, I’m not aware of provider networks being referred to as “admin” networks. The provider network terminology comes from the neutron extension (“providernet”) that defines the attributes for the physical network (“provider:network_type”, “provider:physical_network”, etc). I agree that the StarlingX definition is of greater scope since it also covers the related configuration file values. The only distinction that I am aware of between “admin” and “tenant” for provider networks is that they are only exposed to a user with administrative rights (enforced by policy). I hope that clarifies what I refer to as a provider network. Regards, Matt From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Wednesday, August 08, 2018 12:01 PM To: Peters, Matt; Jolliffe, Ian; starlingx-discuss at lists.starlingx.io Cc: Troyer, Dean; Jones, Bruce E; Chilcote Bacco, Derek A; Le, Huifeng; Xu, Chenjie; Zhao, Forrest; Guo, Ruijing; Rowsell, Brent Subject: RE: Questions about Provider MTU feature for StarlingX upstreaming Hi Matt, Thanks for the explanation. One thing that brings a little confusion is the terminology “provider network”: • In StarlingX, the so-called “provider network” is more related w/ physical network, like a superset of “upstream provider networks”, which addresses the values that are currently stored in configuration file parameters only; • while in upstream neutron, what distinguishes provider networks from tenant networks is who (admin/user) actually creates them and how. Just correct me if I’m wrong. If my understanding is correct, yes, I agree with Ian’s response. We’ll work on removing the Neutron configuration file based approach to MTU management and exposing it via a RESTful API as the benefits are clear to us. Thanks again! BR, Kailun From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Wednesday, August 8, 2018 9:05 PM To: Qin, Kailun ; Jolliffe, Ian Cc: Troyer, Dean ; Jones, Bruce E ; Chilcote Bacco, Derek A ; Le, Huifeng ; Xu, Chenjie ; Zhao, Forrest ; Guo, Ruijing ; Rowsell, Brent Subject: RE: Questions about Provider MTU feature for StarlingX upstreaming HI Kailun, I’m not sure from your reply if you are agreeing or disagreeing with Ian’s response. The intent was to show that the business case for the provider network MTU configuration is to remove the Neutron configuration file based approach to MTU management and expose it via a RESTful API. This is a similar business case for the managed provider networks as a whole. It is understood that upstream neutron already supports MTU configuration at the tenant network level, but was trying to show that the provider network MTU configuration addresses the values that are current stored in configuration file parameters only. Regards, Matt From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Wednesday, August 08, 2018 1:34 AM To: Jolliffe, Ian Cc: Troyer, Dean; Jones, Bruce E; Chilcote Bacco, Derek A; Le, Huifeng; Xu, Chenjie; Zhao, Forrest; Guo, Ruijing; Rowsell, Brent; Peters, Matt Subject: RE: Questions about Provider MTU feature for StarlingX upstreaming Hi Ian, Thanks for the feedback. The upstream neutron supports: • Set/modify a specific MTU on a (provider/tenant) network via REST API. This was introduced via “net-mtu-writable” API extension [1][2]. The requested MTU will work together w/ the MTU configuration options (global_physnet_mtu, physical_network_mtus and path_mtu) to configure the network MTU [3]. However, it does *NOT* support: • Dynamic MTU configuration options (global_physnet_mtu, physical_network_mtus and path_mtu) set/modify via REST API. [1] https://bugs.launchpad.net/neutron/+bug/1671634 [2] https://review.openstack.org/#/c/483518/ [3] https://docs.openstack.org/neutron/latest/admin/config-mtu.html So with the current upstream neutron implementation, the MTU configurations options still play a part at the deployment level. They serve as global maximum permissible and default values for network MTUs (of different network types). Meanwhile, neutron does support a dynamic alternative for users to set/modify specific MTUs across their networks. What do you think? Any use case that the current neutron is not able to cover? Let me know if anything unclear. Great thanks! BR, Kailun From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Wednesday, August 8, 2018 9:52 AM To: Qin, Kailun > Cc: Troyer, Dean >; Jones, Bruce E >; Chilcote Bacco, Derek A >; Le, Huifeng >; Xu, Chenjie >; Zhao, Forrest >; Guo, Ruijing >; Rowsell, Brent >; Peters, Matt > Subject: Re: Questions about Provider MTU feature for StarlingX upstreaming Hi Kailun; The provider network MTU goes along with the business case for managed provider networks. The feature comparison should not be between provider network MTUs and tenant network MTUs, but more about whether we support MTU values via provider network configuration (REST API). The comparable feature in OpenStack is the support for the ML2 global_physnet_mtu, physical_network_mtus and path_mtu configuration options. https://blueprints.launchpad.net/neutron/+spec/mtu-selection-and-advertisement Regards; Ian From: "Qin, Kailun" > Date: Monday, August 6, 2018 at 3:01 AM To: Ian Jolliffe > Cc: "Troyer, Dean" >, "Jones, Bruce E" >, "Chilcote Bacco, Derek A" >, "Le, Huifeng" >, "Qin, Kailun" >, "Xu, Chenjie" >, "Zhao, Forrest" >, "Guo, Ruijing" > Subject: Questions about Provider MTU feature for StarlingX upstreaming Hi Ian, We are analyzing the provider MTU feature for StarlingX upstreaming, in which case the patch 021ae1a introduced providernet MTU and c647127 introduced the port granularity bindings for MTU. Since the upstream neutron already has the network granularity MTU implemented [1] and made it available to be created or updated [2], would you please kindly help check whether we need to upstream this feature? If so, would you please share some business use cases or user stories related with us? [1] https://review.openstack.org/#/c/480738/ [2] https://review.openstack.org/#/c/483091/ Let me know if any question. Thanks a lot! BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Aug 8 16:50:35 2018 From: scott.little at windriver.com (Scott Little) Date: Wed, 8 Aug 2018 12:50:35 -0400 Subject: [Starlingx-discuss] build-pkg --parallel In-Reply-To: <93814834B4855241994F290E959305C752F62C88@SHSMSX103.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C752F62C88@SHSMSX103.ccr.corp.intel.com> Message-ID: <391da6b4-a96f-052f-3828-7da719d9e103@windriver.com> nr_inodes=0 is supplied by the mock prior to issuing the mount syscall. There seems to be a mismatch between the mock inside your docker (I assume) and the kernel you are running. On 18-08-06 10:58 PM, Liu, ZhipengS wrote: > > Hi Scott and all, > > I have an issue when I did parallel build and need your help > > It seems b1/b2/b3 could not mount to tmpfs.  Only b0 which not mount > to tmpfs can work. > > 00:09:08 ERROR: Command failed: > > 00:09:08 # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=5g > mock_chroot_tmpfs /localdisk/loadbuild/zhipengl/starlingx/std/mock/b1/root > > Root cause seems to be nr_inode=0, as I saw dmesg log as below. > > However, I could not find where or how I can change this nr_inode. > > [22719.688732] tmpfs: Bad value '0' for mount option 'nr_inodes' > > [22719.710907] tmpfs: Bad value '0' for mount option 'nr_inodes' > > [22726.037303] tmpfs: Bad value '0' for mount option 'nr_inodes' > > [22740.384578] tmpfs: Bad value '0' for mount option 'nr_inodes' > > [22740.385174] tmpfs: Bad value '0' for mount option 'nr_inodes' > > Thanks! > > Zhipeng > > *From:*Scott Little [mailto:scott.little at windriver.com] > *Sent:* 2018年8月1日3:01 > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] build-pkg --parallel > > I had a successful parallel build (aka build-pkgs --parallel) inside > the docker container.   ~1h45m on 24 core, 64G ram > > The prerequisite was a populated $MY_REPO/cgcs-tis-repo/dependancy-cache. > > Currently we only generate the cache *after* the build in the > 'generate-cgcs-tis-repo' step. *I'd like to see the cache stored in > git and updated regularly by 'official' builds.* > > Note: The cache doesn't have to be perfect, so a cache that is out of > date by a day or a week is still very useful. build-pkgs/mockchain > just needs a rough guide on build dependencies and potential > dependency loops. > > Scott > -------------- next part -------------- An HTML attachment was scrubbed... URL: From fernando.hernandez.gonzalez at intel.com Wed Aug 8 16:50:05 2018 From: fernando.hernandez.gonzalez at intel.com (Hernandez Gonzalez, Fernando) Date: Wed, 8 Aug 2018 16:50:05 +0000 Subject: [Starlingx-discuss] Virtual Multinode 2 controllers + 2 computes. Nova services are not up in Controller-1. Message-ID: <03D458D5BAFF6041973594B00B4E58CE59102835@fmsmsx101.amr.corp.intel.com> Does anybody have configured Virtual Multinode 2 controllers + 2 computes? I have a question regarding nova services on controller-1. I think this is more a question for windriver guy who have previous experience with TiC. Please if you have any idea let me know... Context: In horizon I can see both controllers Available one active other standby but I cannot see nova services up on controller-1. Ctl-0 and Ctl-1 in Horizon [cid:image002.jpg at 01D42F0D.EF89E8E0] Ctl-0 and Ctl-1 same Port Interfaces in Horizon [cid:image004.jpg at 01D42F0D.EF89E8E0] [cid:image011.jpg at 01D42F0D.EF89E8E0] nova service-list executed in Ctl-0 with no Ctl-1 output. [cid:image012.jpg at 01D42F0D.EF89E8E0] **Question: nova services will be up on controller-1 once I 1) locked Controller-0, and 2) Swact to controller-1? I did a research on "wr_titanium_cloud_installation_for_systems_with_controller_storage_1803.pdf", page 74, step 7; documentation just says controller-0 but nothing is shown for Controller-1. : ~(keystone_admin)$ nova service-list +------------------+--------------+--------+---------+-------+ ... | Binary | Host | Zone | Status | State | ... +------------------+--------------+--------+---------+-------+ ... | nova-conductor | controller-0 | int... | enabled | up | ... | nova-consoleauth | controller-0 | int... | enabled | up | ... | nova-scheduler | controller-0 | int... | enabled | up | ... Thanks. Fernando Hernandez Gonzalez Software Engineer Avenida del Bosque #1001 Col, El Bajío Zapopan, Jalisco MX, 45019 ____________________________________ Office: +52.33.16.45.01.34 inet 86450134 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 17813 bytes Desc: image002.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 23628 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image011.jpg Type: image/jpeg Size: 19976 bytes Desc: image011.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image012.jpg Type: image/jpeg Size: 45507 bytes Desc: image012.jpg URL: From Brent.Rowsell at windriver.com Wed Aug 8 17:25:05 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 8 Aug 2018 17:25:05 +0000 Subject: [Starlingx-discuss] Virtual Multinode 2 controllers + 2 computes. Nova services are not up in Controller-1. References: <03D458D5BAFF6041973594B00B4E58CE59102835@fmsmsx101.amr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB1E3D17@ALA-MBD.corp.ad.wrs.com> The nova services will only run on the active controller. They will be moved to the other controller on a swact or failover. Note you cannot lock the active controller, the workflow to lock that controller would be to swact followed by a lock Brent From: Hernandez Gonzalez, Fernando [mailto:fernando.hernandez.gonzalez at intel.com] Sent: Wednesday, August 8, 2018 12:50 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Virtual Multinode 2 controllers + 2 computes. Nova services are not up in Controller-1. Does anybody have configured Virtual Multinode 2 controllers + 2 computes? I have a question regarding nova services on controller-1. I think this is more a question for windriver guy who have previous experience with TiC. Please if you have any idea let me know... -------------- next part -------------- An HTML attachment was scrubbed... URL: From fernando.hernandez.gonzalez at intel.com Wed Aug 8 17:29:12 2018 From: fernando.hernandez.gonzalez at intel.com (Hernandez Gonzalez, Fernando) Date: Wed, 8 Aug 2018 17:29:12 +0000 Subject: [Starlingx-discuss] Virtual Multinode 2 controllers + 2 computes. Nova services are not up in Controller-1. In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB1E3D17@ALA-MBD.corp.ad.wrs.com> References: <03D458D5BAFF6041973594B00B4E58CE59102835@fmsmsx101.amr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E3D17@ALA-MBD.corp.ad.wrs.com> Message-ID: <03D458D5BAFF6041973594B00B4E58CE591028A5@fmsmsx101.amr.corp.intel.com> Thanks guys! Fernando Hernandez Gonzalez Software Engineer Avenida del Bosque #1001 Col, El Bajío Zapopan, Jalisco MX, 45019 ____________________________________ Office: +52.33.16.45.01.34 inet 86450134 From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Wednesday, August 8, 2018 12:25 PM To: Hernandez Gonzalez, Fernando ; starlingx-discuss at lists.starlingx.io Subject: RE: Virtual Multinode 2 controllers + 2 computes. Nova services are not up in Controller-1. The nova services will only run on the active controller. They will be moved to the other controller on a swact or failover. Note you cannot lock the active controller, the workflow to lock that controller would be to swact followed by a lock Brent From: Hernandez Gonzalez, Fernando [mailto:fernando.hernandez.gonzalez at intel.com] Sent: Wednesday, August 8, 2018 12:50 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Virtual Multinode 2 controllers + 2 computes. Nova services are not up in Controller-1. Does anybody have configured Virtual Multinode 2 controllers + 2 computes? I have a question regarding nova services on controller-1. I think this is more a question for windriver guy who have previous experience with TiC. Please if you have any idea let me know... -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Wed Aug 8 18:07:13 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Wed, 8 Aug 2018 18:07:13 +0000 Subject: [Starlingx-discuss] StarlingX Release Sub-Projects In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA403D2D@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB575A46@fmsmsx115.amr.corp.intel.com> Message-ID: > On Wed, Aug 8, 2018 at 10:08 AM, Arce Moreno, Abraham > wrote: > > Initial idea behind this page is to create proposal drafts for use > > cases under StarlingX and once there is a proof of concept it can be > > taken to OpenStack Edge Computing Group [1] > > Are you starting with the use cases that ECG has already developed? Yes from a definition perspective. > Do we have any that vary a lot from those? The one I am working on is a use case with Unmanned Aerial Vehicles and Image / Video processing, aligned to at least the following 2 Use Cases from ECG: - Smart City as Software-Defined closed-loop System - Remote Surveillance / Security This is part of the content of a workshop [0] submitted as a CFP to Berlin Summit. http://bit.ly/EdgeComputingSolutions From Bin.Qian at windriver.com Wed Aug 8 17:19:22 2018 From: Bin.Qian at windriver.com (Qian, Bin) Date: Wed, 8 Aug 2018 17:19:22 +0000 Subject: [Starlingx-discuss] Virtual Multinode 2 controllers + 2 computes. Nova services are not up in Controller-1. In-Reply-To: <03D458D5BAFF6041973594B00B4E58CE59102835@fmsmsx101.amr.corp.intel.com> References: <03D458D5BAFF6041973594B00B4E58CE59102835@fmsmsx101.amr.corp.intel.com> Message-ID: In your horizon, it shows that the controller-1 is standby. Nova services run on the active controller, in your case controller-0. Bin From: Hernandez Gonzalez, Fernando [mailto:fernando.hernandez.gonzalez at intel.com] Sent: Wednesday, August 08, 2018 12:50 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Virtual Multinode 2 controllers + 2 computes. Nova services are not up in Controller-1. Does anybody have configured Virtual Multinode 2 controllers + 2 computes? I have a question regarding nova services on controller-1. I think this is more a question for windriver guy who have previous experience with TiC. Please if you have any idea let me know... Context: In horizon I can see both controllers Available one active other standby but I cannot see nova services up on controller-1. Ctl-0 and Ctl-1 in Horizon [cid:image001.jpg at 01D42F1A.6A8FA140] Ctl-0 and Ctl-1 same Port Interfaces in Horizon [cid:image003.jpg at 01D42F1A.6A8FA140] [cid:image005.jpg at 01D42F1A.6A8FA140] nova service-list executed in Ctl-0 with no Ctl-1 output. [cid:image006.jpg at 01D42F1A.6A8FA140] **Question: nova services will be up on controller-1 once I 1) locked Controller-0, and 2) Swact to controller-1? I did a research on "wr_titanium_cloud_installation_for_systems_with_controller_storage_1803.pdf", page 74, step 7; documentation just says controller-0 but nothing is shown for Controller-1. : ~(keystone_admin)$ nova service-list +------------------+--------------+--------+---------+-------+ ... | Binary | Host | Zone | Status | State | ... +------------------+--------------+--------+---------+-------+ ... | nova-conductor | controller-0 | int... | enabled | up | ... | nova-consoleauth | controller-0 | int... | enabled | up | ... | nova-scheduler | controller-0 | int... | enabled | up | ... Thanks. Fernando Hernandez Gonzalez Software Engineer Avenida del Bosque #1001 Col, El Bajío Zapopan, Jalisco MX, 45019 ____________________________________ Office: +52.33.16.45.01.34 inet 86450134 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.jpg Type: image/jpeg Size: 12028 bytes Desc: image001.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 16070 bytes Desc: image003.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 15393 bytes Desc: image005.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.jpg Type: image/jpeg Size: 32421 bytes Desc: image006.jpg URL: From erich.cordoba.malibran at intel.com Wed Aug 8 18:57:10 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Wed, 8 Aug 2018 18:57:10 +0000 Subject: [Starlingx-discuss] Effort to make zuul linters happy Message-ID: Hi all, Currently the zuul check only verifies linters and some of them fails. The zuul gate stage is disabled to make the integration possible. As it is desirable to use Zuul for gating code merges we need first to solve the linter problems in every repository. I've created a set of stories for each repository to start this effort. Now the only tasks created there is to go into Zuul logs and review what needs to be done to solve the issues and create the related tasks. In case someone wants to join this effort, here is the list of stories. https://storyboard.openstack.org/#!/story/2003359 : stx-clients https://storyboard.openstack.org/#!/story/2003360 : stx-config https://storyboard.openstack.org/#!/story/2003361 : stx-fault https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003364 : stx-gui https://storyboard.openstack.org/#!/story/2003365 : stx-ha https://storyboard.openstack.org/#!/story/2003366 : stx-integ https://storyboard.openstack.org/#!/story/2003367 : stx-manifest https://storyboard.openstack.org/#!/story/2003368 : stx-metal https://storyboard.openstack.org/#!/story/2003369 : stx-nfv https://storyboard.openstack.org/#!/story/2003370 : stx-root https://storyboard.openstack.org/#!/story/2003371 : stx-update https://storyboard.openstack.org/#!/story/2003372 : stx-upstream https://storyboard.openstack.org/#!/story/2003373 : stx-utils Thanks -Erich From Brent.Rowsell at windriver.com Wed Aug 8 19:02:34 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 8 Aug 2018 19:02:34 +0000 Subject: [Starlingx-discuss] Effort to make zuul linters happy In-Reply-To: References: Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB1E4016@ALA-MBD.corp.ad.wrs.com> These repos are being deleted. https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003373 : stx-utils Brent -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Wednesday, August 8, 2018 2:57 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Effort to make zuul linters happy Hi all, Currently the zuul check only verifies linters and some of them fails. The zuul gate stage is disabled to make the integration possible. As it is desirable to use Zuul for gating code merges we need first to solve the linter problems in every repository. I've created a set of stories for each repository to start this effort. Now the only tasks created there is to go into Zuul logs and review what needs to be done to solve the issues and create the related tasks. In case someone wants to join this effort, here is the list of stories. https://storyboard.openstack.org/#!/story/2003359 : stx-clients https://storyboard.openstack.org/#!/story/2003360 : stx-config https://storyboard.openstack.org/#!/story/2003361 : stx-fault https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003364 : stx-gui https://storyboard.openstack.org/#!/story/2003365 : stx-ha https://storyboard.openstack.org/#!/story/2003366 : stx-integ https://storyboard.openstack.org/#!/story/2003367 : stx-manifest https://storyboard.openstack.org/#!/story/2003368 : stx-metal https://storyboard.openstack.org/#!/story/2003369 : stx-nfv https://storyboard.openstack.org/#!/story/2003370 : stx-root https://storyboard.openstack.org/#!/story/2003371 : stx-update https://storyboard.openstack.org/#!/story/2003372 : stx-upstream https://storyboard.openstack.org/#!/story/2003373 : stx-utils Thanks -Erich _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Wed Aug 8 19:43:25 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 8 Aug 2018 19:43:25 +0000 Subject: [Starlingx-discuss] [Docs] StarlingX API Documentation Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4052F9@ALA-MBD.corp.ad.wrs.com> Bruce/Abraham, Can you target the completion of this work ( tracked via https://storyboard.openstack.org/#!/story/2002712 ) in the August/early September time-frame? I’ve tagged the Story for the stx.2018.10 release as I believe we want to have the API docs ready for the October release. Please let me know if you disagree. Thanks, Ghada From: Waines, Greg Sent: Wednesday, August 08, 2018 7:33 AM To: Khalil, Ghada; Jolliffe, Ian; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX API Documentation ... yeah forgot to mention the point that Ghada makes below, we currently use a very very out-dated approach to API Documentation ... i.e. Grizzly timeframe ... which uses maven and wadl files ... very ugly. This approach also had the API documentation centralized in one spot ... whereas now the API documentation seems to live (correctly) in the same git as the code. So we additionally need to convert our API Documentation to the current format being used for OpenStack API Doc and should distribute the API documentation appropriately to the appropriate StarlingX sub-projects. Greg. From: "Khalil, Ghada" > Date: Friday, August 3, 2018 at 6:41 PM To: "Jolliffe, Ian" >, "Arce Moreno, Abraham" >, "starlingx-discuss at lists.starlingx.io" > Cc: Greg Waines > Subject: RE: [Starlingx-discuss] StarlingX API Documentation Hi Abraham, (You may know this already) The StarlingX APIs (especially for sysinv) are currently documented at: https://git.openstack.org/cgit/openstack/stx-integ/tree/restapi-doc/restapi-doc You can use the content as a starting point. However, the mechanism used is outdated using maven and wadl files. So you need to use the more current approach. Greg Waines did some research on this. I strongly recommend you review with him when he's back from vacation (Tues Aug 7). Is this the story you are working on: https://storyboard.openstack.org/#!/story/2002712 ? If so, I'll add some of the details Greg has captured to the story. Regards, Ghada -----Original Message----- From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Friday, August 03, 2018 3:38 PM To: Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX API Documentation Hi Abraham; Thanks for kicking this off. On 2018-08-03, 12:40 PM, "Arce Moreno, Abraham" > wrote: A new goal in collaboration with our Tech Writing team is to document StarlingX APIs, so we did an initial research on what it means for StarlingX so your feedback is highly appreciated. [ OpenStack :: API ] For this activity we are initially be considering from API Documentation 2 separate efforts for each project: - API Guide .. the concepts in the API - API Ref .. a reference for the API Can we prioritize one over the other? We should do the concepts and the ref at the same time. The new OpenStack approach allows for tags to go in the code. Let's start with this work. [ StarlingX :: API ] It seems we can categorize the StarlingX APIs in 2: - Brand New APIs from StarlingX projects - Existing APIs from OpenStack projects StarlingX should not document other OpenStack API's, would their documentation not the source of truth? [ StarlingX :: API :: Brand New ] The projects falling into this category are the following: - [0] NFVI Orchestration - [1] High Availability/Process Monitoring/Service Management - [2] StarlingX System Configuration Management - [3] Horizon plugins for new StarlingX services - [4] Installation/Update/Patching/Backup/Restore Can we considered all the above to be included in this API documentation effort? Are we missing any other? All projects in the Flock should be included. I think there is a dependency on some of the code restructuring activities that are underway, we need to make sure these activities don't collide. Ian [ StarlingX :: API :: Existing ] All projects living under our starlingx-staging github organization [5] with upstream contributions [6] e.g. horizon, ceilometer, etc. We have not gone through a deeper review if we are modifying/adding new calls into the OpenStack projects however if we are and we need to document them: - There is official OpenStack API documentation, we can make references to them for the existing calls - What about the modifications/additions? Should we document them? What is the best place for this? We were talking in our weekly call about stx-docs is a good place for things without a repo, is this a good example? - Any easy way besides "find + grep" to get where those API modifications are happening? [ StarlingX :: API :: Unit Tests] OpenStack projects includes Unit Tests. Is this something we also need to consider for our StarlingX Bran New APIs? [0] http://git.openstack.org/cgit/openstack/stx-nfv/ [1] http://git.openstack.org/cgit/openstack/stx-ha/ [2] http://git.openstack.org/cgit/openstack/stx-config/ [3] http://git.openstack.org/cgit/openstack/stx-gui/ [4] http://git.openstack.org/cgit/openstack/stx-update/ [5] https://github.com/starlingx-staging [6] http://git.openstack.org/cgit/openstack/stx-upstream/tree/openstack _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 8 20:19:19 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 8 Aug 2018 20:19:19 +0000 Subject: [Starlingx-discuss] Updated initial governance proposal Message-ID: <9A85D2917C58154C960D95352B22818BAB576139@fmsmsx115.amr.corp.intel.com> I have updated our draft Governance proposal [0] based on feedback from Ildiko. I've borrowed some language and concepts from the Kata Containers project [1] but adapted it to the structure that was agreed to our F2F last week. One of the changes is in terminology. Instead of Team Leads and Primes, I'm using the titles Project Leads and Technical Leads respectively, to help highlight that we are initially splitting the role typically performed by a Project Technical Lead. Please review the draft and share any feedback you have to this thread or in the document directly. [0] https://etherpad.openstack.org/p/stx-governance [1] https://github.com/kata-containers/community Thanks! brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 8 20:22:43 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 8 Aug 2018 20:22:43 +0000 Subject: [Starlingx-discuss] Notes from the project call today Message-ID: <9A85D2917C58154C960D95352B22818BAB576150@fmsmsx115.amr.corp.intel.com> Agenda and notes for the 8/8 meeting Recording: https://zoom.us/recording/share/8ko3BAnVqYpI-DvUlf5VsMXNucFFGDLpsDeUaEMVuQ-wIumekTziMw * Key results from last week's F2F o Release schedule changed - now Oct'18, Mar'19, Jul'19. 3 per year o Build infra - we agreed that package versions can float except for packages that are patched o Sub-projects were finalized with Team Leads and Primes assigned. o Initial governance and TSC proposal discussed for input to the Foundation. Bruce to post the proposal to the list. DONE. o The beer in Ottawa is quite tasty. * Bug tracking options o External bugzilla instance (hosted by whom?) Github.com issue tracking? o Launchpad? This is the option recommended by the Foundation, who supports a migration tool from LP to SB if we want such in the future. Need a small group (Bruce, Ghada, anyone else) to evaluate LP. * Networking / Neutron upstream questions (in emails on the list) o tenant based custom setting and mac filter (d875491, c647127, b189392e, 28d6f56) o VXLAN provider network feature o provider MTU feature o provider network management feature (021ae1a) o router & DHCP rescheduling feature (00dd7cb, 05aed14) o system host management feature (566b640, 4aa521e, baa8264, c6849f, e6669ec) * Influx DB version - the one we're using is 2 years old. Can it float? If not, can we update it to something newer? Abe to take to the mailing list. * Matt's analysis of the Nova patches - please review for his view on the Nova changes https://docs.google.com/spreadsheets/d/1ugp1FVWMsu4x3KgrmPf7HGX8Mh1n80v-KVzweSDZunU/edit#gid=0 * Default install login and password - change? Yes but this is not easy. Work should be part of the config team. Bruce to create a story for them. * Build team status o Mirror script changes for per-company mirrors o Koji POC o Bruce/Saul/Chuy/Abe to discuss hosting the 9 packages in Clear Linux. o Internally running jobs to check the mirror scripts for updates. Reporting to internal Slack. Report to IRC? email? o Also running an internal job to do full builds, seeing 3 daily builds succeeding. * Release team status o Update from Release/Test teams on last milestone branch (2018.07) (Requested by Ghada) * Dashboard to be posted this week * Daily testing requested - can we run/report Sanity each day? AR Ada to discuss on the list o Feedback on Release Plan wiki update o AR Dean to pull stx.m2018.08 for the August milestone branch o Sample sub-project wiki page / tags * Other teams status o stx-gui - unknown -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Aug 8 20:28:52 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 8 Aug 2018 22:28:52 +0200 Subject: [Starlingx-discuss] ECG Keystone Edge Architectures meeting on Thursday (Aug 9) Message-ID: Hi, The first meeting on Keystone Edge Architectures within the Edge Computing Group is scheduled for this Thursday (August 9) at 6am PDT / 1300 UTC. If you are interested in joining the discussions you can find the call details here: https://wiki.openstack.org/wiki/Edge_Computing_Group#Keystone Our previous discussions about this topic are captured on this wiki: https://wiki.openstack.org/wiki/Keystone_edge_architectures Further information on Keystone federation testing and joint activities with the OPNFV Edge Cloud Project can be found here: https://etherpad.openstack.org/p/ECG_Keystone_Testing Please let me know if you have any questions. Thanks and Best Regards, Ildikó From jesus.ornelas.aguayo at intel.com Wed Aug 8 20:53:50 2018 From: jesus.ornelas.aguayo at intel.com (Ornelas Aguayo, Jesus) Date: Wed, 8 Aug 2018 20:53:50 +0000 Subject: [Starlingx-discuss] Update rpms_from_3rd_parties.lst versions Message-ID: <46AFD5C9-88E9-456E-B95B-5046F74E9B58@intel.com> Hi All, Does anyone know if there's a reason to use exactly the influxdb version 0.9.5.1? The package influxdb-0.9.5.1-1.x86_64.rpm was last modified on the 8th December of 2105, this package is currently in the http://git.openstack.org/cgit/openstack/stx-tools/tree/centos-mirror-tools/rpms_from_3rd_parties.lst list file that downloads the rpm packages using wget. In order to build this package from the tagged version it requires multiple patches to update the source code pointing to newer urls, clone , and checkout multiple repositories to the time the package was built; This effort can be done, but I was wondering if wouldn't be better to ask if we can use the latest version instead, because maybe in the end, we might be updating the package to the latest version. I'm also wondering if there's an specific reason to use the specific versions from all the packages from the list: libvirt-python-3.5.0-1 *Reason: must match libvirt version (input from the starlingX weekly meeting) novnc-0.6.2-1 python2-httpbin-0.5.0-6 python2-pytest-httpbin-0.2.3-6 python2-pytest-mock-1.6.0-2 python2-storops-0.4.7-2. python-gunicorn-19.7.1-1 kubernetes-1.10.0-1 influxdb-0.9.5.1-1 Regards, Jesus Ornelas Aguayo (chuy) From dtroyer at gmail.com Wed Aug 8 21:24:58 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 8 Aug 2018 16:24:58 -0500 Subject: [Starlingx-discuss] Notes from the project call today In-Reply-To: <9A85D2917C58154C960D95352B22818BAB576150@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB576150@fmsmsx115.amr.corp.intel.com> Message-ID: On Wed, Aug 8, 2018 at 3:22 PM, Jones, Bruce E wrote: > o stx-gui - unknown I touched base with Eddie this afternoon, he says that reviews can be submitted to stx-gui now. Specific questions about intent or plans should coordinate with him and via the ML. dt -- Dean Troyer dtroyer at gmail.com From bruce.e.jones at intel.com Wed Aug 8 21:36:02 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 8 Aug 2018 21:36:02 +0000 Subject: [Starlingx-discuss] [Docs] StarlingX API Documentation In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4052F9@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4052F9@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB5762C5@fmsmsx115.amr.corp.intel.com> Ghada, this makes sense. Docs team, can we hit this date? brucej From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Wednesday, August 8, 2018 12:43 PM To: Waines, Greg ; Jolliffe, Ian ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Docs] StarlingX API Documentation Bruce/Abraham, Can you target the completion of this work ( tracked via https://storyboard.openstack.org/#!/story/2002712 ) in the August/early September time-frame? I’ve tagged the Story for the stx.2018.10 release as I believe we want to have the API docs ready for the October release. Please let me know if you disagree. Thanks, Ghada From: Waines, Greg Sent: Wednesday, August 08, 2018 7:33 AM To: Khalil, Ghada; Jolliffe, Ian; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX API Documentation ... yeah forgot to mention the point that Ghada makes below, we currently use a very very out-dated approach to API Documentation ... i.e. Grizzly timeframe ... which uses maven and wadl files ... very ugly. This approach also had the API documentation centralized in one spot ... whereas now the API documentation seems to live (correctly) in the same git as the code. So we additionally need to convert our API Documentation to the current format being used for OpenStack API Doc and should distribute the API documentation appropriately to the appropriate StarlingX sub-projects. Greg. From: "Khalil, Ghada" > Date: Friday, August 3, 2018 at 6:41 PM To: "Jolliffe, Ian" >, "Arce Moreno, Abraham" >, "starlingx-discuss at lists.starlingx.io" > Cc: Greg Waines > Subject: RE: [Starlingx-discuss] StarlingX API Documentation Hi Abraham, (You may know this already) The StarlingX APIs (especially for sysinv) are currently documented at: https://git.openstack.org/cgit/openstack/stx-integ/tree/restapi-doc/restapi-doc You can use the content as a starting point. However, the mechanism used is outdated using maven and wadl files. So you need to use the more current approach. Greg Waines did some research on this. I strongly recommend you review with him when he's back from vacation (Tues Aug 7). Is this the story you are working on: https://storyboard.openstack.org/#!/story/2002712 ? If so, I'll add some of the details Greg has captured to the story. Regards, Ghada -----Original Message----- From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Friday, August 03, 2018 3:38 PM To: Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX API Documentation Hi Abraham; Thanks for kicking this off. On 2018-08-03, 12:40 PM, "Arce Moreno, Abraham" > wrote: A new goal in collaboration with our Tech Writing team is to document StarlingX APIs, so we did an initial research on what it means for StarlingX so your feedback is highly appreciated. [ OpenStack :: API ] For this activity we are initially be considering from API Documentation 2 separate efforts for each project: - API Guide .. the concepts in the API - API Ref .. a reference for the API Can we prioritize one over the other? We should do the concepts and the ref at the same time. The new OpenStack approach allows for tags to go in the code. Let's start with this work. [ StarlingX :: API ] It seems we can categorize the StarlingX APIs in 2: - Brand New APIs from StarlingX projects - Existing APIs from OpenStack projects StarlingX should not document other OpenStack API's, would their documentation not the source of truth? [ StarlingX :: API :: Brand New ] The projects falling into this category are the following: - [0] NFVI Orchestration - [1] High Availability/Process Monitoring/Service Management - [2] StarlingX System Configuration Management - [3] Horizon plugins for new StarlingX services - [4] Installation/Update/Patching/Backup/Restore Can we considered all the above to be included in this API documentation effort? Are we missing any other? All projects in the Flock should be included. I think there is a dependency on some of the code restructuring activities that are underway, we need to make sure these activities don't collide. Ian [ StarlingX :: API :: Existing ] All projects living under our starlingx-staging github organization [5] with upstream contributions [6] e.g. horizon, ceilometer, etc. We have not gone through a deeper review if we are modifying/adding new calls into the OpenStack projects however if we are and we need to document them: - There is official OpenStack API documentation, we can make references to them for the existing calls - What about the modifications/additions? Should we document them? What is the best place for this? We were talking in our weekly call about stx-docs is a good place for things without a repo, is this a good example? - Any easy way besides "find + grep" to get where those API modifications are happening? [ StarlingX :: API :: Unit Tests] OpenStack projects includes Unit Tests. Is this something we also need to consider for our StarlingX Bran New APIs? [0] http://git.openstack.org/cgit/openstack/stx-nfv/ [1] http://git.openstack.org/cgit/openstack/stx-ha/ [2] http://git.openstack.org/cgit/openstack/stx-config/ [3] http://git.openstack.org/cgit/openstack/stx-gui/ [4] http://git.openstack.org/cgit/openstack/stx-update/ [5] https://github.com/starlingx-staging [6] http://git.openstack.org/cgit/openstack/stx-upstream/tree/openstack _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From james at openstack.org Wed Aug 8 21:44:57 2018 From: james at openstack.org (James Cole) Date: Wed, 8 Aug 2018 14:44:57 -0700 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> Message-ID: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox ) shows two color combinations sampled from colors from this starling image . The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: StarlingX_Logo_Colors.pdf Type: application/pdf Size: 7681412 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 8 22:01:49 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 8 Aug 2018 22:01:49 +0000 Subject: [Starlingx-discuss] [Docs] Weekly Docs and Infrastructure call Message-ID: <9A85D2917C58154C960D95352B22818BAB5762FE@fmsmsx115.amr.corp.intel.com> The Docs & Infra team will be holding a weekly call at 12:30 PST / 1930 UTC on Wednesdays each week. All are welcome. Details are on the wiki [0], with notes and agenda on an Etherpad[1]. Docs team, please add this call to your calendars! brucej [0] https://wiki.openstack.org/wiki/StarlingX/Docs_and_Infra [1] https://etherpad.openstack.org/p/stx-documentation -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 8 22:04:05 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 8 Aug 2018 22:04:05 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: <9A85D2917C58154C960D95352B22818BAB576315@fmsmsx115.amr.corp.intel.com> +1 to the purple and blue, it looks amazing! From: James Cole [mailto:james at openstack.org] Sent: Wednesday, August 8, 2018 2:45 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From hazzim.i.anaya.casas at intel.com Wed Aug 8 22:07:54 2018 From: hazzim.i.anaya.casas at intel.com (Anaya casas, Hazzim I) Date: Wed, 8 Aug 2018 22:07:54 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: <2DFB1BD3-D9F6-47D3-8953-2C108FEFDA69@intel.com> +1 for purple. Best regards. On Aug 8, 2018, at 16:44, James Cole > wrote: Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From david.a.cobbley at intel.com Wed Aug 8 22:08:42 2018 From: david.a.cobbley at intel.com (Cobbley, David A) Date: Wed, 8 Aug 2018 22:08:42 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: +1 purple/blue – very appealing From: James Cole [mailto:james at openstack.org] Sent: Wednesday, August 8, 2018 2:45 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Wed Aug 8 22:11:06 2018 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 8 Aug 2018 15:11:06 -0700 Subject: [Starlingx-discuss] Update rpms_from_3rd_parties.lst versions In-Reply-To: <46AFD5C9-88E9-456E-B95B-5046F74E9B58@intel.com> References: <46AFD5C9-88E9-456E-B95B-5046F74E9B58@intel.com> Message-ID: <41ae8487-2009-c777-849f-7ca111813a95@linux.intel.com> On 08/08/2018 01:53 PM, Ornelas Aguayo, Jesus wrote: > Hi All, > > Does anyone know if there's a reason to use exactly the influxdb version 0.9.5.1? > > The package influxdb-0.9.5.1-1.x86_64.rpm was last modified on the 8th December of 2105, this package is currently in the http://git.openstack.org/cgit/openstack/stx-tools/tree/centos-mirror-tools/rpms_from_3rd_parties.lst list file that downloads the rpm packages using wget. > > In order to build this package from the tagged version it requires multiple patches to update the source code pointing to newer urls, clone , and checkout multiple repositories to the time the package was built; This effort can be done, but I was wondering if wouldn't be better to ask if we can use the latest version instead, because maybe in the end, we might be updating the package to the latest version. > > I'm also wondering if there's an specific reason to use the specific versions from all the packages from the list: > Let's capture this in the existing StoryBoard entries that Cindy made recently > libvirt-python-3.5.0-1 *Reason: must match libvirt version (input from the starlingX weekly meeting) https://storyboard.openstack.org/#!/story/2003339 > novnc-0.6.2-1 https://storyboard.openstack.org/#!/story/2003340 > python2-httpbin-0.5.0-6 > python2-pytest-httpbin-0.2.3-6 > python2-pytest-mock-1.6.0-2 > python2-storops-0.4.7-2. > python-gunicorn-19.7.1-1 For the five python packages above: https://storyboard.openstack.org/#!/story/2003341 > kubernetes-1.10.0-1 https://storyboard.openstack.org/#!/story/2003342 > influxdb-0.9.5.1-1 > https://storyboard.openstack.org/#!/story/2003357 > > Regards, > Jesus Ornelas Aguayo (chuy) > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From fernando.hernandez.gonzalez at intel.com Wed Aug 8 22:11:57 2018 From: fernando.hernandez.gonzalez at intel.com (Hernandez Gonzalez, Fernando) Date: Wed, 8 Aug 2018 22:11:57 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: <03D458D5BAFF6041973594B00B4E58CE59102A4C@fmsmsx101.amr.corp.intel.com> +1 purple/blue. Fernando Hernandez Gonzalez Software Engineer Avenida del Bosque #1001 Col, El Bajío Zapopan, Jalisco MX, 45019 ____________________________________ Office: +52.33.16.45.01.34 inet 86450134 From: James Cole [mailto:james at openstack.org] Sent: Wednesday, August 8, 2018 4:45 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From glenn.seiler at windriver.com Wed Aug 8 22:21:33 2018 From: glenn.seiler at windriver.com (Seiler, Glenn) Date: Wed, 8 Aug 2018 22:21:33 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: +1 Purple/blue. When do we get the tshirts? From: James Cole [mailto:james at openstack.org] Sent: Wednesday, August 08, 2018 2:45 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 8 22:38:28 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 8 Aug 2018 22:38:28 +0000 Subject: [Starlingx-discuss] Effort to make zuul linters happy In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB1E4016@ALA-MBD.corp.ad.wrs.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB1E4016@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB576414@fmsmsx115.amr.corp.intel.com> 2003361 is a story for zuul linters in stx-fault, let's keep that one. It's shown twice in the list below. I've closed 2003363 and 2003373. The story for stx-gplv2 is 2003362, which I have also just closed. brucej -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Wednesday, August 8, 2018 12:03 PM To: Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Effort to make zuul linters happy These repos are being deleted. https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003373 : stx-utils Brent -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Wednesday, August 8, 2018 2:57 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Effort to make zuul linters happy Hi all, Currently the zuul check only verifies linters and some of them fails. The zuul gate stage is disabled to make the integration possible. As it is desirable to use Zuul for gating code merges we need first to solve the linter problems in every repository. I've created a set of stories for each repository to start this effort. Now the only tasks created there is to go into Zuul logs and review what needs to be done to solve the issues and create the related tasks. In case someone wants to join this effort, here is the list of stories. https://storyboard.openstack.org/#!/story/2003359 : stx-clients https://storyboard.openstack.org/#!/story/2003360 : stx-config https://storyboard.openstack.org/#!/story/2003361 : stx-fault https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003364 : stx-gui https://storyboard.openstack.org/#!/story/2003365 : stx-ha https://storyboard.openstack.org/#!/story/2003366 : stx-integ https://storyboard.openstack.org/#!/story/2003367 : stx-manifest https://storyboard.openstack.org/#!/story/2003368 : stx-metal https://storyboard.openstack.org/#!/story/2003369 : stx-nfv https://storyboard.openstack.org/#!/story/2003370 : stx-root https://storyboard.openstack.org/#!/story/2003371 : stx-update https://storyboard.openstack.org/#!/story/2003372 : stx-upstream https://storyboard.openstack.org/#!/story/2003373 : stx-utils Thanks -Erich _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Wed Aug 8 23:07:31 2018 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 8 Aug 2018 16:07:31 -0700 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <9A85D2917C58154C960D95352B22818BAB576315@fmsmsx115.amr.corp.intel.com> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> <9A85D2917C58154C960D95352B22818BAB576315@fmsmsx115.amr.corp.intel.com> Message-ID: <07008513-9d5b-30d1-3b48-008bbbdd89eb@linux.intel.com> Yup +1 to Purple/Blue Sau! On 08/08/2018 03:04 PM, Jones, Bruce E wrote: > +1 to the purple and blue, it looks amazing! > > *From:*James Cole [mailto:james at openstack.org] > *Sent:* Wednesday, August 8, 2018 2:45 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] StarlingX Logo Concepts > > Hi everybody, > > Thanks for your feedback on the logos last week! Concept 1 was the clear > winner based on your comments, so I’ve been playing around with colors. > The attached document (also on Dropbox > ) > shows two color combinations sampled from colors from this starling > image > . > The logo works in both one or two colors and on dark or light > backgrounds. There are a few mockups in the document as well. > > Please let me know if you like the purple or yellow versions better, or > if you think we should try any other colors (these don’t have to be the > final colors if you aren’t drawn to either of the options). > > Thank you! > > *James Cole* > > Graphic Designer > > OpenStack Foundation > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Brent.Rowsell at windriver.com Wed Aug 8 23:26:31 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 8 Aug 2018 23:26:31 +0000 Subject: [Starlingx-discuss] Update rpms_from_3rd_parties.lst versions In-Reply-To: <46AFD5C9-88E9-456E-B95B-5046F74E9B58@intel.com> References: <46AFD5C9-88E9-456E-B95B-5046F74E9B58@intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB1E45FF@ALA-MBD.corp.ad.wrs.com> Please see the note I added to the story regarding influxdb Brent -----Original Message----- From: Ornelas Aguayo, Jesus [mailto:jesus.ornelas.aguayo at intel.com] Sent: Wednesday, August 8, 2018 4:54 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Update rpms_from_3rd_parties.lst versions Hi All, Does anyone know if there's a reason to use exactly the influxdb version 0.9.5.1? The package influxdb-0.9.5.1-1.x86_64.rpm was last modified on the 8th December of 2105, this package is currently in the http://git.openstack.org/cgit/openstack/stx-tools/tree/centos-mirror-tools/rpms_from_3rd_parties.lst list file that downloads the rpm packages using wget. In order to build this package from the tagged version it requires multiple patches to update the source code pointing to newer urls, clone , and checkout multiple repositories to the time the package was built; This effort can be done, but I was wondering if wouldn't be better to ask if we can use the latest version instead, because maybe in the end, we might be updating the package to the latest version. I'm also wondering if there's an specific reason to use the specific versions from all the packages from the list: libvirt-python-3.5.0-1 *Reason: must match libvirt version (input from the starlingX weekly meeting) novnc-0.6.2-1 python2-httpbin-0.5.0-6 python2-pytest-httpbin-0.2.3-6 python2-pytest-mock-1.6.0-2 python2-storops-0.4.7-2. python-gunicorn-19.7.1-1 kubernetes-1.10.0-1 influxdb-0.9.5.1-1 Regards, Jesus Ornelas Aguayo (chuy) _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Thu Aug 9 00:46:45 2018 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 9 Aug 2018 00:46:45 +0000 Subject: [Starlingx-discuss] Effort to make zuul linters happy In-Reply-To: <9A85D2917C58154C960D95352B22818BAB576414@fmsmsx115.amr.corp.intel.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB1E4016@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB576414@fmsmsx115.amr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA335B71@ALA-MBD.corp.ad.wrs.com> I've got an update to resolve bashate and pep8 warnings in stx-update, now out for review. This update also enables most of the pep8 errors that were being ignored, addressing the issues with those error types. I have left the zuul voting disabled for now, figuring that can be a separate update. https://review.openstack.org/#/c/590064/ -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 08, 2018 6:38 PM To: Rowsell, Brent; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Effort to make zuul linters happy 2003361 is a story for zuul linters in stx-fault, let's keep that one. It's shown twice in the list below. I've closed 2003363 and 2003373. The story for stx-gplv2 is 2003362, which I have also just closed. brucej -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Wednesday, August 8, 2018 12:03 PM To: Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Effort to make zuul linters happy These repos are being deleted. https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003373 : stx-utils Brent -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Wednesday, August 8, 2018 2:57 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Effort to make zuul linters happy Hi all, Currently the zuul check only verifies linters and some of them fails. The zuul gate stage is disabled to make the integration possible. As it is desirable to use Zuul for gating code merges we need first to solve the linter problems in every repository. I've created a set of stories for each repository to start this effort. Now the only tasks created there is to go into Zuul logs and review what needs to be done to solve the issues and create the related tasks. In case someone wants to join this effort, here is the list of stories. https://storyboard.openstack.org/#!/story/2003359 : stx-clients https://storyboard.openstack.org/#!/story/2003360 : stx-config https://storyboard.openstack.org/#!/story/2003361 : stx-fault https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003364 : stx-gui https://storyboard.openstack.org/#!/story/2003365 : stx-ha https://storyboard.openstack.org/#!/story/2003366 : stx-integ https://storyboard.openstack.org/#!/story/2003367 : stx-manifest https://storyboard.openstack.org/#!/story/2003368 : stx-metal https://storyboard.openstack.org/#!/story/2003369 : stx-nfv https://storyboard.openstack.org/#!/story/2003370 : stx-root https://storyboard.openstack.org/#!/story/2003371 : stx-update https://storyboard.openstack.org/#!/story/2003372 : stx-upstream https://storyboard.openstack.org/#!/story/2003373 : stx-utils Thanks -Erich _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Thu Aug 9 01:34:33 2018 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 8 Aug 2018 18:34:33 -0700 Subject: [Starlingx-discuss] Creating new packages for Initialization / Configuration files Message-ID: Brent, et al: There are a number of packages that contain modified configuration files that bring in alternate default files and in some cases modified initialization scripts. Currently there are puppet packages that do some configuration management. We could continue with puppet for these configurations that we want to disengage from the upstream patches, or we can use RPM package. Thoughts? Examples of configuration patches from stx-integ/base are: centos-release (issue files) iptables (iptables rules) dhcp vim (vimrc!) lighttp pam sanlock shadow sudo util-linux Regarding centos-release Issue files: As you saw today, I proposed removing the issue* files from a otherwise unmodified centos-release package, is there a reason that we need to restore those issue files for an Open Source OS Independent project? Those modified issue files contain legalize that seems appropriate for a commercial product, but not sure if makes sense for an Open Source project that a downstream OSV or other company would likely modify for their use anyway. Sau! From Don.Penney at windriver.com Thu Aug 9 01:40:44 2018 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 9 Aug 2018 01:40:44 +0000 Subject: [Starlingx-discuss] Creating new packages for Initialization / Configuration files In-Reply-To: References: Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA335B99@ALA-MBD.corp.ad.wrs.com> For many of these, using puppet templates will be a viable alternative. There may be cases where a change is needed during installation, and we'd have a couple of options there. In some cases, we may be able to package an override file. Alternatively, we could use the kickstarts to make changes during postinstall, if absolutely necessary. We'd need to look at them case by case to decide what the best option would be. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Wednesday, August 08, 2018 9:35 PM To: Rowsell, Brent; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Creating new packages for Initialization / Configuration files Brent, et al: There are a number of packages that contain modified configuration files that bring in alternate default files and in some cases modified initialization scripts. Currently there are puppet packages that do some configuration management. We could continue with puppet for these configurations that we want to disengage from the upstream patches, or we can use RPM package. Thoughts? Examples of configuration patches from stx-integ/base are: centos-release (issue files) iptables (iptables rules) dhcp vim (vimrc!) lighttp pam sanlock shadow sudo util-linux Regarding centos-release Issue files: As you saw today, I proposed removing the issue* files from a otherwise unmodified centos-release package, is there a reason that we need to restore those issue files for an Open Source OS Independent project? Those modified issue files contain legalize that seems appropriate for a commercial product, but not sure if makes sense for an Open Source project that a downstream OSV or other company would likely modify for their use anyway. Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From kailun.qin at intel.com Thu Aug 9 01:41:53 2018 From: kailun.qin at intel.com (Qin, Kailun) Date: Thu, 9 Aug 2018 01:41:53 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: +1 for purple/blue, awesome! BR, Kailun From: James Cole [mailto:james at openstack.org] Sent: Thursday, August 9, 2018 5:45 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Thu Aug 9 02:46:01 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Thu, 9 Aug 2018 02:46:01 +0000 Subject: [Starlingx-discuss] build-pkg --parallel In-Reply-To: <391da6b4-a96f-052f-3828-7da719d9e103@windriver.com> References: <93814834B4855241994F290E959305C752F62C88@SHSMSX103.ccr.corp.intel.com> <391da6b4-a96f-052f-3828-7da719d9e103@windriver.com> Message-ID: <93814834B4855241994F290E959305C752F723C8@SHSMSX104.ccr.corp.intel.com> Thanks Scott! I also thought it should be related to the kernel I’m running, I will check further. Zhipeng From: Scott Little [mailto:scott.little at windriver.com] Sent: 2018年8月9日 0:51 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] build-pkg --parallel nr_inodes=0 is supplied by the mock prior to issuing the mount syscall. There seems to be a mismatch between the mock inside your docker (I assume) and the kernel you are running. On 18-08-06 10:58 PM, Liu, ZhipengS wrote: Hi Scott and all, I have an issue when I did parallel build and need your help It seems b1/b2/b3 could not mount to tmpfs. Only b0 which not mount to tmpfs can work. 00:09:08 ERROR: Command failed: 00:09:08 # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=5g mock_chroot_tmpfs /localdisk/loadbuild/zhipengl/starlingx/std/mock/b1/root Root cause seems to be nr_inode=0, as I saw dmesg log as below. However, I could not find where or how I can change this nr_inode. [22719.688732] tmpfs: Bad value '0' for mount option 'nr_inodes' [22719.710907] tmpfs: Bad value '0' for mount option 'nr_inodes' [22726.037303] tmpfs: Bad value '0' for mount option 'nr_inodes' [22740.384578] tmpfs: Bad value '0' for mount option 'nr_inodes' [22740.385174] tmpfs: Bad value '0' for mount option 'nr_inodes' Thanks! Zhipeng From: Scott Little [mailto:scott.little at windriver.com] Sent: 2018年8月1日 3:01 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] build-pkg --parallel I had a successful parallel build (aka build-pkgs --parallel) inside the docker container. ~1h45m on 24 core, 64G ram The prerequisite was a populated $MY_REPO/cgcs-tis-repo/dependancy-cache. Currently we only generate the cache after the build in the 'generate-cgcs-tis-repo' step. I'd like to see the cache stored in git and updated regularly by 'official' builds. Note: The cache doesn't have to be perfect, so a cache that is out of date by a day or a week is still very useful. build-pkgs/mockchain just needs a rough guide on build dependencies and potential dependency loops. Scott -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuchang77 at chinaunicom.cn Thu Aug 9 03:03:31 2018 From: liuchang77 at chinaunicom.cn (liuchang77 at chinaunicom.cn) Date: Thu, 9 Aug 2018 11:03:31 +0800 Subject: [Starlingx-discuss] So many packages are missing Message-ID: <2018080911033096975810@chinaunicom.cn> Hi,all I am referring to the link https://wiki.openstack.org/wiki/StarlingX/Installation_Guide/Simplex for StarlingX environment deployment. And when I went to setp "Download Packages". There are more than 100 packages missing in the end. It's hard for me to download them manually. So I want to ask if there is an easy way to solve the problem. Or, can I drop this step and download the iso file directly? Thanks! Chang -------------- next part -------------- An HTML attachment was scrubbed... URL: From erich.cordoba.malibran at intel.com Thu Aug 9 03:28:49 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Thu, 9 Aug 2018 03:28:49 +0000 Subject: [Starlingx-discuss] So many packages are missing In-Reply-To: <2018080911033096975810@chinaunicom.cn> References: <2018080911033096975810@chinaunicom.cn> Message-ID: <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com> Hi Chang, You are right, download manually that amount of packages is a painful task. It’s not normal to have so many failures, my guess is that some of the repositories is down or blocked in the network. You can try to identify from where those packages are being downloaded to root cause the problem. Inside the mirror container you can run “yum makecache” to sync the repositories and try to see which repo failed. Also if you can share the list of failed packages we can help to identify what is the problematic repository. -Erich From abraham.arce.moreno at intel.com Thu Aug 9 05:07:26 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Thu, 9 Aug 2018 05:07:26 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: > The attached document (also on Dropbox > dl=0> ) shows two color combinations sampled from colors from this starling > image > rnis_hildebrandti_-Tanzania-8-2c.jpg> . The logo works in both one or two > colors and on dark or light backgrounds. There are a few mockups in the > document as well. > > Please let me know if you like the purple or yellow versions better, or if you > think we should try any other colors (these don’t have to be the final colors if > you aren’t drawn to either of the options). Purple! From abraham.arce.moreno at intel.com Thu Aug 9 05:38:19 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Thu, 9 Aug 2018 05:38:19 +0000 Subject: [Starlingx-discuss] StarlingX Documentation Initial Template In-Reply-To: References: <9B07CC56-51EF-4701-8837-9B97131D7B92@windriver.com> Message-ID: Thanks Greg, Dean! > > ( I’m assuming that it will NOT be API documentation, I see > > discussions on that elsewhere. And would be located at > > https://developer.openstack.org/api-ref/starlingx/ ??? ) > > The actual location has not been finalized yet. I imagine it to be something > like docs.starlingx.io. Taking stx-metal as our initial repository, please find both patches to review: [0] [Doc] OpenStack Documentation Contributor Guide [1] [Doc] OpenStack API Reference Guide Under Documentation and Infrastructure Sub-project [2] I am pointing to 2 wiki pages to document our analysis, learnings, tasks and updates: [3] StarlingX/Documentation [4] StarlingX/Developer_Guide/API_Documentation I will work on enabling API Guide and Release Notes for stx-metal to to enable the "gold commit pack" to easily take to the rest of the projects where needed. > > so I’ll volunteer on being a core reviewer for this StarlingX > > Documentation ... please add me to the appropriate email lists. > > You're on it! We are trying really hard to stay on a single list and use subject > tags for filtering. Welcome Greg! [0] https://review.openstack.org/#/c/590094 [1] https://review.openstack.org/#/c/590097 [2] https://wiki.openstack.org/wiki/StarlingX/Docs_and_Infra [3] https://wiki.openstack.org/wiki/StarlingX/Documentation [4] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation From kailun.qin at intel.com Wed Aug 8 23:18:40 2018 From: kailun.qin at intel.com (Qin, Kailun) Date: Wed, 8 Aug 2018 23:18:40 +0000 Subject: [Starlingx-discuss] Questions about Provider MTU feature for StarlingX upstreaming In-Reply-To: References: Message-ID: Hi Matt, Thanks a lot for the detailed clarification. It is clear enough to me now. BR, Kailun From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Thursday, August 9, 2018 12:33 AM To: Qin, Kailun ; starlingx-discuss at lists.starlingx.io Cc: Troyer, Dean ; Jones, Bruce E ; Chilcote Bacco, Derek A ; Jolliffe, Ian ; Le, Huifeng ; Xu, Chenjie ; Zhao, Forrest ; Guo, Ruijing ; Rowsell, Brent Subject: RE: Questions about Provider MTU feature for StarlingX upstreaming Hi Kailun, I’m not aware of provider networks being referred to as “admin” networks. The provider network terminology comes from the neutron extension (“providernet”) that defines the attributes for the physical network (“provider:network_type”, “provider:physical_network”, etc). I agree that the StarlingX definition is of greater scope since it also covers the related configuration file values. The only distinction that I am aware of between “admin” and “tenant” for provider networks is that they are only exposed to a user with administrative rights (enforced by policy). I hope that clarifies what I refer to as a provider network. Regards, Matt From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Wednesday, August 08, 2018 12:01 PM To: Peters, Matt; Jolliffe, Ian; starlingx-discuss at lists.starlingx.io Cc: Troyer, Dean; Jones, Bruce E; Chilcote Bacco, Derek A; Le, Huifeng; Xu, Chenjie; Zhao, Forrest; Guo, Ruijing; Rowsell, Brent Subject: RE: Questions about Provider MTU feature for StarlingX upstreaming Hi Matt, Thanks for the explanation. One thing that brings a little confusion is the terminology “provider network”: • In StarlingX, the so-called “provider network” is more related w/ physical network, like a superset of “upstream provider networks”, which addresses the values that are currently stored in configuration file parameters only; • while in upstream neutron, what distinguishes provider networks from tenant networks is who (admin/user) actually creates them and how. Just correct me if I’m wrong. If my understanding is correct, yes, I agree with Ian’s response. We’ll work on removing the Neutron configuration file based approach to MTU management and exposing it via a RESTful API as the benefits are clear to us. Thanks again! BR, Kailun From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Wednesday, August 8, 2018 9:05 PM To: Qin, Kailun >; Jolliffe, Ian > Cc: Troyer, Dean >; Jones, Bruce E >; Chilcote Bacco, Derek A >; Le, Huifeng >; Xu, Chenjie >; Zhao, Forrest >; Guo, Ruijing >; Rowsell, Brent > Subject: RE: Questions about Provider MTU feature for StarlingX upstreaming HI Kailun, I’m not sure from your reply if you are agreeing or disagreeing with Ian’s response. The intent was to show that the business case for the provider network MTU configuration is to remove the Neutron configuration file based approach to MTU management and expose it via a RESTful API. This is a similar business case for the managed provider networks as a whole. It is understood that upstream neutron already supports MTU configuration at the tenant network level, but was trying to show that the provider network MTU configuration addresses the values that are current stored in configuration file parameters only. Regards, Matt From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Wednesday, August 08, 2018 1:34 AM To: Jolliffe, Ian Cc: Troyer, Dean; Jones, Bruce E; Chilcote Bacco, Derek A; Le, Huifeng; Xu, Chenjie; Zhao, Forrest; Guo, Ruijing; Rowsell, Brent; Peters, Matt Subject: RE: Questions about Provider MTU feature for StarlingX upstreaming Hi Ian, Thanks for the feedback. The upstream neutron supports: • Set/modify a specific MTU on a (provider/tenant) network via REST API. This was introduced via “net-mtu-writable” API extension [1][2]. The requested MTU will work together w/ the MTU configuration options (global_physnet_mtu, physical_network_mtus and path_mtu) to configure the network MTU [3]. However, it does *NOT* support: • Dynamic MTU configuration options (global_physnet_mtu, physical_network_mtus and path_mtu) set/modify via REST API. [1] https://bugs.launchpad.net/neutron/+bug/1671634 [2] https://review.openstack.org/#/c/483518/ [3] https://docs.openstack.org/neutron/latest/admin/config-mtu.html So with the current upstream neutron implementation, the MTU configurations options still play a part at the deployment level. They serve as global maximum permissible and default values for network MTUs (of different network types). Meanwhile, neutron does support a dynamic alternative for users to set/modify specific MTUs across their networks. What do you think? Any use case that the current neutron is not able to cover? Let me know if anything unclear. Great thanks! BR, Kailun From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Wednesday, August 8, 2018 9:52 AM To: Qin, Kailun > Cc: Troyer, Dean >; Jones, Bruce E >; Chilcote Bacco, Derek A >; Le, Huifeng >; Xu, Chenjie >; Zhao, Forrest >; Guo, Ruijing >; Rowsell, Brent >; Peters, Matt > Subject: Re: Questions about Provider MTU feature for StarlingX upstreaming Hi Kailun; The provider network MTU goes along with the business case for managed provider networks. The feature comparison should not be between provider network MTUs and tenant network MTUs, but more about whether we support MTU values via provider network configuration (REST API). The comparable feature in OpenStack is the support for the ML2 global_physnet_mtu, physical_network_mtus and path_mtu configuration options. https://blueprints.launchpad.net/neutron/+spec/mtu-selection-and-advertisement Regards; Ian From: "Qin, Kailun" > Date: Monday, August 6, 2018 at 3:01 AM To: Ian Jolliffe > Cc: "Troyer, Dean" >, "Jones, Bruce E" >, "Chilcote Bacco, Derek A" >, "Le, Huifeng" >, "Qin, Kailun" >, "Xu, Chenjie" >, "Zhao, Forrest" >, "Guo, Ruijing" > Subject: Questions about Provider MTU feature for StarlingX upstreaming Hi Ian, We are analyzing the provider MTU feature for StarlingX upstreaming, in which case the patch 021ae1a introduced providernet MTU and c647127 introduced the port granularity bindings for MTU. Since the upstream neutron already has the network granularity MTU implemented [1] and made it available to be created or updated [2], would you please kindly help check whether we need to upstream this feature? If so, would you please share some business use cases or user stories related with us? [1] https://review.openstack.org/#/c/480738/ [2] https://review.openstack.org/#/c/483091/ Let me know if any question. Thanks a lot! BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: From kailun.qin at intel.com Wed Aug 8 23:28:18 2018 From: kailun.qin at intel.com (Qin, Kailun) Date: Wed, 8 Aug 2018 23:28:18 +0000 Subject: [Starlingx-discuss] Questions about VXLAN Provider Network feature for StarlingX upstreaming In-Reply-To: <8F5F129C-747B-4E0B-BE8F-B6506422BFDB@windriver.com> References: <8F5F129C-747B-4E0B-BE8F-B6506422BFDB@windriver.com> Message-ID: Hi Ian, Thanks a lot for your comments. I agree with the deprecation. Let’s focus on the managed provider network extensions and track the work required to make this change within StarlingX via StoryBoard. BR, Kailun From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Thursday, August 9, 2018 5:16 AM To: Qin, Kailun Cc: Troyer, Dean ; Jones, Bruce E ; Chilcote Bacco, Derek A ; Le, Huifeng ; Xu, Chenjie ; Zhao, Forrest ; Guo, Ruijing ; Rowsell, Brent ; Peters, Matt ; Khalil, Ghada Subject: Re: Questions about VXLAN Provider Network feature for StarlingX upstreaming HI Kailun; I think we should deprecate this functionality and align with upstream. We should remove the scoping of the VxLAN networks to physical networks and treat them as global. This would reduce the amount of changes required to the ML2 type managers, and ease integration / adoption of the managed provider network extensions. This item has a linkage to the provider network story or there is more work to do. We need a corresponding StoryBoard in StarlingX to track the work required to make this change within StarlingX. Regards; Ian From: "Qin, Kailun" > Date: Monday, August 6, 2018 at 3:03 AM To: Ian Jolliffe > Cc: "Troyer, Dean" >, "Jones, Bruce E" >, "Chilcote Bacco, Derek A" >, "Le, Huifeng" >, "Qin, Kailun" >, "Xu, Chenjie" >, "Zhao, Forrest" >, "Guo, Ruijing" > Subject: Questions about VXLAN Provider Network feature for StarlingX upstreaming Hi Ian, We are analyzing the VXLAN provider network feature for StarlingX upstreaming, in which case the patch 021ae1a firstly introduced VXLAN provider network and 509ea54, 1e368a3 added with the VXLAN dynamic/static mode. Different from StarlingX, the upstream neutron VXLAN provider networks do not support to be associated with physical networks. They assume that VXLAN creates overlay networks where they do not require the VNI space to be accessible by a particular interface on a node. Would you please kindly share some business use cases or user stories with us about the physical-network-constrained VXLAN provider network introduced? Let me know if any question. Thanks a lot! BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuchang77 at chinaunicom.cn Thu Aug 9 07:28:23 2018 From: liuchang77 at chinaunicom.cn (liuchang77 at chinaunicom.cn) Date: Thu, 9 Aug 2018 15:28:23 +0800 Subject: [Starlingx-discuss] So many packages are missing References: <2018080911033096975810@chinaunicom.cn>, <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com> Message-ID: <2018080915282283292923@chinaunicom.cn> Thanks,Erich, According to your advice, I ran "yum makecache" inside the container. Although it was slow, the task was complete with no error. Attachment 1 is my execution log. And attachment 2 is my missing packages' list. I also think that there are so many missing packages that are not normal. I suspect this is related to Chinese firewall. But I couldn't connect my vpn server in the container. I sincerely hope that you can give me some advice. Chang 刘 畅 云计算实验室 中国联合网络通信有限公司研究院 移动电话:18610741986 地址:北京市亦庄经济开发区北环东路1号 From: Cordoba Malibran, Erich Date: 2018-08-09 11:28 To: liuchang77 at chinaunicom.cn; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Hi Chang, You are right, download manually that amount of packages is a painful task. It’s not normal to have so many failures, my guess is that some of the repositories is down or blocked in the network. You can try to identify from where those packages are being downloaded to root cause the problem. Inside the mirror container you can run “yum makecache” to sync the repositories and try to see which repo failed. Also if you can share the list of failed packages we can help to identify what is the problematic repository. -Erich -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: my_log.txt Type: application/octet-stream Size: 14275 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: my_missing.lst Type: application/octet-stream Size: 3502 bytes Desc: not available URL: From shuicheng.lin at intel.com Thu Aug 9 08:05:23 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 9 Aug 2018 08:05:23 +0000 Subject: [Starlingx-discuss] So many packages are missing In-Reply-To: <2018080915282283292923@chinaunicom.cn> References: <2018080911033096975810@chinaunicom.cn>, <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com> <2018080915282283292923@chinaunicom.cn> Message-ID: <9700A18779F35F49AF027300A49E7C7655350671@SHSMSX101.ccr.corp.intel.com> Hi Chang, I try to check some package in the missing list. Can you access below link for the package? http://vault.centos.org/7.5.1804/os/Source/SPackages/bash-4.2.46-30.el7.src.rpm https://s3.amazonaws.com/influxdb/influxdb-0.9.5.1-1.x86_64.rpm If yes, I think it maybe network speed issue. You could re-run the download package script. And check whether the missing list be shorter or not. If not, you may need check the networking setting. Best Regards Shuicheng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Thursday, August 9, 2018 3:28 PM To: Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Thanks,Erich, According to your advice, I ran "yum makecache" inside the container. Although it was slow, the task was complete with no error. Attachment 1 is my execution log. And attachment 2 is my missing packages' list. I also think that there are so many missing packages that are not normal. I suspect this is related to Chinese firewall. But I couldn't connect my vpn server in the container. I sincerely hope that you can give me some advice. Chang ________________________________ 刘 畅 云计算实验室 中国联合网络通信有限公司研究院 移动电话:18610741986 地址:北京市亦庄经济开发区北环东路1号 From: Cordoba Malibran, Erich Date: 2018-08-09 11:28 To: liuchang77 at chinaunicom.cn; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Hi Chang, You are right, download manually that amount of packages is a painful task. It’s not normal to have so many failures, my guess is that some of the repositories is down or blocked in the network. You can try to identify from where those packages are being downloaded to root cause the problem. Inside the mirror container you can run “yum makecache” to sync the repositories and try to see which repo failed. Also if you can share the list of failed packages we can help to identify what is the problematic repository. -Erich -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Thu Aug 9 12:13:16 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 9 Aug 2018 14:13:16 +0200 Subject: [Starlingx-discuss] Keystone Edge Architectures meeting Message-ID: <5DDECD3A-EA55-413A-B54D-34CBFEBD08AA@gmail.com> Hi, This is a friendly reminder that the Keystone Edge Architectures meeting will start in less than 50 minutes. In case you are interested in joining you can find further information here: https://wiki.openstack.org/wiki/Edge_Computing_Group#Keystone Thanks and Best Regards, Ildikó From guillermo.a.ponce.castaneda at intel.com Thu Aug 9 12:45:46 2018 From: guillermo.a.ponce.castaneda at intel.com (Ponce Castaneda, Guillermo A) Date: Thu, 9 Aug 2018 12:45:46 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: +1 to yellow/black! From: James Cole Date: Wednesday, August 8, 2018 at 4:47 PM To: "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody,  Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (https://www.dropbox.com/s/mgiu2qwtdiyrvqg/StarlingX_Logo_Colors.pdf?dl=0) shows two color combinations sampled from colors from https://en.wikipedia.org/wiki/Hildebrandt's_starling#/media/File:Lamprotornis_hildebrandti_-Tanzania-8-2c.jpg. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well.  Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options).  Thank you! James Cole Graphic Designer  OpenStack Foundation From Matt.Peters at windriver.com Thu Aug 9 13:13:50 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Thu, 9 Aug 2018 13:13:50 +0000 Subject: [Starlingx-discuss] Questions about custom setting and mac filter In-Reply-To: <9A85D2917C58154C960D95352B22818BAB575C8D@fmsmsx115.amr.corp.intel.com> References: <2EE296D083DF2940BF4EBB91D39BB89F3BBCE4F1@shsmsx102.ccr.corp.intel.com> <9A85D2917C58154C960D95352B22818BAB575C8D@fmsmsx115.amr.corp.intel.com> Message-ID: Hi Ruijing, I agree that the tenant based custom settings and mac filter setting can be dropped since the port security extension supports equivalent functionality for ovs-dpdk. Regards, Matt From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 08, 2018 10:18 AM To: Guo, Ruijing; Jolliffe, Ian; starlingx-discuss at lists.starlingx.io Cc: Troyer, Dean; Chilcote Bacco, Derek A Subject: Re: [Starlingx-discuss] Questions about custom setting and mac filter + StarlingX list From: Guo, Ruijing Sent: Monday, August 6, 2018 10:05 AM To: Jolliffe, Ian Cc: Troyer, Dean ; Jones, Bruce E ; Chilcote Bacco, Derek A ; Le, Huifeng ; Xu, Chenjie ; Zhao, Forrest ; Qin, Kailun Subject: Questions about custom setting and mac filter Hi, Ian, We are investigating tenant based custom setting and mac filter (d875491, c647127, b189392e, 28d6f56). The custom settings extension feature(d875491) is to allow the admin to manage settings on a per tenant basis. Currently only mac filtering is available as a settable value. Mac filter is alternative implementation of neutron port security. 28d6f56 is to enable the Neutron port security extension for ML2 plugin. by default, it overrides the existing mac filtering functionality. We can drop tenant based customer setting and mac filter features and ONLY support neutron port security with OVSDPDK. What do you think? Thanks, -Ruijing -------------- next part -------------- An HTML attachment was scrubbed... URL: From Dariush.Eslimi at windriver.com Thu Aug 9 14:31:02 2018 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Thu, 9 Aug 2018 14:31:02 +0000 Subject: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking Message-ID: Hi All, Launchpad seems to be more mature compare to Storyboard for bug tracking, here is list of advantages I found in few minutes looking at it: 1: Advance search 2: More meta data : Status, Importance, Milestone 3: Attachments 4: subscriber management 4: Ildiko mentioned there would migration path to new platform So my question is for people who know the history and have more experience using it, Why Openstack is trying to replace this and move to Storyboard? What are the shortcomings for day to day use? Basically why we should not be using it to manage our bugs for StarlingX? Thanks, Dariush From cindy.xie at intel.com Thu Aug 9 15:00:35 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 9 Aug 2018 15:00:35 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> Brent, Saul, Shuicheng, Let's initiate the discussion about how we'd like to handle CentOS 7.5 upgrade, we have a master xls sheet online for all non-openStack patches analysis (@Saul, I only have Google doc link but not accessible by WR). And here is the SRPM files we've already looked into, and believe they need upgrade. I put some columns in to fill-in more data (Shuicheng should have most of the data available). We can start from here. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: 7.5 upgrade.xlsx Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet Size: 10924 bytes Desc: 7.5 upgrade.xlsx URL: From hayde.martinez.landa at intel.com Thu Aug 9 15:10:07 2018 From: hayde.martinez.landa at intel.com (Martinez Landa, Hayde) Date: Thu, 9 Aug 2018 15:10:07 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion Message-ID: <0DED3564-0549-4EDA-82E4-8A5195E28C38@intel.com> Hi All, I recently joined Shuicheng on this task, Here is the patch that shows all the rpm/srpm that must be updated: https://review.openstack.org/#/c/589037/ So far 5 packages have been updated: bash-4.2.46-30.el7.src.rpm python-2.7.5-69.el7_5.src.rpm systemd-219-57.el7.src.rpm puppet-stdlib-4.18.0-2.el7.src.rpm openldap-2.4.44-15.el7_5.src.rpm And this is the process we are following in order to do this (This was written by Shuicheng so we could follow same process): 1. 1.Go to "Source" folder, delete the old src rpm, and download the new src rpm. 2. 2.Find the corresponding patch folder for the srpm package. Then update the “srpm_path” to point to the new package. 3. 3.Create a tmp folder. And extract the new src rpm. Then you will get the source code and spec file. 4. 4.Extract the zip file for source code, and create git for it. 5. 5.Then try to apply the patches in patch folder. If there is conflict, you need manually solve it and update the patch. 6. 6.Create tmp folder (SPEC/SPECS) for spec file. And create git for it. 7. 7.Try to apply the meta_patches. If there is conflict, solve it, and generate new meta_patch. 8. 8.Try to build code, build-srpm should pass if the patch is rebased. 9. 9.Try to upgrade rpm package list to solve the dependency check when build rpm. 10. 10.After success build iso, do basic deploy test, to make sure basic function is not broken. 11. 11.If the deploy pass, summarize the change your made, and generate patch for it. On 8/9/18, 10:01 AM, "Xie, Cindy" wrote: Brent, Saul, Shuicheng, Let’s initiate the discussion about how we’d like to handle CentOS 7.5 upgrade, we have a master xls sheet online for all non-openStack patches analysis (@Saul, I only have Google doc link but not accessible by WR). And here is the SRPM files we’ve already looked into, and believe they need upgrade. I put some columns in to fill-in more data (Shuicheng should have most of the data available). We can start from here. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Thu Aug 9 15:17:00 2018 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 9 Aug 2018 15:17:00 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> I think we can safely drop the "vim" modification. It has a long history back to where we were inheriting code customizations from another layer, which was providing its own customized vimrc that had features enabled that were frustrating. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, August 09, 2018 11:01 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion Brent, Saul, Shuicheng, Let's initiate the discussion about how we'd like to handle CentOS 7.5 upgrade, we have a master xls sheet online for all non-openStack patches analysis (@Saul, I only have Google doc link but not accessible by WR). And here is the SRPM files we've already looked into, and believe they need upgrade. I put some columns in to fill-in more data (Shuicheng should have most of the data available). We can start from here. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From cesar.lara at intel.com Thu Aug 9 16:03:12 2018 From: cesar.lara at intel.com (Lara, Cesar) Date: Thu, 9 Aug 2018 16:03:12 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> , <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: +1 on the yellow Regards Cesar Lara Sent from my mobile phone ________________________________ From: James Cole Sent: Wednesday, August 8, 2018 4:47 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Aug 9 16:27:38 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 9 Aug 2018 16:27:38 +0000 Subject: [Starlingx-discuss] Notes from today's Core team call Message-ID: <9A85D2917C58154C960D95352B22818BAB576713@fmsmsx115.amr.corp.intel.com> Agenda and notes for the 8/9 call: * Should we change this to a Team Lead call or a TL + Primes call? Keep for all the Cores? * Coordination for the Denver PTG - cross project collaboration and spec pushes o We agreed that we should have strong StarlingX presence at the Edge WG session. * We discussed Airship and Akraino. We plan to integrate some of the same components for StarlingX (e.g. Helm charts, Armada). * Governance draft feedback - please review the draft and thanks! * Configuration / installation o Using puppet vs packaging to configure the system will need to be looked at on a case by case basis - boot time vs run time. Will need to build a list and decide for each the best solution. * How do we use Storyboard vs the mailing list? In general we'd like discussions on the mailing list and decisions recorded in Storyboard. * We agreed to start defining the spec process sooner in addition to discussing it at the PTG. Brent to post a proposal to the list. * Centos 7.5 rebase o We'd like to see an end-to-end plan - what versions to move to, which patches to keep, which need to be re-based, etc... o We agreed to use the Master patch spreadsheet and add additional columns o Focus first on the source RPMs and their dep's, then the binary RPMs. o Make the changes on main (after the analysis) and coordinate the check ins. -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Thu Aug 9 17:05:47 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 9 Aug 2018 12:05:47 -0500 Subject: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking In-Reply-To: References: Message-ID: On Thu, Aug 9, 2018 at 9:31 AM, Eslimi, Dariush wrote: > So my question is for people who know the history and have more experience using it, Why Openstack is trying to replace this and move to Storyboard? What are the shortcomings for day to day use? OpenStack's reasons for leaving LP include: * needing to get to a single authentication authority (LP requires Ubuntu One accounts) to allow using a DCO rather than ICLA * limitations on task tracking across projects * persistent issues with API response and performance Thierry Carrez's post with more details is at https://storyboard-blog.io/why-storyboard-for-openstack.html dt -- Dean Troyer dtroyer at gmail.com From Lachlan.Plant at windriver.com Thu Aug 9 18:09:28 2018 From: Lachlan.Plant at windriver.com (Plant, Lachlan) Date: Thu, 9 Aug 2018 18:09:28 +0000 Subject: [Starlingx-discuss] Effort to make zuul linters happy In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB1E4016@ALA-MBD.corp.ad.wrs.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB1E4016@ALA-MBD.corp.ad.wrs.com> Message-ID: I am interested in helping out with this effort. I have free cycles starting on Monday, please keep me in the loop. Lachlan -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: August-08-18 3:03 PM To: Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Effort to make zuul linters happy These repos are being deleted. https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003373 : stx-utils Brent -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Wednesday, August 8, 2018 2:57 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Effort to make zuul linters happy Hi all, Currently the zuul check only verifies linters and some of them fails. The zuul gate stage is disabled to make the integration possible. As it is desirable to use Zuul for gating code merges we need first to solve the linter problems in every repository. I've created a set of stories for each repository to start this effort. Now the only tasks created there is to go into Zuul logs and review what needs to be done to solve the issues and create the related tasks. In case someone wants to join this effort, here is the list of stories. https://storyboard.openstack.org/#!/story/2003359 : stx-clients https://storyboard.openstack.org/#!/story/2003360 : stx-config https://storyboard.openstack.org/#!/story/2003361 : stx-fault https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003364 : stx-gui https://storyboard.openstack.org/#!/story/2003365 : stx-ha https://storyboard.openstack.org/#!/story/2003366 : stx-integ https://storyboard.openstack.org/#!/story/2003367 : stx-manifest https://storyboard.openstack.org/#!/story/2003368 : stx-metal https://storyboard.openstack.org/#!/story/2003369 : stx-nfv https://storyboard.openstack.org/#!/story/2003370 : stx-root https://storyboard.openstack.org/#!/story/2003371 : stx-update https://storyboard.openstack.org/#!/story/2003372 : stx-upstream https://storyboard.openstack.org/#!/story/2003373 : stx-utils Thanks -Erich _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Dariush.Eslimi at windriver.com Thu Aug 9 18:10:11 2018 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Thu, 9 Aug 2018 18:10:11 +0000 Subject: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking In-Reply-To: References: Message-ID: Thanks for response, but as is today not all that is mentioned in the blog is addressed by Storyboard: * We will depend on UbuntuOne regardless of Launchpad, as we need it for gerrit. * The limit of after 10 task per bug and performance, have this been discussed and raised with Launchpad team? Storyboard is not very speedy today, and this would not be an issue for StarlingX, limit of 10 is workable. * API-first : Storybard suffers from same issue, rich API, UI not representation of API and backend. So for short term, I suggest we use for Launchpad until Storyboard catches up and becomes a viable option for bug tracking. Thanks, Dariush -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: August-09-18 1:06 PM To: Eslimi, Dariush Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking On Thu, Aug 9, 2018 at 9:31 AM, Eslimi, Dariush wrote: > So my question is for people who know the history and have more experience using it, Why Openstack is trying to replace this and move to Storyboard? What are the shortcomings for day to day use? OpenStack's reasons for leaving LP include: * needing to get to a single authentication authority (LP requires Ubuntu One accounts) to allow using a DCO rather than ICLA * limitations on task tracking across projects * persistent issues with API response and performance Thierry Carrez's post with more details is at https://storyboard-blog.io/why-storyboard-for-openstack.html dt -- Dean Troyer dtroyer at gmail.com From sgw at linux.intel.com Thu Aug 9 18:14:45 2018 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 9 Aug 2018 11:14:45 -0700 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> Message-ID: <368de51a-f36c-014f-6de8-2604c7fd9faf@linux.intel.com> On 08/09/2018 08:17 AM, Penney, Don wrote: > I think we can safely drop the “vim” modification. It has a long history > back to where we were inheriting code customizations from another layer, > which was providing its own customized vimrc that had features enabled > that were frustrating. > Yeah, +1 to that, thanks for the confirmation! > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Thursday, August 09, 2018 11:01 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] CentOS 7.5 upgrade discussion > > Brent, Saul, Shuicheng, > > Let’s initiate the discussion about how we’d like to handle CentOS 7.5 > upgrade, we have a master xls sheet online for all non-openStack patches > analysis (@Saul, I only have Google doc link but not accessible by WR). > We should be able to provide a link to anyone that has a gmail account, I think (Bruce is that correct?) > And here is the SRPM files we’ve already looked into, and believe they > need upgrade. I put some columns in to fill-in more data (Shuicheng > should have most of the data available). We can start from here. > Can we please merge this into the existing master spreadsheet. We are going to continue to winnow down the list of patches and configuration from there. Sau! > Thx. - cindy > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From bruce.e.jones at intel.com Thu Aug 9 18:33:55 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 9 Aug 2018 18:33:55 +0000 Subject: [Starlingx-discuss] Draft governance doc - further review... Message-ID: <9A85D2917C58154C960D95352B22818BAB5767B1@fmsmsx115.amr.corp.intel.com> My thanks to the folks who have reviewed and provided feedback on the draft governance document [0]. I believe I've addressed all of the feedback. I'd like to move the document off the etherpad soon and onto the wiki directly (and long term into stx-docs), so please take a look at the updated doc and confirm I've addressed your feedback. If you haven't looked at it yet, more feedback is always welcome. brucej [0] https://etherpad.openstack.org/p/stx-governance -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Aug 9 18:36:41 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 9 Aug 2018 18:36:41 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: <368de51a-f36c-014f-6de8-2604c7fd9faf@linux.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> <368de51a-f36c-014f-6de8-2604c7fd9faf@linux.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB5767C6@fmsmsx115.amr.corp.intel.com> See below.... -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Thursday, August 9, 2018 11:15 AM To: starlingx-discuss at lists.starlingx.io; Jones, Bruce E Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion On 08/09/2018 08:17 AM, Penney, Don wrote: > I think we can safely drop the "vim" modification. It has a long > history back to where we were inheriting code customizations from > another layer, which was providing its own customized vimrc that had > features enabled that were frustrating. > Yeah, +1 to that, thanks for the confirmation! [brucej] Remember the process here is to create a SB Story for the changes needed to the code and the build to remove the patch, and put the link to the story in the spreadsheet. The distro.not-openstack team should do that. > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Thursday, August 09, 2018 11:01 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] CentOS 7.5 upgrade discussion > > Brent, Saul, Shuicheng, > > Let's initiate the discussion about how we'd like to handle CentOS 7.5 > upgrade, we have a master xls sheet online for all non-openStack > patches analysis (@Saul, I only have Google doc link but not accessible by WR). > We should be able to provide a link to anyone that has a gmail account, I think (Bruce is that correct?) [brucej] Documents like this that are within the Intel Enterprise Google drive can be shared with anyone who has a gmail address. > And here is the SRPM files we've already looked into, and believe they > need upgrade. I put some columns in to fill-in more data (Shuicheng > should have most of the data available). We can start from here. > Can we please merge this into the existing master spreadsheet. We are going to continue to winnow down the list of patches and configuration from there. [brucej] +1! Sau! > Thx. - cindy > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Brent.Rowsell at windriver.com Thu Aug 9 18:39:02 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Thu, 9 Aug 2018 18:39:02 +0000 Subject: [Starlingx-discuss] Draft governance doc - further review... In-Reply-To: <9A85D2917C58154C960D95352B22818BAB5767B1@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB5767B1@fmsmsx115.amr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB1E5F34@ALA-MBD.corp.ad.wrs.com> Bruce, Is moving to the wiki considered closed, if so I think you need to provide more time for folks to weigh in Brent From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, August 9, 2018 2:34 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Draft governance doc - further review... My thanks to the folks who have reviewed and provided feedback on the draft governance document [0]. I believe I've addressed all of the feedback. I'd like to move the document off the etherpad soon and onto the wiki directly (and long term into stx-docs), so please take a look at the updated doc and confirm I've addressed your feedback. If you haven't looked at it yet, more feedback is always welcome. brucej [0] https://etherpad.openstack.org/p/stx-governance -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Thu Aug 9 18:41:09 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Thu, 9 Aug 2018 18:41:09 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: <9A85D2917C58154C960D95352B22818BAB5767C6@fmsmsx115.amr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> <368de51a-f36c-014f-6de8-2604c7fd9faf@linux.intel.com> <9A85D2917C58154C960D95352B22818BAB5767C6@fmsmsx115.amr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB1E5F7E@ALA-MBD.corp.ad.wrs.com> Bruce, Please clarify " and the build to remove the patch" Brent -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, August 9, 2018 2:37 PM To: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion See below.... -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Thursday, August 9, 2018 11:15 AM To: starlingx-discuss at lists.starlingx.io; Jones, Bruce E Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion On 08/09/2018 08:17 AM, Penney, Don wrote: > I think we can safely drop the "vim" modification. It has a long > history back to where we were inheriting code customizations from > another layer, which was providing its own customized vimrc that had > features enabled that were frustrating. > Yeah, +1 to that, thanks for the confirmation! [brucej] Remember the process here is to create a SB Story for the changes needed to the code and the build to remove the patch, and put the link to the story in the spreadsheet. The distro.not-openstack team should do that. > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Thursday, August 09, 2018 11:01 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] CentOS 7.5 upgrade discussion > > Brent, Saul, Shuicheng, > > Let's initiate the discussion about how we'd like to handle CentOS 7.5 > upgrade, we have a master xls sheet online for all non-openStack > patches analysis (@Saul, I only have Google doc link but not accessible by WR). > We should be able to provide a link to anyone that has a gmail account, I think (Bruce is that correct?) [brucej] Documents like this that are within the Intel Enterprise Google drive can be shared with anyone who has a gmail address. > And here is the SRPM files we've already looked into, and believe they > need upgrade. I put some columns in to fill-in more data (Shuicheng > should have most of the data available). We can start from here. > Can we please merge this into the existing master spreadsheet. We are going to continue to winnow down the list of patches and configuration from there. [brucej] +1! Sau! > Thx. - cindy > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Thu Aug 9 18:44:51 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 9 Aug 2018 18:44:51 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: <9A85D2917C58154C960D95352B22818BAB5767C6@fmsmsx115.amr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> <368de51a-f36c-014f-6de8-2604c7fd9faf@linux.intel.com> <9A85D2917C58154C960D95352B22818BAB5767C6@fmsmsx115.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA405BEA@ALA-MBD.corp.ad.wrs.com> Hi Bruce, There is a story already for tracking the CentOS 7.5 upgrade. Shuicheng created it earlier (he's part of the Distro Non-Openstack team). https://storyboard.openstack.org/#!/story/2003389 Dealing with the vim patch (i.e. removing it) should be a task under this story. Regards, Ghada -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, August 09, 2018 2:37 PM To: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion See below.... -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Thursday, August 9, 2018 11:15 AM To: starlingx-discuss at lists.starlingx.io; Jones, Bruce E Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion On 08/09/2018 08:17 AM, Penney, Don wrote: > I think we can safely drop the "vim" modification. It has a long > history back to where we were inheriting code customizations from > another layer, which was providing its own customized vimrc that had > features enabled that were frustrating. > Yeah, +1 to that, thanks for the confirmation! [brucej] Remember the process here is to create a SB Story for the changes needed to the code and the build to remove the patch, and put the link to the story in the spreadsheet. The distro.not-openstack team should do that. > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Thursday, August 09, 2018 11:01 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] CentOS 7.5 upgrade discussion > > Brent, Saul, Shuicheng, > > Let's initiate the discussion about how we'd like to handle CentOS 7.5 > upgrade, we have a master xls sheet online for all non-openStack > patches analysis (@Saul, I only have Google doc link but not accessible by WR). > We should be able to provide a link to anyone that has a gmail account, I think (Bruce is that correct?) [brucej] Documents like this that are within the Intel Enterprise Google drive can be shared with anyone who has a gmail address. > And here is the SRPM files we've already looked into, and believe they > need upgrade. I put some columns in to fill-in more data (Shuicheng > should have most of the data available). We can start from here. > Can we please merge this into the existing master spreadsheet. We are going to continue to winnow down the list of patches and configuration from there. [brucej] +1! Sau! > Thx. - cindy > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Thu Aug 9 19:01:48 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 9 Aug 2018 21:01:48 +0200 Subject: [Starlingx-discuss] Draft governance doc - further review... In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB1E5F34@ALA-MBD.corp.ad.wrs.com> References: <9A85D2917C58154C960D95352B22818BAB5767B1@fmsmsx115.amr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E5F34@ALA-MBD.corp.ad.wrs.com> Message-ID: <7D608574-6B7A-450E-B995-38BC32227D5F@gmail.com> Hi, As the initial draft was introduced on the mailing list only yesterday I agree to keep the review window open longer to give everyone a chance to digest and respond. As it is still somewhat vacation season and people are also in different time zones we need to be more conscious about time given to review items concerning the whole community. As governance is a crucial item we plan to have a discussion about it on the PTG face to face before we finalize it. I suggest to leave it open for reviews and discussion until the event and move to its final place soon after. For reviews one option is to keep it on the etherpad or move it to Gerrit and iterate over there so we can keep track of the history of the comments and changes. Which way would you prefer? Thanks and Best Regards, Ildikó > On 2018. Aug 9., at 20:39, Rowsell, Brent wrote: > > Bruce, > > Is moving to the wiki considered closed, if so I think you need to provide more time for folks to weigh in > > Brent > > From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] > Sent: Thursday, August 9, 2018 2:34 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Draft governance doc - further review... > > My thanks to the folks who have reviewed and provided feedback on the draft governance document [0]. I believe I’ve addressed all of the feedback. > > I’d like to move the document off the etherpad soon and onto the wiki directly (and long term into stx-docs), so please take a look at the updated doc and confirm I’ve addressed your feedback. If you haven’t looked at it yet, more feedback is always welcome. > > brucej > > [0] https://etherpad.openstack.org/p/stx-governance > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Thu Aug 9 19:06:12 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 9 Aug 2018 19:06:12 +0000 Subject: [Starlingx-discuss] [Distro-Non-Openstack] Upgrading libvirt and qemu Message-ID: <151EE31B9FCCA54397A757BC674650F0BA405C4A@ALA-MBD.corp.ad.wrs.com> Saul/Brent/Cindy, In order to remain current, I'd like to propose that we upgrade libvirt and qemu to the latest versions. The last upversion of these packages was almost a year ago, so I think it's time. Let me know if you have any concerns. I have created two stories: https://storyboard.openstack.org/#!/story/2003396 https://storyboard.openstack.org/#!/story/2003395 Jim Somerville has volunteered to do tis. If anyone else from the Distro-Non-Openstack team wants to contribute, please reach out to Jim. We'll target this for the October release. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Thu Aug 9 19:12:57 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Thu, 9 Aug 2018 19:12:57 +0000 Subject: [Starlingx-discuss] [Distro-Non-Openstack] Upgrading libvirt and qemu In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA405C4A@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA405C4A@ALA-MBD.corp.ad.wrs.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB1E611A@ALA-MBD.corp.ad.wrs.com> Thanks Ghada. I added a note to the libvirt story, python-libvirt will need to be updated to 4.6 as well. Brent From: Khalil, Ghada Sent: Thursday, August 9, 2018 3:06 PM To: starlingx-discuss at lists.starlingx.io; Cindy Xie ; Saul Wold ; Rowsell, Brent ; Somerville, Jim Subject: [Distro-Non-Openstack] Upgrading libvirt and qemu Saul/Brent/Cindy, In order to remain current, I'd like to propose that we upgrade libvirt and qemu to the latest versions. The last upversion of these packages was almost a year ago, so I think it's time. Let me know if you have any concerns. I have created two stories: https://storyboard.openstack.org/#!/story/2003396 https://storyboard.openstack.org/#!/story/2003395 Jim Somerville has volunteered to do tis. If anyone else from the Distro-Non-Openstack team wants to contribute, please reach out to Jim. We'll target this for the October release. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Aug 9 19:34:05 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 9 Aug 2018 19:34:05 +0000 Subject: [Starlingx-discuss] Draft governance doc - further review... In-Reply-To: <7D608574-6B7A-450E-B995-38BC32227D5F@gmail.com> References: <9A85D2917C58154C960D95352B22818BAB5767B1@fmsmsx115.amr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E5F34@ALA-MBD.corp.ad.wrs.com> <7D608574-6B7A-450E-B995-38BC32227D5F@gmail.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB576BC5@fmsmsx115.amr.corp.intel.com> Thanks, I'm not trying to close debate or review at all. There's been a lot of good feedback and the document is becoming somewhat hard to read. If there's a way to post it to git/gerrit and still make it linkable from the wiki that would be perfect I think. -----Original Message----- From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] Sent: Thursday, August 9, 2018 12:02 PM To: Rowsell, Brent ; Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Draft governance doc - further review... Hi, As the initial draft was introduced on the mailing list only yesterday I agree to keep the review window open longer to give everyone a chance to digest and respond. As it is still somewhat vacation season and people are also in different time zones we need to be more conscious about time given to review items concerning the whole community. As governance is a crucial item we plan to have a discussion about it on the PTG face to face before we finalize it. I suggest to leave it open for reviews and discussion until the event and move to its final place soon after. For reviews one option is to keep it on the etherpad or move it to Gerrit and iterate over there so we can keep track of the history of the comments and changes. Which way would you prefer? Thanks and Best Regards, Ildikó > On 2018. Aug 9., at 20:39, Rowsell, Brent wrote: > > Bruce, > > Is moving to the wiki considered closed, if so I think you need to provide more time for folks to weigh in > > Brent > > From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] > Sent: Thursday, August 9, 2018 2:34 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Draft governance doc - further review... > > My thanks to the folks who have reviewed and provided feedback on the draft governance document [0]. I believe I’ve addressed all of the feedback. > > I’d like to move the document off the etherpad soon and onto the wiki directly (and long term into stx-docs), so please take a look at the updated doc and confirm I’ve addressed your feedback. If you haven’t looked at it yet, more feedback is always welcome. > > brucej > > [0] https://etherpad.openstack.org/p/stx-governance > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Thu Aug 9 19:41:07 2018 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 9 Aug 2018 12:41:07 -0700 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA405BEA@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> <368de51a-f36c-014f-6de8-2604c7fd9faf@linux.intel.com> <9A85D2917C58154C960D95352B22818BAB5767C6@fmsmsx115.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA405BEA@ALA-MBD.corp.ad.wrs.com> Message-ID: On 08/09/2018 11:44 AM, Khalil, Ghada wrote: > Hi Bruce, > There is a story already for tracking the CentOS 7.5 upgrade. Shuicheng created it earlier (he's part of the Distro Non-Openstack team). > > https://storyboard.openstack.org/#!/story/2003389 > > Dealing with the vim patch (i.e. removing it) should be a task under this story. > I thought we were doing 1 story / package, not 1 task / package? Can someone please clarify this, as I am also working on patches and they need stories or tasks. Sau! > Regards, > Ghada > > -----Original Message----- > From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] > Sent: Thursday, August 09, 2018 2:37 PM > To: Saul Wold; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion > > See below.... > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Thursday, August 9, 2018 11:15 AM > To: starlingx-discuss at lists.starlingx.io; Jones, Bruce E > Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion > > > > On 08/09/2018 08:17 AM, Penney, Don wrote: >> I think we can safely drop the "vim" modification. It has a long >> history back to where we were inheriting code customizations from >> another layer, which was providing its own customized vimrc that had >> features enabled that were frustrating. >> > Yeah, +1 to that, thanks for the confirmation! > > [brucej] Remember the process here is to create a SB Story for the changes needed to the code and the build to remove the patch, and put the link to the story in the spreadsheet. The distro.not-openstack team should do that. > > >> *From:*Xie, Cindy [mailto:cindy.xie at intel.com] >> *Sent:* Thursday, August 09, 2018 11:01 AM >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* [Starlingx-discuss] CentOS 7.5 upgrade discussion >> >> Brent, Saul, Shuicheng, >> >> Let's initiate the discussion about how we'd like to handle CentOS 7.5 >> upgrade, we have a master xls sheet online for all non-openStack >> patches analysis (@Saul, I only have Google doc link but not accessible by WR). >> > We should be able to provide a link to anyone that has a gmail account, I think (Bruce is that correct?) > > [brucej] Documents like this that are within the Intel Enterprise Google drive can be shared with anyone who has a gmail address. > > >> And here is the SRPM files we've already looked into, and believe they >> need upgrade. I put some columns in to fill-in more data (Shuicheng >> should have most of the data available). We can start from here. >> > Can we please merge this into the existing master spreadsheet. We are > going to continue to winnow down the list of patches and configuration > from there. > > [brucej] +1! > > Sau! > > >> Thx. - cindy >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From bruce.e.jones at intel.com Thu Aug 9 19:52:14 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 9 Aug 2018 19:52:14 +0000 Subject: [Starlingx-discuss] [Docs] StarlingX Overview live on the web Message-ID: <9A85D2917C58154C960D95352B22818BAB576C3C@fmsmsx115.amr.corp.intel.com> Thanks to the folks at the Foundation and to the authors from Wind River, we now have a nice StarlingX overview presentation linked off the project’s main web page. Plus we have a way to make PRs for any other changes to the page. Once we finalize on the cool new logos, both will be updated to use them. brucej From: Jimmy McArthur [mailto:jimmy at openstack.org] Sent: Thursday, August 9, 2018 11:47 AM To: Jones, Bruce E Cc: Kinder, David B ; Claire Massey ; Rifenbark, ScottX ; Tullis, Michael L Subject: Re: StarlingX Web site questions The PDF is now up on the site and linked to from the homepage. Additionally, you can make pull requests now against the repo (https://github.com/iamweswilson/starling-landing). We'll plan on moving this to a different owner when we launch the Netlify CMS. http://www.starlingx.io/ http://www.starlingx.io/assets/StarlingX-Overview-Presentation.pdf Please let me know if we can be of further assistance! Jimmy -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Thu Aug 9 19:54:59 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 9 Aug 2018 21:54:59 +0200 Subject: [Starlingx-discuss] Draft governance doc - further review... In-Reply-To: <9A85D2917C58154C960D95352B22818BAB576BC5@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB5767B1@fmsmsx115.amr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E5F34@ALA-MBD.corp.ad.wrs.com> <7D608574-6B7A-450E-B995-38BC32227D5F@gmail.com> <9A85D2917C58154C960D95352B22818BAB576BC5@fmsmsx115.amr.corp.intel.com> Message-ID: <859A30FE-526D-473C-8977-593B2964FF1B@gmail.com> Hi Bruce, To mention one example the OpenStack Community has a separate governance repository: https://github.com/openstack/governance If we go down that route here and choose to create a governance repository for StarlingX than beyond storing documents that keep history of the TSC’s decisions we can store the document that describes the ‘Initial Governance’ there as well. We can then publish documents from this repository the same way as stx-docs to provide more visibility. What do you think? Thanks, Ildikó > On 2018. Aug 9., at 21:34, Jones, Bruce E wrote: > > Thanks, I'm not trying to close debate or review at all. There's been a lot of good feedback and the document is becoming somewhat hard to read. If there's a way to post it to git/gerrit and still make it linkable from the wiki that would be perfect I think. > > -----Original Message----- > From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] > Sent: Thursday, August 9, 2018 12:02 PM > To: Rowsell, Brent ; Jones, Bruce E > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Draft governance doc - further review... > > Hi, > > As the initial draft was introduced on the mailing list only yesterday I agree to keep the review window open longer to give everyone a chance to digest and respond. > > As it is still somewhat vacation season and people are also in different time zones we need to be more conscious about time given to review items concerning the whole community. > > As governance is a crucial item we plan to have a discussion about it on the PTG face to face before we finalize it. I suggest to leave it open for reviews and discussion until the event and move to its final place soon after. > > For reviews one option is to keep it on the etherpad or move it to Gerrit and iterate over there so we can keep track of the history of the comments and changes. Which way would you prefer? > > Thanks and Best Regards, > Ildikó > > >> On 2018. Aug 9., at 20:39, Rowsell, Brent wrote: >> >> Bruce, >> >> Is moving to the wiki considered closed, if so I think you need to provide more time for folks to weigh in >> >> Brent >> >> From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] >> Sent: Thursday, August 9, 2018 2:34 PM >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] Draft governance doc - further review... >> >> My thanks to the folks who have reviewed and provided feedback on the draft governance document [0]. I believe I’ve addressed all of the feedback. >> >> I’d like to move the document off the etherpad soon and onto the wiki directly (and long term into stx-docs), so please take a look at the updated doc and confirm I’ve addressed your feedback. If you haven’t looked at it yet, more feedback is always welcome. >> >> brucej >> >> [0] https://etherpad.openstack.org/p/stx-governance >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From dtroyer at gmail.com Thu Aug 9 20:06:43 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 9 Aug 2018 15:06:43 -0500 Subject: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking In-Reply-To: References: Message-ID: On Thu, Aug 9, 2018 at 1:10 PM, Eslimi, Dariush wrote: > * We will depend on UbuntuOne regardless of Launchpad, as we need it for gerrit. OpenStack will move Gerrit and other tooling to the OpenStack Foundation auth authority when we are free of LaunchPad, this tie is why it is an issue. That change will also affect StarlingX and any other project using OpenStack Foundation resources. > * API-first : Storybard suffers from same issue, rich API, UI not representation of API and backend. You have this backward. LaunchPad does not expose everything in its API that it can do in the web UI. This has hampered some automation efforts and in fact was one of the motivators for the SB teams position of API first. The situation with Storyboard allows users with API capabilities to solve their own problems regarding UI limitations. This is not possible with LaunchPad. I am not arguing a position one way or the other, just clarifying the stated reasons and why a specific organization has wanted to leave LaunchPad for over 6 years now. dt -- Dean Troyer dtroyer at gmail.com From Ghada.Khalil at windriver.com Thu Aug 9 20:13:06 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 9 Aug 2018 20:13:06 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> <368de51a-f36c-014f-6de8-2604c7fd9faf@linux.intel.com> <9A85D2917C58154C960D95352B22818BAB5767C6@fmsmsx115.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA405BEA@ALA-MBD.corp.ad.wrs.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA405CEC@ALA-MBD.corp.ad.wrs.com> On 08/09/2018 11:44 AM, Khalil, Ghada wrote: > Hi Bruce, > There is a story already for tracking the CentOS 7.5 upgrade. Shuicheng created it earlier (he's part of the Distro Non-Openstack team). > > https://storyboard.openstack.org/#!/story/2003389 > > Dealing with the vim patch (i.e. removing it) should be a task under this story. > I thought we were doing 1 story / package, not 1 task / package? Can someone please clarify this, as I am also working on patches and they need stories or tasks. [[GK]] Sorry Saul I didn't know about this decision as I was not part of the discussion. I personally feel if work items can be logically combined into one story with multiple tasks, that would be beneficial. It's a way to avoid story explosion. That being said, in the end, it's up to you and the sub-project team. If it makes more sense to have a story/package, that's fine as well. PS: I am a bit biased as I have been trying to tag the backlog, so the less stories the better for me. But that's a short term issue as each sub-project starts to tag their own. Ghada From Dariush.Eslimi at windriver.com Thu Aug 9 20:19:06 2018 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Thu, 9 Aug 2018 20:19:06 +0000 Subject: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking In-Reply-To: References: Message-ID: Thanks for clarification, I understand the motivations, all I am trying to say Storyboard is not ready yet for bugs. So we can join others in Openstack and move when all move to new platform. Dariush -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: August-09-18 4:07 PM To: Eslimi, Dariush Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking On Thu, Aug 9, 2018 at 1:10 PM, Eslimi, Dariush wrote: > * We will depend on UbuntuOne regardless of Launchpad, as we need it for gerrit. OpenStack will move Gerrit and other tooling to the OpenStack Foundation auth authority when we are free of LaunchPad, this tie is why it is an issue. That change will also affect StarlingX and any other project using OpenStack Foundation resources. > * API-first : Storybard suffers from same issue, rich API, UI not representation of API and backend. You have this backward. LaunchPad does not expose everything in its API that it can do in the web UI. This has hampered some automation efforts and in fact was one of the motivators for the SB teams position of API first. The situation with Storyboard allows users with API capabilities to solve their own problems regarding UI limitations. This is not possible with LaunchPad. I am not arguing a position one way or the other, just clarifying the stated reasons and why a specific organization has wanted to leave LaunchPad for over 6 years now. dt -- Dean Troyer dtroyer at gmail.com From ildiko.vancsa at gmail.com Thu Aug 9 20:56:31 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 9 Aug 2018 22:56:31 +0200 Subject: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking In-Reply-To: References: Message-ID: Hi, Thanks Dariush for doing the evaluation work. While Launchpad is not perfect, I would like to confirm on this thread too what Dariush mentioned as well, there is a migration script and process to move from Launchpad to StoryBoard which I think is a big advantage. Thanks and Best Regards, Ildikó > On 2018. Aug 9., at 22:19, Eslimi, Dariush wrote: > > Thanks for clarification, I understand the motivations, all I am trying to say Storyboard is not ready yet for bugs. > So we can join others in Openstack and move when all move to new platform. > > Dariush > > -----Original Message----- > From: Dean Troyer [mailto:dtroyer at gmail.com] > Sent: August-09-18 4:07 PM > To: Eslimi, Dariush > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking > > On Thu, Aug 9, 2018 at 1:10 PM, Eslimi, Dariush wrote: >> * We will depend on UbuntuOne regardless of Launchpad, as we need it for gerrit. > > OpenStack will move Gerrit and other tooling to the OpenStack Foundation auth authority when we are free of LaunchPad, this tie is why it is an issue. That change will also affect StarlingX and any other project using OpenStack Foundation resources. > >> * API-first : Storybard suffers from same issue, rich API, UI not representation of API and backend. > > You have this backward. LaunchPad does not expose everything in its API that it can do in the web UI. This has hampered some automation efforts and in fact was one of the motivators for the SB teams position of API first. The situation with Storyboard allows users with API capabilities to solve their own problems regarding UI limitations. This is not possible with LaunchPad. > > I am not arguing a position one way or the other, just clarifying the stated reasons and why a specific organization has wanted to leave LaunchPad for over 6 years now. > > dt > > -- > > Dean Troyer > dtroyer at gmail.com > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From abraham.arce.moreno at intel.com Thu Aug 9 21:33:40 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Thu, 9 Aug 2018 21:33:40 +0000 Subject: [Starlingx-discuss] StarlingX Documentation Initial Template In-Reply-To: References: <9B07CC56-51EF-4701-8837-9B97131D7B92@windriver.com> Message-ID: > On Wed, Aug 8, 2018 at 6:13 AM, Waines, Greg > wrote: > > ( I’m assuming that it will NOT be API documentation, I see > > discussions on that elsewhere. And would be located at > > https://developer.openstack.org/api-ref/starlingx/ ??? ) > > The actual location has not been finalized yet. I imagine it to be something > like docs.starlingx.io. How to decide where to land the documentation? 2 options: - docs.openstack.com - docs.starlingx.io [ OpenStack Documentation ] For each project we create source code directories to enable specific functionality: /docs/ General Documentation /api-ref/ API Reference /api-guide/ API Guide /releasenotes/ Release Notes Help Needed! Can you please help me to fill out ethercalc [0] which has the list of projects and the functionality that could be implemented with a Yes or No? This is in preparation to enable our Tech Writing team for the next release. [ docs.starlingx.io ] This option will allows us to take advantage of the infrastructure not only to create but also to land our documentation as follows, depending on what is the functionality available from the project: /docs/ -> docs.openstack.org/ /api-ref/ -> developer.openstack.org/api-ref/ /api-guide/ -> developer.openstack.org/api-guide/ /releasenotes/ -> docs.openstack.org/releasenotes/ [ docs.starlingx.io ] Source code directory structure can be kept however, what could be the landing process since it will be out of OpenStack infrastructure? How about the layout? Option 1 /docs/ -> docs.starlingx.io/ /api-ref/ -> docs.starlingx.io/api-ref/ /api-guide/ -> docs.starlingx.io/api-guide/ /releasenotes/ -> docs.starlingx.io/releasenotes/ Option 2 /docs/ -> docs.starlingx.io/ /api-ref/ -> api-ref.starlingx.io/ /api-guide/ -> api-guide.starlingx.io/ /releasenotes/ -> releaseanotes.starlingx.io/ Option 3 Any other :) [0] https://ethercalc.openstack.org/sifnpbvze9lb From cindy.xie at intel.com Thu Aug 9 22:55:29 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 9 Aug 2018 22:55:29 +0000 Subject: [Starlingx-discuss] [Distro-Non-Openstack] Upgrading libvirt and qemu In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB1E611A@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA405C4A@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E611A@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B31121D@SHSMSX104.ccr.corp.intel.com> HI, Ghada and Brent, Yes, I will have one engineer join Jim working on libvirt, qemu and libvirt-python upgrade. Will get back to you w/ name. Thx. - cindy From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Friday, August 10, 2018 3:13 AM To: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io; Xie, Cindy ; Saul Wold ; Somerville, Jim Subject: RE: [Distro-Non-Openstack] Upgrading libvirt and qemu Thanks Ghada. I added a note to the libvirt story, python-libvirt will need to be updated to 4.6 as well. Brent From: Khalil, Ghada Sent: Thursday, August 9, 2018 3:06 PM To: starlingx-discuss at lists.starlingx.io; Cindy Xie >; Saul Wold >; Rowsell, Brent >; Somerville, Jim > Subject: [Distro-Non-Openstack] Upgrading libvirt and qemu Saul/Brent/Cindy, In order to remain current, I'd like to propose that we upgrade libvirt and qemu to the latest versions. The last upversion of these packages was almost a year ago, so I think it's time. Let me know if you have any concerns. I have created two stories: https://storyboard.openstack.org/#!/story/2003396 https://storyboard.openstack.org/#!/story/2003395 Jim Somerville has volunteered to do tis. If anyone else from the Distro-Non-Openstack team wants to contribute, please reach out to Jim. We'll target this for the October release. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Aug 10 00:08:15 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 10 Aug 2018 00:08:15 +0000 Subject: [Starlingx-discuss] [Distro-Non-Openstack] Upgrading libvirt and qemu References: <151EE31B9FCCA54397A757BC674650F0BA405C4A@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E611A@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B311419@SHSMSX104.ccr.corp.intel.com> I re-opened the story for libvirt-python to upgrade to 4.6 (instead of 3.9): https://storyboard.openstack.org/#!/story/2003339. Zhipeng was assigned to this story and he will continue working on this. He can also join Jim for libvirt and qemu story as well. Thx. - cindy From: Xie, Cindy Sent: Friday, August 10, 2018 6:55 AM To: 'Rowsell, Brent' ; Khalil, Ghada ; starlingx-discuss at lists.starlingx.io; Saul Wold ; Somerville, Jim Subject: RE: [Distro-Non-Openstack] Upgrading libvirt and qemu HI, Ghada and Brent, Yes, I will have one engineer join Jim working on libvirt, qemu and libvirt-python upgrade. Will get back to you w/ name. Thx. - cindy From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Friday, August 10, 2018 3:13 AM To: Khalil, Ghada >; starlingx-discuss at lists.starlingx.io; Xie, Cindy >; Saul Wold >; Somerville, Jim > Subject: RE: [Distro-Non-Openstack] Upgrading libvirt and qemu Thanks Ghada. I added a note to the libvirt story, python-libvirt will need to be updated to 4.6 as well. Brent From: Khalil, Ghada Sent: Thursday, August 9, 2018 3:06 PM To: starlingx-discuss at lists.starlingx.io; Cindy Xie >; Saul Wold >; Rowsell, Brent >; Somerville, Jim > Subject: [Distro-Non-Openstack] Upgrading libvirt and qemu Saul/Brent/Cindy, In order to remain current, I'd like to propose that we upgrade libvirt and qemu to the latest versions. The last upversion of these packages was almost a year ago, so I think it's time. Let me know if you have any concerns. I have created two stories: https://storyboard.openstack.org/#!/story/2003396 https://storyboard.openstack.org/#!/story/2003395 Jim Somerville has volunteered to do tis. If anyone else from the Distro-Non-Openstack team wants to contribute, please reach out to Jim. We'll target this for the October release. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Aug 10 00:11:55 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 10 Aug 2018 00:11:55 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA405CEC@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> <368de51a-f36c-014f-6de8-2604c7fd9faf@linux.intel.com> <9A85D2917C58154C960D95352B22818BAB5767C6@fmsmsx115.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA405BEA@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA405CEC@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B311440@SHSMSX104.ccr.corp.intel.com> Let's stick w/ one story we already have at this moment: https://storyboard.openstack.org/#!/story/2003389. I've already tagged as stx-distro.other. we need to target it at Oct's release, thus I am adding stx.2018.10 tag as well. Thx. - cindy -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Friday, August 10, 2018 4:13 AM To: Saul Wold ; Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion On 08/09/2018 11:44 AM, Khalil, Ghada wrote: > Hi Bruce, > There is a story already for tracking the CentOS 7.5 upgrade. Shuicheng created it earlier (he's part of the Distro Non-Openstack team). > > https://storyboard.openstack.org/#!/story/2003389 > > Dealing with the vim patch (i.e. removing it) should be a task under this story. > I thought we were doing 1 story / package, not 1 task / package? Can someone please clarify this, as I am also working on patches and they need stories or tasks. [[GK]] Sorry Saul I didn't know about this decision as I was not part of the discussion. I personally feel if work items can be logically combined into one story with multiple tasks, that would be beneficial. It's a way to avoid story explosion. That being said, in the end, it's up to you and the sub-project team. If it makes more sense to have a story/package, that's fine as well. PS: I am a bit biased as I have been trying to tag the backlog, so the less stories the better for me. But that's a short term issue as each sub-project starts to tag their own. Ghada _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From shuicheng.lin at intel.com Fri Aug 10 00:44:37 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Fri, 10 Aug 2018 00:44:37 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> Message-ID: <9700A18779F35F49AF027300A49E7C7655350A48@SHSMSX101.ccr.corp.intel.com> Hi Don, I also was thinking drop the vim srpm. Thanks for the confirmation. Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, August 9, 2018 11:17 PM To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion I think we can safely drop the "vim" modification. It has a long history back to where we were inheriting code customizations from another layer, which was providing its own customized vimrc that had features enabled that were frustrating. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, August 09, 2018 11:01 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion Brent, Saul, Shuicheng, Let's initiate the discussion about how we'd like to handle CentOS 7.5 upgrade, we have a master xls sheet online for all non-openStack patches analysis (@Saul, I only have Google doc link but not accessible by WR). And here is the SRPM files we've already looked into, and believe they need upgrade. I put some columns in to fill-in more data (Shuicheng should have most of the data available). We can start from here. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Aug 10 00:55:06 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 10 Aug 2018 00:55:06 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: <9700A18779F35F49AF027300A49E7C7655350A48@SHSMSX101.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C7655350A48@SHSMSX101.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B3117CE@SHSMSX104.ccr.corp.intel.com> There is a separate story created by Saul on this one: https://storyboard.openstack.org/#!/story/2003389 Please sync-up with Saul if he already has patch available or you want to create one. Thx. - cindy From: Lin, Shuicheng Sent: Friday, August 10, 2018 8:45 AM To: Penney, Don ; Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS 7.5 upgrade discussion Hi Don, I also was thinking drop the vim srpm. Thanks for the confirmation. Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, August 9, 2018 11:17 PM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion I think we can safely drop the "vim" modification. It has a long history back to where we were inheriting code customizations from another layer, which was providing its own customized vimrc that had features enabled that were frustrating. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, August 09, 2018 11:01 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion Brent, Saul, Shuicheng, Let's initiate the discussion about how we'd like to handle CentOS 7.5 upgrade, we have a master xls sheet online for all non-openStack patches analysis (@Saul, I only have Google doc link but not accessible by WR). And here is the SRPM files we've already looked into, and believe they need upgrade. I put some columns in to fill-in more data (Shuicheng should have most of the data available). We can start from here. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Aug 10 00:56:57 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 10 Aug 2018 00:56:57 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C7655350A48@SHSMSX101.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B3117F7@SHSMSX104.ccr.corp.intel.com> Sorry wrong link: https://storyboard.openstack.org/#!/story/2003398 From: Xie, Cindy Sent: Friday, August 10, 2018 8:55 AM To: Lin, Shuicheng ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS 7.5 upgrade discussion There is a separate story created by Saul on this one: https://storyboard.openstack.org/#!/story/2003389 Please sync-up with Saul if he already has patch available or you want to create one. Thx. - cindy From: Lin, Shuicheng Sent: Friday, August 10, 2018 8:45 AM To: Penney, Don >; Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS 7.5 upgrade discussion Hi Don, I also was thinking drop the vim srpm. Thanks for the confirmation. Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, August 9, 2018 11:17 PM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion I think we can safely drop the "vim" modification. It has a long history back to where we were inheriting code customizations from another layer, which was providing its own customized vimrc that had features enabled that were frustrating. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, August 09, 2018 11:01 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion Brent, Saul, Shuicheng, Let's initiate the discussion about how we'd like to handle CentOS 7.5 upgrade, we have a master xls sheet online for all non-openStack patches analysis (@Saul, I only have Google doc link but not accessible by WR). And here is the SRPM files we've already looked into, and believe they need upgrade. I put some columns in to fill-in more data (Shuicheng should have most of the data available). We can start from here. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From yi.c.wang at intel.com Fri Aug 10 01:07:07 2018 From: yi.c.wang at intel.com (Wang, Yi C) Date: Fri, 10 Aug 2018 01:07:07 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: +1 on the yellow I like it more. From: James Cole [mailto:james at openstack.org] Sent: Thursday, August 9, 2018 5:45 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From forrest.zhao at intel.com Fri Aug 10 01:12:33 2018 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Fri, 10 Aug 2018 01:12:33 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: <6345119E91D5C843A93D64F498ACFA13699B3C17@SHSMSX101.ccr.corp.intel.com> +1 to Purple/Blue, especially the logo on T-shirt ☺ From: James Cole [mailto:james at openstack.org] Sent: Thursday, August 9, 2018 5:45 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuchang77 at chinaunicom.cn Fri Aug 10 02:16:12 2018 From: liuchang77 at chinaunicom.cn (liuchang77 at chinaunicom.cn) Date: Fri, 10 Aug 2018 10:16:12 +0800 Subject: [Starlingx-discuss] So many packages are missing References: <2018080911033096975810@chinaunicom.cn>, <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com>, <2018080915282283292923@chinaunicom.cn>, <9700A18779F35F49AF027300A49E7C7655350671@SHSMSX101.ccr.corp.intel.com> Message-ID: <2018081010161215532816@chinaunicom.cn> Hi Shuicheng, Thanks for your advice. I can access the links you provided. But the speed is very slow. So I think this is really a problem with my network speed. I work at the carrier, so our network is definitely fine. Maybe the real problem is Chinese firewall. I spent a few hours to re-run the download package script, and the missing list has not changed. So the only way I can think of now is to use vpn inside the container. But after I tried if, I found that the kernel of the system inside the container does not support pptp. I will keep trying to find another way to solve it. Thank you very much! Best Regards Chang From: Lin, Shuicheng Date: 2018-08-09 16:05 To: liuchang77 at chinaunicom.cn; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] So many packages are missing Hi Chang, I try to check some package in the missing list. Can you access below link for the package? http://vault.centos.org/7.5.1804/os/Source/SPackages/bash-4.2.46-30.el7.src.rpm https://s3.amazonaws.com/influxdb/influxdb-0.9.5.1-1.x86_64.rpm If yes, I think it maybe network speed issue. You could re-run the download package script. And check whether the missing list be shorter or not. If not, you may need check the networking setting. Best Regards Shuicheng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Thursday, August 9, 2018 3:28 PM To: Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Thanks,Erich, According to your advice, I ran "yum makecache" inside the container. Although it was slow, the task was complete with no error. Attachment 1 is my execution log. And attachment 2 is my missing packages' list. I also think that there are so many missing packages that are not normal. I suspect this is related to Chinese firewall. But I couldn't connect my vpn server in the container. I sincerely hope that you can give me some advice. Chang 刘 畅 云计算实验室 中国联合网络通信有限公司研究院 移动电话:18610741986 地址:北京市亦庄经济开发区北环东路1号 From: Cordoba Malibran, Erich Date: 2018-08-09 11:28 To: liuchang77 at chinaunicom.cn; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Hi Chang, You are right, download manually that amount of packages is a painful task. It’s not normal to have so many failures, my guess is that some of the repositories is down or blocked in the network. You can try to identify from where those packages are being downloaded to root cause the problem. Inside the mirror container you can run “yum makecache” to sync the repositories and try to see which repo failed. Also if you can share the list of failed packages we can help to identify what is the problematic repository. -Erich -------------- next part -------------- An HTML attachment was scrubbed... URL: From vivian.zhu at intel.com Fri Aug 10 02:20:38 2018 From: vivian.zhu at intel.com (Zhu, Vivian) Date: Fri, 10 Aug 2018 02:20:38 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> , <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: <371DF9A763E9F44F924F4A821FC070264C46CFA6@SHSMSX104.ccr.corp.intel.com> Also +1 on the yellow, it is brighter. :) - Vivian SSG OTC NST Storage Tel: (8621)61167437 From: Lara, Cesar [mailto:cesar.lara at intel.com] Sent: Friday, August 10, 2018 12:03 AM To: James Cole ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts +1 on the yellow Regards Cesar Lara Sent from my mobile phone ________________________________ From: James Cole > Sent: Wednesday, August 8, 2018 4:47 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I've been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don't have to be the final colors if you aren't drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From yan.chen at intel.com Fri Aug 10 02:26:32 2018 From: yan.chen at intel.com (Chen, Yan) Date: Fri, 10 Aug 2018 02:26:32 +0000 Subject: [Starlingx-discuss] Effort to make zuul linters happy In-Reply-To: References: <2588653EBDFFA34B982FAF00F1B4844EBB1E4016@ALA-MBD.corp.ad.wrs.com> Message-ID: <72AD03D27224C74982BE13246D75B39739934888@SHSMSX103.ccr.corp.intel.com> Hi, I just checked the zuul script under stx-fault, and here's what I found: The linters job will do 2 checks: 1. check *.sh files under this project with bashate. But we don't have shell file in this project, so the find cmd will return nothing and the bashate will fail. 2. check *.yaml files with yamllint. But in the cmd, it will find yaml files with "-name middleware/io-monitor/recipes-common/io-monitor/io-monitor/io_monitor/test-tools/yaml/*" which is not a right folder under this project. So it will always fail. As most of the linters jobs in different projects are the same (copied as default), I guess we can fix them with one synced script. Yan -----Original Message----- From: Plant, Lachlan [mailto:Lachlan.Plant at windriver.com] Sent: Friday, August 10, 2018 02:09 To: Rowsell, Brent ; Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Effort to make zuul linters happy I am interested in helping out with this effort. I have free cycles starting on Monday, please keep me in the loop. Lachlan -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: August-08-18 3:03 PM To: Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Effort to make zuul linters happy These repos are being deleted. https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003373 : stx-utils Brent -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Wednesday, August 8, 2018 2:57 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Effort to make zuul linters happy Hi all, Currently the zuul check only verifies linters and some of them fails. The zuul gate stage is disabled to make the integration possible. As it is desirable to use Zuul for gating code merges we need first to solve the linter problems in every repository. I've created a set of stories for each repository to start this effort. Now the only tasks created there is to go into Zuul logs and review what needs to be done to solve the issues and create the related tasks. In case someone wants to join this effort, here is the list of stories. https://storyboard.openstack.org/#!/story/2003359 : stx-clients https://storyboard.openstack.org/#!/story/2003360 : stx-config https://storyboard.openstack.org/#!/story/2003361 : stx-fault https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003364 : stx-gui https://storyboard.openstack.org/#!/story/2003365 : stx-ha https://storyboard.openstack.org/#!/story/2003366 : stx-integ https://storyboard.openstack.org/#!/story/2003367 : stx-manifest https://storyboard.openstack.org/#!/story/2003368 : stx-metal https://storyboard.openstack.org/#!/story/2003369 : stx-nfv https://storyboard.openstack.org/#!/story/2003370 : stx-root https://storyboard.openstack.org/#!/story/2003371 : stx-update https://storyboard.openstack.org/#!/story/2003372 : stx-upstream https://storyboard.openstack.org/#!/story/2003373 : stx-utils Thanks -Erich _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Fri Aug 10 02:54:02 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 9 Aug 2018 21:54:02 -0500 Subject: [Starlingx-discuss] StarlingX Documentation Initial Template In-Reply-To: References: <9B07CC56-51EF-4701-8837-9B97131D7B92@windriver.com> Message-ID: On Thu, Aug 9, 2018 at 4:33 PM, Arce Moreno, Abraham wrote: > How to decide where to land the documentation? 2 options: > > - docs.openstack.com We will not be publishing things under openstack.org > - docs.starlingx.io This or something else under starlingx.io > [ OpenStack Documentation ] > For each project we create source code directories to enable specific > functionality: > > /docs/ General Documentation /doc > /api-ref/ API Reference > /api-guide/ API Guide > /releasenotes/ Release Notes yes > [ docs.starlingx.io ] > > This option will allows us to take advantage of the infrastructure not only > to create but also to land our documentation as follows, depending on > what is the functionality available from the project: > > /docs/ -> docs.openstack.org/ > /api-ref/ -> developer.openstack.org/api-ref/ > /api-guide/ -> developer.openstack.org/api-guide/ > /releasenotes/ -> docs.openstack.org/releasenotes/ FWIW OpenStack did some of the separation here due to the different teams responsible for the content. We may not have that need and want to stick to a simpler format with everything under a single site. > [ docs.starlingx.io ] > > Source code directory structure can be kept however, what could be the landing > process since it will be out of OpenStack infrastructure? We need to get the server set up for this (via OpenStack Infra team) and set up the publishing jobs to populate it > How about the layout? > > Option 1 > /docs/ -> docs.starlingx.io/ > /api-ref/ -> docs.starlingx.io/api-ref/ > /api-guide/ -> docs.starlingx.io/api-guide/ > /releasenotes/ -> docs.starlingx.io/releasenotes/ I prefer this or something similar... dt -- Dean Troyer dtroyer at gmail.com From Don.Penney at windriver.com Fri Aug 10 03:00:24 2018 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 10 Aug 2018 03:00:24 +0000 Subject: [Starlingx-discuss] Effort to make zuul linters happy In-Reply-To: <72AD03D27224C74982BE13246D75B39739934888@SHSMSX103.ccr.corp.intel.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB1E4016@ALA-MBD.corp.ad.wrs.com> <72AD03D27224C74982BE13246D75B39739934888@SHSMSX103.ccr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA3368FA@ALA-MBD.corp.ad.wrs.com> 1. The xargs commands in the tox.ini files should be updated to include --no-run-if-empty 2. While the io-monitor path is no longer valid, it looks like it's an exclusion argument to the find commands, to skip any yaml files found under that directory. It doesn't cause the find command to fail. It should be removed from the tox.ini files, I agree, but it seems to have no effect. -----Original Message----- From: Chen, Yan [mailto:yan.chen at intel.com] Sent: Thursday, August 09, 2018 10:27 PM To: Plant, Lachlan; Rowsell, Brent; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Effort to make zuul linters happy Hi, I just checked the zuul script under stx-fault, and here's what I found: The linters job will do 2 checks: 1. check *.sh files under this project with bashate. But we don't have shell file in this project, so the find cmd will return nothing and the bashate will fail. 2. check *.yaml files with yamllint. But in the cmd, it will find yaml files with "-name middleware/io-monitor/recipes-common/io-monitor/io-monitor/io_monitor/test-tools/yaml/*" which is not a right folder under this project. So it will always fail. As most of the linters jobs in different projects are the same (copied as default), I guess we can fix them with one synced script. Yan -----Original Message----- From: Plant, Lachlan [mailto:Lachlan.Plant at windriver.com] Sent: Friday, August 10, 2018 02:09 To: Rowsell, Brent ; Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Effort to make zuul linters happy I am interested in helping out with this effort. I have free cycles starting on Monday, please keep me in the loop. Lachlan -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: August-08-18 3:03 PM To: Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Effort to make zuul linters happy These repos are being deleted. https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003373 : stx-utils Brent -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Wednesday, August 8, 2018 2:57 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Effort to make zuul linters happy Hi all, Currently the zuul check only verifies linters and some of them fails. The zuul gate stage is disabled to make the integration possible. As it is desirable to use Zuul for gating code merges we need first to solve the linter problems in every repository. I've created a set of stories for each repository to start this effort. Now the only tasks created there is to go into Zuul logs and review what needs to be done to solve the issues and create the related tasks. In case someone wants to join this effort, here is the list of stories. https://storyboard.openstack.org/#!/story/2003359 : stx-clients https://storyboard.openstack.org/#!/story/2003360 : stx-config https://storyboard.openstack.org/#!/story/2003361 : stx-fault https://storyboard.openstack.org/#!/story/2003361 : stx-gplv2 https://storyboard.openstack.org/#!/story/2003363 : stx-gplv3 https://storyboard.openstack.org/#!/story/2003364 : stx-gui https://storyboard.openstack.org/#!/story/2003365 : stx-ha https://storyboard.openstack.org/#!/story/2003366 : stx-integ https://storyboard.openstack.org/#!/story/2003367 : stx-manifest https://storyboard.openstack.org/#!/story/2003368 : stx-metal https://storyboard.openstack.org/#!/story/2003369 : stx-nfv https://storyboard.openstack.org/#!/story/2003370 : stx-root https://storyboard.openstack.org/#!/story/2003371 : stx-update https://storyboard.openstack.org/#!/story/2003372 : stx-upstream https://storyboard.openstack.org/#!/story/2003373 : stx-utils Thanks -Erich _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Fri Aug 10 03:01:46 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 9 Aug 2018 22:01:46 -0500 Subject: [Starlingx-discuss] Effort to make zuul linters happy In-Reply-To: <72AD03D27224C74982BE13246D75B39739934888@SHSMSX103.ccr.corp.intel.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB1E4016@ALA-MBD.corp.ad.wrs.com> <72AD03D27224C74982BE13246D75B39739934888@SHSMSX103.ccr.corp.intel.com> Message-ID: On Thu, Aug 9, 2018 at 9:26 PM, Chen, Yan wrote: > 1. check *.sh files under this project with bashate. > But we don't have shell file in this project, so the find cmd will return nothing and the bashate will fail. If there are no shell scripts then we can skip the bashate run. > 2. check *.yaml files with yamllint. > But in the cmd, it will find yaml files with "-name middleware/io-monitor/recipes-common/io-monitor/io-monitor/io_monitor/test-tools/yaml/*" which is not a right folder under this project. So it will always fail. We should add an exception for paths like that to skip them. The yamllint check is useful to validate the Zuul config files. > As most of the linters jobs in different projects are the same (copied as default), I guess we can fix them with one synced script. They started from the same template, yes, and should be adjusted to fit the requirements of the repo they are in. The majority of the shuffling of things around has settled down so this is a good time to start looking at these, we still do need to be mindful of the WRS backlog yet to me submitted, gratuitous formatting changes will cause those to need to be rebased, so this should be done with that in mind. The defined interface here is the tox environments, ie 'tox -e linters' should always run an appropriate set of lint jobs for that repo. These should always be usable both in the CI jobs and locally. dt -- Dean Troyer dtroyer at gmail.com From Don.Penney at windriver.com Fri Aug 10 03:18:40 2018 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 10 Aug 2018 03:18:40 +0000 Subject: [Starlingx-discuss] Effort to make zuul linters happy In-Reply-To: References: <2588653EBDFFA34B982FAF00F1B4844EBB1E4016@ALA-MBD.corp.ad.wrs.com> <72AD03D27224C74982BE13246D75B39739934888@SHSMSX103.ccr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA33691A@ALA-MBD.corp.ad.wrs.com> Leaving the bashate in place with xargs --no-run-if-empty would cover the case where someone introduces a shell script where none previously existed. Assuming people follow the convention of naming shell scripts with a .sh extension, at least. -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Thursday, August 09, 2018 11:02 PM To: Chen, Yan Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Effort to make zuul linters happy On Thu, Aug 9, 2018 at 9:26 PM, Chen, Yan wrote: > 1. check *.sh files under this project with bashate. > But we don't have shell file in this project, so the find cmd will return nothing and the bashate will fail. If there are no shell scripts then we can skip the bashate run. > 2. check *.yaml files with yamllint. > But in the cmd, it will find yaml files with "-name middleware/io-monitor/recipes-common/io-monitor/io-monitor/io_monitor/test-tools/yaml/*" which is not a right folder under this project. So it will always fail. We should add an exception for paths like that to skip them. The yamllint check is useful to validate the Zuul config files. > As most of the linters jobs in different projects are the same (copied as default), I guess we can fix them with one synced script. They started from the same template, yes, and should be adjusted to fit the requirements of the repo they are in. The majority of the shuffling of things around has settled down so this is a good time to start looking at these, we still do need to be mindful of the WRS backlog yet to me submitted, gratuitous formatting changes will cause those to need to be rebased, so this should be done with that in mind. The defined interface here is the tox environments, ie 'tox -e linters' should always run an appropriate set of lint jobs for that repo. These should always be usable both in the CI jobs and locally. dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Fri Aug 10 03:26:30 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 9 Aug 2018 22:26:30 -0500 Subject: [Starlingx-discuss] Effort to make zuul linters happy In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA33691A@ALA-MBD.corp.ad.wrs.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB1E4016@ALA-MBD.corp.ad.wrs.com> <72AD03D27224C74982BE13246D75B39739934888@SHSMSX103.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA33691A@ALA-MBD.corp.ad.wrs.com> Message-ID: On Thu, Aug 9, 2018 at 10:18 PM, Penney, Don wrote: > Leaving the bashate in place with xargs --no-run-if-empty would cover the case where someone introduces a shell script where none previously existed. Assuming people follow the convention of naming shell scripts with a .sh extension, at least. ++, that is a better solution. In places that intentionally do not have shell scripts with a .sh ending we added explicit inclusions. DevStack is the pathological example of this...[0] dt [0] https://git.openstack.org/cgit/openstack-dev/devstack/tree/tox.ini#n10 -- Dean Troyer dtroyer at gmail.com From shuicheng.lin at intel.com Fri Aug 10 03:58:24 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Fri, 10 Aug 2018 03:58:24 +0000 Subject: [Starlingx-discuss] So many packages are missing In-Reply-To: <2018081010161215532816@chinaunicom.cn> References: <2018080911033096975810@chinaunicom.cn>, <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com>, <2018080915282283292923@chinaunicom.cn>, <9700A18779F35F49AF027300A49E7C7655350671@SHSMSX101.ccr.corp.intel.com> <2018081010161215532816@chinaunicom.cn> Message-ID: <9700A18779F35F49AF027300A49E7C7655350AB9@SHSMSX101.ccr.corp.intel.com> Hi Chang, Yumdownloader will stop download the package if speed is lower than 1K/S. To work around it, I think you could use ��sudo �CE yumdownloader �CC --url �� to get the url of the package. Then use wget to download it directly. To setup the mirror will take some time depends on the network. But the good news is you just need do it one time. And do incremental downloader later which will be much easier. Best Regards Shuicheng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Friday, August 10, 2018 10:16 AM To: Lin, Shuicheng ; Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io Subject: Re: RE: [Starlingx-discuss] So many packages are missing Hi Shuicheng, Thanks for your advice. I can access the links you provided. But the speed is very slow. So I think this is really a problem with my network speed. I work at the carrier, so our network is definitely fine. Maybe the real problem is Chinese firewall. I spent a few hours to re-run the download package script, and the missing list has not changed. So the only way I can think of now is to use vpn inside the container. But after I tried if, I found that the kernel of the system inside the container does not support pptp. I will keep trying to find another way to solve it. Thank you very much! Best Regards Chang From: Lin, Shuicheng Date: 2018-08-09 16:05 To: liuchang77 at chinaunicom.cn; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] So many packages are missing Hi Chang, I try to check some package in the missing list. Can you access below link for the package? http://vault.centos.org/7.5.1804/os/Source/SPackages/bash-4.2.46-30.el7.src.rpm https://s3.amazonaws.com/influxdb/influxdb-0.9.5.1-1.x86_64.rpm If yes, I think it maybe network speed issue. You could re-run the download package script. And check whether the missing list be shorter or not. If not, you may need check the networking setting. Best Regards Shuicheng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Thursday, August 9, 2018 3:28 PM To: Cordoba Malibran, Erich >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Thanks��Erich�� According to your advice, I ran "yum makecache" inside the container. Although it was slow, the task was complete with no error. Attachment 1 is my execution log. And attachment 2 is my missing packages' list. I also think that there are so many missing packages that are not normal. I suspect this is related to Chinese firewall. But I couldn't connect my vpn server in the container. I sincerely hope that you can give me some advice. Chang ________________________________ �� �� �Ƽ���ʵ���� �й���������ͨ�����޹�˾�о�Ժ �ƶ��绰:18610741986 ��ַ:��������ׯ���ÿ�����������·1�� From: Cordoba Malibran, Erich Date: 2018-08-09 11:28 To: liuchang77 at chinaunicom.cn; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Hi Chang, You are right, download manually that amount of packages is a painful task. It��s not normal to have so many failures, my guess is that some of the repositories is down or blocked in the network. You can try to identify from where those packages are being downloaded to root cause the problem. Inside the mirror container you can run ��yum makecache�� to sync the repositories and try to see which repo failed. Also if you can share the list of failed packages we can help to identify what is the problematic repository. -Erich -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Fri Aug 10 04:00:21 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Fri, 10 Aug 2018 04:00:21 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B3117F7@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C7655350A48@SHSMSX101.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B3117F7@SHSMSX104.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C7655350AFB@SHSMSX101.ccr.corp.intel.com> Get it. Thanks Cindy. Best Regards Shuicheng From: Xie, Cindy Sent: Friday, August 10, 2018 8:57 AM To: Lin, Shuicheng ; Penney, Don ; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS 7.5 upgrade discussion Sorry wrong link: https://storyboard.openstack.org/#!/story/2003398 From: Xie, Cindy Sent: Friday, August 10, 2018 8:55 AM To: Lin, Shuicheng >; Penney, Don >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS 7.5 upgrade discussion There is a separate story created by Saul on this one: https://storyboard.openstack.org/#!/story/2003389 Please sync-up with Saul if he already has patch available or you want to create one. Thx. - cindy From: Lin, Shuicheng Sent: Friday, August 10, 2018 8:45 AM To: Penney, Don >; Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS 7.5 upgrade discussion Hi Don, I also was thinking drop the vim srpm. Thanks for the confirmation. Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, August 9, 2018 11:17 PM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion I think we can safely drop the "vim" modification. It has a long history back to where we were inheriting code customizations from another layer, which was providing its own customized vimrc that had features enabled that were frustrating. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, August 09, 2018 11:01 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion Brent, Saul, Shuicheng, Let's initiate the discussion about how we'd like to handle CentOS 7.5 upgrade, we have a master xls sheet online for all non-openStack patches analysis (@Saul, I only have Google doc link but not accessible by WR). And here is the SRPM files we've already looked into, and believe they need upgrade. I put some columns in to fill-in more data (Shuicheng should have most of the data available). We can start from here. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Fri Aug 10 04:16:01 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 9 Aug 2018 23:16:01 -0500 Subject: [Starlingx-discuss] Milestone branch m/2018.08 created Message-ID: I have created the milestone branch m/2018.08 in all StarlingX repos listed in the current manifest file. The repos in Gerrit have a review in the branch to update .gitreview, please review and merge these before any other changes are merged in the branch. The stx-manifest repo also has a m/2018.08 branch review with the default.xml manifest set to pull the branch directly rather than master. The only things that should be considered for backport to this milestone are bugs that need to be addressed for the testing or stability of the milestone. dt -- Dean Troyer dtroyer at gmail.com From mingyuan.qi at intel.com Fri Aug 10 06:36:39 2018 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Fri, 10 Aug 2018 06:36:39 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: +1 on yellow & black Thanks, Mingyuan From: James Cole [mailto:james at openstack.org] Sent: Thursday, August 9, 2018 5:45 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From ran1.an at intel.com Fri Aug 10 07:22:51 2018 From: ran1.an at intel.com (An, Ran1) Date: Fri, 10 Aug 2018 07:22:51 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: <9BAB5B7CAF57C3459E4636391F1071CECAA5F0@shsmsx102.ccr.corp.intel.com> +1 for purple, it is comfortable Thanks Ran From: James Cole [mailto:james at openstack.org] Sent: Thursday, August 9, 2018 5:45 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From yan.chen at intel.com Fri Aug 10 07:59:08 2018 From: yan.chen at intel.com (Chen, Yan) Date: Fri, 10 Aug 2018 07:59:08 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: <72AD03D27224C74982BE13246D75B39739934983@SHSMSX103.ccr.corp.intel.com> +1 for yellow/black. The blue one is also good, but the yellow/black one is more impressive. Yan From: James Cole [mailto:james at openstack.org] Sent: Thursday, August 9, 2018 05:45 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From yan.chen at intel.com Fri Aug 10 08:22:27 2018 From: yan.chen at intel.com (Chen, Yan) Date: Fri, 10 Aug 2018 08:22:27 +0000 Subject: [Starlingx-discuss] Please help to review Python 2to3 patches for stx-fault project. Message-ID: <72AD03D27224C74982BE13246D75B397399349A4@SHSMSX103.ccr.corp.intel.com> Hi, Please help to review Python 2to3 patches for stx-fault project. I submitted one patch for each of Python 3 compatible issues I found in this project, to make them easier to review. And #590102 is the patch to enable flake8 for PEP-8 check, which will modify many python files, please check if it will impact your development. Story: https://storyboard.openstack.org/#!/story/2003310 Patches: https://review.openstack.org/#/c/588483 (fix print function issue) https://review.openstack.org/#/c/590099 (fix relative import issue) https://review.openstack.org/#/c/590100 (fix dict related issues) https://review.openstack.org/#/c/590101 (replace filter() ) https://review.openstack.org/#/c/590102 (add flake8 as pep8 style check. BIG CHANGE.) Yan -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ovidiu.Poncea at windriver.com Fri Aug 10 13:18:23 2018 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Fri, 10 Aug 2018 13:18:23 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <72AD03D27224C74982BE13246D75B39739934983@SHSMSX103.ccr.corp.intel.com> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org>, <72AD03D27224C74982BE13246D75B39739934983@SHSMSX103.ccr.corp.intel.com> Message-ID: <4C60D9C5C8176C47874FFF36647AA19E9D55E595@ALA-MBD.corp.ad.wrs.com> +1 for yellow/something-else-not-black. Try yellow/blue. The first concept with blue/blue looks better but is...hmm... to "tired". Yellow is full of life yet it would look better without black. :) Thanks, Ovidiu ________________________________ From: Chen, Yan [yan.chen at intel.com] Sent: Friday, August 10, 2018 10:59 AM To: James Cole; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts +1 for yellow/black. The blue one is also good, but the yellow/black one is more impressive. Yan From: James Cole [mailto:james at openstack.org] Sent: Thursday, August 9, 2018 05:45 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi everybody, Thanks for your feedback on the logos last week! Concept 1 was the clear winner based on your comments, so I’ve been playing around with colors. The attached document (also on Dropbox) shows two color combinations sampled from colors from this starling image. The logo works in both one or two colors and on dark or light backgrounds. There are a few mockups in the document as well. Please let me know if you like the purple or yellow versions better, or if you think we should try any other colors (these don’t have to be the final colors if you aren’t drawn to either of the options). Thank you! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Aug 10 13:54:25 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 10 Aug 2018 13:54:25 +0000 Subject: [Starlingx-discuss] [Distro-Non-Openstack] Upgrading libvirt and qemu In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B311419@SHSMSX104.ccr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA405C4A@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E611A@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F2B311419@SHSMSX104.ccr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA405EDD@ALA-MBD.corp.ad.wrs.com> Sounds good. Thanks Cindy. Zhipeng will need to coordinate with Jim as both libvirt and python-libvirt will need to be merged at the same time. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, August 09, 2018 8:08 PM To: Rowsell, Brent; Khalil, Ghada; starlingx-discuss at lists.starlingx.io; Saul Wold; Somerville, Jim Subject: RE: [Distro-Non-Openstack] Upgrading libvirt and qemu I re-opened the story for libvirt-python to upgrade to 4.6 (instead of 3.9): https://storyboard.openstack.org/#!/story/2003339. Zhipeng was assigned to this story and he will continue working on this. He can also join Jim for libvirt and qemu story as well. Thx. - cindy From: Xie, Cindy Sent: Friday, August 10, 2018 6:55 AM To: 'Rowsell, Brent' >; Khalil, Ghada >; starlingx-discuss at lists.starlingx.io; Saul Wold >; Somerville, Jim > Subject: RE: [Distro-Non-Openstack] Upgrading libvirt and qemu HI, Ghada and Brent, Yes, I will have one engineer join Jim working on libvirt, qemu and libvirt-python upgrade. Will get back to you w/ name. Thx. - cindy From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Friday, August 10, 2018 3:13 AM To: Khalil, Ghada >; starlingx-discuss at lists.starlingx.io; Xie, Cindy >; Saul Wold >; Somerville, Jim > Subject: RE: [Distro-Non-Openstack] Upgrading libvirt and qemu Thanks Ghada. I added a note to the libvirt story, python-libvirt will need to be updated to 4.6 as well. Brent From: Khalil, Ghada Sent: Thursday, August 9, 2018 3:06 PM To: starlingx-discuss at lists.starlingx.io; Cindy Xie >; Saul Wold >; Rowsell, Brent >; Somerville, Jim > Subject: [Distro-Non-Openstack] Upgrading libvirt and qemu Saul/Brent/Cindy, In order to remain current, I'd like to propose that we upgrade libvirt and qemu to the latest versions. The last upversion of these packages was almost a year ago, so I think it's time. Let me know if you have any concerns. I have created two stories: https://storyboard.openstack.org/#!/story/2003396 https://storyboard.openstack.org/#!/story/2003395 Jim Somerville has volunteered to do tis. If anyone else from the Distro-Non-Openstack team wants to contribute, please reach out to Jim. We'll target this for the October release. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Fri Aug 10 16:40:26 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Fri, 10 Aug 2018 16:40:26 +0000 Subject: [Starlingx-discuss] StarlingX Documentation Initial Template In-Reply-To: References: <9B07CC56-51EF-4701-8837-9B97131D7B92@windriver.com> Message-ID: > > How to decide where to land the documentation? 2 options: > > > > - docs.openstack.com > > We will not be publishing things under openstack.org Understood > > - docs.starlingx.io > > This or something else under starlingx.io Understood > > [ OpenStack Documentation ] > > For each project we create source code directories to enable specific > > functionality: > > > > /docs/ General Documentation > > /doc Yep, typo > > /api-ref/ API Reference > > /api-guide/ API Guide > > /releasenotes/ Release Notes > > yes :) > > [ docs.starlingx.io ] > > > > /docs/ -> docs.openstack.org/ > > /api-ref/ -> developer.openstack.org/api-ref/ project> > > /api-guide/ -> developer.openstack.org/api-guide/ project> > > /releasenotes/ -> > > docs.openstack.org/releasenotes/ > > FWIW OpenStack did some of the separation here due to the different teams > responsible for the content. We may not have that need and want to stick to > a simpler format with everything under a single site. Let me get into the next level of detail with respect to our source code structure and theme: [ Structure: Option 1 ] Aligned to OpenStack project conventions /docs/ /api-ref/ /api-guide/ /releasenotes/ [ Structure: Option 2 ] /docs/source/ /docs/api-ref/ /docs/api-guide/ /docs/releasenotes/ [ Theme ] Do we have a short and long term goal with respect to the theme? I assume we continue using sphinx and about its theme we have the following options: - default - continue with openstackdocstheme - a custom starlingx? > > [ docs.starlingx.io ] > > > > Source code directory structure can be kept however, what could be the > > landing process since it will be out of OpenStack infrastructure? > > We need to get the server set up for this (via OpenStack Infra team) and set > up the publishing jobs to populate it Understood, any example in mailing lists or through a gerrit review which requests a non OpenStack landing site to learn from and start with this activity? > > How about the layout? > > > > Option 1 > > /docs/ -> docs.starlingx.io/ > > /api-ref/ -> docs.starlingx.io/api-ref/ > > /api-guide/ -> docs.starlingx.io/api-guide/ > > /releasenotes/ -> > > docs.starlingx.io/releasenotes/ > > I prefer this or something similar... Landing layout then to be defined after we work with OpenStack Infra team. From scott.little at windriver.com Fri Aug 10 17:35:26 2018 From: scott.little at windriver.com (Scott Little) Date: Fri, 10 Aug 2018 13:35:26 -0400 Subject: [Starlingx-discuss] kojipkgs.fedoraproject.org as a backup rpm source Message-ID: <56ac6a07-f408-2c22-cfef-812a4d9784d8@windriver.com> As mentioned on the conference call, fedoras koji can be used as an alternate source of EPEL rpms that have aged out of the repo. It retains copies of past build products longer than the repo itself. In light of recent issues seen by the folks in China, I thought I'd post some candidate code on how we might exploit this.     https://review.openstack.org/591022 Use kojipkgs.fedoraproject.org as a backup rpm source. Rather than asking folks to manually download missing files, this attempts to do so automatically.  Files downloaded through this mechanism would be listed in .../output/centos_(s)rpms_found_K1.txt.  Such downloads should still be investigated, and possibly flagged on storyboard as requiring upgrade or a new source. From marcela.a.rosales.jimenez at intel.com Fri Aug 10 17:55:42 2018 From: marcela.a.rosales.jimenez at intel.com (Rosales Jimenez, Marcela A) Date: Fri, 10 Aug 2018 17:55:42 +0000 Subject: [Starlingx-discuss] question about dl_rpms.sh generating 8 logs Message-ID: Hi team, I’m reviewing download_mirror.sh and dl_rpms.sh, because I’m working on setting up the mirror download on Jenkins daily. And I got a question: Why does dl_rpms.sh script generates 8 logs each time it is executed? For example, if we execute: $ ./dl_rpms.sh rpms_from_centos_repo.lst L1 centos We will get: centos_rpms_fail_move_L1.txt centos_rpms_missing_L1.txt centos_rpms_found_L1.txt centos_rpms_urls_L1.txt centos_srpms_fail_move_L1.txt centos_srpms_missing_L1.txt centos_srpms_found_L1.txt centos_srpms_urls_L1.txt Could we have four instead of eight? (let’s say centos_pkgs_fail_move_L1.txt, etc) The information about whether a package is noarch, x86_64 or src is already in its name. So for me it seems that we could leave four, but I don't know if in the past there was an intention for having this information like this. Thanks. Marcela -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Fri Aug 10 18:15:22 2018 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 10 Aug 2018 11:15:22 -0700 Subject: [Starlingx-discuss] CentOS 7.5 upgrade discussion In-Reply-To: <9700A18779F35F49AF027300A49E7C7655350AFB@SHSMSX101.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3104AD@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3362C2@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C7655350A48@SHSMSX101.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B3117F7@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C7655350AFB@SHSMSX101.ccr.corp.intel.com> Message-ID: I do not have a patch for this yet, so go ahead an work on it. On another subject, I am looking into options for our different configuration / initialization files changes. Using RPM vs Puppet, which based on a discussion will depend on the type of configuration or initialization changes required. I will move that discussion to existing thread here: http://lists.starlingx.io/pipermail/starlingx-discuss/2018-August/000615.html Sau! On 08/09/2018 09:00 PM, Lin, Shuicheng wrote: > Get it. Thanks Cindy. > > Best Regards > > Shuicheng > > *From:* Xie, Cindy > *Sent:* Friday, August 10, 2018 8:57 AM > *To:* Lin, Shuicheng ; Penney, Don > ; starlingx-discuss at lists.starlingx.io > *Subject:* RE: CentOS 7.5 upgrade discussion > > Sorry wrong link: https://storyboard.openstack.org/#!/story/2003398 > > *From:* Xie, Cindy > *Sent:* Friday, August 10, 2018 8:55 AM > *To:* Lin, Shuicheng >; Penney, Don >; starlingx-discuss at lists.starlingx.io > > *Subject:* RE: CentOS 7.5 upgrade discussion > > There is a separate story created by Saul on this one: > https://storyboard.openstack.org/#!/story/2003389 > > Please sync-up with Saul if he already has patch available or you want > to create one. > > Thx. - cindy > > *From:* Lin, Shuicheng > *Sent:* Friday, August 10, 2018 8:45 AM > *To:* Penney, Don >; Xie, Cindy >; starlingx-discuss at lists.starlingx.io > > *Subject:* RE: CentOS 7.5 upgrade discussion > > Hi Don, > > I also was thinking drop the vim srpm. Thanks for the confirmation. > > Best Regards > > Shuicheng > > *From:* Penney, Don [mailto:Don.Penney at windriver.com] > *Sent:* Thursday, August 9, 2018 11:17 PM > *To:* Xie, Cindy >; > starlingx-discuss at lists.starlingx.io > > *Subject:* Re: [Starlingx-discuss] CentOS 7.5 upgrade discussion > > I think we can safely drop the “vim” modification. It has a long history > back to where we were inheriting code customizations from another layer, > which was providing its own customized vimrc that had features enabled > that were frustrating. > > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Thursday, August 09, 2018 11:01 AM > *To:* starlingx-discuss at lists.starlingx.io > > *Subject:* [Starlingx-discuss] CentOS 7.5 upgrade discussion > > Brent, Saul, Shuicheng, > > Let’s initiate the discussion about how we’d like to handle CentOS 7.5 > upgrade, we have a master xls sheet online for all non-openStack patches > analysis (@Saul, I only have Google doc link but not accessible by WR). > > And here is the SRPM files we’ve already looked into, and believe they > need upgrade. I put some columns in to fill-in more data (Shuicheng > should have most of the data available). We can start from here. > > Thx. - cindy > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From bruce.e.jones at intel.com Fri Aug 10 20:07:12 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 10 Aug 2018 20:07:12 +0000 Subject: [Starlingx-discuss] EdgeX Foundry Message-ID: <9A85D2917C58154C960D95352B22818BAB577466@fmsmsx115.amr.corp.intel.com> I'm getting some pings on our position / status in regard to EdgeX Foundry. We should think about how/when we'd like to dive more deeply into that, if we haven't already. Or has someone already done so? brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose.perez.carranza at intel.com Fri Aug 10 20:10:14 2018 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Fri, 10 Aug 2018 20:10:14 +0000 Subject: [Starlingx-discuss] [Testing] Test Framework Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A8C37D6@fmsmsx101.amr.corp.intel.com> Hello, We are currently working on automated tests for StarlingX Deployment, as the base of the automation we are using Robot Framework [1]. If any of you have experience or have read about this framework we would like to hear your feedback of this approach. 1- http://robotframework.org/ Regards, José From bruce.e.jones at intel.com Fri Aug 10 21:36:53 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 10 Aug 2018 21:36:53 +0000 Subject: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking In-Reply-To: References: Message-ID: <9A85D2917C58154C960D95352B22818BAB577532@fmsmsx115.amr.corp.intel.com> I spent some time looking at LaunchPad today. It seems like it should meet our needs for bug handling. We might want to use it as a place to work on specs/blueprints. I suggest we spin up an instance and start using it. We'll know pretty quickly if the issues identified in this thread are serious enough that we'd want to go to Plan B. bruecj -----Original Message----- From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] Sent: Thursday, August 9, 2018 1:57 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking Hi, Thanks Dariush for doing the evaluation work. While Launchpad is not perfect, I would like to confirm on this thread too what Dariush mentioned as well, there is a migration script and process to move from Launchpad to StoryBoard which I think is a big advantage. Thanks and Best Regards, Ildikó > On 2018. Aug 9., at 22:19, Eslimi, Dariush wrote: > > Thanks for clarification, I understand the motivations, all I am trying to say Storyboard is not ready yet for bugs. > So we can join others in Openstack and move when all move to new platform. > > Dariush > > -----Original Message----- > From: Dean Troyer [mailto:dtroyer at gmail.com] > Sent: August-09-18 4:07 PM > To: Eslimi, Dariush > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] LaunchPad vs Storyboard for bug tracking > > On Thu, Aug 9, 2018 at 1:10 PM, Eslimi, Dariush wrote: >> * We will depend on UbuntuOne regardless of Launchpad, as we need it for gerrit. > > OpenStack will move Gerrit and other tooling to the OpenStack Foundation auth authority when we are free of LaunchPad, this tie is why it is an issue. That change will also affect StarlingX and any other project using OpenStack Foundation resources. > >> * API-first : Storybard suffers from same issue, rich API, UI not representation of API and backend. > > You have this backward. LaunchPad does not expose everything in its API that it can do in the web UI. This has hampered some automation efforts and in fact was one of the motivators for the SB teams position of API first. The situation with Storyboard allows users with API capabilities to solve their own problems regarding UI limitations. This is not possible with LaunchPad. > > I am not arguing a position one way or the other, just clarifying the stated reasons and why a specific organization has wanted to leave LaunchPad for over 6 years now. > > dt > > -- > > Dean Troyer > dtroyer at gmail.com > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Sat Aug 11 03:28:15 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Sat, 11 Aug 2018 03:28:15 +0000 Subject: [Starlingx-discuss] [Release] Sub-Project wiki pages are updated Message-ID: <151EE31B9FCCA54397A757BC674650F0BA40DB2C@ALA-MBD.corp.ad.wrs.com> Hello all, I've updated all the sub-project wiki pages and linked them to the main wiki. https://wiki.openstack.org/wiki/StarlingX#Sub-projects As suggested by Bruce, each page now lists the contributors to the sub-project (I put a link to each page in ethercalc). I also added the core reviewers to help everyone add the right people to their gerrit reviews. Each page also has a number of story board links for applicable stories. These are based on the tags defined here: https://wiki.openstack.org/wiki/StarlingX/Tags_and_Prefixes Note: I have tagged some of the existing stories, but I didn't have time to go thru the whole backlog. I encourage authors of stories to tag them upon creation if they know the right sub-project. Each sub-project team can customize their page as they see fit. I also encourage each team to tag the stories they are targeting for the October release with stx.2018.10. This will give us good data to pull the release plan. As a reference point, there are 182 active stories; only 40 are tagged for the October release as of now. Just a reminder, tags must be added one at a time. (stx.bug stx.config -> Add >> results in a new tag "stx.bug stx.config" being added, not two individual tags) Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Sat Aug 11 16:41:29 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Sat, 11 Aug 2018 16:41:29 +0000 Subject: [Starlingx-discuss] [Doc] Release Notes Management Message-ID: Release Notes are ready to get incorporated into our StarlingX projects. It is worth to spent some time reading the 15 minute Reno documentation [0]. [ Baseline ] Release Notes Management baseline is ready for review [1] [Doc] Release Notes Management Once this is approved it will be ported to the rest of our projects. [ Demo ] To generate our Release Note and Report a small amount of effort is required from both our Developers and our Release team. Here you have a demo for Milestone branch m/2018.08 [2] Includeing both efforts: [Doc] [Demo] Release Notes m/2018.08 [ Grouping ] There is a convention to follow how Release Notes are grouped: - features - issues - upgrade - deprecations - critical - security - fixes - other See [3] for an example. [ Call to Action ] In my limited understanding a typical flow would be as follows: Developer [4] 1. Start common development workflow to create your change: "Hello My Change" 2. New! Create its release notes in reStructuredText, no major effort since title and content might be reused from git commit information: tox -e venv -- reno new hello-my-change 3. Submit your change for review. Release Team 1. Start development work to prepare the release, this might include git tag. 2. Create to generate the Reno Report tox -e releasenotes 3. Submit your change for review. In OpenStack it seems OpenStack Release Bot takes care of the Release Process. See Nova Release Notes [5] for how it looks like. [0] https://docs.openstack.org/reno/latest/ [1] https://review.openstack.org/#/c/590798/5 [2] https://review.openstack.org/#/c/591157/2 [3] https://review.openstack.org/#/c/591157/2/releasenotes/notes/milestone-branch-m201808-717218b5976a529e.yaml [4] https://review.openstack.org/#/c/282520/ [5] https://docs.openstack.org/releasenotes/nova/rocky.html From ildiko.vancsa at gmail.com Sun Aug 12 07:21:02 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sun, 12 Aug 2018 09:21:02 +0200 Subject: [Starlingx-discuss] Cross-project discussions about Keystone for Edge at the PTG Message-ID: Hi, The Keystone, Edge Computing Group and StarlingX teams are planning to have follow up discussions about using Keystone in edge scenarios including discussing requirements, architecture options and currently ongoing activities. If you are interested in participating you can find further information here: http://lists.openstack.org/pipermail/edge-computing/2018-August/000394.html Please let me know if you have any questions. Thanks and Best Regards, Ildikó From ildiko.vancsa at gmail.com Sun Aug 12 07:33:57 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Sun, 12 Aug 2018 09:33:57 +0200 Subject: [Starlingx-discuss] 'Project Ask me Anything' room at the PTG - infrastructure and testing Message-ID: Hi, I would like to draw your attention to the 'Project Ask me Anything’ room at the PTG, which is scheduled for Monday-Tuesday: https://www.openstack.org/ptg#tab_schedule I encourage all of you who plan to attend the PTG to take the time and talk to people if you have any questions about any of the participating OpenStack projects or need guidance. As stress testing came up on earlier occasions, I think this is a great opportunity to learn more about how the OpenStack Infrastructure team is operating, about the HW that the tests are running on and to explore the options for StarlingX testing activities as well. If you are planning to attend the PTG please add your name to this etherpad: https://etherpad.openstack.org/p/stx-PTG-agenda You can find the registration link on the PTG website: https://www.openstack.org/ptg Please let me know if you have any questions. Thanks and Best Regards, Ildikó From zhipengs.liu at intel.com Mon Aug 13 01:26:22 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 13 Aug 2018 01:26:22 +0000 Subject: [Starlingx-discuss] question about dl_rpms.sh generating 8 logs In-Reply-To: References: Message-ID: <93814834B4855241994F290E959305C752F77301@SHSMSX104.ccr.corp.intel.com> From my point, it should be better to generate a collected report and also print out them after finishing mirror download. Otherwise you need to see all these files every time. Zhipeng From: Rosales Jimenez, Marcela A [mailto:marcela.a.rosales.jimenez at intel.com] Sent: 2018年8月11日 1:56 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] question about dl_rpms.sh generating 8 logs Hi team, I’m reviewing download_mirror.sh and dl_rpms.sh, because I’m working on setting up the mirror download on Jenkins daily. And I got a question: Why does dl_rpms.sh script generates 8 logs each time it is executed? For example, if we execute: $ ./dl_rpms.sh rpms_from_centos_repo.lst L1 centos We will get: centos_rpms_fail_move_L1.txt centos_rpms_missing_L1.txt centos_rpms_found_L1.txt centos_rpms_urls_L1.txt centos_srpms_fail_move_L1.txt centos_srpms_missing_L1.txt centos_srpms_found_L1.txt centos_srpms_urls_L1.txt Could we have four instead of eight? (let’s say centos_pkgs_fail_move_L1.txt, etc) The information about whether a package is noarch, x86_64 or src is already in its name. So for me it seems that we could leave four, but I don't know if in the past there was an intention for having this information like this. Thanks. Marcela -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Mon Aug 13 02:02:56 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Mon, 13 Aug 2018 02:02:56 +0000 Subject: [Starlingx-discuss] [Distro-Non-Openstack] Upgrading libvirt and qemu In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA405EDD@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA405C4A@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB1E611A@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F2B311419@SHSMSX104.ccr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA405EDD@ALA-MBD.corp.ad.wrs.com> Message-ID: <93814834B4855241994F290E959305C752F77462@SHSMSX104.ccr.corp.intel.com> Got it, I will sync with Jim after python-libvirt patch is ready! Thanks! Zhipeng From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: 2018年8月10日 21:54 To: Xie, Cindy ; Rowsell, Brent ; starlingx-discuss at lists.starlingx.io; Saul Wold ; Somerville, Jim Subject: Re: [Starlingx-discuss] [Distro-Non-Openstack] Upgrading libvirt and qemu Sounds good. Thanks Cindy. Zhipeng will need to coordinate with Jim as both libvirt and python-libvirt will need to be merged at the same time. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, August 09, 2018 8:08 PM To: Rowsell, Brent; Khalil, Ghada; starlingx-discuss at lists.starlingx.io; Saul Wold; Somerville, Jim Subject: RE: [Distro-Non-Openstack] Upgrading libvirt and qemu I re-opened the story for libvirt-python to upgrade to 4.6 (instead of 3.9): https://storyboard.openstack.org/#!/story/2003339. Zhipeng was assigned to this story and he will continue working on this. He can also join Jim for libvirt and qemu story as well. Thx. - cindy From: Xie, Cindy Sent: Friday, August 10, 2018 6:55 AM To: 'Rowsell, Brent' >; Khalil, Ghada >; starlingx-discuss at lists.starlingx.io; Saul Wold >; Somerville, Jim > Subject: RE: [Distro-Non-Openstack] Upgrading libvirt and qemu HI, Ghada and Brent, Yes, I will have one engineer join Jim working on libvirt, qemu and libvirt-python upgrade. Will get back to you w/ name. Thx. - cindy From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Friday, August 10, 2018 3:13 AM To: Khalil, Ghada >; starlingx-discuss at lists.starlingx.io; Xie, Cindy >; Saul Wold >; Somerville, Jim > Subject: RE: [Distro-Non-Openstack] Upgrading libvirt and qemu Thanks Ghada. I added a note to the libvirt story, python-libvirt will need to be updated to 4.6 as well. Brent From: Khalil, Ghada Sent: Thursday, August 9, 2018 3:06 PM To: starlingx-discuss at lists.starlingx.io; Cindy Xie >; Saul Wold >; Rowsell, Brent >; Somerville, Jim > Subject: [Distro-Non-Openstack] Upgrading libvirt and qemu Saul/Brent/Cindy, In order to remain current, I’d like to propose that we upgrade libvirt and qemu to the latest versions. The last upversion of these packages was almost a year ago, so I think it’s time. Let me know if you have any concerns. I have created two stories: https://storyboard.openstack.org/#!/story/2003396 https://storyboard.openstack.org/#!/story/2003395 Jim Somerville has volunteered to do tis. If anyone else from the Distro-Non-Openstack team wants to contribute, please reach out to Jim. We’ll target this for the October release. Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From huifeng.le at intel.com Mon Aug 13 05:51:42 2018 From: huifeng.le at intel.com (Le, Huifeng) Date: Mon, 13 Aug 2018 05:51:42 +0000 Subject: [Starlingx-discuss] Analysis report about Network Trunk feature for StartlingX upstreaming Message-ID: <76647BD697F40748B1FA4F56DA02AA0B4D4EB222@SHSMSX104.ccr.corp.intel.com> Ian/Brent/Matt, We did analysis about the Network trunk related patches for StartingX upstream, below are the suggestions for upstreaming, could you please help to review and comment? Thanks much! 1. ba9d9f60a7a2665194cacb92a05e0acd2dc3de41: Add rpc notification for trunk updates Function: sent notification to the agent when a trunk is updated Analysis: (1)Trunk’s AFTER_UPDATE event is generated for API call: PUT /v2.0/trunks/{trunk-id} The update request is only for changing fields like name, description or admin_state_up. Setting the admin_state_up to False locks the trunk in that it prevents operations such as adding/removing subports. In Neutron upstream, admin_state_up is used in server side, e.g. add_subports, remove subports, delete_trunk and not used in agent side (2)OVS trunk agent driver uses OVSDB event to handle trunk event, no need to manually trigger trunk update event (3)Linux trunk agent driver will handle trunk update event triggered by server, while it will need apply the patch only in case admin_state_up update need to be handled Suggestion: Not a bug for Neutron upstream, suggest not to upstream 2. 6955351c5eca6e37061fb0140d11ea53693fe0e1: Add support to delete bound network Function: enable delete trunk if it is can_be_trunked (not bounded or driver’s can_trunk_bound_port=true) Analysis: Applied for LinuxBridge Driver and AVS bridge Driver (can_trunk_bound_port=True), no impact for OVSTrunkDriver (can_trunk_bound_port=False). workaround also available for linux bridge (e.g. unbind the port first then delete the trunk) Suggestion: it is a low priority bug for Neutron upstream (only applied for linux bridge and workround available), suggest not to upstream 3. 43a684946e781a25d21a4f50b8dc67d61be42809: Enable trunk service by default Function: add “trunk” in DEFAULT_SERVICE_PLUGINS Analysis: It is a deploy configuration for downstream product Suggestion: Not a bug for Neutron upstream, suggest not to upstream 4. c54d804792f10b7f505de6794274c4df4768f6f0: Include trunk presence in port details Function: add trunk_port (bool) flag in port_details to identify whether this port is a parent port for a trunk Analysis: It is a performance improvement for AVS agent by reducing RPC call from agent to server. OVS agent has different implementation with no improvement by introducing this field Suggestion: Not a bug for Neutron upstream, suggest not to upstream 5. 3eed837ebd236e6b1959ea88d9ab5322c9eef6b9: Ignore trunk subports on same vlan as vlan-subnet ports Function: Ignore trunk subports on same vlan as vlan-subnet ports Analysis: It is a bug fix for AVS agent Suggestion: Not a bug for Neutron upstream, suggest not to upstream Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Mon Aug 13 14:18:27 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Mon, 13 Aug 2018 14:18:27 +0000 Subject: [Starlingx-discuss] 'Ensure all branding, UI and logos say "StarlingX" Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B3152A9@SHSMSX104.ccr.corp.intel.com> All, I just see this feature was tracked in an obsolete Ethercalc for stx.distro.others. Anybody know if we already have a story to track this? I am assuming that Eddie Ramirez is going to own this? Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Aug 13 15:33:18 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 13 Aug 2018 15:33:18 +0000 Subject: [Starlingx-discuss] 'Ensure all branding, UI and logos say "StarlingX" In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B3152A9@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B3152A9@SHSMSX104.ccr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB577E98@fmsmsx115.amr.corp.intel.com> I was planning on creating the stories/tasks for this once we receive the final logos and graphics. And yes, I'm hoping Eddie can do the work. brucej From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Monday, August 13, 2018 7:18 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] 'Ensure all branding, UI and logos say "StarlingX" All, I just see this feature was tracked in an obsolete Ethercalc for stx.distro.others. Anybody know if we already have a story to track this? I am assuming that Eddie Ramirez is going to own this? Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Aug 13 15:50:09 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 13 Aug 2018 15:50:09 +0000 Subject: [Starlingx-discuss] Analysis report about Network Trunk feature for StartlingX upstreaming In-Reply-To: <76647BD697F40748B1FA4F56DA02AA0B4D4EB222@SHSMSX104.ccr.corp.intel.com> References: <76647BD697F40748B1FA4F56DA02AA0B4D4EB222@SHSMSX104.ccr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB577F1B@fmsmsx115.amr.corp.intel.com> Huifeng, thank you for this doing this analysis. What are the next steps for these patches? You suggest that we not upstream them, but do we keep them and carry them going forward, do we remove them – if so, is there any impact? brucej From: Le, Huifeng Sent: Sunday, August 12, 2018 10:52 PM To: Jolliffe, Ian ; Rowsell, Brent ; Peters, Matt Cc: Zhao, Forrest ; Troyer, Dean ; Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Analysis report about Network Trunk feature for StartlingX upstreaming Ian/Brent/Matt, We did analysis about the Network trunk related patches for StartingX upstream, below are the suggestions for upstreaming, could you please help to review and comment? Thanks much! 1. ba9d9f60a7a2665194cacb92a05e0acd2dc3de41: Add rpc notification for trunk updates Function: sent notification to the agent when a trunk is updated Analysis: (1)Trunk’s AFTER_UPDATE event is generated for API call: PUT /v2.0/trunks/{trunk-id} The update request is only for changing fields like name, description or admin_state_up. Setting the admin_state_up to False locks the trunk in that it prevents operations such as adding/removing subports. In Neutron upstream, admin_state_up is used in server side, e.g. add_subports, remove subports, delete_trunk and not used in agent side (2)OVS trunk agent driver uses OVSDB event to handle trunk event, no need to manually trigger trunk update event (3)Linux trunk agent driver will handle trunk update event triggered by server, while it will need apply the patch only in case admin_state_up update need to be handled Suggestion: Not a bug for Neutron upstream, suggest not to upstream 2. 6955351c5eca6e37061fb0140d11ea53693fe0e1: Add support to delete bound network Function: enable delete trunk if it is can_be_trunked (not bounded or driver’s can_trunk_bound_port=true) Analysis: Applied for LinuxBridge Driver and AVS bridge Driver (can_trunk_bound_port=True), no impact for OVSTrunkDriver (can_trunk_bound_port=False). workaround also available for linux bridge (e.g. unbind the port first then delete the trunk) Suggestion: it is a low priority bug for Neutron upstream (only applied for linux bridge and workround available), suggest not to upstream 3. 43a684946e781a25d21a4f50b8dc67d61be42809: Enable trunk service by default Function: add “trunk” in DEFAULT_SERVICE_PLUGINS Analysis: It is a deploy configuration for downstream product Suggestion: Not a bug for Neutron upstream, suggest not to upstream 4. c54d804792f10b7f505de6794274c4df4768f6f0: Include trunk presence in port details Function: add trunk_port (bool) flag in port_details to identify whether this port is a parent port for a trunk Analysis: It is a performance improvement for AVS agent by reducing RPC call from agent to server. OVS agent has different implementation with no improvement by introducing this field Suggestion: Not a bug for Neutron upstream, suggest not to upstream 5. 3eed837ebd236e6b1959ea88d9ab5322c9eef6b9: Ignore trunk subports on same vlan as vlan-subnet ports Function: Ignore trunk subports on same vlan as vlan-subnet ports Analysis: It is a bug fix for AVS agent Suggestion: Not a bug for Neutron upstream, suggest not to upstream Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Mon Aug 13 15:49:40 2018 From: scott.little at windriver.com (Scott Little) Date: Mon, 13 Aug 2018 11:49:40 -0400 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> Message-ID: <53519f80-cdd7-9972-8676-9985b1ea32d0@windriver.com> Two late additions to the restructuring list ... Move ceph and ceph-manager from stx-upstream to stx-integ/ceph/ Reviews have been posted.  These are history preserving moves. ceph: https://review.openstack.org/591435 https://review.openstack.org/591428 https://review.openstack.org/591429 ceph-manager: https://review.openstack.org/591436 https://review.openstack.org/591430 https://review.openstack.org/591431 https://review.openstack.org/591432 https://review.openstack.org/591433 https://review.openstack.org/591434 Scott On 18-08-01 05:03 PM, Scott Little wrote: > 99% of the reviews are now available.  I've held back the manifest > changes for tomorrow. > > The relocation updates come in sets for each package, that attempt to > preserve the update history found at the original location.  One > update removes a package from stx-utils, stx-gplv2, stx-gplv3.  A > second adds it to stx-integ or stx-updates in it's StarlingX day zero > form (author changes from Dean to Me).  Then there may be 0-N updates > replaying the subsequent commit history of that package (author and > commit text preserved).  Finally there might be a follow up commit by > me to fix a build path.  The final result is a glorified 'mv' > operation.  The content should be unchanged, So all the code has been > reviewed before. > > Reviews should focus one subject only, was the move executed correctly? > > Please do not workflow +1!    I couldn't get the scripts to manage > Depends-On relationships satisfactorily, so I'll hand manage it tomorrow. > > Scott > > > > > On 18-07-31 11:26 AM, Scott Little wrote: >> Revised timeline is August 1 or 2. >> >> Scott >> >> >> On 18-07-17 11:07 AM, Scott Little wrote: >>> >>> Story: https://storyboard.openstack.org/#!/story/2002801 >>> >>> *Goals:* >>> >>> 1) Consolidate the following repo’s under stx-integ. >>> • stx-gplv2 >>> • stx-gplv3 >>> • stx-utils >>> >>> 2) Restructure the directories under which packages are to be found. >>> >>> Currently stx-gplv2/3 are largely without structure. Parts of the >>> stx-integ structure were inherited from WRLinux and make little >>> sense.  stx-utils is just i mess of stuff that never found a home >>> when StarlingX was first set up. >>> >>> Directories should descriptive of the class of packages to be found >>> within. >>> >>> Intent is to preserve update history as best is is possible. >>> >>> >>> *Timeline: * >>> >>> Probably around July 23 unless there are strong objections.  We >>> should probably have a freeze on submissions to the affected repos >>> until it is all completed. >>> >>> >>> *Code Reviews: * >>> >>> Most of this is just moving code around.  A few path corrections, >>> but no new code.  The number and size of the reviews will be huge, >>> and the code should all have been inspected once before.  Is there a >>> way to fast track this? Would there be strong objections to me just >>> doing a +2/+1 without waiting for independent review? >>> >>> >>> *Details of directories/groups ...* >>> >>> >>> Create new directories under stx-integ (logical groupings for files): >>>    ceph >>>    config >>>    config-files >>>    database >>>    filesystem >>>    filesystem/drbd >>>    grub >>>    kernel >>>    kernel/kernel-modules >>>    ldap >>>    logging >>>    strorage-drivers >>>    tools >>>    utilities >>>    virt >>> >>> Retained directories under stx-integ (additional logical groupings >>> for files): >>>    base >>>    mellanox >>>    monitoring >>>    networking >>>    python >>>    restapi-doc >>>    security >>> >>> Retire directories under stx-integ (non-descriptive or ambiguous >>> grouping we will retire): >>>    connectivity >>>    core >>>    devtools >>>    extended >>>    support >>> >>> >>> *Details of packages ...* >>> >>> Relocated packages (internal to stx-integ): >>>    base/ >>>       dhcp >>>       initscripts >>>       libevent >>>       lighttpd >>>       memcached >>>       net-snmp >>>       novnc >>>       ntp >>>       openssh >>>       pam >>>       procps >>>       sanlock >>>       shadow >>>       sudo >>>       systemd >>>       util-linux >>>       vim >>>       watchdog >>> >>>    ceph/ >>>       python-cephclient >>> >>>    config/ >>>       e2fsprogs >>>       facter >>>       nfs-utils >>>       nfscheck >>>       puppet-4.8.2 >>>       puppet-modules >>> >>>    kernel/ >>>       kernel-std >>>       kernel-rt >>> >>>    kernel/kernel-modules/ >>>       mlnx-ofa_kernel >>> >>>    ldap/ >>>       nss-pam-ldapd >>>       openldap >>> >>>    logging/ >>>       syslog-ng >>>       logrotate >>> >>>    networking/ >>>       lldpd >>>       iproute >>>       mellanox >>>       python-ryu >>>       mlx4-config >>> >>>    python/ >>>       python-2.7.5 >>>       python-django >>>       python-gunicorn >>>       python-setuptools >>>       python-smartpm >>> >>>    security/ >>>       shim-signed >>>       shim-unsigned >>>       tboot >>> >>>    strorage-drivers/ >>>       python-3parclient >>>       python-lefthandclient >>> >>>    virt/ >>>       cloud-init >>>       libvirt >>>       libvirt-python >>>       qemu >>> >>>    tools/ >>>       storage-topology >>>       vm-topology >>> >>>    utilities/ >>>       tis-extensions >>>       namespace-utils >>>       nova-utils >>>       update-motd >>> >>> >>> >>> Relocated packages (stx-utils to stx-update): >>>     enable-dev-patch >>> >>> >>> >>> Relocated packages (stx-utils to stx-integ): >>> >>>     config-files/ >>>         io-scheduler >>> >>>     filesystem/ >>>         filesystem-scripts >>> >>>     grub/ >>>         grubby >>> >>>     logging/ >>>         logmgmt >>> >>>     tools/ >>>         collector >>>         monitor-tools >>> >>>     tools/engtools/ >>>         hostdata-collectors >>>         parsers >>> >>>     utilities/ >>>         build-info >>>         branding   (formerly wrs-branding) >>>         platform-util >>> >>> >>> >>> Relocated packages (stx-gpl2 to stx-integ): >>>     base/ >>>         bash >>>         cgcs-users >>>         cluster-resource-agents >>>         dpkg >>>         haproxy >>>         libfdt >>>         netpbm >>>         rpm >>> >>>     database/ >>>         mariadb >>> >>>     filesystem/ >>>         iscsi-initiator-utils >>> >>>     filesystem/drbd/ >>>         drbd-tools >>> >>>     kernel/kernel-modules/ >>>         drbd >>>         integrity >>>         intel-e1000e >>>         intel-i40e >>>         intel-i40evf >>>         intel-ixgbe >>>         intel-ixgbevf >>>         qat17 >>>         tpmdd >>> >>>     ldap/ >>>         ldapscripts >>> >>>     networking/ >>>         iptables >>>         net-tools >>> >>> >>> >>> Relocated packages (stx-gpl3 to stx-integ): >>>     base/ >>>         anaconda >>>         crontabs >>>         dnsmasq >>>         rsync >>> >>>     database/ >>>         python-psycopg2 >>> >>>     filesystem/ >>>         parted >>> >>>     grub/ >>>         grub2 >>> >>>     security/ >>>         python-keyring >>> >>> >>> >>> Delete two packages from stx-integ: >>>    tgt >>>    irqbalance >>> >>> Delete two packages from stx-gplv3: >>>    seabios >>>    sysvinit >>> >>> Delete one package from stx-utils: >>>    io-monitor >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Mon Aug 13 15:51:26 2018 From: scott.little at windriver.com (Scott Little) Date: Mon, 13 Aug 2018 11:51:26 -0400 Subject: [Starlingx-discuss] Restructuring round 2 In-Reply-To: <53519f80-cdd7-9972-8676-9985b1ea32d0@windriver.com> References: <2ab7ffb1-aa63-daab-d79d-60875d72527a@windriver.com> <471451e5-640b-4348-82d0-ea984ff021a3@windriver.com> <53519f80-cdd7-9972-8676-9985b1ea32d0@windriver.com> Message-ID: <5a819311-29dc-5d82-d474-03b67f6d7f17@windriver.com> PS:  Please leave the workflow +1 to me. On 18-08-13 11:49 AM, Scott Little wrote: > Two late additions to the restructuring list ... > > Move ceph and ceph-manager from stx-upstream to stx-integ/ceph/ > > Reviews have been posted.  These are history preserving moves. > > ceph: > https://review.openstack.org/591435 > https://review.openstack.org/591428 > https://review.openstack.org/591429 > > ceph-manager: > https://review.openstack.org/591436 > https://review.openstack.org/591430 > https://review.openstack.org/591431 > https://review.openstack.org/591432 > https://review.openstack.org/591433 > https://review.openstack.org/591434 > > Scott > > > On 18-08-01 05:03 PM, Scott Little wrote: >> 99% of the reviews are now available.  I've held back the manifest >> changes for tomorrow. >> >> The relocation updates come in sets for each package, that attempt to >> preserve the update history found at the original location.  One >> update removes a package from stx-utils, stx-gplv2, stx-gplv3.  A >> second adds it to stx-integ or stx-updates in it's StarlingX day zero >> form (author changes from Dean to Me).  Then there may be 0-N updates >> replaying the subsequent commit history of that package (author and >> commit text preserved).  Finally there might be a follow up commit by >> me to fix a build path.  The final result is a glorified 'mv' >> operation.  The content should be unchanged, So all the code has been >> reviewed before. >> >> Reviews should focus one subject only, was the move executed correctly? >> >> Please do not workflow +1!    I couldn't get the scripts to manage >> Depends-On relationships satisfactorily, so I'll hand manage it tomorrow. >> >> Scott >> >> >> >> >> On 18-07-31 11:26 AM, Scott Little wrote: >>> Revised timeline is August 1 or 2. >>> >>> Scott >>> >>> >>> On 18-07-17 11:07 AM, Scott Little wrote: >>>> >>>> Story: https://storyboard.openstack.org/#!/story/2002801 >>>> >>>> *Goals:* >>>> >>>> 1) Consolidate the following repo’s under stx-integ. >>>> • stx-gplv2 >>>> • stx-gplv3 >>>> • stx-utils >>>> >>>> 2) Restructure the directories under which packages are to be found. >>>> >>>> Currently stx-gplv2/3 are largely without structure. Parts of the >>>> stx-integ structure were inherited from WRLinux and make little >>>> sense.  stx-utils is just i mess of stuff that never found a home >>>> when StarlingX was first set up. >>>> >>>> Directories should descriptive of the class of packages to be found >>>> within. >>>> >>>> Intent is to preserve update history as best is is possible. >>>> >>>> >>>> *Timeline: * >>>> >>>> Probably around July 23 unless there are strong objections.  We >>>> should probably have a freeze on submissions to the affected repos >>>> until it is all completed. >>>> >>>> >>>> *Code Reviews: * >>>> >>>> Most of this is just moving code around.  A few path corrections, >>>> but no new code.  The number and size of the reviews will be huge, >>>> and the code should all have been inspected once before.  Is there >>>> a way to fast track this?  Would there be strong objections to me >>>> just doing a +2/+1 without waiting for independent review? >>>> >>>> >>>> *Details of directories/groups ...* >>>> >>>> >>>> Create new directories under stx-integ (logical groupings for files): >>>>    ceph >>>>    config >>>>    config-files >>>>    database >>>>    filesystem >>>>    filesystem/drbd >>>>    grub >>>>    kernel >>>>    kernel/kernel-modules >>>>    ldap >>>>    logging >>>>    strorage-drivers >>>>    tools >>>>    utilities >>>>    virt >>>> >>>> Retained directories under stx-integ (additional logical groupings >>>> for files): >>>>    base >>>>    mellanox >>>>    monitoring >>>>    networking >>>>    python >>>>    restapi-doc >>>>    security >>>> >>>> Retire directories under stx-integ (non-descriptive or ambiguous >>>> grouping we will retire): >>>>    connectivity >>>>    core >>>>    devtools >>>>    extended >>>>    support >>>> >>>> >>>> *Details of packages ...* >>>> >>>> Relocated packages (internal to stx-integ): >>>>    base/ >>>>       dhcp >>>>       initscripts >>>>       libevent >>>>       lighttpd >>>>       memcached >>>>       net-snmp >>>>       novnc >>>>       ntp >>>>       openssh >>>>       pam >>>>       procps >>>>       sanlock >>>>       shadow >>>>       sudo >>>>       systemd >>>>       util-linux >>>>       vim >>>>       watchdog >>>> >>>>    ceph/ >>>>       python-cephclient >>>> >>>>    config/ >>>>       e2fsprogs >>>>       facter >>>>       nfs-utils >>>>       nfscheck >>>>       puppet-4.8.2 >>>>       puppet-modules >>>> >>>>    kernel/ >>>>       kernel-std >>>>       kernel-rt >>>> >>>>    kernel/kernel-modules/ >>>>       mlnx-ofa_kernel >>>> >>>>    ldap/ >>>>       nss-pam-ldapd >>>>       openldap >>>> >>>>    logging/ >>>>       syslog-ng >>>>       logrotate >>>> >>>>    networking/ >>>>       lldpd >>>>       iproute >>>>       mellanox >>>>       python-ryu >>>>       mlx4-config >>>> >>>>    python/ >>>>       python-2.7.5 >>>>       python-django >>>>       python-gunicorn >>>>       python-setuptools >>>>       python-smartpm >>>> >>>>    security/ >>>>       shim-signed >>>>       shim-unsigned >>>>       tboot >>>> >>>>    strorage-drivers/ >>>>       python-3parclient >>>>       python-lefthandclient >>>> >>>>    virt/ >>>>       cloud-init >>>>       libvirt >>>>       libvirt-python >>>>       qemu >>>> >>>>    tools/ >>>>       storage-topology >>>>       vm-topology >>>> >>>>    utilities/ >>>>       tis-extensions >>>>       namespace-utils >>>>       nova-utils >>>>       update-motd >>>> >>>> >>>> >>>> Relocated packages (stx-utils to stx-update): >>>>     enable-dev-patch >>>> >>>> >>>> >>>> Relocated packages (stx-utils to stx-integ): >>>> >>>>     config-files/ >>>>         io-scheduler >>>> >>>>     filesystem/ >>>>         filesystem-scripts >>>> >>>>     grub/ >>>>         grubby >>>> >>>>     logging/ >>>>         logmgmt >>>> >>>>     tools/ >>>>         collector >>>>         monitor-tools >>>> >>>>     tools/engtools/ >>>>         hostdata-collectors >>>>         parsers >>>> >>>>     utilities/ >>>>         build-info >>>>         branding   (formerly wrs-branding) >>>>         platform-util >>>> >>>> >>>> >>>> Relocated packages (stx-gpl2 to stx-integ): >>>>     base/ >>>>         bash >>>>         cgcs-users >>>>         cluster-resource-agents >>>>         dpkg >>>>         haproxy >>>>         libfdt >>>>         netpbm >>>>         rpm >>>> >>>>     database/ >>>>         mariadb >>>> >>>>     filesystem/ >>>>         iscsi-initiator-utils >>>> >>>>     filesystem/drbd/ >>>>         drbd-tools >>>> >>>>     kernel/kernel-modules/ >>>>         drbd >>>>         integrity >>>>         intel-e1000e >>>>         intel-i40e >>>>         intel-i40evf >>>>         intel-ixgbe >>>>         intel-ixgbevf >>>>         qat17 >>>>         tpmdd >>>> >>>>     ldap/ >>>>         ldapscripts >>>> >>>>     networking/ >>>>         iptables >>>>         net-tools >>>> >>>> >>>> >>>> Relocated packages (stx-gpl3 to stx-integ): >>>>     base/ >>>>         anaconda >>>>         crontabs >>>>         dnsmasq >>>>         rsync >>>> >>>>     database/ >>>>         python-psycopg2 >>>> >>>>     filesystem/ >>>>         parted >>>> >>>>     grub/ >>>>         grub2 >>>> >>>>     security/ >>>>         python-keyring >>>> >>>> >>>> >>>> Delete two packages from stx-integ: >>>>    tgt >>>>    irqbalance >>>> >>>> Delete two packages from stx-gplv3: >>>>    seabios >>>>    sysvinit >>>> >>>> Delete one package from stx-utils: >>>>    io-monitor >>>> >>>> >>>> >>>> _______________________________________________ >>>> Starlingx-discuss mailing list >>>> Starlingx-discuss at lists.starlingx.io >>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From jesus.ornelas.aguayo at intel.com Mon Aug 13 16:04:20 2018 From: jesus.ornelas.aguayo at intel.com (Ornelas Aguayo, Jesus) Date: Mon, 13 Aug 2018 16:04:20 +0000 Subject: [Starlingx-discuss] question about dl_rpms.sh generating 8 logs Message-ID: <1F66C208-2BD9-41BD-973E-43F00B8865DC@intel.com> Hi Marcela, I think we could standardize the logs to have a single log, by doing so it would be easier to error handling. On 8/10/18, 12:56 PM, "Rosales Jimenez, Marcela A" wrote: Hi team, I’m reviewing download_mirror.sh and dl_rpms.sh, because I’m working on setting up the mirror download on Jenkins daily. And I got a question: Why does dl_rpms.sh script generates 8 logs each time it is executed? For example, if we execute: $ ./dl_rpms.sh rpms_from_centos_repo.lst L1 centos We will get: centos_rpms_fail_move_L1.txt centos_rpms_missing_L1.txt centos_rpms_found_L1.txt centos_rpms_urls_L1.txt centos_srpms_fail_move_L1.txt centos_srpms_missing_L1.txt centos_srpms_found_L1.txt centos_srpms_urls_L1.txt Could we have four instead of eight? (let’s say centos_pkgs_fail_move_L1.txt, etc) The information about whether a package is noarch, x86_64 or src is already in its name. So for me it seems that we could leave four, but I don't know if in the past there was an intention for having this information like this. Thanks. Marcela From bruce.e.jones at intel.com Mon Aug 13 17:07:47 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 13 Aug 2018 17:07:47 +0000 Subject: [Starlingx-discuss] Agendas open for this week's project calls Message-ID: <9A85D2917C58154C960D95352B22818BAB577FD1@fmsmsx115.amr.corp.intel.com> The agendas are open for this week's Project [0] and Core [1] calls. Please feel free to add any items that should be discussed to the Etherpads. Abraham, I've taken the liberty of putting the Release Note process on the agenda for you to discuss with the team. It LGTM but folks might have some questions. brucej [0] https://etherpad.openstack.org/p/stx-status [1] https://etherpad.openstack.org/p/stx-cores -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Aug 13 18:36:04 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 13 Aug 2018 18:36:04 +0000 Subject: [Starlingx-discuss] [Release] Sub-Project wiki pages are updated In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA40DB2C@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA40DB2C@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB578100@fmsmsx115.amr.corp.intel.com> Ghada, thank you for doing this! I've re-arranged the content of this part of the wiki a bit to try to make it use less pixels and cleaned up (removed) some of the obsolete content. Meanwhile I'd like to amplify Ghada's request for folks to tag the stories they would like to see included or plan to finish for the October release with stx.2018.10. One thing to consider for the teams is holding their own project calls. I've started one for the Docs team with the help of Ildiko and I think she'd be willing to do the same for other teams. brucej From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Friday, August 10, 2018 8:28 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Release] Sub-Project wiki pages are updated Hello all, I've updated all the sub-project wiki pages and linked them to the main wiki. https://wiki.openstack.org/wiki/StarlingX#Sub-projects As suggested by Bruce, each page now lists the contributors to the sub-project (I put a link to each page in ethercalc). I also added the core reviewers to help everyone add the right people to their gerrit reviews. Each page also has a number of story board links for applicable stories. These are based on the tags defined here: https://wiki.openstack.org/wiki/StarlingX/Tags_and_Prefixes Note: I have tagged some of the existing stories, but I didn't have time to go thru the whole backlog. I encourage authors of stories to tag them upon creation if they know the right sub-project. Each sub-project team can customize their page as they see fit. I also encourage each team to tag the stories they are targeting for the October release with stx.2018.10. This will give us good data to pull the release plan. As a reference point, there are 182 active stories; only 40 are tagged for the October release as of now. Just a reminder, tags must be added one at a time. (stx.bug stx.config -> Add >> results in a new tag "stx.bug stx.config" being added, not two individual tags) Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Aug 13 18:36:16 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 13 Aug 2018 18:36:16 +0000 Subject: [Starlingx-discuss] [Release] Sub-Project wiki pages are updated In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA40DB2C@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA40DB2C@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB578115@fmsmsx115.amr.corp.intel.com> Ghada, thank you for doing this! I've re-arranged the content of this part of the wiki a bit to try to make it use less pixels. Meanwhile I'd like to amplify Ghada's request for the team's to tag the stories they would like to see included in the October release with stx.2018.10. One thing to consider for the teams is holding their own project calls. I've started one for the Docs team with the help of Ildiko and I think she'd be willing to do the same for other teams. brucej From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Friday, August 10, 2018 8:28 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Release] Sub-Project wiki pages are updated Hello all, I've updated all the sub-project wiki pages and linked them to the main wiki. https://wiki.openstack.org/wiki/StarlingX#Sub-projects As suggested by Bruce, each page now lists the contributors to the sub-project (I put a link to each page in ethercalc). I also added the core reviewers to help everyone add the right people to their gerrit reviews. Each page also has a number of story board links for applicable stories. These are based on the tags defined here: https://wiki.openstack.org/wiki/StarlingX/Tags_and_Prefixes Note: I have tagged some of the existing stories, but I didn't have time to go thru the whole backlog. I encourage authors of stories to tag them upon creation if they know the right sub-project. Each sub-project team can customize their page as they see fit. I also encourage each team to tag the stories they are targeting for the October release with stx.2018.10. This will give us good data to pull the release plan. As a reference point, there are 182 active stories; only 40 are tagged for the October release as of now. Just a reminder, tags must be added one at a time. (stx.bug stx.config -> Add >> results in a new tag "stx.bug stx.config" being added, not two individual tags) Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From james at openstack.org Mon Aug 13 18:40:29 2018 From: james at openstack.org (James Cole) Date: Mon, 13 Aug 2018 11:40:29 -0700 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: Hi again everyone, It looks like the purple version won by a few votes. I’ve attached a couple of PNGs to this message and we will have other file types available soon. Thank you all for your feedback on the logos! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: StarlingX_Logo_PNGs.zip Type: application/zip Size: 44725 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Aug 13 19:55:09 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 13 Aug 2018 19:55:09 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> Message-ID: <9A85D2917C58154C960D95352B22818BAB578597@fmsmsx115.amr.corp.intel.com> James, thank you! Are these the final versions, are they ready to go live? brucej From: James Cole [mailto:james at openstack.org] Sent: Monday, August 13, 2018 11:40 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi again everyone, It looks like the purple version won by a few votes. I’ve attached a couple of PNGs to this message and we will have other file types available soon. Thank you all for your feedback on the logos! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From james at openstack.org Mon Aug 13 20:06:53 2018 From: james at openstack.org (James Cole) Date: Mon, 13 Aug 2018 13:06:53 -0700 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: <9A85D2917C58154C960D95352B22818BAB578597@fmsmsx115.amr.corp.intel.com> References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> <9A85D2917C58154C960D95352B22818BAB578597@fmsmsx115.amr.corp.intel.com> Message-ID: Hey Bruce, You’re very welcome! It was a lot of fun to work on this and I hope everyone likes them. Yes, this is the final design and color scheme. We’re still working on making different file formats downloadable via the OpenStack site, but I can send you more file types/one color versions/etc. as needed. Just let me know! James Cole Graphic Designer OpenStack Foundation > On Aug 13, 2018, at 12:55 PM, Jones, Bruce E wrote: > > James, thank you! Are these the final versions, are they ready to go live? > > brucej > > <>From: James Cole [mailto:james at openstack.org] > Sent: Monday, August 13, 2018 11:40 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts > > Hi again everyone, > > It looks like the purple version won by a few votes. I’ve attached a couple of PNGs to this message and we will have other file types available soon. > > Thank you all for your feedback on the logos! > > James Cole > Graphic Designer > OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Aug 13 21:03:04 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 13 Aug 2018 21:03:04 +0000 Subject: [Starlingx-discuss] StarlingX Logo Concepts In-Reply-To: References: <8DCAF24C-F92F-4892-864F-0E3BF8A289AA@openstack.org> <03D458D5BAFF6041973594B00B4E58CE590F0975@fmsmsx101.amr.corp.intel.com> <93D5C33D-EDC8-48B5-AB60-ECD2777FD2F0@openstack.org> <9A85D2917C58154C960D95352B22818BAB578597@fmsmsx115.amr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB5786E7@fmsmsx115.amr.corp.intel.com> Story https://storyboard.openstack.org/#!/story/2003425 created for this, assigned to Eddie and tagged stx.2018.10. brucej From: James Cole [mailto:james at openstack.org] Sent: Monday, August 13, 2018 1:07 PM To: Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hey Bruce, You’re very welcome! It was a lot of fun to work on this and I hope everyone likes them. Yes, this is the final design and color scheme. We’re still working on making different file formats downloadable via the OpenStack site, but I can send you more file types/one color versions/etc. as needed. Just let me know! James Cole Graphic Designer OpenStack Foundation On Aug 13, 2018, at 12:55 PM, Jones, Bruce E > wrote: James, thank you! Are these the final versions, are they ready to go live? brucej From: James Cole [mailto:james at openstack.org] Sent: Monday, August 13, 2018 11:40 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX Logo Concepts Hi again everyone, It looks like the purple version won by a few votes. I’ve attached a couple of PNGs to this message and we will have other file types available soon. Thank you all for your feedback on the logos! James Cole Graphic Designer OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Mon Aug 13 23:22:22 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 13 Aug 2018 17:22:22 -0600 Subject: [Starlingx-discuss] adding packages normally installed using "pip install" Message-ID: <5B7212AE.7010000@windriver.com> Hi, I'm looking to add the "airship-armada" package (git://git.openstack.org/openstack/airship-armada) and it wants to bring in another three packages that are normally installed via "pip install" (grpcio, grpcio-tools, and protobuf) Does anyone have any bright ideas about how to cleanly create RPMs for these packages? Given that we have multiple packages (and may later on have more) it seems wrong to just copy/paste stuff into the RPM spec file. I was wondering whether it'd make sense to add native support somehow for PIP packages. Chris From huifeng.le at intel.com Tue Aug 14 01:34:09 2018 From: huifeng.le at intel.com (Le, Huifeng) Date: Tue, 14 Aug 2018 01:34:09 +0000 Subject: [Starlingx-discuss] Analysis report about Network Trunk feature for StartlingX upstreaming In-Reply-To: <9A85D2917C58154C960D95352B22818BAB577F1B@fmsmsx115.amr.corp.intel.com> References: <76647BD697F40748B1FA4F56DA02AA0B4D4EB222@SHSMSX104.ccr.corp.intel.com> <9A85D2917C58154C960D95352B22818BAB577F1B@fmsmsx115.amr.corp.intel.com> Message-ID: <76647BD697F40748B1FA4F56DA02AA0B4D4EB495@SHSMSX104.ccr.corp.intel.com> Bruce, Based on our analysis, item #1(ba9d9f60), #3(43a68494), #4(c54d8047), #5(3eed837e) are special fix for AVS agent implementation and not bug for neutron upstream which can be removed with no impact. Item #2 (6955351c) are low priority bug (only applied for linux bridge and workaround available) with almost no impact to neutron upstream which we can either remove or keep it for tracking. We need WR experts’ help to confirm our analysis before moving forward. thanks much! Best Regards, Le, Huifeng From: Jones, Bruce E Sent: Monday, August 13, 2018 11:50 PM To: Le, Huifeng ; Jolliffe, Ian ; Rowsell, Brent ; Peters, Matt Cc: Zhao, Forrest ; Troyer, Dean ; starlingx-discuss at lists.starlingx.io Subject: RE: Analysis report about Network Trunk feature for StartlingX upstreaming Huifeng, thank you for this doing this analysis. What are the next steps for these patches? You suggest that we not upstream them, but do we keep them and carry them going forward, do we remove them – if so, is there any impact? brucej From: Le, Huifeng Sent: Sunday, August 12, 2018 10:52 PM To: Jolliffe, Ian >; Rowsell, Brent >; Peters, Matt > Cc: Zhao, Forrest >; Troyer, Dean >; Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: Analysis report about Network Trunk feature for StartlingX upstreaming Ian/Brent/Matt, We did analysis about the Network trunk related patches for StartingX upstream, below are the suggestions for upstreaming, could you please help to review and comment? Thanks much! 1. ba9d9f60a7a2665194cacb92a05e0acd2dc3de41: Add rpc notification for trunk updates Function: sent notification to the agent when a trunk is updated Analysis: (1)Trunk’s AFTER_UPDATE event is generated for API call: PUT /v2.0/trunks/{trunk-id} The update request is only for changing fields like name, description or admin_state_up. Setting the admin_state_up to False locks the trunk in that it prevents operations such as adding/removing subports. In Neutron upstream, admin_state_up is used in server side, e.g. add_subports, remove subports, delete_trunk and not used in agent side (2)OVS trunk agent driver uses OVSDB event to handle trunk event, no need to manually trigger trunk update event (3)Linux trunk agent driver will handle trunk update event triggered by server, while it will need apply the patch only in case admin_state_up update need to be handled Suggestion: Not a bug for Neutron upstream, suggest not to upstream 2. 6955351c5eca6e37061fb0140d11ea53693fe0e1: Add support to delete bound network Function: enable delete trunk if it is can_be_trunked (not bounded or driver’s can_trunk_bound_port=true) Analysis: Applied for LinuxBridge Driver and AVS bridge Driver (can_trunk_bound_port=True), no impact for OVSTrunkDriver (can_trunk_bound_port=False). workaround also available for linux bridge (e.g. unbind the port first then delete the trunk) Suggestion: it is a low priority bug for Neutron upstream (only applied for linux bridge and workround available), suggest not to upstream 3. 43a684946e781a25d21a4f50b8dc67d61be42809: Enable trunk service by default Function: add “trunk” in DEFAULT_SERVICE_PLUGINS Analysis: It is a deploy configuration for downstream product Suggestion: Not a bug for Neutron upstream, suggest not to upstream 4. c54d804792f10b7f505de6794274c4df4768f6f0: Include trunk presence in port details Function: add trunk_port (bool) flag in port_details to identify whether this port is a parent port for a trunk Analysis: It is a performance improvement for AVS agent by reducing RPC call from agent to server. OVS agent has different implementation with no improvement by introducing this field Suggestion: Not a bug for Neutron upstream, suggest not to upstream 5. 3eed837ebd236e6b1959ea88d9ab5322c9eef6b9: Ignore trunk subports on same vlan as vlan-subnet ports Function: Ignore trunk subports on same vlan as vlan-subnet ports Analysis: It is a bug fix for AVS agent Suggestion: Not a bug for Neutron upstream, suggest not to upstream Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuchang77 at chinaunicom.cn Tue Aug 14 02:01:28 2018 From: liuchang77 at chinaunicom.cn (liuchang77 at chinaunicom.cn) Date: Tue, 14 Aug 2018 10:01:28 +0800 Subject: [Starlingx-discuss] =?utf-8?b?5Zue5aSNOiBSZTogIFNvIG1hbnkgcGFj?= =?utf-8?q?kages_are_missing?= References: <2018080911033096975810@chinaunicom.cn>, <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com>, <2018080915282283292923@chinaunicom.cn>, <9700A18779F35F49AF027300A49E7C7655350671@SHSMSX101.ccr.corp.intel.com>, <2018081010161215532816@chinaunicom.cn> Message-ID: <201808141001282493927@chinaunicom.cn> Hi Shuicheng According to your advice. I found it is really a problem with my network speed. But in the process of my repeated attempts. I found that some packages that have already been downloaded will be re-downloaded. This takes up bandwidth and increases deployment time. So I think we should judge whether we have downloaded the package before downloading it. And I made a change to implement this function. Hope this will help. https://review.openstack.org/#/c/591544/ Chang 发件人: liuchang77 at chinaunicom.cn 发送时间: 2018-08-10 10:16 收件人: Lin, Shuicheng; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io 主题: Re: RE: [Starlingx-discuss] So many packages are missing Hi Shuicheng, Thanks for your advice. I can access the links you provided. But the speed is very slow. So I think this is really a problem with my network speed. I work at the carrier, so our network is definitely fine. Maybe the real problem is Chinese firewall. I spent a few hours to re-run the download package script, and the missing list has not changed. So the only way I can think of now is to use vpn inside the container. But after I tried if, I found that the kernel of the system inside the container does not support pptp. I will keep trying to find another way to solve it. Thank you very much! Best Regards Chang From: Lin, Shuicheng Date: 2018-08-09 16:05 To: liuchang77 at chinaunicom.cn; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] So many packages are missing Hi Chang, I try to check some package in the missing list. Can you access below link for the package? http://vault.centos.org/7.5.1804/os/Source/SPackages/bash-4.2.46-30.el7.src.rpm https://s3.amazonaws.com/influxdb/influxdb-0.9.5.1-1.x86_64.rpm If yes, I think it maybe network speed issue. You could re-run the download package script. And check whether the missing list be shorter or not. If not, you may need check the networking setting. Best Regards Shuicheng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Thursday, August 9, 2018 3:28 PM To: Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Thanks,Erich, According to your advice, I ran "yum makecache" inside the container. Although it was slow, the task was complete with no error. Attachment 1 is my execution log. And attachment 2 is my missing packages' list. I also think that there are so many missing packages that are not normal. I suspect this is related to Chinese firewall. But I couldn't connect my vpn server in the container. I sincerely hope that you can give me some advice. Chang From: Cordoba Malibran, Erich Date: 2018-08-09 11:28 To: liuchang77 at chinaunicom.cn; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Hi Chang, You are right, download manually that amount of packages is a painful task. It’s not normal to have so many failures, my guess is that some of the repositories is down or blocked in the network. You can try to identify from where those packages are being downloaded to root cause the problem. Inside the mirror container you can run “yum makecache” to sync the repositories and try to see which repo failed. Also if you can share the list of failed packages we can help to identify what is the problematic repository. -Erich -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Aug 14 03:25:10 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 14 Aug 2018 03:25:10 +0000 Subject: [Starlingx-discuss] =?utf-8?b?5Zue5aSNOiBSZTogIFNvIG1hbnkgcGFj?= =?utf-8?q?kages_are_missing?= In-Reply-To: <201808141001282493927@chinaunicom.cn> References: <2018080911033096975810@chinaunicom.cn>, <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com>, <2018080915282283292923@chinaunicom.cn>, <9700A18779F35F49AF027300A49E7C7655350671@SHSMSX101.ccr.corp.intel.com>, <2018081010161215532816@chinaunicom.cn> <201808141001282493927@chinaunicom.cn> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B315F10@SHSMSX104.ccr.corp.intel.com> Thanks Liu Chang with the patch to optimize the mirror download scripts. Question: with this patch, have you successfully setup your local mirror already? Or anything particular you expect WR or Intel to do further to accelerate it? Thx. - cindy From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Tuesday, August 14, 2018 10:01 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] 回复: Re: So many packages are missing Hi Shuicheng According to your advice. I found it is really a problem with my network speed. But in the process of my repeated attempts. I found that some packages that have already been downloaded will be re-downloaded. This takes up bandwidth and increases deployment time. So I think we should judge whether we have downloaded the package before downloading it. And I made a change to implement this function. Hope this will help. https://review.openstack.org/#/c/591544/<  https:/review.openstack.org/#/c/591544/> Chang 发件人: liuchang77 at chinaunicom.cn 发送时间: 2018-08-10 10:16 收件人: Lin, Shuicheng; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io 主题: Re: RE: [Starlingx-discuss] So many packages are missing Hi Shuicheng, Thanks for your advice. I can access the links you provided. But the speed is very slow. So I think this is really a problem with my network speed. I work at the carrier, so our network is definitely fine. Maybe the real problem is Chinese firewall. I spent a few hours to re-run the download package script, and the missing list has not changed. So the only way I can think of now is to use vpn inside the container. But after I tried if, I found that the kernel of the system inside the container does not support pptp. I will keep trying to find another way to solve it. Thank you very much! Best Regards Chang From: Lin, Shuicheng Date: 2018-08-09 16:05 To: liuchang77 at chinaunicom.cn; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] So many packages are missing Hi Chang, I try to check some package in the missing list. Can you access below link for the package? http://vault.centos.org/7.5.1804/os/Source/SPackages/bash-4.2.46-30.el7.src.rpm https://s3.amazonaws.com/influxdb/influxdb-0.9.5.1-1.x86_64.rpm If yes, I think it maybe network speed issue. You could re-run the download package script. And check whether the missing list be shorter or not. If not, you may need check the networking setting. Best Regards Shuicheng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Thursday, August 9, 2018 3:28 PM To: Cordoba Malibran, Erich >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Thanks,Erich, According to your advice, I ran "yum makecache" inside the container. Although it was slow, the task was complete with no error. Attachment 1 is my execution log. And attachment 2 is my missing packages' list. I also think that there are so many missing packages that are not normal. I suspect this is related to Chinese firewall. But I couldn't connect my vpn server in the container. I sincerely hope that you can give me some advice. Chang From: Cordoba Malibran, Erich Date: 2018-08-09 11:28 To: liuchang77 at chinaunicom.cn; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Hi Chang, You are right, download manually that amount of packages is a painful task. It’s not normal to have so many failures, my guess is that some of the repositories is down or blocked in the network. You can try to identify from where those packages are being downloaded to root cause the problem. Inside the mirror container you can run “yum makecache” to sync the repositories and try to see which repo failed. Also if you can share the list of failed packages we can help to identify what is the problematic repository. -Erich -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Aug 14 05:13:23 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 14 Aug 2018 05:13:23 +0000 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> All, Shuicheng has one story with many tasks open for SRPM (and its dependent RPM) upgrade to CentOS 7.5: https://storyboard.openstack.org/#!/story/2003389 Please provide your code review feedback actively (CR+1, CR+2). However, please hold to have W+1 at this moment. We will do a test build when all 47 sRPM upgrade has been done and mirror upgrade to 7.5 done. @Ada, please can you kindly support the validation for the build when we are ready? Thanks. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuchang77 at chinaunicom.cn Tue Aug 14 06:00:06 2018 From: liuchang77 at chinaunicom.cn (liuchang77 at chinaunicom.cn) Date: Tue, 14 Aug 2018 14:00:06 +0800 Subject: [Starlingx-discuss] =?utf-8?b?5Zue5aSNOiBSZTogIFNvIG1hbnkgcGFj?= =?utf-8?q?kages_are_missing?= References: <2018080911033096975810@chinaunicom.cn>, <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com>, <2018080915282283292923@chinaunicom.cn>, <9700A18779F35F49AF027300A49E7C7655350671@SHSMSX101.ccr.corp.intel.com>, <2018081010161215532816@chinaunicom.cn>, <201808141001282493927@chinaunicom.cn>, <2FD5DDB5A04D264C80D42CA35194914F2B315F10@SHSMSX104.ccr.corp.intel.com> Message-ID: <2018081414000640774823@chinaunicom.cn> Hi Cindy, I have tested this script in my local environment. It works as I expected. And I hope my patch iwill help. But I haven't successfully downloaded all the packages. When I manually use the yumdownloader to download these packages. There shows the error "Cannot find repomd.xml file for base-source/7". I'm looking for ways to solve this problem. Chang 发件人: Xie, Cindy 发送时间: 2018-08-14 11:25 收件人: liuchang77 at chinaunicom.cn; Lin, Shuicheng 抄送: starlingx-discuss at lists.starlingx.io 主题: RE: [Starlingx-discuss] 回复: Re: So many packages are missing Thanks Liu Chang with the patch to optimize the mirror download scripts. Question: with this patch, have you successfully setup your local mirror already? Or anything particular you expect WR or Intel to do further to accelerate it? Thx. - cindy From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Tuesday, August 14, 2018 10:01 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] 回复: Re: So many packages are missing Hi Shuicheng According to your advice. I found it is really a problem with my network speed. But in the process of my repeated attempts. I found that some packages that have already been downloaded will be re-downloaded. This takes up bandwidth and increases deployment time. So I think we should judge whether we have downloaded the package before downloading it. And I made a change to implement this function. Hope this will help. https://review.openstack.org/#/c/591544/ Chang 发件人: liuchang77 at chinaunicom.cn 发送时间: 2018-08-10 10:16 收件人: Lin, Shuicheng; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io 主题: Re: RE: [Starlingx-discuss] So many packages are missing Hi Shuicheng, Thanks for your advice. I can access the links you provided. But the speed is very slow. So I think this is really a problem with my network speed. I work at the carrier, so our network is definitely fine. Maybe the real problem is Chinese firewall. I spent a few hours to re-run the download package script, and the missing list has not changed. So the only way I can think of now is to use vpn inside the container. But after I tried if, I found that the kernel of the system inside the container does not support pptp. I will keep trying to find another way to solve it. Thank you very much! Best Regards Chang From: Lin, Shuicheng Date: 2018-08-09 16:05 To: liuchang77 at chinaunicom.cn; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] So many packages are missing Hi Chang, I try to check some package in the missing list. Can you access below link for the package? http://vault.centos.org/7.5.1804/os/Source/SPackages/bash-4.2.46-30.el7.src.rpm https://s3.amazonaws.com/influxdb/influxdb-0.9.5.1-1.x86_64.rpm If yes, I think it maybe network speed issue. You could re-run the download package script. And check whether the missing list be shorter or not. If not, you may need check the networking setting. Best Regards Shuicheng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Thursday, August 9, 2018 3:28 PM To: Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Thanks,Erich, According to your advice, I ran "yum makecache" inside the container. Although it was slow, the task was complete with no error. Attachment 1 is my execution log. And attachment 2 is my missing packages' list. I also think that there are so many missing packages that are not normal. I suspect this is related to Chinese firewall. But I couldn't connect my vpn server in the container. I sincerely hope that you can give me some advice. Chang From: Cordoba Malibran, Erich Date: 2018-08-09 11:28 To: liuchang77 at chinaunicom.cn; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Hi Chang, You are right, download manually that amount of packages is a painful task. It’s not normal to have so many failures, my guess is that some of the repositories is down or blocked in the network. You can try to identify from where those packages are being downloaded to root cause the problem. Inside the mirror container you can run “yum makecache” to sync the repositories and try to see which repo failed. Also if you can share the list of failed packages we can help to identify what is the problematic repository. -Erich -------------- next part -------------- An HTML attachment was scrubbed... URL: From mingyuan.qi at intel.com Tue Aug 14 06:41:34 2018 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Tue, 14 Aug 2018 06:41:34 +0000 Subject: [Starlingx-discuss] adding packages normally installed using "pip install" In-Reply-To: <5B7212AE.7010000@windriver.com> References: <5B7212AE.7010000@windriver.com> Message-ID: Chris, I found protobuf 2.5 in centos 7.5 repo, http://vault.centos.org/7.5.1804/os/Source/SPackages/protobuf-2.5.0-8.el7.src.rpm BTW, armada is running within a container and the dependencies are installed in image build time, why do you need these dependency rpms created? are you going to create an srpm for armada and add it to strarlingx? Thanks, Mingyuan -----Original Message----- From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: Tuesday, August 14, 2018 7:22 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] adding packages normally installed using "pip install" Hi, I'm looking to add the "airship-armada" package (git://git.openstack.org/openstack/airship-armada) and it wants to bring in another three packages that are normally installed via "pip install" (grpcio, grpcio-tools, and protobuf) Does anyone have any bright ideas about how to cleanly create RPMs for these packages? Given that we have multiple packages (and may later on have more) it seems wrong to just copy/paste stuff into the RPM spec file. I was wondering whether it'd make sense to add native support somehow for PIP packages. Chris _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Tue Aug 14 06:45:49 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 14 Aug 2018 06:45:49 +0000 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B316592@SHSMSX104.ccr.corp.intel.com> Dean, What is your recommendation for creating a feature branch for CentOS 7.5 upgrade? Right now, as the work is going to have many patches generated, and they have dependencies, it'd difficult to maintain each patches in the mainline with multiple engineers working on the same mainline without actually merge them. Please advise if you can create the feature branch or you can grant Shuicheng to create it? Then the code review can happen in feature branch and we will generate a build from feature branch for Ada for validation. The merge back to mainline can be done after all sRPM update and test passed with accelerated CR+2 path. Thanks. - cindy From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, August 14, 2018 1:13 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 All, Shuicheng has one story with many tasks open for SRPM (and its dependent RPM) upgrade to CentOS 7.5: https://storyboard.openstack.org/#!/story/2003389 Please provide your code review feedback actively (CR+1, CR+2). However, please hold to have W+1 at this moment. We will do a test build when all 47 sRPM upgrade has been done and mirror upgrade to 7.5 done. @Ada, please can you kindly support the validation for the build when we are ready? Thanks. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Tue Aug 14 11:38:10 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Tue, 14 Aug 2018 11:38:10 +0000 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C7655352956@SHSMSX101.ccr.corp.intel.com> Hi Scott, I created below story to track the rpm compete you mentioned in comments. I prefer to do it after CentOS7.5 upgrade, since it is a minor refine, not an issue fix. Is my understand correct? https://storyboard.openstack.org/#!/story/2003435 Hi Saul, For the configuration related patch move out of src rpm, I also prefer to do it after the upgrade. You mentioned it for package pam and openldap. I could follow up them after you share your thought later. Best Regards Shuicheng From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, August 14, 2018 1:13 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 All, Shuicheng has one story with many tasks open for SRPM (and its dependent RPM) upgrade to CentOS 7.5: https://storyboard.openstack.org/#!/story/2003389 Please provide your code review feedback actively (CR+1, CR+2). However, please hold to have W+1 at this moment. We will do a test build when all 47 sRPM upgrade has been done and mirror upgrade to 7.5 done. @Ada, please can you kindly support the validation for the build when we are ready? Thanks. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Tue Aug 14 03:27:52 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 14 Aug 2018 03:27:52 +0000 Subject: [Starlingx-discuss] =?utf-8?b?5Zue5aSNOiBSZTogIFNvIG1hbnkgcGFj?= =?utf-8?q?kages_are_missing?= In-Reply-To: <201808141001282493927@chinaunicom.cn> References: <2018080911033096975810@chinaunicom.cn>, <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com>, <2018080915282283292923@chinaunicom.cn>, <9700A18779F35F49AF027300A49E7C7655350671@SHSMSX101.ccr.corp.intel.com>, <2018081010161215532816@chinaunicom.cn> <201808141001282493927@chinaunicom.cn> Message-ID: <93814834B4855241994F290E959305C752F78304@SHSMSX104.ccr.corp.intel.com> Hi Chang, Good to see your patch! We actually have a similar patch ongoing https://review.openstack.org/#/c/589333/ @Abraham add features like sum check, existed package check as you did. You can also review it and discuss with him, and finalized the patch. Thanks zhipeng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: 2018年8月14日 10:01 To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] 回复: Re: So many packages are missing Hi Shuicheng According to your advice. I found it is really a problem with my network speed. But in the process of my repeated attempts. I found that some packages that have already been downloaded will be re-downloaded. This takes up bandwidth and increases deployment time. So I think we should judge whether we have downloaded the package before downloading it. And I made a change to implement this function. Hope this will help. https://review.openstack.org/#/c/591544/<  https:/review.openstack.org/#/c/591544/> Chang 发件人: liuchang77 at chinaunicom.cn 发送时间: 2018-08-10 10:16 收件人: Lin, Shuicheng; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io 主题: Re: RE: [Starlingx-discuss] So many packages are missing Hi Shuicheng, Thanks for your advice. I can access the links you provided. But the speed is very slow. So I think this is really a problem with my network speed. I work at the carrier, so our network is definitely fine. Maybe the real problem is Chinese firewall. I spent a few hours to re-run the download package script, and the missing list has not changed. So the only way I can think of now is to use vpn inside the container. But after I tried if, I found that the kernel of the system inside the container does not support pptp. I will keep trying to find another way to solve it. Thank you very much! Best Regards Chang From: Lin, Shuicheng Date: 2018-08-09 16:05 To: liuchang77 at chinaunicom.cn; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] So many packages are missing Hi Chang, I try to check some package in the missing list. Can you access below link for the package? http://vault.centos.org/7.5.1804/os/Source/SPackages/bash-4.2.46-30.el7.src.rpm https://s3.amazonaws.com/influxdb/influxdb-0.9.5.1-1.x86_64.rpm If yes, I think it maybe network speed issue. You could re-run the download package script. And check whether the missing list be shorter or not. If not, you may need check the networking setting. Best Regards Shuicheng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Thursday, August 9, 2018 3:28 PM To: Cordoba Malibran, Erich >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Thanks,Erich, According to your advice, I ran "yum makecache" inside the container. Although it was slow, the task was complete with no error. Attachment 1 is my execution log. And attachment 2 is my missing packages' list. I also think that there are so many missing packages that are not normal. I suspect this is related to Chinese firewall. But I couldn't connect my vpn server in the container. I sincerely hope that you can give me some advice. Chang From: Cordoba Malibran, Erich Date: 2018-08-09 11:28 To: liuchang77 at chinaunicom.cn; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Hi Chang, You are right, download manually that amount of packages is a painful task. It’s not normal to have so many failures, my guess is that some of the repositories is down or blocked in the network. You can try to identify from where those packages are being downloaded to root cause the problem. Inside the mirror container you can run “yum makecache” to sync the repositories and try to see which repo failed. Also if you can share the list of failed packages we can help to identify what is the problematic repository. -Erich -------------- next part -------------- An HTML attachment was scrubbed... URL: From liuchang77 at chinaunicom.cn Tue Aug 14 07:03:04 2018 From: liuchang77 at chinaunicom.cn (liuchang77 at chinaunicom.cn) Date: Tue, 14 Aug 2018 15:03:04 +0800 Subject: [Starlingx-discuss] =?utf-8?b?5Zue5aSNOiBSZTogIFNvIG1hbnkgcGFj?= =?utf-8?q?kages_are_missing?= References: <2018080911033096975810@chinaunicom.cn>, <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com>, <2018080915282283292923@chinaunicom.cn>, <9700A18779F35F49AF027300A49E7C7655350671@SHSMSX101.ccr.corp.intel.com>, <2018081010161215532816@chinaunicom.cn>, <201808141001282493927@chinaunicom.cn>, <93814834B4855241994F290E959305C752F78304@SHSMSX104.ccr.corp.intel.com> Message-ID: <2018081415030453402334@chinaunicom.cn> Hi zhipeng, Ok, the patch @Abraham made adds checksum verification work, it is better than my method. And I think maybe I can help with the packages in 'other_downloads.lst' in my review. Chang 发件人: Liu, ZhipengS 发送时间: 2018-08-14 11:27 收件人: liuchang77 at chinaunicom.cn; Lin, Shuicheng; Arce Moreno, Abraham 抄送: starlingx-discuss at lists.starlingx.io 主题: RE: [Starlingx-discuss] 回复: Re: So many packages are missing Hi Chang, Good to see your patch! We actually have a similar patch ongoing https://review.openstack.org/#/c/589333/ @Abraham add features like sum check, existed package check as you did. You can also review it and discuss with him, and finalized the patch. Thanks zhipeng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: 2018年8月14日 10:01 To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] 回复: Re: So many packages are missing Hi Shuicheng According to your advice. I found it is really a problem with my network speed. But in the process of my repeated attempts. I found that some packages that have already been downloaded will be re-downloaded. This takes up bandwidth and increases deployment time. So I think we should judge whether we have downloaded the package before downloading it. And I made a change to implement this function. Hope this will help. https://review.openstack.org/#/c/591544/ Chang 发件人: liuchang77 at chinaunicom.cn 发送时间: 2018-08-10 10:16 收件人: Lin, Shuicheng; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io 主题: Re: RE: [Starlingx-discuss] So many packages are missing Hi Shuicheng, Thanks for your advice. I can access the links you provided. But the speed is very slow. So I think this is really a problem with my network speed. I work at the carrier, so our network is definitely fine. Maybe the real problem is Chinese firewall. I spent a few hours to re-run the download package script, and the missing list has not changed. So the only way I can think of now is to use vpn inside the container. But after I tried if, I found that the kernel of the system inside the container does not support pptp. I will keep trying to find another way to solve it. Thank you very much! Best Regards Chang From: Lin, Shuicheng Date: 2018-08-09 16:05 To: liuchang77 at chinaunicom.cn; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] So many packages are missing Hi Chang, I try to check some package in the missing list. Can you access below link for the package? http://vault.centos.org/7.5.1804/os/Source/SPackages/bash-4.2.46-30.el7.src.rpm https://s3.amazonaws.com/influxdb/influxdb-0.9.5.1-1.x86_64.rpm If yes, I think it maybe network speed issue. You could re-run the download package script. And check whether the missing list be shorter or not. If not, you may need check the networking setting. Best Regards Shuicheng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Thursday, August 9, 2018 3:28 PM To: Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Thanks,Erich, According to your advice, I ran "yum makecache" inside the container. Although it was slow, the task was complete with no error. Attachment 1 is my execution log. And attachment 2 is my missing packages' list. I also think that there are so many missing packages that are not normal. I suspect this is related to Chinese firewall. But I couldn't connect my vpn server in the container. I sincerely hope that you can give me some advice. Chang From: Cordoba Malibran, Erich Date: 2018-08-09 11:28 To: liuchang77 at chinaunicom.cn; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Hi Chang, You are right, download manually that amount of packages is a painful task. It’s not normal to have so many failures, my guess is that some of the repositories is down or blocked in the network. You can try to identify from where those packages are being downloaded to root cause the problem. Inside the mirror container you can run “yum makecache” to sync the repositories and try to see which repo failed. Also if you can share the list of failed packages we can help to identify what is the problematic repository. -Erich -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Tue Aug 14 07:36:51 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Tue, 14 Aug 2018 07:36:51 +0000 Subject: [Starlingx-discuss] =?utf-8?b?5Zue5aSNOiBSZTogIFNvIG1hbnkgcGFj?= =?utf-8?q?kages_are_missing?= In-Reply-To: <2018081415030453402334@chinaunicom.cn> References: <2018080911033096975810@chinaunicom.cn>, <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com>, <2018080915282283292923@chinaunicom.cn>, <9700A18779F35F49AF027300A49E7C7655350671@SHSMSX101.ccr.corp.intel.com>, <2018081010161215532816@chinaunicom.cn>, <201808141001282493927@chinaunicom.cn>, <93814834B4855241994F290E959305C752F78304@SHSMSX104.ccr.corp.intel.com> <2018081415030453402334@chinaunicom.cn> Message-ID: <93814834B4855241994F290E959305C752F784C1@SHSMSX104.ccr.corp.intel.com> Hi Chang, Sure, welcome to join us to work together! There is a ticket related to mirror optimization, FYI https://storyboard.openstack.org/#!/story/2002736 Zhipeng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: 2018年8月14日 15:03 To: Liu, ZhipengS ; Lin, Shuicheng ; Arce Moreno, Abraham Cc: starlingx-discuss at lists.starlingx.io Subject: Re: RE: [Starlingx-discuss] 回复: Re: So many packages are missing Hi zhipeng, Ok, the patch @Abraham made adds checksum verification work, it is better than my method. And I think maybe I can help with the packages in 'other_downloads.lst' in my review. Chang 发件人: Liu, ZhipengS 发送时间: 2018-08-14 11:27 收件人: liuchang77 at chinaunicom.cn; Lin, Shuicheng; Arce Moreno, Abraham 抄送: starlingx-discuss at lists.starlingx.io 主题: RE: [Starlingx-discuss] 回复: Re: So many packages are missing Hi Chang, Good to see your patch! We actually have a similar patch ongoing https://review.openstack.org/#/c/589333/ @Abraham add features like sum check, existed package check as you did. You can also review it and discuss with him, and finalized the patch. Thanks zhipeng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: 2018年8月14日 10:01 To: Lin, Shuicheng > Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] 回复: Re: So many packages are missing Hi Shuicheng According to your advice. I found it is really a problem with my network speed. But in the process of my repeated attempts. I found that some packages that have already been downloaded will be re-downloaded. This takes up bandwidth and increases deployment time. So I think we should judge whether we have downloaded the package before downloading it. And I made a change to implement this function. Hope this will help. https://review.openstack.org/#/c/591544/<  https:/review.openstack.org/#/c/591544/> Chang 发件人: liuchang77 at chinaunicom.cn 发送时间: 2018-08-10 10:16 收件人: Lin, Shuicheng; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io 主题: Re: RE: [Starlingx-discuss] So many packages are missing Hi Shuicheng, Thanks for your advice. I can access the links you provided. But the speed is very slow. So I think this is really a problem with my network speed. I work at the carrier, so our network is definitely fine. Maybe the real problem is Chinese firewall. I spent a few hours to re-run the download package script, and the missing list has not changed. So the only way I can think of now is to use vpn inside the container. But after I tried if, I found that the kernel of the system inside the container does not support pptp. I will keep trying to find another way to solve it. Thank you very much! Best Regards Chang From: Lin, Shuicheng Date: 2018-08-09 16:05 To: liuchang77 at chinaunicom.cn; Cordoba Malibran, Erich; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] So many packages are missing Hi Chang, I try to check some package in the missing list. Can you access below link for the package? http://vault.centos.org/7.5.1804/os/Source/SPackages/bash-4.2.46-30.el7.src.rpm https://s3.amazonaws.com/influxdb/influxdb-0.9.5.1-1.x86_64.rpm If yes, I think it maybe network speed issue. You could re-run the download package script. And check whether the missing list be shorter or not. If not, you may need check the networking setting. Best Regards Shuicheng From: liuchang77 at chinaunicom.cn [mailto:liuchang77 at chinaunicom.cn] Sent: Thursday, August 9, 2018 3:28 PM To: Cordoba Malibran, Erich >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Thanks,Erich, According to your advice, I ran "yum makecache" inside the container. Although it was slow, the task was complete with no error. Attachment 1 is my execution log. And attachment 2 is my missing packages' list. I also think that there are so many missing packages that are not normal. I suspect this is related to Chinese firewall. But I couldn't connect my vpn server in the container. I sincerely hope that you can give me some advice. Chang From: Cordoba Malibran, Erich Date: 2018-08-09 11:28 To: liuchang77 at chinaunicom.cn; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] So many packages are missing Hi Chang, You are right, download manually that amount of packages is a painful task. It’s not normal to have so many failures, my guess is that some of the repositories is down or blocked in the network. You can try to identify from where those packages are being downloaded to root cause the problem. Inside the mirror container you can run “yum makecache” to sync the repositories and try to see which repo failed. Also if you can share the list of failed packages we can help to identify what is the problematic repository. -Erich -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Tue Aug 14 13:21:55 2018 From: scott.little at windriver.com (Scott Little) Date: Tue, 14 Aug 2018 09:21:55 -0400 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: <9700A18779F35F49AF027300A49E7C7655352956@SHSMSX101.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C7655352956@SHSMSX101.ccr.corp.intel.com> Message-ID: On rare occasions it can produce an issue.  Would likely requires a case where our patches modify a -devel rpm in a way that changes how other packages compile. While I'd prefer to see the competing binary rpms removed sooner rather than later, I can settle on a follow up story. Scott On 18-08-14 07:38 AM, Lin, Shuicheng wrote: > > Hi Scott, > > I created below story to track the rpm compete you mentioned in > comments. I prefer to do it after CentOS7.5 upgrade, since it is a > minor refine, not an issue fix. > > Is my understand correct? > > https://storyboard.openstack.org/#!/story/2003435 > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Tue Aug 14 13:29:53 2018 From: scott.little at windriver.com (Scott Little) Date: Tue, 14 Aug 2018 09:29:53 -0400 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C7655352956@SHSMSX101.ccr.corp.intel.com> Message-ID: <26c40ef8-82f0-d624-9280-3d7aad59eac0@windriver.com> One other concern. I'm surprised that there aren't more patches being refactored in the inspections seen so far.  When I've done this job in the past, it was my policy that our patches apply cleanly with no 'fuzz'. Scott On 18-08-14 09:21 AM, Scott Little wrote: > On rare occasions it can produce an issue.  Would likely requires a > case where our patches modify a -devel rpm in a way that changes how > other packages compile. > > While I'd prefer to see the competing binary rpms removed sooner > rather than later, I can settle on a follow up story. > > Scott > > > On 18-08-14 07:38 AM, Lin, Shuicheng wrote: >> >> Hi Scott, >> >> I created below story to track the rpm compete you mentioned in >> comments. I prefer to do it after CentOS7.5 upgrade, since it is a >> minor refine, not an issue fix. >> >> Is my understand correct? >> >> https://storyboard.openstack.org/#!/story/2003435 >> >> > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Tue Aug 14 13:54:21 2018 From: claire at openstack.org (Claire Massey) Date: Tue, 14 Aug 2018 08:54:21 -0500 Subject: [Starlingx-discuss] Berlin Summit Schedule Live! Message-ID: <1F9FA1B3-0A35-47DF-B6F6-57A08FF8F82D@openstack.org> Hi StarlingX team, The schedule for the Berlin Summit is live with all of the sessions that were accepted via the CFP. It includes a few StarlingX specific talks shown here . You can see the full Edge Computing Track here . Between now and the Summit Ildiko will be working closely with the team on a few additional content pieces where StarlingX may have a presence at the Summit. The opportunities includes a Project Update session and an On-Boarding session. In addition to all of the great content, the Summit will be a place for the StarlingX team to get some quality time together. We hope you all can make it! Check out 100+ sessions, demos, and workshops covering 35+ open source projects in the following Tracks: • CI/CD • Container Infrastructure • Edge Computing • HPC / GPU / AI • Private & Hybrid Cloud • Public Cloud • Telecom & NFV Log in with your OpenStackID and start building your schedule now! Register for the Summit - Get your Summit ticket for USD $699 before the price increases on August 21 at 11:59pm PT (August 22 at 6:59 UTC) For speakers with accepted sessions, look for an email from speakersupport at openstack.org for next steps on registration. Thank you to our Programming Committee! They have once again taken time out of their busy schedules to help create another round of outstanding content for the OpenStack Summit. The OpenStack Foundation relies on the community-nominated Programming Committee, along with your Community Votes to select the content of the summit. If you're curious about this process, you can read more about it here where we have also listed the Programming Committee members. Interested in sponsoring the Berlin Summit? Learn more here . Thanks, Claire -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Tue Aug 14 15:08:34 2018 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 14 Aug 2018 08:08:34 -0700 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: <9700A18779F35F49AF027300A49E7C7655352956@SHSMSX101.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C7655352956@SHSMSX101.ccr.corp.intel.com> Message-ID: <53a44598-e0a9-32e4-165b-0cbff7a1595f@linux.intel.com> On 08/14/2018 04:38 AM, Lin, Shuicheng wrote: > Hi Scott, > > I created below story to track the rpm compete you mentioned in > comments. I prefer to do it after CentOS7.5 upgrade, since it is a minor > refine, not an issue fix. > > Is my understand correct? > > https://storyboard.openstack.org/#!/story/2003435 > > Hi Saul, > > For the configuration related patch move out of src rpm, I also prefer > to do it after the upgrade. > That's fine, it can wait for the upgrade patch set to be validated and merged. > You mentioned it for package pam and openldap. I could follow up them > after you share your thought later. > There are many packages that have configuration changes., I just happened on pam and openldap. We need to do the analysis and then understand what's the best approach to providing the modified config outside of the primary packages. Sau! > Best Regards > > Shuicheng > > *From:* Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Tuesday, August 14, 2018 1:13 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 > > All, > > Shuicheng has one story with many tasks open for SRPM (and its dependent > RPM) upgrade to CentOS 7.5: > https://storyboard.openstack.org/#!/story/2003389 > > Please provide your code review feedback actively (CR+1, CR+2). However, > please hold to have W+1 at this moment. We will do a test build when all > 47 sRPM upgrade has been done and mirror upgrade to 7.5 done. > > @Ada, please can you kindly support the validation for the build when we > are ready? > > Thanks. - cindy > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From chris.friesen at windriver.com Tue Aug 14 16:50:48 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 14 Aug 2018 10:50:48 -0600 Subject: [Starlingx-discuss] adding packages normally installed using "pip install" In-Reply-To: References: <5B7212AE.7010000@windriver.com> Message-ID: <5B730868.5050307@windriver.com> On 08/14/2018 12:41 AM, Qi, Mingyuan wrote: > Chris, > > I found protobuf 2.5 in centos 7.5 repo, http://vault.centos.org/7.5.1804/os/Source/SPackages/protobuf-2.5.0-8.el7.src.rpm I found that as well. Armada wants at least version 3.4. > BTW, armada is running within a container and the dependencies are installed in image build time, why do you need these dependency rpms created? are you going to create an srpm for armada and add it to strarlingx? As far as I know our build system is unable to run docker commands since it is potentially running within a docker container itself. There are plans to change this but as far as I know the work hasn't yet been done. The Armada "quick start" suggests running Armada in a container for expediency, but if you follow the armada deployment scripts in openstack-helm it installs armada directly on the host. I had indeed been planning on creating RPMs for armada and a few packages it depends on, but I'm reconsidering this given some issues I've run into with armada itself. Chris From marcela.a.rosales.jimenez at intel.com Tue Aug 14 17:09:54 2018 From: marcela.a.rosales.jimenez at intel.com (Rosales Jimenez, Marcela A) Date: Tue, 14 Aug 2018 17:09:54 +0000 Subject: [Starlingx-discuss] question about dl_rpms.sh generating 8 logs In-Reply-To: <1F66C208-2BD9-41BD-973E-43F00B8865DC@intel.com> References: <1F66C208-2BD9-41BD-973E-43F00B8865DC@intel.com> Message-ID: <7FC0E508-CCD5-4313-8638-973F51AE15A3@intel.com> Thanks Jesus and Zhipeng, I agree that having one log would be easier to review and manage. I'll work on that and send a patch soon. Marcela On 8/13/18, 11:04 AM, "Ornelas Aguayo, Jesus" wrote: Hi Marcela, I think we could standardize the logs to have a single log, by doing so it would be easier to error handling. On 8/10/18, 12:56 PM, "Rosales Jimenez, Marcela A" wrote: Hi team, I’m reviewing download_mirror.sh and dl_rpms.sh, because I’m working on setting up the mirror download on Jenkins daily. And I got a question: Why does dl_rpms.sh script generates 8 logs each time it is executed? For example, if we execute: $ ./dl_rpms.sh rpms_from_centos_repo.lst L1 centos We will get: centos_rpms_fail_move_L1.txt centos_rpms_missing_L1.txt centos_rpms_found_L1.txt centos_rpms_urls_L1.txt centos_srpms_fail_move_L1.txt centos_srpms_missing_L1.txt centos_srpms_found_L1.txt centos_srpms_urls_L1.txt Could we have four instead of eight? (let’s say centos_pkgs_fail_move_L1.txt, etc) The information about whether a package is noarch, x86_64 or src is already in its name. So for me it seems that we could leave four, but I don't know if in the past there was an intention for having this information like this. Thanks. Marcela From abraham.arce.moreno at intel.com Tue Aug 14 21:11:20 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Tue, 14 Aug 2018 21:11:20 +0000 Subject: [Starlingx-discuss] =?utf-8?b?5Zue5aSNOiBSZTogIFNvIG1hbnkgcGFj?= =?utf-8?q?kages_are_missing?= In-Reply-To: <2018081415030453402334@chinaunicom.cn> References: <2018080911033096975810@chinaunicom.cn>, <4630D55A-FF84-44C1-9A88-962874F897F9@intel.com>, <2018080915282283292923@chinaunicom.cn>, <9700A18779F35F49AF027300A49E7C7655350671@SHSMSX101.ccr.corp.intel.com>, <2018081010161215532816@chinaunicom.cn>, <201808141001282493927@chinaunicom.cn>, <93814834B4855241994F290E959305C752F78304@SHSMSX104.ccr.corp.intel.com> <2018081415030453402334@chinaunicom.cn> Message-ID: Chang, Zhipeng, > Ok, the patch @Abraham made adds checksum verification work, it is > better than my method. > And I think maybe I can help with the packages in 'other_downloads.lst' in > my review. Please let me know if this change [0] works for you as intended. I am now taking the content generated to build the ISO. [0] https://review.openstack.org/#/c/589333 From mingyuan.qi at intel.com Wed Aug 15 05:47:50 2018 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Wed, 15 Aug 2018 05:47:50 +0000 Subject: [Starlingx-discuss] adding packages normally installed using "pip install" In-Reply-To: <5B730868.5050307@windriver.com> References: <5B7212AE.7010000@windriver.com> <5B730868.5050307@windriver.com> Message-ID: I got your idea, and yes, the armada enabling path in openstack-helm is different from the way in airship. IMO, pyp2rpm is the best choice, or you have to either leverage other distro's spec or write spec on your own. Thanks, Mingyuan -----Original Message----- From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: Wednesday, August 15, 2018 0:51 To: Qi, Mingyuan ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] adding packages normally installed using "pip install" On 08/14/2018 12:41 AM, Qi, Mingyuan wrote: > Chris, > > I found protobuf 2.5 in centos 7.5 repo, > http://vault.centos.org/7.5.1804/os/Source/SPackages/protobuf-2.5.0-8. > el7.src.rpm I found that as well. Armada wants at least version 3.4. > BTW, armada is running within a container and the dependencies are installed in image build time, why do you need these dependency rpms created? are you going to create an srpm for armada and add it to strarlingx? As far as I know our build system is unable to run docker commands since it is potentially running within a docker container itself. There are plans to change this but as far as I know the work hasn't yet been done. The Armada "quick start" suggests running Armada in a container for expediency, but if you follow the armada deployment scripts in openstack-helm it installs armada directly on the host. I had indeed been planning on creating RPMs for armada and a few packages it depends on, but I'm reconsidering this given some issues I've run into with armada itself. Chris From bruce.e.jones at intel.com Wed Aug 15 14:10:05 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 15 Aug 2018 14:10:05 +0000 Subject: [Starlingx-discuss] can't login into call this morning Message-ID: <9A85D2917C58154C960D95352B22818BAB579117@fmsmsx115.amr.corp.intel.com> Not sure what's going on but I can't hear anyone else on the call.... -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 15 14:58:11 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 15 Aug 2018 14:58:11 +0000 Subject: [Starlingx-discuss] Meeting notes from today's project call Message-ID: <9A85D2917C58154C960D95352B22818BAB5791BE@fmsmsx115.amr.corp.intel.com> Agenda and notes from the 8/15 meeting * Release note generation and process (Abraham) o Infra ready and in review in stx-tools. * Bug tracking - launchpad? Bugzilla? (Bruce) o Team agrees to start using LP. No objections. o Bruce to work with Dean and Foundation Infra to spin it up * October release planning - Please tag stories that you want in the release with stx.2018.10 o Features and bugs should be covered o TLs for each project to triage and prioritize bugs, but have all the bugs been assigned to a team? AR Bruce to make sure open bugs are tagged. o TLs for each project to triage their own bugs and report results to the list. * What level of designer testing is done, before gerrit review is submitted ? (Dariush) o Make sure at a minimum that the code builds and the dev should make sure basic functional testing on an ISO is performed - does the new code at least get executed. o Unit tests should be part of code submissions. As we build out Zuul infra those tests should be run per check in o Core Reviewers can/should ask about test status for code submissions. o Meanwhile our test infrastructure automation build-out is in progress and (long term) will help. * Updates on previous meeting topics: o Influx DB SB entry updated with comments. If the version is updated, the subsystem using it will need functional testing. o How should we handle CentOS 7.5 update? Needs to be tested thoroughly but how? Do we create a feature branch? Cindy to work with Dean and "teach someone to fish", no objection to creating a branch for this. o Build instructions seem to be complete. China Telecom has completed a build. * Gerrit reviews are live for general project documentation, API documentation and release notes. Info can be found on the Docs & Infra wiki page and the Release wiki page. Docs team working on how to formally publish documentation. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 15 17:39:10 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 15 Aug 2018 17:39:10 +0000 Subject: [Starlingx-discuss] Bug triage needed. Message-ID: <9A85D2917C58154C960D95352B22818BAB57A515@fmsmsx115.amr.corp.intel.com> I just reviewed all of the open StarlingX bugs, and almost all of them have been tagged for one of the teams. I updated a few bugs where the team was obvious. Meanwhile, there are some bugs where it is not obvious which team should fix the bug. You can find the list here: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.bug&tags=stx.new&project_group_id=86 The bugs on that list need to have the stx.new tag replaced with one of the team tags. Can the Cores and/or leaders please do so asap? All open bugs can be found at: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.bug&project_group_id=86 Teams can find their bugs by appending "&tags=" to that query. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Wed Aug 15 20:40:26 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 15 Aug 2018 14:40:26 -0600 Subject: [Starlingx-discuss] Bug triage needed. In-Reply-To: <9A85D2917C58154C960D95352B22818BAB57A515@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57A515@fmsmsx115.amr.corp.intel.com> Message-ID: <5B748FBA.5090109@windriver.com> On 08/15/2018 11:39 AM, Jones, Bruce E wrote: > I just reviewed all of the open StarlingX bugs, and almost all of them have been > tagged for one of the teams. I updated a few bugs where the team was obvious. > Meanwhile, there are some bugs where it is not obvious which team should fix the > bug. You can find the list here: > https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.bug&tags=stx.new&project_group_id=86 > > The bugs on that list need to have the stx.new tag replaced with one of the team > tags. Can the Cores and/or leaders please do so asap? Is there a list of team tags somewhere? Chris From bruce.e.jones at intel.com Wed Aug 15 20:42:27 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 15 Aug 2018 20:42:27 +0000 Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 Message-ID: <9A85D2917C58154C960D95352B22818BAB57A920@fmsmsx115.amr.corp.intel.com> We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created ? https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 15 20:44:04 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 15 Aug 2018 20:44:04 +0000 Subject: [Starlingx-discuss] Bug triage needed. In-Reply-To: <5B748FBA.5090109@windriver.com> References: <9A85D2917C58154C960D95352B22818BAB57A515@fmsmsx115.amr.corp.intel.com> <5B748FBA.5090109@windriver.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB57A936@fmsmsx115.amr.corp.intel.com> Yes, its hiding in the wiki in the first paragraph under Story and Bug Tracking https://wiki.openstack.org/wiki/StarlingX/Tags_and_Prefixes brucej -----Original Message----- From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: Wednesday, August 15, 2018 1:40 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Bug triage needed. On 08/15/2018 11:39 AM, Jones, Bruce E wrote: > I just reviewed all of the open StarlingX bugs, and almost all of them > have been tagged for one of the teams. I updated a few bugs where the team was obvious. > Meanwhile, there are some bugs where it is not obvious which team > should fix the bug. You can find the list here: > https://storyboard.openstack.org/#!/story/list?status=active&tags=stx. > bug&tags=stx.new&project_group_id=86 > > The bugs on that list need to have the stx.new tag replaced with one > of the team tags. Can the Cores and/or leaders please do so asap? Is there a list of team tags somewhere? Chris _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scottx.rifenbark at intel.com Wed Aug 15 20:44:40 2018 From: scottx.rifenbark at intel.com (Rifenbark, ScottX) Date: Wed, 15 Aug 2018 20:44:40 +0000 Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 In-Reply-To: <9A85D2917C58154C960D95352B22818BAB57A920@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57A920@fmsmsx115.amr.corp.intel.com> Message-ID: Bruce.... I must have a bad meeting placeholder in my calendar. I called in and was the only one on for 20 minutes. Could you send me an invite to the real meeting? Thanks, Scott From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 15, 2018 1:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created ? https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Wed Aug 15 20:46:56 2018 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 15 Aug 2018 20:46:56 +0000 Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 In-Reply-To: References: <9A85D2917C58154C960D95352B22818BAB57A920@fmsmsx115.amr.corp.intel.com> Message-ID: <3808363B39586544A6839C76CF81445EA1A20E3E@ORSMSX104.amr.corp.intel.com> I also need the meeting invite. Please forward (or add me). Thx! From: Rifenbark, ScottX [mailto:scottx.rifenbark at intel.com] Sent: Wednesday, August 15, 2018 2:45 PM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Docs team meeting minutes 8/15 Bruce.... I must have a bad meeting placeholder in my calendar. I called in and was the only one on for 20 minutes. Could you send me an invite to the real meeting? Thanks, Scott From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 15, 2018 1:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created ? https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items -------------- next part -------------- An HTML attachment was scrubbed... URL: From hazzim.i.anaya.casas at intel.com Wed Aug 15 20:48:18 2018 From: hazzim.i.anaya.casas at intel.com (Anaya casas, Hazzim I) Date: Wed, 15 Aug 2018 20:48:18 +0000 Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 In-Reply-To: <3808363B39586544A6839C76CF81445EA1A20E3E@ORSMSX104.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57A920@fmsmsx115.amr.corp.intel.com> <3808363B39586544A6839C76CF81445EA1A20E3E@ORSMSX104.amr.corp.intel.com> Message-ID: <19BE190B-AB2F-4238-BB95-1076350C7BCA@intel.com> All the details are in the page: https://wiki.openstack.org/wiki/StarlingX/Docs_and_Infra Weekly call We will hold a weekly team call on Wednesdays at 12:30 PST / 1930 UTC. All are welcome. Call details Zoom link: https://zoom.us/j/342730236 Regards. On Aug 15, 2018, at 15:46, Tullis, Michael L > wrote: I also need the meeting invite. Please forward (or add me). Thx! From: Rifenbark, ScottX [mailto:scottx.rifenbark at intel.com] Sent: Wednesday, August 15, 2018 2:45 PM To: Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Docs team meeting minutes 8/15 Bruce…. I must have a bad meeting placeholder in my calendar. I called in and was the only one on for 20 minutes. Could you send me an invite to the real meeting? Thanks, Scott From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 15, 2018 1:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created • https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Wed Aug 15 21:22:10 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Wed, 15 Aug 2018 21:22:10 +0000 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E716D7842@fmsmsx104.amr.corp.intel.com> Sure Cindy, we can run our sanity using the ISO generated. A. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, August 14, 2018 12:13 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 All, Shuicheng has one story with many tasks open for SRPM (and its dependent RPM) upgrade to CentOS 7.5: https://storyboard.openstack.org/#!/story/2003389 Please provide your code review feedback actively (CR+1, CR+2). However, please hold to have W+1 at this moment. We will do a test build when all 47 sRPM upgrade has been done and mirror upgrade to 7.5 done. @Ada, please can you kindly support the validation for the build when we are ready? Thanks. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricardo.o.perez at intel.com Wed Aug 15 21:37:05 2018 From: ricardo.o.perez at intel.com (Perez, Ricardo O) Date: Wed, 15 Aug 2018 21:37:05 +0000 Subject: [Starlingx-discuss] [Testing][Performance metrics] Feedback required Message-ID: Hello StarlingXers, We have been thinking in how to measure StarlingX performance, in order to do so, we would like to present you an initial proposal in order to get feedback and ideas from you guys. The proposed metrics for StarlingX performance are: * Detection of failed VM - tracked on milliseconds * Detection of failed compute node - tracked in milliseconds * Auto controller node failure recovery - No impact on StarlingX * Network link failure detection - tracked in milliseconds with no major impact on StarlingX We are considering the following tools for taking measurements and/or generating network traffic: * Iperf: IPerf3 is a tool for active measurements of the maximum achievable bandwidth on IP networks. o https://iperf.fr/ * Scapy: Simulating network traffic by creating packets and either saving a pcap file for further use or replaying the packets on a given interface o https://pypi.org/project/ScapyTrafficGenerator/ * Ostinato: Is a packet crafter, network traffic generator and analyzer with a friendly GUI. Also a powerful Python API for network test automation. o https://ostinato.org/ * T-rex: Is an open source, low cost, stateful and stateless traffic generator fueled by DPDK. It generates L4-7 traffic based on pre-processing and smart replay of real traffic templates o https://trex-tgn.cisco.com/ The proposed metrics cover functional areas that we believe are important for StarlinXers. Also, we know that Edge Computing Performance trending are focused on Latency and bandwidth, and we are going to land there (at some point). Regards -Ricardo -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Wed Aug 15 22:29:50 2018 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 15 Aug 2018 22:29:50 +0000 Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 In-Reply-To: <9A85D2917C58154C960D95352B22818BAB57A920@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57A920@fmsmsx115.amr.corp.intel.com> Message-ID: <3808363B39586544A6839C76CF81445EA1A20F7B@ORSMSX104.amr.corp.intel.com> RE: regarding https://storyboard.openstack.org/#!/story/2002712 (converted APIs) for October. I met with Scott today, and based on our early testing, we are feeling confident about our ability to deliver converted APIs. We decided on a new, more efficient methodology today (proposed in more detail below) and will be attempting our first conversion in the coming days. RECENT FINDINGS * Converting from the raw XML looks problematic and may be unnecessary. * Pandoc can convert individual XML (including DocBook and WADL) files, but it cannot handle the variables, hierarchies and includes, which would case substantial sorting out and manual labor. NEW DIRECTION * Start our conversion from the generated HTML. Ghada sent a sample the other day for one of the APIs (the large TGZ file), which we downloaded and inspected. * Based on our early findings, we believe the cleanest path is to start with this compiled HTML file, which sorts out all of the variables, hierarchy, and includes when it is compiled through the old build process. * Pandoc can convert HTML into reST. We do need to write a pre-processing script to handle some of the proprietary markup in the generated HTML. Scott and I are dedicating much of tomorrow to check into that and will circle back again on Friday. * This path is looking efficient and promising! -- Mike and Scott From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 15, 2018 2:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created ? https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Wed Aug 15 23:27:32 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 15 Aug 2018 17:27:32 -0600 Subject: [Starlingx-discuss] [Testing][Performance metrics] Feedback required In-Reply-To: References: Message-ID: <5B74B6E4.4040003@windriver.com> On 08/15/2018 03:37 PM, Perez, Ricardo O wrote: > Hello StarlingXers, > > We have been thinking in how to measure StarlingX performance, in order to do > so, we would like to present you an initial proposal in order to get feedback > and ideas from you guys. > > The proposed metrics for StarlingX performance are: > > * Detection of failed VM – tracked on milliseconds Are we talking the qemu process or intrusive guest monitoring? (Or both?) > * Detection of failed compute node – tracked in milliseconds Different failure modes (power outage, mgmt link failure, critical process failure) could result in different detection times. > * Auto controller node failure recovery - No impact on StarlingX > * Network link failure detection – tracked in milliseconds with no major > impact on StarlingX What about network failure beyond the first hop (so we don't lose carrier)? How about time from dead-office-recovery to instance network connectivity? Do we want to consider OpenStack performance? Chris From sgw at linux.intel.com Wed Aug 15 23:53:49 2018 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 15 Aug 2018 16:53:49 -0700 Subject: [Starlingx-discuss] Creating new packages for Initialization / Configuration files In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA335B99@ALA-MBD.corp.ad.wrs.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA335B99@ALA-MBD.corp.ad.wrs.com> Message-ID: Folks, As I have been scanning through the various patches that are maintained in the stx-integ and other repos, I think we have a couple of classes: Source level changes in the form of patches, these need to be fully understood and upstreamed where possible. Build configuration changes, typically autoconf style of settings. Then there are the System level changes on a per package basis these are the ones that I want move to alternative mechanisms where possible such that it will allow us to reduce building the source rpm. The types of changes here are: User / Group Creation New configuration files New services Updated configuration files Modify services Overwrite default (for example: /etc/issue*) As mentioned below, we can take a couple of different approaches, %post in a new rpm, kickstart files for anaconda, or puppet. I would prefer not to get locked into puppet or even full anaconda kickstart as we start to think about how to handle MultiOS solutions. I believe that most of the above changes need to be applied during the initial installation of the OS and Openstack, in order to ensure first boot proceeds and allows all the middleware to run correct and complete any initialization that's required on first boot. We are continuing with the analysis of patches in the master spreadsheet, focusing on the stx-integ repo to start with. Sau! On 08/08/2018 06:40 PM, Penney, Don wrote: > For many of these, using puppet templates will be a viable alternative. There may be cases where a change is needed during installation, and we'd have a couple of options there. In some cases, we may be able to package an override file. Alternatively, we could use the kickstarts to make changes during postinstall, if absolutely necessary. We'd need to look at them case by case to decide what the best option would be. > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Wednesday, August 08, 2018 9:35 PM > To: Rowsell, Brent; starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Creating new packages for Initialization / Configuration files > > > Brent, et al: > > There are a number of packages that contain modified configuration files > that bring in alternate default files and in some cases modified > initialization scripts. > > Currently there are puppet packages that do some configuration > management. We could continue with puppet for these configurations that > we want to disengage from the upstream patches, or we can use RPM package. > > Thoughts? > > Examples of configuration patches from stx-integ/base are: > centos-release (issue files) > iptables (iptables rules) > dhcp > vim (vimrc!) > lighttp > pam > sanlock > shadow > sudo > util-linux > > Regarding centos-release Issue files: > > As you saw today, I proposed removing the issue* files from a otherwise > unmodified centos-release package, is there a reason that we need to > restore those issue files for an Open Source OS Independent project? > > Those modified issue files contain legalize that seems appropriate for a > commercial product, but not sure if makes sense for an Open Source > project that a downstream OSV or other company would likely modify for > their use anyway. > > Sau! > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Greg.Waines at windriver.com Thu Aug 16 11:22:44 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 16 Aug 2018 11:22:44 +0000 Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 Message-ID: <5C194841-9AF8-402F-BBE7-CAC5A4D3CA59@windriver.com> Dang missed this ... thought this meeting was on Thursdays ? Can you send me the meeting invite ? Thanks, Greg. From: "Jones, Bruce E" Date: Wednesday, August 15, 2018 at 4:42 PM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created § https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Aug 16 15:04:12 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 16 Aug 2018 15:04:12 +0000 Subject: [Starlingx-discuss] Cores meeting minutes 8/16/18 Message-ID: <9A85D2917C58154C960D95352B22818BAB57B48F@fmsmsx115.amr.corp.intel.com> Agenda and notes for the 8/16 call: * Governance o The current draft is good and acceptable to the Core team. We suggest appointing four members to the Initial TSC - Dean, Saul, Brent and Ian. We have an open issue in regard to how/when/if the TSC should grow in the Bootstrapping phase. We are looking for guidance from the Foundation in how to handle this. * Project lead for the Networking team? Candidates are Ghada and Forrest. Park this for now until we can ask them if they even want the job. * Centos 7.5 upgrade status and plan o Ada's test content is minimal at this point. Will take time to ramp that up. o Brent can talk to the WR test lead to see if cycles can be run there o Do we pull a branch for this? Only Brian (sniff) and Dean have the ability to create branches right now. Dean to ramp Scott and Saul to that list. o Issues with branching (in general) * They tend to hang around and create technical debt. Can fall behind mainline development - needs to be periodically rebased to master (weekly). * If we do this on mainline all reviews are highly visible. On a branch less so. * There might be hundreds of checkins for this upgrade * People tend to ignore and not review feature branches, but the changes do need active review. o Dean to start a thread on the mailing list to continue the discussion there. Given the above he thinks a feature branch is an easier sell. * EdgeX foundry discussion - Ian o Bruce met with Intel folks working on the project. Possible ways for StarlingX to be part of their overall architecture, in what they call the "System Management" layer. Needs deeper technical / architectural analysis. o We need to be aware of this project and figure out our plan of engagement. * Multti-OS support / enablement o Intel is looking to enable support for Clear Linux. Supporting Ubuntu would have more impact in the community. Key work items would be supporting multiple package managers and (somehow) keeping the KPIs intact. o We (Brent, Ian, Saul, Dean) need to prep for a deep discussion on this topic at the PTG. o Saul is working on a way to abstract out the configuration patches into some other mechanism. o Need to review / design a way to handle multiple installers, how to build the abstraction layers needed in Update, etc... o Containerizing more content in Docker images can help with middleware layers, but they still need an OS image to run against. It may also cause us other issues e.g. how do we update all of the containers when the OS changes. o Ian will facilitate a call to start this effort. * Spec process - discuss o How do we scale specs from small micro-feature (1-2 commits) to major features with multiple commits over a long time? o How to we store, review, process, approve specs? o We should require some level of spec for any feature that introduces new patches o Keep it lightweight, low friction. Provide guidelines to make this easy. o Using a repo for specs allows the discussion to be captured there. Using LP isn't a great way to have a discussion. Dean doesn't recommend using LP Blueprints. Team agrees that we should establish a stx-specs repo. Saul to create the repo with Dean's guidance. o Further discussions on the spec process deferred to next time * http://starlingx.io is not using HTTPS. Long term plan is for the Foundation to deploy a new content management system which should be happening Soon(tm). -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Thu Aug 16 15:27:44 2018 From: scott.little at windriver.com (Scott Little) Date: Thu, 16 Aug 2018 11:27:44 -0400 Subject: [Starlingx-discuss] splitting image.inc Message-ID: Last time it was centos_pkg_dirs that was the centralized file driving build-pkgs an forcing many changes to be co-ordinated across multiple gits. This time we are splitting image.inc, the primary input to build-iso and build-guest.  Again the goal is to allow packages to be added/removed without a multi-git update.  ' https://storyboard.openstack.org/#!/story/2003447 The existing image.inc can still be used for packages that are not build in a StarlingX sub-project.   The per-git image.inc files are primarily for packages built by the subgit. The per-git image.inc files are merged and duplicates removed before being applied. Exampled of the per-git image.inc files...    stx/stx-integ/centos_guest_image.inc    stx/stx-integ/centos_guest_image_rt.inc    stx/stx-integ/centos_iso_image.inc Reviews have been posted. Tool changes: https://review.openstack.org/592516 Tool changes to allow image.inc to be split across git repos. Split image.inc: https://review.openstack.org/592517 Split image.inc across git repos https://review.openstack.org/592518 Split image.inc across git repos https://review.openstack.org/592519 Split image.inc across git repos https://review.openstack.org/592521 Split image.inc across git repos https://review.openstack.org/592522 Split image.inc across git repos https://review.openstack.org/592523 Split image.inc across git repos https://review.openstack.org/592524 Split image.inc across git repos https://review.openstack.org/592526 Split image.inc across git repos https://review.openstack.org/592528 Split image.inc across git repos From scottx.rifenbark at intel.com Thu Aug 16 18:38:14 2018 From: scottx.rifenbark at intel.com (Rifenbark, ScottX) Date: Thu, 16 Aug 2018 18:38:14 +0000 Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 In-Reply-To: <3808363B39586544A6839C76CF81445EA1A20F7B@ORSMSX104.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57A920@fmsmsx115.amr.corp.intel.com> <3808363B39586544A6839C76CF81445EA1A20F7B@ORSMSX104.amr.corp.intel.com> Message-ID: Hi, Initial pandoc conversion of a .html file to a .rst file seemed good. Created an index.rst file to include the .rst file in the contents and then used "make" to make the HTML manual. Bunch of warnings but it displays... only the tables are messed up. They are not created with any column integrity. Scott From: Tullis, Michael L [mailto:michael.l.tullis at intel.com] Sent: Wednesday, August 15, 2018 3:30 PM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Docs team meeting minutes 8/15 RE: regarding https://storyboard.openstack.org/#!/story/2002712 (converted APIs) for October. I met with Scott today, and based on our early testing, we are feeling confident about our ability to deliver converted APIs. We decided on a new, more efficient methodology today (proposed in more detail below) and will be attempting our first conversion in the coming days. RECENT FINDINGS * Converting from the raw XML looks problematic and may be unnecessary. * Pandoc can convert individual XML (including DocBook and WADL) files, but it cannot handle the variables, hierarchies and includes, which would case substantial sorting out and manual labor. NEW DIRECTION * Start our conversion from the generated HTML. Ghada sent a sample the other day for one of the APIs (the large TGZ file), which we downloaded and inspected. * Based on our early findings, we believe the cleanest path is to start with this compiled HTML file, which sorts out all of the variables, hierarchy, and includes when it is compiled through the old build process. * Pandoc can convert HTML into reST. We do need to write a pre-processing script to handle some of the proprietary markup in the generated HTML. Scott and I are dedicating much of tomorrow to check into that and will circle back again on Friday. * This path is looking efficient and promising! -- Mike and Scott From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 15, 2018 2:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created ? https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Thu Aug 16 18:44:38 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 16 Aug 2018 13:44:38 -0500 Subject: [Starlingx-discuss] Cores meeting minutes 8/16/18 In-Reply-To: <9A85D2917C58154C960D95352B22818BAB57B48F@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57B48F@fmsmsx115.amr.corp.intel.com> Message-ID: On Thu, Aug 16, 2018 at 10:04 AM, Jones, Bruce E wrote: > · Centos 7.5 upgrade status and plan [...] > o Do we pull a branch for this? Only Brian (sniff) and Dean have the > ability to create branches right now. Dean to ramp Scott and Saul to that > list. For the record, this is controlled by the starlingx-release group in Gerrit. > o Issues with branching (in general) > > They tend to hang around and create technical debt. Can fall behind > mainline development - needs to be periodically rebased to master (weekly). > > If we do this on mainline all reviews are highly visible. On a branch less > so. > > There might be hundreds of checkins for this upgrade > > People tend to ignore and not review feature branches, but the changes do > need active review. > > o Dean to start a thread on the mailing list to continue the discussion > there. Given the above he thinks a feature branch is an easier sell. [Much of this was written yesterday before the meeting so there is overlap with the above; the purpose in writing this down is to start shaping the guidelines we use for this in the future...] Feature branches are useful things, however they have costs and downsides and I believe the bar for creating them should be higher than things like milestone branches. I'll outline the things on my mind to consider here Some of the reasons to create a feature branch is to make it easier to to focus on a set of changes without having other things change out from under you. This can also be done directly in Gerrit using the repo manifest to control what gets pulled out of Gerrit, pulling code directly from one (or a stack of) review. Everyone can do this on their own or can coordinate with shared manifest files. There is also a social benefit in structuring batches of reviews to not make things terrible for other developers in a project. Coordinating that work requires communication (a good thing) but also timezone overlap is extremely helpful here. Working in isolation is rarely friendly to the other developers, and sometimes feature branch reviews get de-prioritized by reviews not working on them directly. This is just something the project team needs to stay aware of and try to minimize. One of the issues with a feature branch is divergence from master over time. This must be countered by periodically rebasing the feature branch on master and not just waiting until the feature branch is ready to be merged back in to master. Doing the work inline will help find merge conflicts as they happen, with the feature branch they are generally only found when doing this rebase. This also puts the burden of resolving the conflicts on the feature branch and not master. I would suggest we set a maximum time between master rebases for each feature branch created to try and balance these issues. Another significant tradeoff is with regards to testing. A feature branch requires a distinct testing effort for things not covered directly by CI/CD. This means additional hours spent by QA people. However when this effort is of a different nature this separation may be desirable, as Brent pointed out on the call this morning in the current case. In the end I am leaning strongly toward creating the feature branch for the 7.5 work due primarily to a) the testing brought up by Brent, and b) the sheer volume of expected reviews. Managing a couple hundred stacked reviews with a manifest may be doable but we just do not have that experience to do work on that scale yet. That said, I would like to see at least weekly rebases with master to keep the divergence to a sane level. Also, we should only branch in the repos where it is actually needed, not in all 50+. The list of those repos that have been branched can be extracted from the manifest file, which will have a matching feature branch similar to the milestone branches. I am sure I've left out in the above some of the things I've mention in conversations earlier this week... dt -- Dean Troyer dtroyer at gmail.com From michael.l.tullis at intel.com Thu Aug 16 18:48:44 2018 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Thu, 16 Aug 2018 18:48:44 +0000 Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 In-Reply-To: References: <9A85D2917C58154C960D95352B22818BAB57A920@fmsmsx115.amr.corp.intel.com> <3808363B39586544A6839C76CF81445EA1A20F7B@ORSMSX104.amr.corp.intel.com> Message-ID: <3808363B39586544A6839C76CF81445EA1A2149B@ORSMSX104.amr.corp.intel.com> Great news Scott! We'll look at the table issue when we meet tomorrow. Maybe a pre-processing script will help. From: Rifenbark, ScottX Sent: Thursday, August 16, 2018 12:38 PM To: Tullis, Michael L ; Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: RE: Docs team meeting minutes 8/15 Hi, Initial pandoc conversion of a .html file to a .rst file seemed good. Created an index.rst file to include the .rst file in the contents and then used "make" to make the HTML manual. Bunch of warnings but it displays... only the tables are messed up. They are not created with any column integrity. Scott From: Tullis, Michael L [mailto:michael.l.tullis at intel.com] Sent: Wednesday, August 15, 2018 3:30 PM To: Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Docs team meeting minutes 8/15 RE: regarding https://storyboard.openstack.org/#!/story/2002712 (converted APIs) for October. I met with Scott today, and based on our early testing, we are feeling confident about our ability to deliver converted APIs. We decided on a new, more efficient methodology today (proposed in more detail below) and will be attempting our first conversion in the coming days. RECENT FINDINGS * Converting from the raw XML looks problematic and may be unnecessary. * Pandoc can convert individual XML (including DocBook and WADL) files, but it cannot handle the variables, hierarchies and includes, which would case substantial sorting out and manual labor. NEW DIRECTION * Start our conversion from the generated HTML. Ghada sent a sample the other day for one of the APIs (the large TGZ file), which we downloaded and inspected. * Based on our early findings, we believe the cleanest path is to start with this compiled HTML file, which sorts out all of the variables, hierarchy, and includes when it is compiled through the old build process. * Pandoc can convert HTML into reST. We do need to write a pre-processing script to handle some of the proprietary markup in the generated HTML. Scott and I are dedicating much of tomorrow to check into that and will circle back again on Friday. * This path is looking efficient and promising! -- Mike and Scott From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 15, 2018 2:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created ? https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items -------------- next part -------------- An HTML attachment was scrubbed... URL: From scottx.rifenbark at intel.com Thu Aug 16 18:50:35 2018 From: scottx.rifenbark at intel.com (Rifenbark, ScottX) Date: Thu, 16 Aug 2018 18:50:35 +0000 Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 In-Reply-To: <3808363B39586544A6839C76CF81445EA1A2149B@ORSMSX104.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57A920@fmsmsx115.amr.corp.intel.com> <3808363B39586544A6839C76CF81445EA1A20F7B@ORSMSX104.amr.corp.intel.com> <3808363B39586544A6839C76CF81445EA1A2149B@ORSMSX104.amr.corp.intel.com> Message-ID: I am trying to get more understanding on it ... Two issues... the warnings during the "make" and the tables. Also, formatting is way less pretty than the originals. I am sure we can work on that though. Scott From: Tullis, Michael L Sent: Thursday, August 16, 2018 11:49 AM To: Rifenbark, ScottX ; Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: RE: Docs team meeting minutes 8/15 Great news Scott! We'll look at the table issue when we meet tomorrow. Maybe a pre-processing script will help. From: Rifenbark, ScottX Sent: Thursday, August 16, 2018 12:38 PM To: Tullis, Michael L >; Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: RE: Docs team meeting minutes 8/15 Hi, Initial pandoc conversion of a .html file to a .rst file seemed good. Created an index.rst file to include the .rst file in the contents and then used "make" to make the HTML manual. Bunch of warnings but it displays... only the tables are messed up. They are not created with any column integrity. Scott From: Tullis, Michael L [mailto:michael.l.tullis at intel.com] Sent: Wednesday, August 15, 2018 3:30 PM To: Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Docs team meeting minutes 8/15 RE: regarding https://storyboard.openstack.org/#!/story/2002712 (converted APIs) for October. I met with Scott today, and based on our early testing, we are feeling confident about our ability to deliver converted APIs. We decided on a new, more efficient methodology today (proposed in more detail below) and will be attempting our first conversion in the coming days. RECENT FINDINGS * Converting from the raw XML looks problematic and may be unnecessary. * Pandoc can convert individual XML (including DocBook and WADL) files, but it cannot handle the variables, hierarchies and includes, which would case substantial sorting out and manual labor. NEW DIRECTION * Start our conversion from the generated HTML. Ghada sent a sample the other day for one of the APIs (the large TGZ file), which we downloaded and inspected. * Based on our early findings, we believe the cleanest path is to start with this compiled HTML file, which sorts out all of the variables, hierarchy, and includes when it is compiled through the old build process. * Pandoc can convert HTML into reST. We do need to write a pre-processing script to handle some of the proprietary markup in the generated HTML. Scott and I are dedicating much of tomorrow to check into that and will circle back again on Friday. * This path is looking efficient and promising! -- Mike and Scott From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 15, 2018 2:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created ? https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items -------------- next part -------------- An HTML attachment was scrubbed... URL: From daniel.chavolla at windriver.com Thu Aug 16 19:19:15 2018 From: daniel.chavolla at windriver.com (Chavolla de la Cabada, Daniel) Date: Thu, 16 Aug 2018 19:19:15 +0000 Subject: [Starlingx-discuss] merge to starlingx-staging/stx-ceilometer Message-ID: <025E5797D4008944A34E5116F46B70E4F2BBC6EF@ALA-MBD.corp.ad.wrs.com> Hi Dean, We have a pull request to starlingx-staging/stx-ceilometer for backporting a set of Rocky and Queens commits (14 in total). They are needed for Gnocchi development. All these commits are cleanly ported. https://github.com/starlingx-staging/stx-ceilometer/pull/1 We understand that we want to minimize changes in staringx-staging, but please note that this request fails into the "backporting an OpenStack commit that has been merged in a future OpenStack release and is needed for current development" category. Please let us know if an additional process is required. Thanks Daniel From dtroyer at gmail.com Thu Aug 16 20:51:58 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 16 Aug 2018 15:51:58 -0500 Subject: [Starlingx-discuss] merge to starlingx-staging/stx-ceilometer In-Reply-To: <025E5797D4008944A34E5116F46B70E4F2BBC6EF@ALA-MBD.corp.ad.wrs.com> References: <025E5797D4008944A34E5116F46B70E4F2BBC6EF@ALA-MBD.corp.ad.wrs.com> Message-ID: On Thu, Aug 16, 2018 at 2:19 PM, Chavolla de la Cabada, Daniel wrote: > We have a pull request to starlingx-staging/stx-ceilometer for backporting a set of Rocky and Queens commits (14 in total). They are needed for Gnocchi development. All these commits are cleanly ported. > https://github.com/starlingx-staging/stx-ceilometer/pull/1 > > We understand that we want to minimize changes in staringx-staging, but please note that this request fails into the "backporting an OpenStack commit that has been merged in a future OpenStack release and is needed for current development" category. > Please let us know if an additional process is required. No additional process, and thanks for the heads-up, I do not look at the staging repos frequently. I am repeating my comment in Github here: I am OK with the set of backports, I would ask that you keep the original commit message and add a pointer to the source of each commit and to the story directly in the commit messages so that linkage can be traced in the future. Github PR messages are not kept in the repo and all of that information is left behind in a git clone for example. If changes to the original commit were necessary add a mention of that also. The practice when cherry-picking/backporting changes in OpenStack is to keep the Commit-ID the same also as that allows Gerrit to do some useful things. Since these repos are not in Gerrit we lose that bit but for the sake of consistency I would advise we keep the original Comit-ID also. Let me know if you have any questions. dt -- Dean Troyer dtroyer at gmail.com From bruce.e.jones at intel.com Thu Aug 16 23:45:01 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 16 Aug 2018 23:45:01 +0000 Subject: [Starlingx-discuss] Cores meeting minutes 8/16/18 In-Reply-To: References: <9A85D2917C58154C960D95352B22818BAB57B48F@fmsmsx115.amr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB57BC01@fmsmsx115.amr.corp.intel.com> Let's go ahead and pull a branch for this. Dean, can you enable Saul and/or Yong to do this? brucej -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Thursday, August 16, 2018 11:45 AM To: Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Cores meeting minutes 8/16/18 On Thu, Aug 16, 2018 at 10:04 AM, Jones, Bruce E wrote: > · Centos 7.5 upgrade status and plan [...] > o Do we pull a branch for this? Only Brian (sniff) and Dean have the > ability to create branches right now. Dean to ramp Scott and Saul to > that list. For the record, this is controlled by the starlingx-release group in Gerrit. > o Issues with branching (in general) > > They tend to hang around and create technical debt. Can fall behind > mainline development - needs to be periodically rebased to master (weekly). > > If we do this on mainline all reviews are highly visible. On a branch > less so. > > There might be hundreds of checkins for this upgrade > > People tend to ignore and not review feature branches, but the changes > do need active review. > > o Dean to start a thread on the mailing list to continue the discussion > there. Given the above he thinks a feature branch is an easier sell. [Much of this was written yesterday before the meeting so there is overlap with the above; the purpose in writing this down is to start shaping the guidelines we use for this in the future...] Feature branches are useful things, however they have costs and downsides and I believe the bar for creating them should be higher than things like milestone branches. I'll outline the things on my mind to consider here Some of the reasons to create a feature branch is to make it easier to to focus on a set of changes without having other things change out from under you. This can also be done directly in Gerrit using the repo manifest to control what gets pulled out of Gerrit, pulling code directly from one (or a stack of) review. Everyone can do this on their own or can coordinate with shared manifest files. There is also a social benefit in structuring batches of reviews to not make things terrible for other developers in a project. Coordinating that work requires communication (a good thing) but also timezone overlap is extremely helpful here. Working in isolation is rarely friendly to the other developers, and sometimes feature branch reviews get de-prioritized by reviews not working on them directly. This is just something the project team needs to stay aware of and try to minimize. One of the issues with a feature branch is divergence from master over time. This must be countered by periodically rebasing the feature branch on master and not just waiting until the feature branch is ready to be merged back in to master. Doing the work inline will help find merge conflicts as they happen, with the feature branch they are generally only found when doing this rebase. This also puts the burden of resolving the conflicts on the feature branch and not master. I would suggest we set a maximum time between master rebases for each feature branch created to try and balance these issues. Another significant tradeoff is with regards to testing. A feature branch requires a distinct testing effort for things not covered directly by CI/CD. This means additional hours spent by QA people. However when this effort is of a different nature this separation may be desirable, as Brent pointed out on the call this morning in the current case. In the end I am leaning strongly toward creating the feature branch for the 7.5 work due primarily to a) the testing brought up by Brent, and b) the sheer volume of expected reviews. Managing a couple hundred stacked reviews with a manifest may be doable but we just do not have that experience to do work on that scale yet. That said, I would like to see at least weekly rebases with master to keep the divergence to a sane level. Also, we should only branch in the repos where it is actually needed, not in all 50+. The list of those repos that have been branched can be extracted from the manifest file, which will have a matching feature branch similar to the milestone branches. I am sure I've left out in the above some of the things I've mention in conversations earlier this week... dt -- Dean Troyer dtroyer at gmail.com From shuicheng.lin at intel.com Fri Aug 17 02:21:37 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Fri, 17 Aug 2018 02:21:37 +0000 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B316592@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B316592@SHSMSX104.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C76553532DA@SHSMSX101.ccr.corp.intel.com> Hi all, To workaround the patch list management, I forked the working project to my private account in github. And we will work on it verify the patch, then submit code review in gerrit. So branch should be not needed now. Best Regards Shuicheng From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, August 14, 2018 2:46 PM To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 Dean, What is your recommendation for creating a feature branch for CentOS 7.5 upgrade? Right now, as the work is going to have many patches generated, and they have dependencies, it'd difficult to maintain each patches in the mainline with multiple engineers working on the same mainline without actually merge them. Please advise if you can create the feature branch or you can grant Shuicheng to create it? Then the code review can happen in feature branch and we will generate a build from feature branch for Ada for validation. The merge back to mainline can be done after all sRPM update and test passed with accelerated CR+2 path. Thanks. - cindy From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, August 14, 2018 1:13 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 All, Shuicheng has one story with many tasks open for SRPM (and its dependent RPM) upgrade to CentOS 7.5: https://storyboard.openstack.org/#!/story/2003389 Please provide your code review feedback actively (CR+1, CR+2). However, please hold to have W+1 at this moment. We will do a test build when all 47 sRPM upgrade has been done and mirror upgrade to 7.5 done. @Ada, please can you kindly support the validation for the build when we are ready? Thanks. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricardo.o.perez at intel.com Fri Aug 17 13:14:43 2018 From: ricardo.o.perez at intel.com (Perez, Ricardo O) Date: Fri, 17 Aug 2018 13:14:43 +0000 Subject: [Starlingx-discuss] [Testing][Performance metrics] Feedback required Message-ID: > -----Original Message----- > From: Chris Friesen [mailto:chris.friesen at windriver.com] > Sent: Wednesday, August 15, 2018 6:28 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Testing][Performance metrics] Feedback > required > > On 08/15/2018 03:37 PM, Perez, Ricardo O wrote: > > Hello StarlingXers, > > > > We have been thinking in how to measure StarlingX performance, in > > order to do so, we would like to present you an initial proposal in > > order to get feedback and ideas from you guys. > > > > The proposed metrics for StarlingX performance are: > > > > * Detection of failed VM – tracked on milliseconds > > Are we talking the qemu process or intrusive guest monitoring? (Or both?) [Perez, Ricardo O] By now, we are talking about QEMU, but when bare metal became fully functional we can go through guest monitoring also.> > > * Detection of failed compute node – tracked in milliseconds > > Different failure modes (power outage, mgmt link failure, critical process > failure) could result in different detection times. [Perez, Ricardo O] Yes, I agree on that, and I would like to ask if you believe we should include all of them, or just some ? which ones could be more useful ? > > > * Auto controller node failure recovery - No impact on StarlingX > > * Network link failure detection – tracked in milliseconds with no major > > impact on StarlingX > > What about network failure beyond the first hop (so we don't lose carrier)? [Perez, Ricardo O] That sounds good, in fact, the idea is to test the failure between source / destination, mostly between VM to VM, Compute to Controller. The first hop you mean between Carrier (Internet) and Controller ? or something else ? > > How about time from dead-office-recovery to instance network connectivity? [Perez, Ricardo O] You mean from a complete power off until we become available to the network ? and it would be for the controller ?, VM's ? , other scenario ? > > Do we want to consider OpenStack performance? [Perez, Ricardo O] Maybe, but as we are taking the Open Stack and tunning some areas, I think we should focus only in the areas that we are touching, instead of consider "all" OpenStack performance, what do you think ? > > > Chris [Perez, Ricardo O] Thanks for all your comments Chris :) > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From michael.l.tullis at intel.com Fri Aug 17 18:20:49 2018 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Fri, 17 Aug 2018 18:20:49 +0000 Subject: [Starlingx-discuss] [DOCS] API processing and conversion Message-ID: <3808363B39586544A6839C76CF81445EA1A21EE0@ORSMSX104.amr.corp.intel.com> All, After a few more hours of technical investigation, sample builds, and script writing, Scott and I are convinced that the "new direction" outlined below is the way to go for StarlingX API content conversion into the new OpenStack tooling. For the October release, we are confident (and are now committing) that we can deliver clean reST source for the eight API manuals, building into HTML via Sphinx. We can provide more details in the upcoming docs meeting. Thx. -- Mike and Scott From: Tullis, Michael L [mailto:michael.l.tullis at intel.com] Sent: Wednesday, August 15, 2018 4:30 PM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Docs team meeting minutes 8/15 RE: regarding https://storyboard.openstack.org/#!/story/2002712 (converted APIs) for October. I met with Scott today, and based on our early testing, we are feeling confident about our ability to deliver converted APIs. We decided on a new, more efficient methodology today (proposed in more detail below) and will be attempting our first conversion in the coming days. RECENT FINDINGS * Converting from the raw XML looks problematic and may be unnecessary. * Pandoc can convert individual XML (including DocBook and WADL) files, but it cannot handle the variables, hierarchies and includes, which would case substantial sorting out and manual labor. NEW DIRECTION * Start our conversion from the generated HTML. Ghada sent a sample the other day for one of the APIs (the large TGZ file), which we downloaded and inspected. * Based on our early findings, we believe the cleanest path is to start with this compiled HTML file, which sorts out all of the variables, hierarchy, and includes when it is compiled through the old build process. * Pandoc can convert HTML into reST. We do need to write a pre-processing script to handle some of the proprietary markup in the generated HTML. Scott and I are dedicating much of tomorrow to check into that and will circle back again on Friday. * This path is looking efficient and promising! -- Mike and Scott From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 15, 2018 2:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created ? https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ian.Jolliffe at windriver.com Fri Aug 17 20:15:48 2018 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Fri, 17 Aug 2018 20:15:48 +0000 Subject: [Starlingx-discuss] Analysis report about Network Trunk feature for StartlingX upstreaming In-Reply-To: <76647BD697F40748B1FA4F56DA02AA0B4D4EB495@SHSMSX104.ccr.corp.intel.com> References: <76647BD697F40748B1FA4F56DA02AA0B4D4EB222@SHSMSX104.ccr.corp.intel.com> <9A85D2917C58154C960D95352B22818BAB577F1B@fmsmsx115.amr.corp.intel.com> <76647BD697F40748B1FA4F56DA02AA0B4D4EB495@SHSMSX104.ccr.corp.intel.com> Message-ID: <304C67CB-BAE4-4E42-A613-1C848535F166@windriver.com> Hi Huifeng; Thanks for the updates/analysis, comments below. Ian Ian/Brent/Matt, We did analysis about the Network trunk related patches for StartingX upstream, below are the suggestions for upstreaming, could you please help to review and comment? Thanks much! 1. ba9d9f60a7a2665194cacb92a05e0acd2dc3de41: Add rpc notification for trunk updates Function: sent notification to the agent when a trunk is updated Analysis: (1)Trunk’s AFTER_UPDATE event is generated for API call: PUT /v2.0/trunks/{trunk-id} The update request is only for changing fields like name, description or admin_state_up. Setting the admin_state_up to False locks the trunk in that it prevents operations such as adding/removing subports. In Neutron upstream, admin_state_up is used in server side, e.g. add_subports, remove subports, delete_trunk and not used in agent side (2)OVS trunk agent driver uses OVSDB event to handle trunk event, no need to manually trigger trunk update event (3)Linux trunk agent driver will handle trunk update event triggered by server, while it will need apply the patch only in case admin_state_up update need to be handled Suggestion: Not a bug for Neutron upstream, suggest not to upstream If this is not upstreamed, are the dependencies or changes required in the StarlingX code base? What are the implications of not upstreaming? 2. 6955351c5eca6e37061fb0140d11ea53693fe0e1: Add support to delete bound network Function: enable delete trunk if it is can_be_trunked (not bounded or driver’s can_trunk_bound_port=true) Analysis: Applied for LinuxBridge Driver and AVS bridge Driver (can_trunk_bound_port=True), no impact for OVSTrunkDriver (can_trunk_bound_port=False). workaround also available for linux bridge (e.g. unbind the port first then delete the trunk) Suggestion: it is a low priority bug for Neutron upstream (only applied for linux bridge and workround available), suggest not to upstream I think you need to propose a fix. Or this will need to be carried long term. 3. 43a684946e781a25d21a4f50b8dc67d61be42809: Enable trunk service by default Function: add “trunk” in DEFAULT_SERVICE_PLUGINS Analysis: It is a deploy configuration for downstream product Suggestion: Not a bug for Neutron upstream, suggest not to upstream Agree 4. c54d804792f10b7f505de6794274c4df4768f6f0: Include trunk presence in port details Function: add trunk_port (bool) flag in port_details to identify whether this port is a parent port for a trunk Analysis: It is a performance improvement for AVS agent by reducing RPC call from agent to server. OVS agent has different implementation with no improvement by introducing this field Suggestion: Not a bug for Neutron upstream, suggest not to upstream Agree 5. 3eed837ebd236e6b1959ea88d9ab5322c9eef6b9: Ignore trunk subports on same vlan as vlan-subnet ports Function: Ignore trunk subports on same vlan as vlan-subnet ports Analysis: It is a bug fix for AVS agent Suggestion: Not a bug for Neutron upstream, suggest not to upstream Agree Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Fri Aug 17 22:44:44 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 17 Aug 2018 17:44:44 -0500 Subject: [Starlingx-discuss] Cores meeting minutes 8/16/18 In-Reply-To: <9A85D2917C58154C960D95352B22818BAB57BC01@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57B48F@fmsmsx115.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BAB57BC01@fmsmsx115.amr.corp.intel.com> Message-ID: On Thu, Aug 16, 2018 at 6:45 PM, Jones, Bruce E wrote: > Let's go ahead and pull a branch for this. Dean, can you enable Saul and/or Yong to do this? I went ahead and created a 'f/centos75' branch in stx-tools and stx-integ. I would prefer to only create branches in repos where they are needed so if we need more let me know. I needed to make some changes to the branch-stx.sh script to do this cleanly so went ahead and did it myself. I will make sure Saul and Scott know how this works and will update the wiki docs after https://review.openstack.org/593211merges. dt -- Dean Troyer dtroyer at gmail.com From huifeng.le at intel.com Mon Aug 20 02:31:30 2018 From: huifeng.le at intel.com (Le, Huifeng) Date: Mon, 20 Aug 2018 02:31:30 +0000 Subject: [Starlingx-discuss] Analysis report about Network Trunk feature for StartlingX upstreaming In-Reply-To: <304C67CB-BAE4-4E42-A613-1C848535F166@windriver.com> References: <76647BD697F40748B1FA4F56DA02AA0B4D4EB222@SHSMSX104.ccr.corp.intel.com> <9A85D2917C58154C960D95352B22818BAB577F1B@fmsmsx115.amr.corp.intel.com> <76647BD697F40748B1FA4F56DA02AA0B4D4EB495@SHSMSX104.ccr.corp.intel.com> <304C67CB-BAE4-4E42-A613-1C848535F166@windriver.com> Message-ID: <76647BD697F40748B1FA4F56DA02AA0B4D4EDBFE@SHSMSX104.ccr.corp.intel.com> Ian, Thanks very much for the comments. some comments below for you reference, and please help to review, thanks much! Best Regards, Le, Huifeng From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Saturday, August 18, 2018 4:16 AM To: Le, Huifeng ; Jones, Bruce E ; Rowsell, Brent ; Peters, Matt Cc: Zhao, Forrest ; Troyer, Dean ; starlingx-discuss at lists.starlingx.io Subject: Re: Analysis report about Network Trunk feature for StartlingX upstreaming Hi Huifeng; Thanks for the updates/analysis, comments below. Ian Ian/Brent/Matt, We did analysis about the Network trunk related patches for StartingX upstream, below are the suggestions for upstreaming, could you please help to review and comment? Thanks much! 1. ba9d9f60a7a2665194cacb92a05e0acd2dc3de41: Add rpc notification for trunk updates Function: sent notification to the agent when a trunk is updated Analysis: (1)Trunk’s AFTER_UPDATE event is generated for API call: PUT /v2.0/trunks/{trunk-id} The update request is only for changing fields like name, description or admin_state_up. Setting the admin_state_up to False locks the trunk in that it prevents operations such as adding/removing subports. In Neutron upstream, admin_state_up is used in server side, e.g. add_subports, remove subports, delete_trunk and not used in agent side (2)OVS trunk agent driver uses OVSDB event to handle trunk event, no need to manually trigger trunk update event (3)Linux trunk agent driver will handle trunk update event triggered by server, while it will need apply the patch only in case admin_state_up update need to be handled Suggestion: Not a bug for Neutron upstream, suggest not to upstream If this is not upstreamed, are the dependencies or changes required in the StarlingX code base? What are the implications of not upstreaming? [hle2] for STX, trunk_updated event will force the trunk’s parent-port to refresh (e.g. handle_trunks->mark_port_for_refresh(trunk['port_id']) etc.) to get the new “admin_state_up” value from server and this value will be used in handle_updated_port() to determine whether it is allowed to update port/device status in server side. “admin_state_up” is mainly used to control operation at neutron server side like add_subports, remove subports, delete_trunk etc. and all these 3 operations will force port to refresh (handle_trunks/handle_subports->mark_port_for_refresh), so suppose, the general flow will not be impacted whether to handle trunk_updated event or not. But in some wired cases, add “admin_state_up” check in agent side may cause issues (please help to review whether it is make sense), e.g for below calling flow (suppose trunk’s ‘admin_state_up’ is ‘up’): (1) add_subports (2) set “admin_state_up” to ‘down’, step(1) may fail to set device’s state at agent side in case aws agent’s handle_updated_port() (in daemon loop) executed after step (2) So to my understanding: (1) if using OVS agent in STX, no impact for not upstream (2) if using AVS agent + STX, suggest removing “admin_state_up” check in AVS agent (in function handle_updated_port () of avs/agent.py) like below. if trunk_details and trunk_details['admin_state_up']: … 2. 6955351c5eca6e37061fb0140d11ea53693fe0e1: Add support to delete bound network Function: enable delete trunk if it is can_be_trunked (not bounded or driver’s can_trunk_bound_port=true) Analysis: Applied for LinuxBridge Driver and AVS bridge Driver (can_trunk_bound_port=True), no impact for OVSTrunkDriver (can_trunk_bound_port=False). workaround also available for linux bridge (e.g. unbind the port first then delete the trunk) Suggestion: it is a low priority bug for Neutron upstream (only applied for linux bridge and workround available), suggest not to upstream I think you need to propose a fix. Or this will need to be carried long term. [hle2] yes, let’s try to propose a fix for upstream. 3. 43a684946e781a25d21a4f50b8dc67d61be42809: Enable trunk service by default Function: add “trunk” in DEFAULT_SERVICE_PLUGINS Analysis: It is a deploy configuration for downstream product Suggestion: Not a bug for Neutron upstream, suggest not to upstream Agree 4. c54d804792f10b7f505de6794274c4df4768f6f0: Include trunk presence in port details Function: add trunk_port (bool) flag in port_details to identify whether this port is a parent port for a trunk Analysis: It is a performance improvement for AVS agent by reducing RPC call from agent to server. OVS agent has different implementation with no improvement by introducing this field Suggestion: Not a bug for Neutron upstream, suggest not to upstream Agree 5. 3eed837ebd236e6b1959ea88d9ab5322c9eef6b9: Ignore trunk subports on same vlan as vlan-subnet ports Function: Ignore trunk subports on same vlan as vlan-subnet ports Analysis: It is a bug fix for AVS agent Suggestion: Not a bug for Neutron upstream, suggest not to upstream Agree Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From elio.martinez.monroy at intel.com Mon Aug 20 13:01:11 2018 From: elio.martinez.monroy at intel.com (Martinez Monroy, Elio) Date: Mon, 20 Aug 2018 13:01:11 +0000 Subject: [Starlingx-discuss] Sanity Results Message-ID: <1466AF2176E6F040BD63860D0A241BBD1ED4DC98@FMSMSX109.amr.corp.intel.com> Hello guys, During this week we are going to start sending Sanity results for each build. Until today , we have a Jenkins job called full-sanity-multinode-controller-2compute-horizon. As the name says, it is only for 1 controller, 2 compute configuration. We are facing some parsing issues that we need to investigate, the job is not showing the real values, blaming the robot plugin for Jenkins. Beside that we have a couple of test cases failing because of the execution order, this is almost done. Starting today, the job is triggered every time that a new ISO image is available. And will be alert in order to share those results, as I mention before, we are working on this job since it is going to be our baseline to determine if the ISO is healthy or not. https://starlingx-ci.ostc.intel.com/job/full-sanity-multinode-controller-2compute-horizon/ Please let me know who should I include on the Jenkins mail list that get this results. [cid:image001.png at 01CF8BAC.3B4C5DD0] Martinez Monroy, Elio. QA Engineer. Open-source Technology Center -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 4914 bytes Desc: image001.png URL: From Greg.Waines at windriver.com Mon Aug 20 11:17:18 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Mon, 20 Aug 2018 11:17:18 +0000 Subject: [Starlingx-discuss] [DOCS] API processing and conversion In-Reply-To: <3808363B39586544A6839C76CF81445EA1A21EE0@ORSMSX104.amr.corp.intel.com> References: <3808363B39586544A6839C76CF81445EA1A21EE0@ORSMSX104.amr.corp.intel.com> Message-ID: <0E72F05D-0741-472C-AF65-2B8246006BE5@windriver.com> Inlined comments below Greg. From: "Tullis, Michael L" Date: Friday, August 17, 2018 at 2:20 PM To: "starlingx-discuss at lists.starlingx.io" Cc: "Jones, Bruce E" , "Khalil, Ghada" , "Kinder, David B" , Greg Waines , "Jolliffe, Ian" , "Arce Moreno, Abraham" , "Rifenbark, ScottX" Subject: [DOCS] API processing and conversion All, After a few more hours of technical investigation, sample builds, and script writing, Scott and I are convinced that the “new direction” outlined below is the way to go for StarlingX API content conversion into the new OpenStack tooling. For the October release, we are confident (and are now committing) that we can deliver clean reST source for the eight API manuals, building into HTML via Sphinx. [Greg] Will this include ? · Documentation of all StarlingX-specific REST APIs? ... i.e. formerly the sysinv REST APIs, · AND · Documentation of “INCREMENTAL” changes to OTHER OpenStack Services' (e.g. Nova, Neutron, Cinder, Glance, ...) REST APIs for which StarlingX is still carrying NON-UPSTREAMED patches? And what will the actual ‘deliverable’ be ? · .rst files ? · a tarball of PDFs and HTMLs generated from .rst files ? · external publishing of HTML pages generated from .rst files on a starlingx.io page ? o with option to download the PDF version ( hoping the later ) Greg. We can provide more details in the upcoming docs meeting. Thx. -- Mike and Scott From: Tullis, Michael L [mailto:michael.l.tullis at intel.com] Sent: Wednesday, August 15, 2018 4:30 PM To: Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Docs team meeting minutes 8/15 RE: regarding https://storyboard.openstack.org/#!/story/2002712 (converted APIs) for October. I met with Scott today, and based on our early testing, we are feeling confident about our ability to deliver converted APIs. We decided on a new, more efficient methodology today (proposed in more detail below) and will be attempting our first conversion in the coming days. RECENT FINDINGS · Converting from the raw XML looks problematic and may be unnecessary. · Pandoc can convert individual XML (including DocBook and WADL) files, but it cannot handle the variables, hierarchies and includes, which would case substantial sorting out and manual labor. NEW DIRECTION · Start our conversion from the generated HTML. Ghada sent a sample the other day for one of the APIs (the large TGZ file), which we downloaded and inspected. · Based on our early findings, we believe the cleanest path is to start with this compiled HTML file, which sorts out all of the variables, hierarchy, and includes when it is compiled through the old build process. · Pandoc can convert HTML into reST. We do need to write a pre-processing script to handle some of the proprietary markup in the generated HTML. Scott and I are dedicating much of tomorrow to check into that and will circle back again on Friday. · This path is looking efficient and promising! -- Mike and Scott From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 15, 2018 2:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created § https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Mon Aug 20 15:42:31 2018 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Mon, 20 Aug 2018 15:42:31 +0000 Subject: [Starlingx-discuss] [DOCS] API processing and conversion In-Reply-To: <0E72F05D-0741-472C-AF65-2B8246006BE5@windriver.com> References: <3808363B39586544A6839C76CF81445EA1A21EE0@ORSMSX104.amr.corp.intel.com> <0E72F05D-0741-472C-AF65-2B8246006BE5@windriver.com> Message-ID: <3808363B39586544A6839C76CF81445EA1A2264E@ORSMSX104.amr.corp.intel.com> Inline answers in green. -- Mike From: Waines, Greg [mailto:Greg.Waines at windriver.com] Sent: Monday, August 20, 2018 5:17 AM To: Tullis, Michael L ; starlingx-discuss at lists.starlingx.io Cc: Jones, Bruce E ; Khalil, Ghada ; Kinder, David B ; Jolliffe, Ian ; Arce Moreno, Abraham ; Rifenbark, ScottX Subject: Re: [DOCS] API processing and conversion Inlined comments below Greg. From: "Tullis, Michael L" > Date: Friday, August 17, 2018 at 2:20 PM To: "starlingx-discuss at lists.starlingx.io" > Cc: "Jones, Bruce E" >, "Khalil, Ghada" >, "Kinder, David B" >, Greg Waines >, "Jolliffe, Ian" >, "Arce Moreno, Abraham" >, "Rifenbark, ScottX" > Subject: [DOCS] API processing and conversion All, After a few more hours of technical investigation, sample builds, and script writing, Scott and I are convinced that the “new direction” outlined below is the way to go for StarlingX API content conversion into the new OpenStack tooling. For the October release, we are confident (and are now committing) that we can deliver clean reST source for the eight API manuals, building into HTML via Sphinx. [Greg] Will this include ? · Documentation of all StarlingX-specific REST APIs? ... i.e. formerly the sysinv REST APIs, · AND · Documentation of “INCREMENTAL” changes to OTHER OpenStack Services' (e.g. Nova, Neutron, Cinder, Glance, ...) REST APIs for which StarlingX is still carrying NON-UPSTREAMED patches? Generally speaking, we can generate new API source and new HTML/PDF output for any previously generated APIs that used the old method. Let’s dig into your specific questions in our upcoming doc meeting on Wednesday. And what will the actual ‘deliverable’ be ? · .rst files ? · a tarball of PDFs and HTMLs generated from .rst files ? · external publishing of HTML pages generated from .rst files on a starlingx.io page ? o with option to download the PDF version ( hoping the later ) Yes, the latter. Greg. We can provide more details in the upcoming docs meeting. Thx. -- Mike and Scott From: Tullis, Michael L [mailto:michael.l.tullis at intel.com] Sent: Wednesday, August 15, 2018 4:30 PM To: Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Docs team meeting minutes 8/15 RE: regarding https://storyboard.openstack.org/#!/story/2002712 (converted APIs) for October. I met with Scott today, and based on our early testing, we are feeling confident about our ability to deliver converted APIs. We decided on a new, more efficient methodology today (proposed in more detail below) and will be attempting our first conversion in the coming days. RECENT FINDINGS · Converting from the raw XML looks problematic and may be unnecessary. · Pandoc can convert individual XML (including DocBook and WADL) files, but it cannot handle the variables, hierarchies and includes, which would case substantial sorting out and manual labor. NEW DIRECTION · Start our conversion from the generated HTML. Ghada sent a sample the other day for one of the APIs (the large TGZ file), which we downloaded and inspected. · Based on our early findings, we believe the cleanest path is to start with this compiled HTML file, which sorts out all of the variables, hierarchy, and includes when it is compiled through the old build process. · Pandoc can convert HTML into reST. We do need to write a pre-processing script to handle some of the proprietary markup in the generated HTML. Scott and I are dedicating much of tomorrow to check into that and will circle back again on Friday. · This path is looking efficient and promising! -- Mike and Scott From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, August 15, 2018 2:42 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Docs team meeting minutes 8/15 We held our weekly Docs team call today. Agenda & notes for the 8/15 team call: Please update your calendars to include a reminder for this meeting. Abraham volunteered to be the TL for this team * Review storyboard open issues and prioritize. Scope and size and decide which can be committed to the October release. * We reviewed each story. All stories and bugs for the release have been tagged stx.2018.10. o Developer Guide / API Documentation available as a source of information * https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation * In particular, can we do https://storyboard.openstack.org/#!/story/2002712 for October? o Checking with our Tech Writing team * What work items are missing from the current backlog? o Greg Waines has created a list of work items, initial stories created § https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Work_Items -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Mon Aug 20 21:23:41 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 20 Aug 2018 21:23:41 +0000 Subject: [Starlingx-discuss] Sanity Results In-Reply-To: <1466AF2176E6F040BD63860D0A241BBD1ED4DC98@FMSMSX109.amr.corp.intel.com> References: <1466AF2176E6F040BD63860D0A241BBD1ED4DC98@FMSMSX109.amr.corp.intel.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E716D94A9@fmsmsx104.amr.corp.intel.com> Hi Elio, Thanks for letting us know, please include this email list (starlingx-discuss at lists.starlingx.io) into the recipients for the report. It also would be good to know what tests are part of the Sanity check. BTW, the link you sent doesn't work in the open. Regards A. From: Martinez Monroy, Elio Sent: Monday, August 20, 2018 8:01 AM To: starlingx-discuss at lists.starlingx.io Cc: Cabrales, Ada Subject: Sanity Results Hello guys, During this week we are going to start sending Sanity results for each build. Until today , we have a Jenkins job called full-sanity-multinode-controller-2compute-horizon. As the name says, it is only for 1 controller, 2 compute configuration. We are facing some parsing issues that we need to investigate, the job is not showing the real values, blaming the robot plugin for Jenkins. Beside that we have a couple of test cases failing because of the execution order, this is almost done. Starting today, the job is triggered every time that a new ISO image is available. And will be alert in order to share those results, as I mention before, we are working on this job since it is going to be our baseline to determine if the ISO is healthy or not. https://starlingx-ci.ostc.intel.com/job/full-sanity-multinode-controller-2compute-horizon/ Please let me know who should I include on the Jenkins mail list that get this results.       Martinez Monroy, Elio.                        QA Engineer.                        Open-source Technology Center From dtroyer at gmail.com Mon Aug 20 21:54:26 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 20 Aug 2018 16:54:26 -0500 Subject: [Starlingx-discuss] Sanity Results In-Reply-To: <4F6AACE4B0F173488D033B02A8BB5B7E716D94A9@fmsmsx104.amr.corp.intel.com> References: <1466AF2176E6F040BD63860D0A241BBD1ED4DC98@FMSMSX109.amr.corp.intel.com> <4F6AACE4B0F173488D033B02A8BB5B7E716D94A9@fmsmsx104.amr.corp.intel.com> Message-ID: On Mon, Aug 20, 2018 at 4:23 PM, Cabrales, Ada wrote: > Thanks for letting us know, please include this email list (starlingx-discuss at lists.starlingx.io) into the recipients for the report. If you do this (and I'm not saying it is a good or bad idea), please use a tag in the subject that makes it east to filter for those who may not want to see these emails. In general, sending automated email to a discussion list is rarely a popular idea. Also, they'll all be archived, which may or may not be a feature. This somewhat echos the conversation in IRC Friday about the Gerrit messages being more noise than signal. dt -- Dean Troyer dtroyer at gmail.com From bruce.e.jones at intel.com Mon Aug 20 22:45:43 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 20 Aug 2018 22:45:43 +0000 Subject: [Starlingx-discuss] Launchpad instance is live Message-ID: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> Our Launchpad database for filing bugs is now live. You can find it on https://bugs.launchpad.net/starlingx. As of right now myself, Dean and Ghada are the administrators. We should add a few more. Volunteers? There are changes between what Launchpad does and how it works, compared to Storyboard. Launchpad has the concept of a Series, which maps to what Jira calls a Release. I have defined the stx.2018.10 and stx.2019.03 Series in Launchpad. Launchpad bugs have an Importance with the usual Critical, High, Medium, Low values, as well as Wishlist. Some things are the same. Both have Tags. I have added all of the Tags defined on https://wiki.openstack.org/wiki/StarlingX/Tags_and_Prefixes as "Official Tags" in our Launchpad instance. There are queries on the main page for each one. You can also query them directly, e.g. https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.build will show you the open bugs for the Build team. Integration with our gerrit is in progress. Please file any new bugs in Launchpad. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Tue Aug 21 00:19:45 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 20 Aug 2018 19:19:45 -0500 Subject: [Starlingx-discuss] Launchpad instance is live In-Reply-To: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> Message-ID: On Mon, Aug 20, 2018 at 5:45 PM, Jones, Bruce E wrote: > Our Launchpad database for filing bugs is now live. You can find it on > https://bugs.launchpad.net/starlingx. [...] > Integration with our gerrit is in progress. This is working: [0] [1] To link to a Launchpad bug, use the Closes-bug: footer: Closes-bug: 1788069 dt [0] https://review.openstack.org/#/c/593978/ [1] https://bugs.launchpad.net/starlingx/+bug/1788069 -- Dean Troyer dtroyer at gmail.com From chris.friesen at windriver.com Tue Aug 21 15:00:16 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 21 Aug 2018 09:00:16 -0600 Subject: [Starlingx-discuss] Launchpad instance is live In-Reply-To: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> Message-ID: <5B7C2900.90203@windriver.com> On 08/20/2018 04:45 PM, Jones, Bruce E wrote: > Our Launchpad database for filing bugs is now live. You can find it on > https://bugs.launchpad.net/starlingx. Will the existing bugs in Storyboard be re-created in Launchpad? Chris From bruce.e.jones at intel.com Tue Aug 21 15:03:16 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 21 Aug 2018 15:03:16 +0000 Subject: [Starlingx-discuss] Launchpad instance is live In-Reply-To: <5B7C2900.90203@windriver.com> References: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> <5B7C2900.90203@windriver.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB57D620@fmsmsx115.amr.corp.intel.com> I didn't move or modify any Storyboard bugs. If bug owners or teams want to move their bugs to LP, please feel free. brucej -----Original Message----- From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: Tuesday, August 21, 2018 8:00 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Launchpad instance is live On 08/20/2018 04:45 PM, Jones, Bruce E wrote: > Our Launchpad database for filing bugs is now live. You can find it on > https://bugs.launchpad.net/starlingx. Will the existing bugs in Storyboard be re-created in Launchpad? Chris _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Tue Aug 21 15:48:26 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Tue, 21 Aug 2018 15:48:26 +0000 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> Hi, Can I get clarification/context on these two bug stories that have been recently created? StarlingX does not use gcc 8 currently. What is the activity that is triggering this work? Which sub-team is looking at this? I wouldn't really consider these bugs as there was no requirement previously to support this compiler. If this is part of a new initiative, then we should have a [Feature] story that tracks this initiative with tasks for the different work items required to make the various Starlingx components compliant. https://storyboard.openstack.org/#!/story/2003497 [Bug] GCC 8 complains of invalid reference null check on fm_common https://storyboard.openstack.org/#!/story/2003498 [Bug] fm-common cannot be built with GCC 8 due to string bound checks Thanks, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Tue Aug 21 15:50:27 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Tue, 21 Aug 2018 15:50:27 +0000 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> +1 From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, August 21, 2018 11:48 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Request for clarification related to gcc 8 Hi, Can I get clarification/context on these two bug stories that have been recently created? StarlingX does not use gcc 8 currently. What is the activity that is triggering this work? Which sub-team is looking at this? I wouldn't really consider these bugs as there was no requirement previously to support this compiler. If this is part of a new initiative, then we should have a [Feature] story that tracks this initiative with tasks for the different work items required to make the various Starlingx components compliant. https://storyboard.openstack.org/#!/story/2003497 [Bug] GCC 8 complains of invalid reference null check on fm_common https://storyboard.openstack.org/#!/story/2003498 [Bug] fm-common cannot be built with GCC 8 due to string bound checks Thanks, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Aug 21 16:07:05 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 21 Aug 2018 16:07:05 +0000 Subject: [Starlingx-discuss] Count of Storyboard stories per Owner Message-ID: <9A85D2917C58154C960D95352B22818BAB57D695@fmsmsx115.amr.corp.intel.com> We have a script running internally that creates a CSV file using the Storyboard APIs. So, one pivot table later, here is a list of Owners of Stories with the number of Stories they own. The top 3 Story owners are None, Yan and Brent. I'll follow up separately with the list of Stories owned by None. brucej Abraham Arce 14 Al Bailey 1 Alexander Kozyrev 1 Andy Ning 3 Angie Wang 1 Austin Sun 10 Bin Qian 1 Brent Rowsell 35 Bruce Jones 5 Chris Friesen 1 Daniel Badea 1 Daniel Chavolla 2 David Kinder 4 David Sullivan 1 Dean Troyer 2 Don Penney 1 Eddie Ramirez 3 Elena Taivan 1 Elio Martinez 17 Eric MacDonald 5 Erich Cordoba 12 Erick Cardona 8 Fernando Hernandez Gonzalez 2 Florin Dumitrascu 1 Frank Miller 1 Hazzim Anaya 13 Humberto Israel Perez Rodriguez 1 Irina Mihai 1 Jack Ding 1 Jerry Sun 1 Jesus Ornelas Aguayo 5 Jim Gauld 1 Jim Somerville 3 John Kung 1 Jose Perez Carranza 10 Joseph Richard 1 Kailun Qin 2 Kevin Smith 4 Kristine Bujold 2 Lachlan Plant 2 Lin Shuicheng 25 Luis Botello 1 Marcela Rosales 1 Matt Peters 6 Michel Thebeau (WIND) 1 Mingyuan Qi 9 None 69 Ovidiu Poncea 1 Paul-Emile Element 1 Ricardo Perez 9 Saul Wold 2 Scott Little 2 Stefan Dinescu 1 Teresa Ho 3 Tyler Smith 1 Wei Zhou 1 Yan Chen 42 Yi Wang 3 yong hu 1 zhipeng liu 16 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Aug 21 16:12:09 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 21 Aug 2018 16:12:09 +0000 Subject: [Starlingx-discuss] Stories owned by None Message-ID: <9A85D2917C58154C960D95352B22818BAB57D6B8@fmsmsx115.amr.corp.intel.com> Here is the list of Stories owned by None. If you are working on one of these, please assign it to yourself. If your team wants to deliver any of these stories for the next release, the team might want to have someone assign to them and start working on them... Brucej https://storyboard.openstack.org/#!/story/2002561 build-iso still has titanium cloud" in it [...]" todo Bruce Jones None https://storyboard.openstack.org/#!/story/2002566 [neutron] vhost-user configuration option [...] todo Dean Troyer None https://storyboard.openstack.org/#!/story/2002567 [stx-nova] Add OVS datapath type to port profile [...] todo Dean Troyer None https://storyboard.openstack.org/#!/story/2002571 [horizon] Open vSwitch integration with host and c [...] todo Dean Troyer None https://storyboard.openstack.org/#!/story/2002607 Raw image caching support in Cinder [...] todo Zhuweiwei None https://storyboard.openstack.org/#!/story/2002608 Force deletion of an attached volume [...] todo Zhuweiwei None https://storyboard.openstack.org/#!/story/2002616 STX - Define a workflow (pipeline) for building fo [...] todo Cesar Lara None https://storyboard.openstack.org/#!/story/2002716 Implement Validation test content for internal and [...] todo Ada Cabrales None https://storyboard.openstack.org/#!/story/2002734 Get simplex mode, duplex mode and multi-node confi [...] todo Bruce Jones None https://storyboard.openstack.org/#!/story/2002736 Make mirror creation more user friendly [...] todo Bruce Jones None https://storyboard.openstack.org/#!/story/2002759 Create and document automation to run the Clear Li [...] todo Bruce Jones None https://storyboard.openstack.org/#!/story/2002763 BUG: STX compute gets stuck in a reboot loop after [...] inprogress Luis Botello None https://storyboard.openstack.org/#!/story/2002710 Define Exit criteria todo Ada Cabrales None https://storyboard.openstack.org/#!/story/2002803 Evaluate existing unit tests for the StarlingX ser [...] todo Ada Cabrales None https://storyboard.openstack.org/#!/story/2002877 [Bug] Support for configurable pci bus slots for [...] todo Brent Rowsell None https://storyboard.openstack.org/#!/story/2002879 Fix Zuul check failures todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2002880 Fix Zuul check failures todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2002923 write the script todo Bruce Jones None https://storyboard.openstack.org/#!/story/2002923 run the script to create stx.2018.07 [...] todo Bruce Jones None https://storyboard.openstack.org/#!/story/2002923 check in the script todo Bruce Jones None https://storyboard.openstack.org/#!/story/2002923 run a build and send it to the test team [...] todo Bruce Jones None https://storyboard.openstack.org/#!/story/2002938 lighttpd not started by default in build container [...] todo zhipeng liu None https://storyboard.openstack.org/#!/story/2002939 The url of TisCentos7Distro is wrong in mock.cfg.p [...] todo zhipeng liu None https://storyboard.openstack.org/#!/story/2002944 introduce networking-ovs-dpdk package and validate [...] todo Matt Peters None https://storyboard.openstack.org/#!/story/2002944 configure neutron::agents::ml2::ovs::firewall_driv [...] todo Matt Peters None https://storyboard.openstack.org/#!/story/2002946 OVS LLDP protocol enable and configuration [...] todo Matt Peters None https://storyboard.openstack.org/#!/story/2002946 OVS LLDP agent and neighbour inventory [...] todo Matt Peters None https://storyboard.openstack.org/#!/story/2002947 create and package OVS PMON configuration files [...] todo Matt Peters None https://storyboard.openstack.org/#!/story/2002948 OVS collectd cpu usage monitoring [...] todo Matt Peters None https://storyboard.openstack.org/#!/story/2002948 OVS collectd interface/port state monitoring [...] todo Matt Peters None https://storyboard.openstack.org/#!/story/2002948 OVS collectd LACP state monitoring [...] todo Matt Peters None https://storyboard.openstack.org/#!/story/2002948 OVS collectd memory usage monitoring [...] todo Matt Peters None https://storyboard.openstack.org/#!/story/2002960 Configure PMD rxq affinity based on data port and [...] todo Matt Peters None https://storyboard.openstack.org/#!/story/2002999 Remove SHA from filenames and paths in the downloa [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003011 Missing dependencies for kubernetes packages [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003086 [Bug] ovs validation throws error [...] todo Hayde Martinez None https://storyboard.openstack.org/#!/story/2003122 [Nova]-Need-option-to-enable-track-guest [...] todo Jonte Watford None https://storyboard.openstack.org/#!/story/2003169 [Build][Bug]Http/https proxy set in Dockerfile.cen [...] todo zhipeng liu None https://storyboard.openstack.org/#!/story/2003288 [Build] Provide a way for partner companies to man [...] todo Bruce Jones None https://storyboard.openstack.org/#!/story/2003320 Missing httplib2 from requirements (cgtsclient) [...] todo Eddie Ramirez None https://storyboard.openstack.org/#!/story/2003330 [BUG] Packages Dependencies Missing [...] review Abraham Arce None https://storyboard.openstack.org/#!/story/2003359 [build] Make zuul linters happy in stx-clients [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003359 [build] Analyze the failures reported by Zuul and [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003360 [build] Make zuul linters happy in stx-config [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003360 [build] Analyze the failures reported by Zuul and [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003361 [build] Make zuul linters happy in stx-fault [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003361 [build] Analyze the failures reported by Zuul and [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003367 [build] Make zuul linters happy in stx-manifest [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003367 [build] Analyze the failures reported by Zuul and [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003370 [build] Make zuul linters happy in stx-root [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003370 [build] Analyze the failures reported by Zuul and [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003372 [build] Make zuul linters happy in stx-upstream [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003372 [build] Analyze the failures reported by Zuul and [...] todo Erich Cordoba None https://storyboard.openstack.org/#!/story/2003389 upgrade openstack-aodh to CentOS 7.5 version [...] todo Lin Shuicheng None https://storyboard.openstack.org/#!/story/2003397 Update page title in Horizon todo Eddie Ramirez None https://storyboard.openstack.org/#!/story/2003410 [bug] Syntax error when installing cgcs_patch. [...] todo Eddie Ramirez None https://storyboard.openstack.org/#!/story/2003389 upgrade remaining rpm to CentOS 7.5 version [...] todo Lin Shuicheng None https://storyboard.openstack.org/#!/story/2003389 Clean Obsolete repo to minimize repo list [...] todo Lin Shuicheng None https://storyboard.openstack.org/#!/story/2003432 [Feature] [Python2] Python 2 to 3 upgrade for stx- [...] todo Yan Chen None https://storyboard.openstack.org/#!/story/2003433 [Feature] [Python2] Python 2 to 3 upgrade for stx- [...] todo Yan Chen None https://storyboard.openstack.org/#!/story/2003435 Remove rpm from repo list if it could be generated [...] todo Lin Shuicheng None https://storyboard.openstack.org/#!/story/2002739 Document the bug handling process [...] todo Bruce Jones None https://storyboard.openstack.org/#!/story/2003452 Nova Upstream Bug #1787298 todo Jonte Watford None https://storyboard.openstack.org/#!/story/2003462 [Bug] stx patched openssh masked by higher CentOS [...] todo Scott Little None https://storyboard.openstack.org/#!/story/2003485 [BUG] lst file renaming breaks generate-cgcs-cento [...] todo Scott Little None https://storyboard.openstack.org/#!/story/2003486 Add 'std' to various paths and filenames [...] todo Scott Little None https://storyboard.openstack.org/#!/story/2003497 [Bug] fm-common cannot be built with GCC 8 [...] review Erich Cordoba None https://storyboard.openstack.org/#!/story/2003498 [Bug] GCC 8 complains of invalid reference null ch [...] review Erich Cordoba None https://storyboard.openstack.org/#!/story/2003506 [Build][Bug] rpm downloader script not always call [...] review Jason McKenna None -------------- next part -------------- An HTML attachment was scrubbed... URL: From erich.cordoba.malibran at intel.com Tue Aug 21 16:35:54 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Tue, 21 Aug 2018 16:35:54 +0000 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> Message-ID: <5cb2f35452663aa372eac14f2cd6bca51fa1ccbd.camel@intel.com> Hi, I created the two bugs. I'm using gcc 8 as a tool for finding issues that more evident with modern compilers. What I'm doing right now is to compile the C/C++ projects in an isolated environment to perform static analysis. I'm sorry that the title of the issues causes confusion, the two issues are there but gcc 4 doesn't show them. Let me elaborate more on this. > https://storyboard.openstack.org/#!/story/2003498 > [Bug] fm-common cannot be built with GCC 8 due to string bound checks This issue is reported also by Coverity (and I think cppcheck as well). A string is stored without a null terminator. This is a security problem not a gcc 8 specific. > https://storyboard.openstack.org/#!/story/2003497 > [Bug] GCC 8 complains of invalid reference null check on fm_common This one is more tricky. An incorrect usage of a C struct inside C++ code, the difference between a reference and a pointer was confused in the code causing a segfault with optimized code in newer gcc versions. In our path for multi-os support, I think, it's expected to be able to build our projects in different compiler versions. Also, now that we are open source there will be people that will try to build this in clang or even a different architecture having use cases that haven't think about. I believe our code should be robust enough to be portable/flexible without breaking the existing functionality or breaking backwards compatibility with older compilers. I'll update the bugs to clarify the nature of the issues. -Erich On Tue, 2018-08-21 at 15:50 +0000, Rowsell, Brent wrote: > +1 > > > From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] > Sent: Tuesday, August 21, 2018 11:48 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Request for clarification related to gcc > 8 > > Hi, > Can I get clarification/context on these two bug stories that have > been recently created? StarlingX does not use gcc 8 currently. What > is the activity that is triggering this work? Which sub-team is > looking at this? > > I wouldn’t really consider these bugs as there was no requirement > previously to support this compiler. > > If this is part of a new initiative, then we should have a [Feature] > story that tracks this initiative with tasks for the different work > items required to make the various Starlingx components compliant. > > https://storyboard.openstack.org/#!/story/2003497 > [Bug] GCC 8 complains of invalid reference null check on fm_common > > https://storyboard.openstack.org/#!/story/2003498 > [Bug] fm-common cannot be built with GCC 8 due to string bound checks > > Thanks, > Ghada > > Ghada Khalil, Manager, Titanium Cloud, Wind River > direct 613.270.2273 skype ghada.khalil.ottawa > 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Tao.Liu at windriver.com Tue Aug 21 17:43:16 2018 From: Tao.Liu at windriver.com (Liu, Tao) Date: Tue, 21 Aug 2018 17:43:16 +0000 Subject: [Starlingx-discuss] Fault Management CLI and GUI Message-ID: <7242A3DC72E453498E3D783BBB134C3E9DD5214A@ALA-MBD.corp.ad.wrs.com> All, I am informing you that, Fault Management(FM) has been decoupled from stx-config as of August 17, 2018. Subsequently, new FM DB, API service and client were created under stx-fault repo. Thus, all fault management CLI were removed from system shell and were added to fm shell. The following CLI commands have been moved from system shell to fm shell: alarm-delete Delete an active alarm. alarm-list List all active alarms. alarm-show Show an active alarm. alarm-summary Show a summary of active alarms. event-list List event logs. event-show Show an event log. event-suppress Suppress specified Event ID's. event-suppress-list List Suppressed Event ID's event-unsuppress Unsuppress specified Event ID's. event-unsuppress-all Unsuppress all Event ID's. For example: "system alarm-list" changes to "fm alarm-list" "system event-list" changes to " fm event-list" "system alarm-summary" changes to "fm alarm-summary" Since aforementioned changes also affect Horizon GUI, Horizon change is also required. The required Horizon change is waiting for Eddie Ramirez to enable the stx-gui so that the change can be integrated. Regards, Tao Liu, Member of Technical Staff, Engineering,, Wind River direct 613.963.1413 fax: 613.492.7870 skype: tao_at_home 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Tue Aug 21 17:48:19 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Tue, 21 Aug 2018 17:48:19 +0000 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 In-Reply-To: <5cb2f35452663aa372eac14f2cd6bca51fa1ccbd.camel@intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> <5cb2f35452663aa372eac14f2cd6bca51fa1ccbd.camel@intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA43690F@ALA-MBD.corp.ad.wrs.com> Hi Erich, Thank you for your response. Perhaps we need to align on the definition of a bug. My definition of a bug is an issue that impacts the operation of starlingx software as it is built/used today. I don't consider issues found in code as a result of using a different compiler/tool/build env/distro a bug. I have no issue with the work itself. I just want it to be categorized properly as a feature/enhancement (ex: Support for gcc 8 in prep for multi-OS Support) with tasks that track the extent of the work instead of individual bug stories. Bruce, we can discuss story creation / categorization guidelines in the Wednesday meeting if needed. Thanks, Ghada -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Tuesday, August 21, 2018 12:36 PM To: Rowsell, Brent; Khalil, Ghada; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Request for clarification related to gcc 8 Hi, I created the two bugs. I'm using gcc 8 as a tool for finding issues that more evident with modern compilers. What I'm doing right now is to compile the C/C++ projects in an isolated environment to perform static analysis. I'm sorry that the title of the issues causes confusion, the two issues are there but gcc 4 doesn't show them. Let me elaborate more on this. > https://storyboard.openstack.org/#!/story/2003498 > [Bug] fm-common cannot be built with GCC 8 due to string bound checks This issue is reported also by Coverity (and I think cppcheck as well). A string is stored without a null terminator. This is a security problem not a gcc 8 specific. > https://storyboard.openstack.org/#!/story/2003497 > [Bug] GCC 8 complains of invalid reference null check on fm_common This one is more tricky. An incorrect usage of a C struct inside C++ code, the difference between a reference and a pointer was confused in the code causing a segfault with optimized code in newer gcc versions. In our path for multi-os support, I think, it's expected to be able to build our projects in different compiler versions. Also, now that we are open source there will be people that will try to build this in clang or even a different architecture having use cases that haven't think about. I believe our code should be robust enough to be portable/flexible without breaking the existing functionality or breaking backwards compatibility with older compilers. I'll update the bugs to clarify the nature of the issues. -Erich On Tue, 2018-08-21 at 15:50 +0000, Rowsell, Brent wrote: > +1 > > > From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] > Sent: Tuesday, August 21, 2018 11:48 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Request for clarification related to gcc > 8 > > Hi, > Can I get clarification/context on these two bug stories that have > been recently created? StarlingX does not use gcc 8 currently. What > is the activity that is triggering this work? Which sub-team is > looking at this? > > I wouldn’t really consider these bugs as there was no requirement > previously to support this compiler. > > If this is part of a new initiative, then we should have a [Feature] > story that tracks this initiative with tasks for the different work > items required to make the various Starlingx components compliant. > > https://storyboard.openstack.org/#!/story/2003497 > [Bug] GCC 8 complains of invalid reference null check on fm_common > > https://storyboard.openstack.org/#!/story/2003498 > [Bug] fm-common cannot be built with GCC 8 due to string bound checks > > Thanks, > Ghada > > Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 > skype ghada.khalil.ottawa > 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Tue Aug 21 18:52:13 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 21 Aug 2018 18:52:13 +0000 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA43690F@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> <5cb2f35452663aa372eac14f2cd6bca51fa1ccbd.camel@intel.com> <151EE31B9FCCA54397A757BC674650F0BA43690F@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB57D8CE@fmsmsx115.amr.corp.intel.com> Ghada wrote: > Perhaps we need to align on the definition of a bug. My definition of a bug is an issue that impacts the operation of starlingx software as it is built/used today. I don't consider issues found in code as a result of using a different compiler/tool/build env/distro a bug. Code errors like this are bugs that have not yet been found. That may make them less important but it doesn't mean they are not bugs. Our goal should be to make our code as clean and bug free as possible. brucej -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, August 21, 2018 10:48 AM To: Cordoba Malibran, Erich ; Rowsell, Brent ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Request for clarification related to gcc 8 Hi Erich, Thank you for your response. Perhaps we need to align on the definition of a bug. My definition of a bug is an issue that impacts the operation of starlingx software as it is built/used today. I don't consider issues found in code as a result of using a different compiler/tool/build env/distro a bug. I have no issue with the work itself. I just want it to be categorized properly as a feature/enhancement (ex: Support for gcc 8 in prep for multi-OS Support) with tasks that track the extent of the work instead of individual bug stories. Bruce, we can discuss story creation / categorization guidelines in the Wednesday meeting if needed. Thanks, Ghada -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Tuesday, August 21, 2018 12:36 PM To: Rowsell, Brent; Khalil, Ghada; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Request for clarification related to gcc 8 Hi, I created the two bugs. I'm using gcc 8 as a tool for finding issues that more evident with modern compilers. What I'm doing right now is to compile the C/C++ projects in an isolated environment to perform static analysis. I'm sorry that the title of the issues causes confusion, the two issues are there but gcc 4 doesn't show them. Let me elaborate more on this. > https://storyboard.openstack.org/#!/story/2003498 > [Bug] fm-common cannot be built with GCC 8 due to string bound checks This issue is reported also by Coverity (and I think cppcheck as well). A string is stored without a null terminator. This is a security problem not a gcc 8 specific. > https://storyboard.openstack.org/#!/story/2003497 > [Bug] GCC 8 complains of invalid reference null check on fm_common This one is more tricky. An incorrect usage of a C struct inside C++ code, the difference between a reference and a pointer was confused in the code causing a segfault with optimized code in newer gcc versions. In our path for multi-os support, I think, it's expected to be able to build our projects in different compiler versions. Also, now that we are open source there will be people that will try to build this in clang or even a different architecture having use cases that haven't think about. I believe our code should be robust enough to be portable/flexible without breaking the existing functionality or breaking backwards compatibility with older compilers. I'll update the bugs to clarify the nature of the issues. -Erich On Tue, 2018-08-21 at 15:50 +0000, Rowsell, Brent wrote: > +1 > > > From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] > Sent: Tuesday, August 21, 2018 11:48 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Request for clarification related to gcc > 8 > > Hi, > Can I get clarification/context on these two bug stories that have > been recently created? StarlingX does not use gcc 8 currently. What > is the activity that is triggering this work? Which sub-team is > looking at this? > > I wouldn’t really consider these bugs as there was no requirement > previously to support this compiler. > > If this is part of a new initiative, then we should have a [Feature] > story that tracks this initiative with tasks for the different work > items required to make the various Starlingx components compliant. > > https://storyboard.openstack.org/#!/story/2003497 > [Bug] GCC 8 complains of invalid reference null check on fm_common > > https://storyboard.openstack.org/#!/story/2003498 > [Bug] fm-common cannot be built with GCC 8 due to string bound checks > > Thanks, > Ghada > > Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 > skype ghada.khalil.ottawa > 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Dariush.Eslimi at windriver.com Tue Aug 21 19:11:07 2018 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Tue, 21 Aug 2018 19:11:07 +0000 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 In-Reply-To: <9A85D2917C58154C960D95352B22818BAB57D8CE@fmsmsx115.amr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> <5cb2f35452663aa372eac14f2cd6bca51fa1ccbd.camel@intel.com> <151EE31B9FCCA54397A757BC674650F0BA43690F@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB57D8CE@fmsmsx115.amr.corp.intel.com> Message-ID: Bruce I agree on the first on that is a bug, regardless of the tool it is a security issue and should be triaged and assigned a priority to be fixed, but the second one is only an issue if you try to use gcc 8, under currently supported build process that is not a bug. So the second one would be better tracked under a feature that tracks all issues that need to be resolved to support the new compiler. Dariush -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: August-21-18 2:52 PM To: Khalil, Ghada; Cordoba Malibran, Erich; Rowsell, Brent; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Request for clarification related to gcc 8 Ghada wrote: > Perhaps we need to align on the definition of a bug. My definition of a bug is an issue that impacts the operation of starlingx software as it is built/used today. I don't consider issues found in code as a result of using a different compiler/tool/build env/distro a bug. Code errors like this are bugs that have not yet been found. That may make them less important but it doesn't mean they are not bugs. Our goal should be to make our code as clean and bug free as possible. brucej -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, August 21, 2018 10:48 AM To: Cordoba Malibran, Erich ; Rowsell, Brent ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Request for clarification related to gcc 8 Hi Erich, Thank you for your response. Perhaps we need to align on the definition of a bug. My definition of a bug is an issue that impacts the operation of starlingx software as it is built/used today. I don't consider issues found in code as a result of using a different compiler/tool/build env/distro a bug. I have no issue with the work itself. I just want it to be categorized properly as a feature/enhancement (ex: Support for gcc 8 in prep for multi-OS Support) with tasks that track the extent of the work instead of individual bug stories. Bruce, we can discuss story creation / categorization guidelines in the Wednesday meeting if needed. Thanks, Ghada -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Tuesday, August 21, 2018 12:36 PM To: Rowsell, Brent; Khalil, Ghada; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Request for clarification related to gcc 8 Hi, I created the two bugs. I'm using gcc 8 as a tool for finding issues that more evident with modern compilers. What I'm doing right now is to compile the C/C++ projects in an isolated environment to perform static analysis. I'm sorry that the title of the issues causes confusion, the two issues are there but gcc 4 doesn't show them. Let me elaborate more on this. > https://storyboard.openstack.org/#!/story/2003498 > [Bug] fm-common cannot be built with GCC 8 due to string bound checks This issue is reported also by Coverity (and I think cppcheck as well). A string is stored without a null terminator. This is a security problem not a gcc 8 specific. > https://storyboard.openstack.org/#!/story/2003497 > [Bug] GCC 8 complains of invalid reference null check on fm_common This one is more tricky. An incorrect usage of a C struct inside C++ code, the difference between a reference and a pointer was confused in the code causing a segfault with optimized code in newer gcc versions. In our path for multi-os support, I think, it's expected to be able to build our projects in different compiler versions. Also, now that we are open source there will be people that will try to build this in clang or even a different architecture having use cases that haven't think about. I believe our code should be robust enough to be portable/flexible without breaking the existing functionality or breaking backwards compatibility with older compilers. I'll update the bugs to clarify the nature of the issues. -Erich On Tue, 2018-08-21 at 15:50 +0000, Rowsell, Brent wrote: > +1 > > > From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] > Sent: Tuesday, August 21, 2018 11:48 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Request for clarification related to gcc > 8 > > Hi, > Can I get clarification/context on these two bug stories that have > been recently created? StarlingX does not use gcc 8 currently. What > is the activity that is triggering this work? Which sub-team is > looking at this? > > I wouldn’t really consider these bugs as there was no requirement > previously to support this compiler. > > If this is part of a new initiative, then we should have a [Feature] > story that tracks this initiative with tasks for the different work > items required to make the various Starlingx components compliant. > > https://storyboard.openstack.org/#!/story/2003497 > [Bug] GCC 8 complains of invalid reference null check on fm_common > > https://storyboard.openstack.org/#!/story/2003498 > [Bug] fm-common cannot be built with GCC 8 due to string bound checks > > Thanks, > Ghada > > Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 > skype ghada.khalil.ottawa > 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From chris.friesen at windriver.com Tue Aug 21 19:19:34 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 21 Aug 2018 13:19:34 -0600 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> <5cb2f35452663aa372eac14f2cd6bca51fa1ccbd.camel@intel.com> <151EE31B9FCCA54397A757BC674650F0BA43690F@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB57D8CE@fmsmsx115.amr.corp.intel.com> Message-ID: <5B7C65C6.1030900@windriver.com> On 08/21/2018 01:11 PM, Eslimi, Dariush wrote: > Bruce I agree on the first on that is a bug, regardless of the tool it is a > security issue and should be triaged and assigned a priority to be fixed, but > the second one is only an issue if you try to use gcc 8, under currently > supported build process that is not a bug. So the second one would be better > tracked under a feature that tracks all issues that need to be resolved to > support the new compiler. The second case is a classic strncpy() error scenario, which was why strlcpy() was created. In the second case, does the code properly handle the scenario where the resulting string has no null terminator? If the code expects the resulting string to be null-terminated then I think it should be counted as a bug rather than a "support new compiler" feature. Chris From Dariush.Eslimi at windriver.com Tue Aug 21 19:26:42 2018 From: Dariush.Eslimi at windriver.com (Eslimi, Dariush) Date: Tue, 21 Aug 2018 19:26:42 +0000 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 In-Reply-To: <5B7C65C6.1030900@windriver.com> References: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> <5cb2f35452663aa372eac14f2cd6bca51fa1ccbd.camel@intel.com> <151EE31B9FCCA54397A757BC674650F0BA43690F@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB57D8CE@fmsmsx115.amr.corp.intel.com> <5B7C65C6.1030900@windriver.com> Message-ID: If that is case Chris, then title of the bug should not be : "fm-common cannot be built with GCC 8 due to string bound checks" Dariush -----Original Message----- From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: August-21-18 3:20 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Request for clarification related to gcc 8 On 08/21/2018 01:11 PM, Eslimi, Dariush wrote: > Bruce I agree on the first on that is a bug, regardless of the tool it > is a security issue and should be triaged and assigned a priority to > be fixed, but the second one is only an issue if you try to use gcc 8, > under currently supported build process that is not a bug. So the > second one would be better tracked under a feature that tracks all > issues that need to be resolved to support the new compiler. The second case is a classic strncpy() error scenario, which was why strlcpy() was created. In the second case, does the code properly handle the scenario where the resulting string has no null terminator? If the code expects the resulting string to be null-terminated then I think it should be counted as a bug rather than a "support new compiler" feature. Chris _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From chris.friesen at windriver.com Tue Aug 21 21:07:52 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 21 Aug 2018 15:07:52 -0600 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> <5cb2f35452663aa372eac14f2cd6bca51fa1ccbd.camel@intel.com> <151EE31B9FCCA54397A757BC674650F0BA43690F@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB57D8CE@fmsmsx115.amr.corp.intel.com> <5B7C65C6.1030900@windriver.com> Message-ID: <5B7C7F28.2060303@windriver.com> On 08/21/2018 01:26 PM, Eslimi, Dariush wrote: > If that is case Chris, then title of the bug should not be : "fm-common cannot be built with GCC 8 due to string bound checks" Title has been changed. It is now: [Bug] GCC 8 highlights potentially-risky strncpy() usage in fm-common Chris From sgw at linux.intel.com Tue Aug 21 22:11:54 2018 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 21 Aug 2018 15:11:54 -0700 Subject: [Starlingx-discuss] Launchpad instance is live In-Reply-To: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> Message-ID: <62367bcc-f801-2ac2-22c3-a6e2d32857b9@linux.intel.com> On 08/20/2018 03:45 PM, Jones, Bruce E wrote: > Our Launchpad database for filing bugs is now live.   You can find it on > https://bugs.launchpad.net/starlingx. > > As of right now myself, Dean and Ghada are the administrators.  We > should add a few more.  Volunteers? > I will step up to this also. Sau! > There are changes between what Launchpad does and how it works, compared > to Storyboard.    Launchpad has the concept of a Series, which maps to > what Jira calls a Release.  I have defined the stx.2018.10 and > stx.2019.03 Series in Launchpad.  Launchpad bugs have an Importance with > the usual Critical, High, Medium, Low values, as well as Wishlist. > > Some things are the same.  Both have Tags.  I have added all of the Tags > defined on https://wiki.openstack.org/wiki/StarlingX/Tags_and_Prefixes > as “Official Tags” in our Launchpad instance.  There are queries on the > main page for each one.  You can also query them directly, e.g. > https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.build will show > you the open bugs for the Build team. > > Integration with our gerrit is in progress. > > Please file any new bugs in Launchpad. > >         brucej > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From hayde.martinez.landa at intel.com Tue Aug 21 22:45:45 2018 From: hayde.martinez.landa at intel.com (Martinez Landa, Hayde) Date: Tue, 21 Aug 2018 22:45:45 +0000 Subject: [Starlingx-discuss] Mirror Download Results Notification Message-ID: <84E71219-66BB-43D3-BA3C-1BC45AC00464@intel.com> Hi All, In the Weekly StarlingX Management Meeting from 2 weeks Ago, the build team talked about the daily automated jobs for mirror download, and it was requested to have results sent to the email list. This job runs once per day, this results in one email per day. By reading one of the latest [1] emails where it was discussed that sending daily results to the list might bother we decided that the notification will only be sent if the job fails. This is how the email will look: Subject : Automated Notification for: [mirror-downloader][#6] Results report Body: [mirror-downloader][#6] Missing packages. Results report: - Missing: output/centos_rpms_missing_L1.txt python2-pyngus-2.2.3-1.el7.noarch.rpm - Missing GPG key ./output/stx-r1/CentOS/pike/Binary/noarch/OVMF-20150414-2.gitc9e5618.el7.noarch.rpm: RSA sha1 ((MD5) PGP) md5 NOT OK (MISSING KEYS: (MD5) PGP#61e8806c) ./output/stx-r1/CentOS/pike/Source/kubernetes-1.10.0-1.el7.src.rpm: RSA sha1 ((MD5) PGP) md5 NOT OK (MISSING KEYS: (MD5) PGP#61e8806c) ./output/stx-r1/CentOS/pike/Source/libvirt-python-3.5.0-1.fc24.src.rpm: RSA sha1 ((MD5) PGP) md5 NOT OK (MISSING KEYS: (MD5) PGP#596bea5d) As suggested in the already mentioned email [1] you can filter by "Automated Notification" and ignore these. Please let me know if you have any comments or suggestions. [1] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-August/000763.html From cindy.xie at intel.com Wed Aug 22 00:23:33 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 22 Aug 2018 00:23:33 +0000 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 In-Reply-To: <5B7C7F28.2060303@windriver.com> References: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> <5cb2f35452663aa372eac14f2cd6bca51fa1ccbd.camel@intel.com> <151EE31B9FCCA54397A757BC674650F0BA43690F@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB57D8CE@fmsmsx115.amr.corp.intel.com> <5B7C65C6.1030900@windriver.com> <5B7C7F28.2060303@windriver.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B320D99@SHSMSX104.ccr.corp.intel.com> Both shall be tagged with "stx-security" bug in my opinion. Thx. - cindy -----Original Message----- From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: Wednesday, August 22, 2018 5:08 AM To: Eslimi, Dariush ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Request for clarification related to gcc 8 On 08/21/2018 01:26 PM, Eslimi, Dariush wrote: > If that is case Chris, then title of the bug should not be : "fm-common cannot be built with GCC 8 due to string bound checks" Title has been changed. It is now: [Bug] GCC 8 highlights potentially-risky strncpy() usage in fm-common Chris _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dehao.shang at intel.com Wed Aug 22 05:47:09 2018 From: dehao.shang at intel.com (Shang, Dehao) Date: Wed, 22 Aug 2018 05:47:09 +0000 Subject: [Starlingx-discuss] build Starlingx Image Message-ID: <71AECFE5078153419EB7B8DBE0644B2638634721@shsmsx102.ccr.corp.intel.com> Hi: When I try to build starlingx image, i have some problems and can't resolve it. I follow the following doc to build our image. https://wiki.openstack.org/wiki/StarlingX/Developer_Guide#Build_the_CentOS_Mirror_Repository At setup building Docker Container, step 7: make build At building process, executing the below command will fails. RUN useradd -r -u $MYUID -g cgts -m $MYUNAME && ln -s /home/$MYUNAME/.ssh /mySSH Error is useradd: UID 0 is not unique. I try to add flag -o into useradd, so can pass this step. But at build packages, step 4 will fails. build-pkgs --serial The main reason of this step is that lighttpd startup fails. I also try other method to build it. For example: at /../stx-tools/Makefile file, delete -build-arg MYUID=$(UID) (at my host machine, i directly use root account) So, command "make build " can pass, but non-root account will be lack of permission to do anything inside container. Anybody have ideal to this issue. Thanks Dehao -------------- next part -------------- An HTML attachment was scrubbed... URL: From dehao.shang at intel.com Wed Aug 22 05:25:27 2018 From: dehao.shang at intel.com (Shang, Dehao) Date: Wed, 22 Aug 2018 05:25:27 +0000 Subject: [Starlingx-discuss] build Starlingx Image Message-ID: <71AECFE5078153419EB7B8DBE0644B26386346D0@shsmsx102.ccr.corp.intel.com> Hi: When I try to build starlingx image, i have some problems and can't resolve it. I follow the following doc to build our image. https://wiki.openstack.org/wiki/StarlingX/Developer_Guide#Build_the_CentOS_Mirror_Repository At setup building Docker Container, step 7: make build At building process, executing the below command will fails. RUN useradd -r -u $MYUID -g cgts -m $MYUNAME && ln -s /home/$MYUNAME/.ssh /mySSH Error is useradd: UID 0 is not unique. I try to add flag -o into useradd, so can pass this step. But at build packages, step 4 will fails. build-pkgs --serial The main reason of this step is that lighttpd startup fails. I also try other method to build it. For example: at /../stx-tools/Makefile file, delete -build-arg MYUID=$(UID) (at my host machine, i directly use root account) So, command "make build " can pass, but non-root account will be lack of permission to do anything inside container. Anybody have ideal to this issue. Thanks Dehao -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Wed Aug 22 09:31:57 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Wed, 22 Aug 2018 09:31:57 +0000 Subject: [Starlingx-discuss] Analysis of patch 9671feb for StartlingX upstreaming Message-ID: Ian/Brent/Matt, We analyze the patch 9671feb related to l2pop. We think that this is not a bug for neutron ovs and suggest not to upstream. Because for the outgoing packet that destination unknown, avs agent may drop the packet while ovs agent will flood it and finally learn the flows. Could you please help to review and comment? Thanks very much! Patch ID: 9671feb Patch Description: l2pop: allow csnat ports in list of endpoints. Not quite sure why upstream is not impacted by this issue, but we need to have CSNAT ports included in the list of ports that the l2pop driver handles. Without this it is not possible for a VM on one node to reach the CSNAT port because the MAC address of the CSNAT port is not included in the static FDB entries. Without this our vswitch drops the outgoing packet since that's what it does with packets to unknown destinations while in static mode. Patch Analysis Report: You can find it in the attachment Comment: We setup an environment(Openstack: DVR + L3 HA + L2POP) to check the bug exists or not in neutron ovs. Several scenarios are checked and the result shows that the patch 9671feb is not a bug for neutron ovs. In each scenario, we check whether the vm can ping the router's gateway or not and the flows in the compute node exists or not. In all scenarios, the router's gateway can be accessed. The flows related to csnat port exists after the csnat port is created except for the third scenario. In the third scenario(a new host is added to the openstack, thus the host don't have the flows for l2pop), when a vm is created on the new host, all fdbs(except ha and csnat port) should be sent to this host due to that the port created is the first port in this host. And now there are no flows related to csnat port. After ping router's gateway, the flows related to csnat port appears. This means that ovs agent won't drop the outgoing packet that destination unknown and will flood it. Below is the result: Before ping router's gateway(fa:16:3e:72:06:05 is csnat port's mac address), there are no flows related to csnat port. This means that the fdb for csnat port is not sent to the host even the first port is created due to a vm creation. [cid:image001.png at 01D43A34.92B4F470] After ping router's gateway, the flow related to csnat port appears. This means that ovs agent won't drop the outgoing packet that destination unknown and will flood it. [cid:image002.png at 01D43A34.92B4F470] Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 14324 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 26368 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Patch_Report_9671feb.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 139435 bytes Desc: Patch_Report_9671feb.docx URL: From Ken.Young at windriver.com Wed Aug 22 13:40:08 2018 From: Ken.Young at windriver.com (Young, Ken) Date: Wed, 22 Aug 2018 13:40:08 +0000 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B320D99@SHSMSX104.ccr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> <5cb2f35452663aa372eac14f2cd6bca51fa1ccbd.camel@intel.com> <151EE31B9FCCA54397A757BC674650F0BA43690F@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB57D8CE@fmsmsx115.amr.corp.intel.com> <5B7C65C6.1030900@windriver.com> <5B7C7F28.2060303@windriver.com> <2FD5DDB5A04D264C80D42CA35194914F2B320D99@SHSMSX104.ccr.corp.intel.com> Message-ID: Cindy, At a higher level, what are the plans for gcc 8? Is this really part of the security strategy or part of moving the OS forward? All this needs to be discussed as we kick off the work in the security group's efforts. Right now, all the work feels adhoc. Regards, Ken Y On 2018-08-21, 8:23 PM, "Xie, Cindy" wrote: Both shall be tagged with "stx-security" bug in my opinion. Thx. - cindy -----Original Message----- From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: Wednesday, August 22, 2018 5:08 AM To: Eslimi, Dariush ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Request for clarification related to gcc 8 On 08/21/2018 01:26 PM, Eslimi, Dariush wrote: > If that is case Chris, then title of the bug should not be : "fm-common cannot be built with GCC 8 due to string bound checks" Title has been changed. It is now: [Bug] GCC 8 highlights potentially-risky strncpy() usage in fm-common Chris _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From erich.cordoba.malibran at intel.com Wed Aug 22 14:05:02 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Wed, 22 Aug 2018 14:05:02 +0000 Subject: [Starlingx-discuss] Request for clarification related to gcc 8 In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4367A9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB22C96E@ALA-MBD.corp.ad.wrs.com> <5cb2f35452663aa372eac14f2cd6bca51fa1ccbd.camel@intel.com> <151EE31B9FCCA54397A757BC674650F0BA43690F@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB57D8CE@fmsmsx115.amr.corp.intel.com> <5B7C65C6.1030900@windriver.com> <5B7C7F28.2060303@windriver.com> <2FD5DDB5A04D264C80D42CA35194914F2B320D99@SHSMSX104.ccr.corp.intel.com> Message-ID: <4AA0FD1B-B834-4088-8D8F-511D3C8961A4@intel.com> One of the reason behind exploring gcc 8 is because that is the one used in Clear Linux, as we are looking into support Clear Linux in some point in the future and we'll need to build these projects with a newer compiler. I think that this is not different from the effort to move from python 2 into python 3. BTW, the issue regarding the null reference is present since gcc 6, I pointed out into gcc 8 as was the first I tried. -Erich On 8/22/18, 8:40 AM, "Young, Ken" wrote: Cindy, At a higher level, what are the plans for gcc 8? Is this really part of the security strategy or part of moving the OS forward? All this needs to be discussed as we kick off the work in the security group's efforts. Right now, all the work feels adhoc. Regards, Ken Y On 2018-08-21, 8:23 PM, "Xie, Cindy" wrote: Both shall be tagged with "stx-security" bug in my opinion. Thx. - cindy -----Original Message----- From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: Wednesday, August 22, 2018 5:08 AM To: Eslimi, Dariush ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Request for clarification related to gcc 8 On 08/21/2018 01:26 PM, Eslimi, Dariush wrote: > If that is case Chris, then title of the bug should not be : "fm-common cannot be built with GCC 8 due to string bound checks" Title has been changed. It is now: [Bug] GCC 8 highlights potentially-risky strncpy() usage in fm-common Chris _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Jason.McKenna at windriver.com Wed Aug 22 14:47:54 2018 From: Jason.McKenna at windriver.com (McKenna, Jason) Date: Wed, 22 Aug 2018 14:47:54 +0000 Subject: [Starlingx-discuss] breaking .lst files apart among repos Message-ID: Hi folks (especially Erich, Scott, Marcela, or anyone else working on the download tools), We'd like to support per-repo .lst files used by download_mirrors.sh. This would decouple the stx-tools repo from all the code repos. Changes (for example, to uprev a package) would only affect a single repo rather than both the code repo and stx-tools. This would also allow individual entities to integrate their own repos into products, or to pick-and-choose versions of the different repos to build into products. My initial work would just be to support per-repo .lst files as an option, with actually breaking the existing .lst files up among the repos as a later task. My initial thoughts would be that .src.rpms would be in per-repo lst files. Build-time and run time requirement binary rpms are tougher to nail down, as they might be used by different repos, and we don't want one repo asking for one version, and a second repo asking for a different version, etc. Before I start working too deep on this, does anyone have any thoughts, or work in progress along these lines? -Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 22 14:58:58 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 22 Aug 2018 14:58:58 +0000 Subject: [Starlingx-discuss] Project call minutes 8/22/2018 Message-ID: <9A85D2917C58154C960D95352B22818BAB57DF56@fmsmsx115.amr.corp.intel.com> Agenda and notes from the 8/22 meeting * Core team call is cancelled for this week. * Security issue handling - Security team o Need to be more open and transparent about the process the Intel team is following - don't surprise the community. o Need to be more open (within the Security team) about the work items coming out of the Intel security review o Static analysis results and how to handle? o Banned C function usage and how to handle? o Intel has internal milestones for this work. o Ken to set a Security team call. Action needed to set the priority of the Security work in the community. o Ildiko can help with setting calls. Please reach out to her. * STX-GUI end to end plan - Dariush - Eddie's code merged yesterday. Code is not yet enabled, during configuration the plug-in needs to be enabled. Want to see an email to the community notifying people of the change, since other devs depend on this. Configuration code needs to be written, likely outside of Eddie's scope. Will also need to remove older panels from Horizon in a coordinated way once we flip the switch. Bruce to get with Eddie. * Centos package updates - stale reviews and feature branch status - Ghada - reviews on main have been moved to the feature branch, will be merged there, reviews on mainline should be abandoned. Both community and Wind River should test the branch before it merges to main. Test build should be available end of next week. * Definition of a bug - Ghada o Follow up to the Security discussion - would like to see defects coming out of large efforts e.g. static analysis shouldn't be tracked on a bug by bug basis but as part of larged stories - don't want to see one bug per static analysis issue for instance. For example, we should define one story for static analysis issues for stx-fault and let the team decide how/if to break that work up into tasks. o Can use the Importance field in LP to assign priorities to bugs. Should agree on how to set priorities. * Bottom up plan for release - Ghada o weekly meetings for sub-teams - schedule on wiki with Ildiko o Sub-teams should define priorities for the teams and tag stories for the right release. * For example, the Docs team has tagged their stories, holds weekly calls and post call minutes to the email list. o In OpenStack the teams determine release content and the Release team handles process and mechanics. In StarlingX we've been leaning toward the Release team determining content. We should probably start leaning more toward a collaborative model. o Release team can put a proposal together for review * Sanity testing is almost done. Elio would like to start publishing the results to some public forum. Spamming the email isn't desirable. Will start with a wiki page. -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Wed Aug 22 16:07:54 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Wed, 22 Aug 2018 16:07:54 +0000 Subject: [Starlingx-discuss] build Starlingx Image In-Reply-To: <71AECFE5078153419EB7B8DBE0644B2638634721@shsmsx102.ccr.corp.intel.com> References: <71AECFE5078153419EB7B8DBE0644B2638634721@shsmsx102.ccr.corp.intel.com> Message-ID: Hi Dehao, > When I try to build starlingx image, i have some problems and can't > resolve it. > > At setup building Docker Container, step 7: > make build > At building process, executing the below command will fails. > RUN useradd -r -u $MYUID -g cgts -m $MYUNAME && ln -s > /home/$MYUNAME/.ssh /mySSH > Error is useradd: UID 0 is not unique. > > Anybody have ideal to this issue. Not from this side, is you user part of docker group? are you executing with sudo? From sgw at linux.intel.com Wed Aug 22 16:40:51 2018 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 22 Aug 2018 09:40:51 -0700 Subject: [Starlingx-discuss] build Starlingx Image In-Reply-To: <71AECFE5078153419EB7B8DBE0644B2638634721@shsmsx102.ccr.corp.intel.com> References: <71AECFE5078153419EB7B8DBE0644B2638634721@shsmsx102.ccr.corp.intel.com> Message-ID: On 08/21/2018 10:47 PM, Shang, Dehao wrote: > Hi: > >          When I try to build starlingx image, i have some problems and > can’t resolve it. > >          I follow the following doc to build our image. > > https://wiki.openstack.org/wiki/StarlingX/Developer_Guide#Build_the_CentOS_Mirror_Repository > >          At setup building Docker Container, step 7: > >                   make build > >          At building process,  executing the below command will fails. > >          RUN useradd -r -u $MYUID -g cgts -m $MYUNAME &&     ln -s > /home/$MYUNAME/.ssh /mySSH > > Error is useradd: UID 0 is not unique. > Are you running the "make build" as root (or via sudo)? You need to run this as a regular user not root. >          I try to add flag –o into useradd, so can pass this step. But > at build packages, step 4 will fails. > > build-pkgs --serial > > The main reason of this step is that lighttpd startup fails. > >          I also try other method to build it. > >          For example: at /../stx-tools/Makefile file, delete –build-arg > MYUID=$(UID) (at my host machine, i directly use root account) > >          So, command “make build ” can pass, but non-root account will > be lack of permission to do anything inside container. > Mostly you should not be doing things as root inside the container, but if needed sudo is available. Sau! >          Anybody have ideal to this issue. > > Thanks > > Dehao > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From claire at openstack.org Wed Aug 22 17:07:45 2018 From: claire at openstack.org (Claire Massey) Date: Wed, 22 Aug 2018 12:07:45 -0500 Subject: [Starlingx-discuss] Bi-Weekly Mtgs - Marketing/Community Content Work Message-ID: Hello StarlingX team, Between now and the Berlin Summit our OSF team would like to dedicate time to workshop through some marketing and community content with you. Content includes high-level brand messaging, tagline, faq, project one page overview, community on-boarding slide deck, mission statement, etc. Anyone who's interested in helping out is welcomed to join, particularly the docs team! We’ve identified every other Wednesday at 8:00am PT as an ideal time for the calls. The first bi-weekly call will be held on September 5. Dial in: https://zoom.us/j/470924647 Agenda & Notes: https://etherpad.openstack.org/p/stx-2018-marketing-content Upcoming Calls Scheduled at 8:00am PT: • Sept 5 • Sept 19 • Oct 3 • Oct 17 • Oct 31 • Nov 7 We look forward to chatting with you! Thanks, Claire Claire Massey OpenStack Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Wed Aug 22 17:07:52 2018 From: claire at openstack.org (claire at openstack.org) Date: Wed, 22 Aug 2018 17:07:52 +0000 Subject: [Starlingx-discuss] Invitation: Bi-Weekly StarlingX Marketing/Community Content Work @ Every 2 weeks from 10am to 11am on Wednesday from Wed Sep 5 to Wed Nov 7 (CDT) (starlingx-discuss@lists.starlingx.io) Message-ID: <000000000000cd25ed0574092ea6@google.com> You have been invited to the following event. Title: Bi-Weekly StarlingX Marketing/Community Content Work Agenda & Notes: https://etherpad.openstack.org/p/stx-2018-marketing-content Join from PC, Mac, Linux, iOS or Android: https://zoom.us/j/470924647 Or iPhone one-tap : US: +16699006833,,470924647# or +16468769923,,470924647# Or Telephone: Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 Meeting ID: 470 924 647 International numbers available: https://zoom.us/u/RIBXdQZH When: Every 2 weeks from 10am to 11am on Wednesday from Wed Sep 5 to Wed Nov 7 Central Time - Chicago Where: https://zoom.us/j/470924647 Calendar: starlingx-discuss at lists.starlingx.io Who: * claire at openstack.org - organizer * Allison Price * glenn.seiler at windriver.com * Ildiko Vancsa * Jeff * scott.w.doenecke at intel.com * starlingx-discuss at lists.starlingx.io * Travis V Broughton Event details: https://www.google.com/calendar/event?action=VIEW&eid=XzZzcDM4ZHBvNjRzNDJiYTU2a3JrY2I5azZvc2owYjlwNzRxamliOXA4NHMzZWRobDZjcWo0Y3E1NzBsbGtqcWY5bDZrYWhhazk1NzRlamlsOWwxNGFraHE2Z3JqMGU5aTZncjM4ZG8gc3Rhcmxpbmd4LWRpc2N1c3NAbGlzdHMuc3Rhcmxpbmd4Lmlv&tok=MjAjY2xhaXJlQG9wZW5zdGFjay5vcmcyOTNkMDQ5ZjYxYzVjZjM4MjU3MDRjNDQ0ZDI3MzRkYjE0MWY2ZTA1&ctz=America%2FChicago&hl=en&es=0 Invitation from Google Calendar: https://www.google.com/calendar/ You are receiving this courtesy email at the account starlingx-discuss at lists.starlingx.io because you are an attendee of this event. To stop receiving future updates for this event, decline this event. Alternatively you can sign up for a Google account at https://www.google.com/calendar/ and control your notification settings for your entire calendar. Forwarding this invitation could allow any recipient to modify your RSVP response. Learn more at https://support.google.com/calendar/answer/37135#forwarding -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3521 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: invite.ics Type: application/ics Size: 3600 bytes Desc: not available URL: From bruce.e.jones at intel.com Wed Aug 22 17:23:33 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 22 Aug 2018 17:23:33 +0000 Subject: [Starlingx-discuss] Superuser article Message-ID: <9A85D2917C58154C960D95352B22818BAB57E0D4@fmsmsx115.amr.corp.intel.com> Here is a link to an article about our project: https://superuser.openstack.org/articles/starlingx-overview/ brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Wed Aug 22 18:35:17 2018 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 22 Aug 2018 18:35:17 +0000 Subject: [Starlingx-discuss] Github pull requests & link to StoryBoard? Message-ID: Dean: I assume that the github PRs are not tied into StoryBoard like gerrit is and that after you merge a PR then the prime who initiated the PR should go to their StoryBoard task and manually mark it as merged. Can you confirm? Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 22 18:39:31 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 22 Aug 2018 18:39:31 +0000 Subject: [Starlingx-discuss] Intel security process..... Message-ID: <9A85D2917C58154C960D95352B22818BAB57E13C@fmsmsx115.amr.corp.intel.com> As discussed at the team meeting today, we're going to provide more transparency on the Intel internal security process. While this is very much our burden to carry, the work impacts the project and needs to be discussed with the community. We have completed the first two phases of the process. The second phase included a deep security threat review which resulted in several new work items for the team to address. The major deliverables of the 3rd phase of the security process are: * Don't implement backdoors * Document APIs and Interfaces * Do not use banned C functions * Validate inputs * Compile with defenses enabled * Conduct manual code reviews * Complete static analysis * Create security and privacy validation plan * Complete security risk evaluation of open source components * Remove software debug access Some of these items are in progress (e.g. API documentation, static analysis) and have or will result in work items for the project. Some of them are entirely internal (e.g. security risk evaluation). Some will result in code changes, some will not. Some are binary (thou shall / shall not) and some are quite nuanced. The purpose behind it all is to help the project identify, manage and address security and privacy issues. We have been tracking this work internally. In the interests of openness and collaboration, I'm going to ask the team to also open Stories for this work and work more openly. I would like to suggest that the Security team needs to meet soon and have a discussion as to how to prioritize this work, and in particular, how to handle potential Critical/High vulnerabilities in the software that may be found in the course of this work. There is one more thing I'd like to clarify. This work does not need to be complete for the October release. I would like to see this part of the work complete by the end of the year, if possible. Of course, any Critical/High vulnerabilities or issues found need to be addressed urgently. I am happy to answer any questions on this topic. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Wed Aug 22 19:38:48 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 22 Aug 2018 14:38:48 -0500 Subject: [Starlingx-discuss] Github pull requests & link to StoryBoard? In-Reply-To: References: Message-ID: On Wed, Aug 22, 2018 at 1:35 PM, Miller, Frank wrote: > I assume that the github PRs are not tied into StoryBoard like gerrit is and > that after you merge a PR then the prime who initiated the PR should go to > their StoryBoard task and manually mark it as merged. Can you confirm? This is correct. dt -- Dean Troyer dtroyer at gmail.com From cindy.xie at intel.com Thu Aug 23 00:41:24 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 23 Aug 2018 00:41:24 +0000 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B322D31@SHSMSX104.ccr.corp.intel.com> All, As we moved all CentOS7.5 patches from mainline to feature branch: f/centos75, please do the review in feature branch for the new coming patches: https://review.openstack.org/#/q/status:open+project:openstack/stx-integ+branch:f/centos75 https://review.openstack.org/#/q/status:open+project:openstack/stx-tools+branch:f/centos75 https://review.openstack.org/#/q/status:open+project:openstack/stx-upstream+branch:f/centos75 for those already have CR+2, appreciate that Cores can provide W+1 so that they can be merged into feature branch. Thx. - cindy From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, August 14, 2018 1:13 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 All, Shuicheng has one story with many tasks open for SRPM (and its dependent RPM) upgrade to CentOS 7.5: https://storyboard.openstack.org/#!/story/2003389 Please provide your code review feedback actively (CR+1, CR+2). However, please hold to have W+1 at this moment. We will do a test build when all 47 sRPM upgrade has been done and mirror upgrade to 7.5 done. @Ada, please can you kindly support the validation for the build when we are ready? Thanks. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Thu Aug 23 00:59:11 2018 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 22 Aug 2018 17:59:11 -0700 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B322D31@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B322D31@SHSMSX104.ccr.corp.intel.com> Message-ID: <594c218c-2196-1386-db18-2934d440fe72@linux.intel.com> On 08/22/2018 05:41 PM, Xie, Cindy wrote: > All, > > As we moved all CentOS7.5 patches from mainline to feature branch: > f/centos75, please do the review in feature branch for the new coming > patches: > > https://review.openstack.org/#/q/status:open+project:openstack/stx-integ+branch:f/centos75 > > https://review.openstack.org/#/q/status:open+project:openstack/stx-tools+branch:f/centos75 > > https://review.openstack.org/#/q/status:open+project:openstack/stx-upstream+branch:f/centos75 > > for those already have CR+2, appreciate that Cores can provide W+1 so > that they can be merged into feature branch. > I have been working my way through that list. I just +W most of them. Please have your team mark the ones for Master as abandoned at this point also. Sau! > Thx. - cindy > > *From:* Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Tuesday, August 14, 2018 1:13 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 > > All, > > Shuicheng has one story with many tasks open for SRPM (and its dependent > RPM) upgrade to CentOS 7.5: > https://storyboard.openstack.org/#!/story/2003389 > > Please provide your code review feedback actively (CR+1, CR+2). However, > please hold to have W+1 at this moment. We will do a test build when all > 47 sRPM upgrade has been done and mirror upgrade to 7.5 done. > > @Ada, please can you kindly support the validation for the build when we > are ready? > > Thanks. - cindy > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From dehao.shang at intel.com Thu Aug 23 01:27:14 2018 From: dehao.shang at intel.com (Shang, Dehao) Date: Thu, 23 Aug 2018 01:27:14 +0000 Subject: [Starlingx-discuss] build Starlingx Image In-Reply-To: References: <71AECFE5078153419EB7B8DBE0644B2638634721@shsmsx102.ccr.corp.intel.com> Message-ID: <71AECFE5078153419EB7B8DBE0644B2638634847@shsmsx102.ccr.corp.intel.com> Hi, Arce Moreno Thank you for your response. Yes, I have added two account into docker group at host machine. One is root. The other is non-root account which have the same name with container's non-root account. Thanks Dehao -----Original Message----- From: Arce Moreno, Abraham Sent: Thursday, August 23, 2018 12:08 AM To: Shang, Dehao ; starlingx-discuss at lists.starlingx.io Subject: RE: build Starlingx Image Hi Dehao, > When I try to build starlingx image, i have some problems and > can't resolve it. > > At setup building Docker Container, step 7: > make build > At building process, executing the below command will fails. > RUN useradd -r -u $MYUID -g cgts -m $MYUNAME && ln -s > /home/$MYUNAME/.ssh /mySSH > Error is useradd: UID 0 is not unique. > > Anybody have ideal to this issue. Not from this side, is you user part of docker group? are you executing with sudo? From eddie.ramirez at intel.com Thu Aug 23 01:43:09 2018 From: eddie.ramirez at intel.com (Ramirez, Eddie) Date: Thu, 23 Aug 2018 01:43:09 +0000 Subject: [Starlingx-discuss] The stx-gui challenge Message-ID: Hi all, The last few months working on stx-horizon have given me a broad understanding of the customizations, new dependencies, additions and removals of LOCs that WRS have added on top of Horizon. For simplicity, let’s imagine stx-horizon as a superset of horizon that makes stx-specific functionality a reality. The downside of having and maintaining a custom “superset” of horizon is that catching up with upstream is expensive, painful and time-consuming that requires a having solid understanding of this project. In an attempt to alleviate the tedious, rebase work after every upstream release, a pluggable python package that would carry stx-specific functionality was the most sound option and architecture to adopt. Horizon supports “plugins”, the recommended way to extend and add to the functionality that already exists and, after removing more than 25,000 lines of code from stx-horizon, the stx-gui horizon was born. [cid:image001.png at 01D43A47.F8374DD0] *the stx-horizon superset of horizon is smaller now* What does the plugin do? Stx-gui isolates new panels (System Inventory, Fault Management, Server Groups, etc), API wrappers for clients (cgtsclient, cgcs_patch, sysinv, etc) and many other utility functions that are specific to the StarlingX project. This architecture helps us to: 1. Significantly improve the way we deal with technical debt: rebases will be easier as more LOCs are moved from stx-horizon to stx-gui, until we end up using the upstream version of horizon ☺ 2. Follow a community well-known architecture for extending horizon: “want to add custom functionality to horizon? Write a plugin”… that’s what you’d hear from the community Another side-benefits from doing this splitting 1. Detected additions that can make horizon upstream better: this process threw light on the upstreaming work that I do in parallel. 2. Detected dead code: some files are still holding portions of code that are never executed 3. Found and documented hard dependencies that must be defined somewhere (requirements.txt) 4. An x-ray for understanding what the development documentation would look like 5. Understanding the dimension of the customizations made by WRS: horizon acts as a proxy between operators/users and API endpoints through python-clients. Future modifications to python-clients and APIs will certainly affect the way Horizon talks to them. What does NOT the plugin do? * It does not remove all of the customizations made to the internals of Horizon, that includes: * Any modification done to existing panels (built in panels like Instances, Containers, etc) * The horizon Framework (the way tables are rendered or respond to user actions) * Fixes to JS files, etc. * Tests * Tabs added to Horizon Panels: Network->Routers->Port Forwarding is an example * It does not specify what dependencies it needs (but they are documented) How do I use it? The steps below are ONLY for development and assuming you’re developing outside of the VM running StarlingX. 1. Git clone stx-horizon and switch to branch post-stx-gui-cleanup, you can find the branch on my github profile or check the PR. 2. cd stx-horizon and create a virtualenv with py27 in it. 3. Install Horizon dependencies:pip install -r requirements.txt 4. Git clone stx-gui in a different directory outside of stx-horizon 5. cd stx-gui and using the same virtualenvironment for stx-horizon, run python setup.py install. This will create a packaged version of the plugin that is accessible by stx-horizon’s venv. 6. Install all stx-gui dependencies 7. Copy enable files inside of stx-gui/starlingx_dashboard/enabled/ to stx-horizon/openstack_dashboard/local/enabled/ 8. Finally, inside of stx-horizon, run the horizon dev server with “python manage.py runserver” The installation can be simplified and automated when building an ISO. I’m not familiar with the process but I can guide by explaining what steps must be followed and in what order. It basically involves moving to the right branches, cloning stx-gui, creating a package and copying files to one directory. Please read the extensive documentation I’ve put on this etherpad for more instructions for better format. What’s next? There will be a number of things to keep in mind from now on, answering some questions can help to understand if a change goes to stx-horizon or stx-gui. Also, the long work to make stx-gui work with Upstream Horizon is still pending, but fairly documented thanks to the splitting process. The developer experience is important too, how can we make working with stx-gui an inviting place for others? Py3 compatibility and many other things that are aligned to our priorities and commitments. Eddie -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 25451 bytes Desc: image001.png URL: From cindy.xie at intel.com Thu Aug 23 02:29:46 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 23 Aug 2018 02:29:46 +0000 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: <594c218c-2196-1386-db18-2934d440fe72@linux.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B322D31@SHSMSX104.ccr.corp.intel.com> <594c218c-2196-1386-db18-2934d440fe72@linux.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B3231A5@SHSMSX104.ccr.corp.intel.com> Shuicheng who submitted most of the patches in mainline is now on vacation till next week. So please bear those for a while before he came back. Thx. - cindy -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Thursday, August 23, 2018 8:59 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 On 08/22/2018 05:41 PM, Xie, Cindy wrote: > All, > > As we moved all CentOS7.5 patches from mainline to feature branch: > f/centos75, please do the review in feature branch for the new coming > patches: > > https://review.openstack.org/#/q/status:open+project:openstack/stx-int > eg+branch:f/centos75 > > https://review.openstack.org/#/q/status:open+project:openstack/stx-too > ls+branch:f/centos75 > > https://review.openstack.org/#/q/status:open+project:openstack/stx-ups > tream+branch:f/centos75 > > for those already have CR+2, appreciate that Cores can provide W+1 so > that they can be merged into feature branch. > I have been working my way through that list. I just +W most of them. Please have your team mark the ones for Master as abandoned at this point also. Sau! > Thx. - cindy > > *From:* Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Tuesday, August 14, 2018 1:13 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 > > All, > > Shuicheng has one story with many tasks open for SRPM (and its > dependent > RPM) upgrade to CentOS 7.5: > https://storyboard.openstack.org/#!/story/2003389 > > Please provide your code review feedback actively (CR+1, CR+2). > However, please hold to have W+1 at this moment. We will do a test > build when all > 47 sRPM upgrade has been done and mirror upgrade to 7.5 done. > > @Ada, please can you kindly support the validation for the build when > we are ready? > > Thanks. - cindy > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Thu Aug 23 04:33:43 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 22 Aug 2018 23:33:43 -0500 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B3231A5@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B322D31@SHSMSX104.ccr.corp.intel.com> <594c218c-2196-1386-db18-2934d440fe72@linux.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B3231A5@SHSMSX104.ccr.corp.intel.com> Message-ID: On Wed, Aug 22, 2018 at 9:29 PM, Xie, Cindy wrote: > Shuicheng who submitted most of the patches in mainline is now on vacation till next week. So please bear those for a while before he came back. Any core reviewer can abandon reviews in addition to the owner, I went through and got the ones where the f/centos75 version has merged. dt -- Dean Troyer dtroyer at gmail.com From cindy.xie at intel.com Thu Aug 23 04:51:20 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 23 Aug 2018 04:51:20 +0000 Subject: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F2B316343@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B322D31@SHSMSX104.ccr.corp.intel.com> <594c218c-2196-1386-db18-2934d440fe72@linux.intel.com> <2FD5DDB5A04D264C80D42CA35194914F2B3231A5@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B3236E4@SHSMSX104.ccr.corp.intel.com> Thanks - it works! - cindy -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Thursday, August 23, 2018 12:34 PM To: Xie, Cindy Cc: Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] patches for SRPM upgrade to CentOS 7.5 On Wed, Aug 22, 2018 at 9:29 PM, Xie, Cindy wrote: > Shuicheng who submitted most of the patches in mainline is now on vacation till next week. So please bear those for a while before he came back. Any core reviewer can abandon reviews in addition to the owner, I went through and got the ones where the f/centos75 version has merged. dt -- Dean Troyer dtroyer at gmail.com From aragorn at intel.com Thu Aug 23 18:43:58 2018 From: aragorn at intel.com (aragorn at intel.com) Date: 23 Aug 2018 11:43:58 -0700 Subject: [Starlingx-discuss] Automated Notification for: [mirror-downloader][#63] Missing packages - Results report Message-ID: [mirror-downloader][#63] Missing packages. Results report: - Missing: output/rpms_rpms_missing_K1.txt python2-pyngus-2.2.3-1.el7.noarch.rpm - Missing GPG key: ./output/stx-r1/CentOS/pike/Source/libvirt-python-3.5.0-1.fc24.src.rpm: RSA sha1 ((MD5) PGP) md5 NOT OK (MISSING KEYS: (MD5) PGP#596bea5d) From cindy.xie at intel.com Fri Aug 24 01:37:51 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 24 Aug 2018 01:37:51 +0000 Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack Distro meeting Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B325649@SHSMSX104.ccr.corp.intel.com> * Cadence and time slot: o Wednesday 9AM EDT (9PM China time) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3044 bytes Desc: not available URL: From ran1.an at intel.com Fri Aug 24 02:15:19 2018 From: ran1.an at intel.com (An, Ran1) Date: Fri, 24 Aug 2018 02:15:19 +0000 Subject: [Starlingx-discuss] add unit test to zuul Message-ID: <9BAB5B7CAF57C3459E4636391F1071CECABDD7@shsmsx102.ccr.corp.intel.com> Hi: I'm looking to add exist unit tests of controlconfig(under project stx-config) to zuul but 2 of them are fail now. The error note that can't import module "fm_core" which seams be imported by commit https://git.openstack.org/cgit/openstack/stx-fault/commit/fm-api/fm_api/fm_api.py?id=c8159ea6cbace0a23a7639fc41d5c73619e70704. Does anyone is on the way to fixing this? Thanks Ran An -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jason.McKenna at windriver.com Fri Aug 24 13:13:19 2018 From: Jason.McKenna at windriver.com (McKenna, Jason) Date: Fri, 24 Aug 2018 13:13:19 +0000 Subject: [Starlingx-discuss] Current build compile time error Message-ID: Hi StarlingX, As a heads up, I'm seeing the current StarlingX build fail. Problem seems to be introduced by a package being uprev'd yesterday in the .lst files, but the dependencies for that package not being uprev'd. I've got a fix on my system and am doing a test build now. I will post a gerrit review after I've validated. Did anyone else observe this in builds over the last ~ 10 hours? -Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From Al.Bailey at windriver.com Fri Aug 24 13:19:06 2018 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Fri, 24 Aug 2018 13:19:06 +0000 Subject: [Starlingx-discuss] add unit test to zuul In-Reply-To: <9BAB5B7CAF57C3459E4636391F1071CECABDD7@shsmsx102.ccr.corp.intel.com> References: <9BAB5B7CAF57C3459E4636391F1071CECABDD7@shsmsx102.ccr.corp.intel.com> Message-ID: You should be able to run tox -e py27 now since this commit was merged https://github.com/openstack/stx-config/commit/b9ce2626ff452d3c9ffc58cdbcac3c0d85b13716 Al From: An, Ran1 [mailto:ran1.an at intel.com] Sent: Thursday, August 23, 2018 10:15 PM To: Liu, Tao; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] add unit test to zuul Hi: I'm looking to add exist unit tests of controlconfig(under project stx-config) to zuul but 2 of them are fail now. The error note that can't import module "fm_core" which seams be imported by commit https://git.openstack.org/cgit/openstack/stx-fault/commit/fm-api/fm_api/fm_api.py?id=c8159ea6cbace0a23a7639fc41d5c73619e70704. Does anyone is on the way to fixing this? Thanks Ran An -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Fri Aug 24 14:25:17 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Fri, 24 Aug 2018 14:25:17 +0000 Subject: [Starlingx-discuss] Current build compile time error In-Reply-To: References: Message-ID: > As a heads up, I'm seeing the current StarlingX build fail. Problem seems to > be introduced by a package being uprev'd yesterday in the .lst files, but the > dependencies for that package not being uprev'd. I've got a fix on my system > and am doing a test build now. I will post a gerrit review after I've validated. > > Did anyone else observe this in builds over the last ~ 10 hours? I have launched a build environment one hour ago, I will get back to you during the day with my findings including your patch cherry picked, please let me know how I can help. From cindy.xie at intel.com Fri Aug 24 14:27:03 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 24 Aug 2018 14:27:03 +0000 Subject: [Starlingx-discuss] Current build compile time error In-Reply-To: References: Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B32754A@SHSMSX104.ccr.corp.intel.com> Jason, Is this patch? https://review.openstack.org/#/c/594795/ Thx. - cindy From: McKenna, Jason [mailto:Jason.McKenna at windriver.com] Sent: Friday, August 24, 2018 9:13 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Current build compile time error Hi StarlingX, As a heads up, I'm seeing the current StarlingX build fail. Problem seems to be introduced by a package being uprev'd yesterday in the .lst files, but the dependencies for that package not being uprev'd. I've got a fix on my system and am doing a test build now. I will post a gerrit review after I've validated. Did anyone else observe this in builds over the last ~ 10 hours? -Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Aug 24 14:29:03 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 24 Aug 2018 14:29:03 +0000 Subject: [Starlingx-discuss] Current build compile time error In-Reply-To: References: Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B327587@SHSMSX104.ccr.corp.intel.com> Abraham, Try to revert the patch https://review.openstack.org/#/c/594795/ and have a try? Thx. - cindy -----Original Message----- From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] Sent: Friday, August 24, 2018 10:25 PM To: McKenna, Jason ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Current build compile time error > As a heads up, I'm seeing the current StarlingX build fail. Problem > seems to be introduced by a package being uprev'd yesterday in the > .lst files, but the dependencies for that package not being uprev'd. > I've got a fix on my system and am doing a test build now. I will post a gerrit review after I've validated. > > Did anyone else observe this in builds over the last ~ 10 hours? I have launched a build environment one hour ago, I will get back to you during the day with my findings including your patch cherry picked, please let me know how I can help. _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From austin.sun at intel.com Fri Aug 24 14:30:56 2018 From: austin.sun at intel.com (Sun, Austin) Date: Fri, 24 Aug 2018 14:30:56 +0000 Subject: [Starlingx-discuss] add unit test to zuul In-Reply-To: References: <9BAB5B7CAF57C3459E4636391F1071CECABDD7@shsmsx102.ccr.corp.intel.com> Message-ID: Hi AI and All: Thanks. https://review.openstack.org/#/c/595352/ was merged, but An ran is working on controlconfig , which is different unit test with sysinv. Can it use similar fix for controlconfig ? BTW: I met "ImportError: No module named rpm" in Ubuntu platform for sysinv unit test. Do you know how to install python rpm package in Ubuntu ? I tried several ways , but none worked. Thanks. BR Austin Sun. From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Friday, August 24, 2018 9:19 PM To: An, Ran1 ; Liu, Tao ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] add unit test to zuul You should be able to run tox -e py27 now since this commit was merged https://github.com/openstack/stx-config/commit/b9ce2626ff452d3c9ffc58cdbcac3c0d85b13716 Al From: An, Ran1 [mailto:ran1.an at intel.com] Sent: Thursday, August 23, 2018 10:15 PM To: Liu, Tao; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] add unit test to zuul Hi: I'm looking to add exist unit tests of controlconfig(under project stx-config) to zuul but 2 of them are fail now. The error note that can't import module "fm_core" which seams be imported by commit https://git.openstack.org/cgit/openstack/stx-fault/commit/fm-api/fm_api/fm_api.py?id=c8159ea6cbace0a23a7639fc41d5c73619e70704. Does anyone is on the way to fixing this? Thanks Ran An -------------- next part -------------- An HTML attachment was scrubbed... URL: From Al.Bailey at windriver.com Fri Aug 24 14:59:11 2018 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Fri, 24 Aug 2018 14:59:11 +0000 Subject: [Starlingx-discuss] add unit test to zuul In-Reply-To: References: <9BAB5B7CAF57C3459E4636391F1071CECABDD7@shsmsx102.ccr.corp.intel.com> Message-ID: You can use a similar trick in controllerconfig for fm_core Add something like this to controllerconfig/tests/__init__.py try: import fm_core except: import mock import sys sys.modules['fm_core'] = mock.Mock() The tox.ini paths also need to be adjusted. The tests themselves are failing due to version discrepencies (18.03 vs 18.04). I have not looked into this. For the rpm package, in Centos the rpm-python package needed to be yum installed and site-packages set to true in tox.ini. I don't have Ubuntu experience but from what I have read, Ubuntu uses Debian format instead of RPM so the code would likely need to be written differently to work in an Ubuntu env. Al From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Friday, August 24, 2018 10:31 AM To: Bailey, Henry Albert (Al); An, Ran1; Liu, Tao; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] add unit test to zuul Hi AI and All: Thanks. https://review.openstack.org/#/c/595352/ was merged, but An ran is working on controlconfig , which is different unit test with sysinv. Can it use similar fix for controlconfig ? BTW: I met "ImportError: No module named rpm" in Ubuntu platform for sysinv unit test. Do you know how to install python rpm package in Ubuntu ? I tried several ways , but none worked. Thanks. BR Austin Sun. From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Friday, August 24, 2018 9:19 PM To: An, Ran1 ; Liu, Tao ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] add unit test to zuul You should be able to run tox -e py27 now since this commit was merged https://github.com/openstack/stx-config/commit/b9ce2626ff452d3c9ffc58cdbcac3c0d85b13716 Al From: An, Ran1 [mailto:ran1.an at intel.com] Sent: Thursday, August 23, 2018 10:15 PM To: Liu, Tao; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] add unit test to zuul Hi: I'm looking to add exist unit tests of controlconfig(under project stx-config) to zuul but 2 of them are fail now. The error note that can't import module "fm_core" which seams be imported by commit https://git.openstack.org/cgit/openstack/stx-fault/commit/fm-api/fm_api/fm_api.py?id=c8159ea6cbace0a23a7639fc41d5c73619e70704. Does anyone is on the way to fixing this? Thanks Ran An -------------- next part -------------- An HTML attachment was scrubbed... URL: From kennelson11 at gmail.com Fri Aug 24 18:20:44 2018 From: kennelson11 at gmail.com (Kendall Nelson) Date: Fri, 24 Aug 2018 11:20:44 -0700 Subject: [Starlingx-discuss] Berlin Community Contributor Awards Message-ID: Hello Everyone! As we approach the Summit (still a ways away thankfully), its time to kick off the Community Contributor Award nominations! For those of you that might already know what they are, here is the form[1]. For those of you that have never heard of the CCA, I'll briefly explain what they are :) We all know people in the community that do the dirty jobs, we all know people that will bend over backwards trying to help someone new, we all know someone that is a savant in some area of the code we could never hope to understand. These people rarely get the thanks they deserve and the Community Contributor Awards are a chance to make sure they know that they are appreciated for the amazing work they do and skills they have. So go forth and nominate these amazing community members[1]! Nominations will close on October 21st at 7:00 UTC and winners will be announced at the OpenStack Summit in Berlin[2]. -Kendall (diablo_rojo) [1] https://openstackfoundation.formstack.com/forms/berlin_stein_ccas [2]https://www.openstack.org/summit/berlin-2018/ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jason.McKenna at windriver.com Fri Aug 24 18:50:00 2018 From: Jason.McKenna at windriver.com (McKenna, Jason) Date: Fri, 24 Aug 2018 18:50:00 +0000 Subject: [Starlingx-discuss] Current build compile time error In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B327587@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B327587@SHSMSX104.ccr.corp.intel.com> Message-ID: StarlingX build now successful and fix merged https://review.openstack.org/#/c/596376/ > -----Original Message----- > From: Xie, Cindy > Sent: August 24, 2018 10:29 AM > To: Arce Moreno, Abraham ; McKenna, > Jason ; starlingx-discuss at lists.starlingx.io > Subject: RE: Current build compile time error > > Abraham, > Try to revert the patch https://review.openstack.org/#/c/594795/ and have a > try? > > Thx. - cindy > > -----Original Message----- > From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] > Sent: Friday, August 24, 2018 10:25 PM > To: McKenna, Jason ; starlingx- > discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Current build compile time error > > > As a heads up, I'm seeing the current StarlingX build fail. Problem > > seems to be introduced by a package being uprev'd yesterday in the > > .lst files, but the dependencies for that package not being uprev'd. > > I've got a fix on my system and am doing a test build now. I will post a gerrit > review after I've validated. > > > > Did anyone else observe this in builds over the last ~ 10 hours? > > I have launched a build environment one hour ago, I will get back to you > during the day with my findings including your patch cherry picked, please let > me know how I can help. > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Fri Aug 24 20:18:49 2018 From: Frank.Miller at windriver.com (Miller, Frank) Date: Fri, 24 Aug 2018 20:18:49 +0000 Subject: [Starlingx-discuss] Launchpad instance is live In-Reply-To: <62367bcc-f801-2ac2-22c3-a6e2d32857b9@linux.intel.com> References: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> <62367bcc-f801-2ac2-22c3-a6e2d32857b9@linux.intel.com> Message-ID: Bruce: Please add Dariush, Ken and I as administrators as well. Frank -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Tuesday, August 21, 2018 6:12 PM To: Jones, Bruce E; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Launchpad instance is live On 08/20/2018 03:45 PM, Jones, Bruce E wrote: > Our Launchpad database for filing bugs is now live.   You can find it > on https://bugs.launchpad.net/starlingx. > > As of right now myself, Dean and Ghada are the administrators.  We > should add a few more.  Volunteers? > I will step up to this also. Sau! > There are changes between what Launchpad does and how it works, > compared to Storyboard.    Launchpad has the concept of a Series, > which maps to what Jira calls a Release.  I have defined the > stx.2018.10 and > stx.2019.03 Series in Launchpad.  Launchpad bugs have an Importance > with the usual Critical, High, Medium, Low values, as well as Wishlist. > > Some things are the same.  Both have Tags.  I have added all of the > Tags defined on > https://wiki.openstack.org/wiki/StarlingX/Tags_and_Prefixes > as “Official Tags” in our Launchpad instance.  There are queries on > the main page for each one.  You can also query them directly, e.g. > https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.build will > show you the open bugs for the Build team. > > Integration with our gerrit is in progress. > > Please file any new bugs in Launchpad. > >         brucej > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From erich.cordoba.malibran at intel.com Sat Aug 25 20:02:38 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Sat, 25 Aug 2018 20:02:38 +0000 Subject: [Starlingx-discuss] Build error on service manager Message-ID: <36812E51-6053-4EA9-8B5D-C0BD07612C76@intel.com> Hi all, A recent change in the sm package caused a compilation error. I sent this review to fix the problem: https://review.openstack.org/#/c/596554/1 however I'm still not sure if that is the expected behavior. -Erich From austin.sun at intel.com Sun Aug 26 01:29:25 2018 From: austin.sun at intel.com (Sun, Austin) Date: Sun, 26 Aug 2018 01:29:25 +0000 Subject: [Starlingx-discuss] Launchpad instance is live In-Reply-To: References: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> <62367bcc-f801-2ac2-22c3-a6e2d32857b9@linux.intel.com> Message-ID: Hi Bruce and experts of Launchpad Bug: Can we add "Also affects project" for sub project Or we can only use tag to record it ? Thanks. BR Austin Sun. -----Original Message----- From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Saturday, August 25, 2018 4:19 AM To: Saul Wold ; Jones, Bruce E ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Launchpad instance is live Bruce: Please add Dariush, Ken and I as administrators as well. Frank -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Tuesday, August 21, 2018 6:12 PM To: Jones, Bruce E; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Launchpad instance is live On 08/20/2018 03:45 PM, Jones, Bruce E wrote: > Our Launchpad database for filing bugs is now live.   You can find it > on https://bugs.launchpad.net/starlingx. > > As of right now myself, Dean and Ghada are the administrators.  We > should add a few more.  Volunteers? > I will step up to this also. Sau! > There are changes between what Launchpad does and how it works, > compared to Storyboard.    Launchpad has the concept of a Series, > which maps to what Jira calls a Release.  I have defined the > stx.2018.10 and > stx.2019.03 Series in Launchpad.  Launchpad bugs have an Importance > with the usual Critical, High, Medium, Low values, as well as Wishlist. > > Some things are the same.  Both have Tags.  I have added all of the > Tags defined on > https://wiki.openstack.org/wiki/StarlingX/Tags_and_Prefixes > as “Official Tags” in our Launchpad instance.  There are queries on > the main page for each one.  You can also query them directly, e.g. > https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.build will > show you the open bugs for the Build team. > > Integration with our gerrit is in progress. > > Please file any new bugs in Launchpad. > >         brucej > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From austin.sun at intel.com Sun Aug 26 01:55:22 2018 From: austin.sun at intel.com (Sun, Austin) Date: Sun, 26 Aug 2018 01:55:22 +0000 Subject: [Starlingx-discuss] add unit test to zuul In-Reply-To: References: <9BAB5B7CAF57C3459E4636391F1071CECABDD7@shsmsx102.ccr.corp.intel.com> Message-ID: Hi AI: Thanks . I found a way to fix "rpm package" issue. https://review.openstack.org/#/c/596563/. Thanks. BR Austin Sun. From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Friday, August 24, 2018 10:59 PM To: Sun, Austin ; An, Ran1 ; Liu, Tao ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] add unit test to zuul You can use a similar trick in controllerconfig for fm_core Add something like this to controllerconfig/tests/__init__.py try: import fm_core except: import mock import sys sys.modules['fm_core'] = mock.Mock() The tox.ini paths also need to be adjusted. The tests themselves are failing due to version discrepencies (18.03 vs 18.04). I have not looked into this. For the rpm package, in Centos the rpm-python package needed to be yum installed and site-packages set to true in tox.ini. I don't have Ubuntu experience but from what I have read, Ubuntu uses Debian format instead of RPM so the code would likely need to be written differently to work in an Ubuntu env. Al From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Friday, August 24, 2018 10:31 AM To: Bailey, Henry Albert (Al); An, Ran1; Liu, Tao; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] add unit test to zuul Hi AI and All: Thanks. https://review.openstack.org/#/c/595352/ was merged, but An ran is working on controlconfig , which is different unit test with sysinv. Can it use similar fix for controlconfig ? BTW: I met "ImportError: No module named rpm" in Ubuntu platform for sysinv unit test. Do you know how to install python rpm package in Ubuntu ? I tried several ways , but none worked. Thanks. BR Austin Sun. From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Friday, August 24, 2018 9:19 PM To: An, Ran1 >; Liu, Tao >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] add unit test to zuul You should be able to run tox -e py27 now since this commit was merged https://github.com/openstack/stx-config/commit/b9ce2626ff452d3c9ffc58cdbcac3c0d85b13716 Al From: An, Ran1 [mailto:ran1.an at intel.com] Sent: Thursday, August 23, 2018 10:15 PM To: Liu, Tao; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] add unit test to zuul Hi: I'm looking to add exist unit tests of controlconfig(under project stx-config) to zuul but 2 of them are fail now. The error note that can't import module "fm_core" which seams be imported by commit https://git.openstack.org/cgit/openstack/stx-fault/commit/fm-api/fm_api/fm_api.py?id=c8159ea6cbace0a23a7639fc41d5c73619e70704. Does anyone is on the way to fixing this? Thanks Ran An -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Sun Aug 26 03:09:06 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Sat, 25 Aug 2018 22:09:06 -0500 Subject: [Starlingx-discuss] Launchpad instance is live In-Reply-To: References: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> <62367bcc-f801-2ac2-22c3-a6e2d32857b9@linux.intel.com> Message-ID: On Sat, Aug 25, 2018 at 8:29 PM, Sun, Austin wrote: > Can we add "Also affects project" for sub project Or we can only use tag to record it ? There 'project' refers to other Launchpad projects not StarlingX projects. Since we only have one Launchpad project we will likely not use that unless we have a bug that crosses to something else in Launchpad (possibly an OpenStack project that has not migrated to Storyboard?) dt -- Dean Troyer dtroyer at gmail.com From dehao.shang at intel.com Sun Aug 26 14:36:49 2018 From: dehao.shang at intel.com (Shang, Dehao) Date: Sun, 26 Aug 2018 14:36:49 +0000 Subject: [Starlingx-discuss] building starlingx image issue. Message-ID: <71AECFE5078153419EB7B8DBE0644B2638634CC2@shsmsx102.ccr.corp.intel.com> Hi, On 24/7/18, i cleaned up all pkgs and environments, clone the newest stx-tools and re-build image from the first step. When running "bash download_mirror.sh", in order to ensure mirror completeness, i check all missing.lst and fails.lst content at $HOME/stx-tools/centos-mirror-tools/output folder. When running "generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/", i also check miss.txt file, and manually download all missing packages. But now, "build-pkgs --serial" still show that some packages building fails. Build-std.log file show 259 pkgs building success, and still have some packages fails as following: Failed to build packages: vm-topology-1.0-1.tis.src.rpm sm-1.0.0-23.tis.src.rpm libvirt-python-3.5.0-1.tis.1.src.rpm libvirt-3.5.0-1.tis.2.src.rpm integrity-kmod-4.12-0.tis.5.src.rpm I check every std/results/user-starling-tis-r5-pike-std/xxxxx/root.log file. I think that one error is obvious, namely Error: No Package found for libvirt-devel >= 0.9.11 at std/results/user-starlingx-tis-r5-pike-std/libvirt-python-3.5.0-1.tis.1/root.log I try to manually download libvirt-devel-4.3.0-1.el7.x86_64.rpm, copy it into $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/x86_64/, create a link to /localdisk/designer/dehao/starlingx/cgcs-root/cgcs-centos-repo/Binary/, then update link using generate-cgcs-centos-repo.sh inside container. But, still fails. Whether still lack of some package? If yes, I should download which version, and which folder these missing package be placed to ? The following is some log information at /std/results/user-starlingx-tis-pike-std/ folder: ========================================== For vm-topology-1.0-1.tis DEBUG util.py:577: Executing command: ['/usr/bin/yum-builddep', '--installroot', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/', '--releasever', '7', '/localdisk/loadbuild/dehao/starlingx/std/mock/root//builddir/build/SRPMS/vm-topology-1.0-1.tis.src.rpm'] with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'LC_MESSAGES': 'C.UTF-8', 'WRS_GIT_BRANCH': 'HEAD', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PS1': ' \\s-\\v\\$ ', 'BUILD_BY': 'dehao'} and shell False DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for vm-topology-1.0-1.tis.src DEBUG util.py:491: --> Already installed : python-2.7.5-58.el7.tis.3.x86_64 DEBUG util.py:491: --> python-keyring-5.7.1-1.tis.2.noarch DEBUG util.py:491: --> python2-setuptools-38.5.1-1.el7.tis.0.noarch DEBUG util.py:491: Error: No Package found for libvirt DEBUG util.py:632: Child return code was: 1 DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 For sm-1.0.0-23.tis [dehao at 64407abf411b std]$ vm-topology-1.0-1.tis.src.rpm^C [dehao at 64407abf411b std]$ vim results/dehao-starlingx-tis-r5-pike-std/vm-topology-1.0-1.tis/root.log [dehao at 64407abf411b std]$ results/dehao-starlingx-tis-r5-pike-std/vm-topology-1.0-1.tis/root.log^C [dehao at 64407abf411b std]$ sm-1.0.0-23.tis.src.rpm^C [dehao at 64407abf411b std]$ vim results/dehao-starlingx-tis-r5-pike-std/sm-1.0.0-23.tis/root.log DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for sm-1.0.0-23.tis.src DEBUG util.py:491: --> Already installed : fm-common-dev-1.0-8.tis.x86_64 DEBUG util.py:491: --> Already installed : gcc-4.8.5-28.el7.x86_64 DEBUG util.py:491: --> Already installed : glib2-devel-2.50.3-3.el7.x86_64 DEBUG util.py:491: --> Already installed : glibc-2.17-222.el7.x86_64 DEBUG util.py:491: --> Already installed : json-c-0.11-4.el7_0.x86_64 DEBUG util.py:491: --> Already installed : json-c-devel-0.11-4.el7_0.x86_64 DEBUG util.py:491: --> Already installed : libuuid-devel-2.23.2-43.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : 1:openssl-devel-1.0.2k-12.el7.x86_64 DEBUG util.py:491: --> Already installed : sm-common-dev-1.0.0-19.tis.x86_64 DEBUG util.py:491: --> Already installed : sm-db-dev-1.0.0-25.tis.x86_64 DEBUG util.py:491: --> Already installed : sqlite-devel-3.7.17-8.el7.x86_64 DEBUG util.py:491: --> Already installed : systemd-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemd-devel-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/sm-1.0.0-23.tis/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True For libvirt-python-3.5.0-1.tis.1 DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for libvirt-python-3.5.0-1.tis.1.src DEBUG util.py:491: --> python-devel-2.7.5-58.el7.tis.3.x86_64 DEBUG util.py:491: --> python-lxml-3.2.1-4.el7.x86_64 DEBUG util.py:491: --> python-nose-1.3.7-7.el7.noarch DEBUG util.py:491: Error: No Package found for libvirt-devel >= 0.9.11 DEBUG util.py:632: Child return code was: 1 DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None For libvirt-3.5.0-1.tis.2 DEBUG util.py:491: --> Already installed : sanlock-devel-3.5.0-1.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : scrub-2.5.2-7.el7.x86_64 DEBUG util.py:491: --> Already installed : systemd-devel-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemd-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemtap-sdt-devel-3.1-3.el7.x86_64 DEBUG util.py:491: --> Already installed : util-linux-2.23.2-43.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : xhtml1-dtds-1.0-20020801.11.el7.noarch DEBUG util.py:491: --> Already installed : yajl-devel-2.0.4-4.el7.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/libvirt-3.5.0-1.tis.2/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '-l', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/dev/pts'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '-l', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/dev/shm'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None For integrity-kmod-4.12-0.tis.5.src.rpm DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for integrity-kmod-4.12-0.tis.5.src DEBUG util.py:491: --> Already installed : kernel-devel-3.10.0-862.6.3.el7.36.tis.x86_64 DEBUG util.py:491: --> Already installed : 1:openssl-1.0.2k-12.el7.x86_64 DEBUG util.py:491: --> Already installed : 4:perl-5.16.3-292.el7.x86_64 DEBUG util.py:491: --> Already installed : redhat-rpm-config-9.1.0-80.el7.centos.noarch DEBUG util.py:491: --> Already installed : tpm-kmod-symbols-4.12-0.tis.5.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/integrity-kmod-4.12-0.tis.5/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None Thanks Dehao -------------- next part -------------- An HTML attachment was scrubbed... URL: From ran1.an at intel.com Sun Aug 26 16:21:41 2018 From: ran1.an at intel.com (An, Ran1) Date: Sun, 26 Aug 2018 16:21:41 +0000 Subject: [Starlingx-discuss] add unit test to zuul In-Reply-To: References: <9BAB5B7CAF57C3459E4636391F1071CECABDD7@shsmsx102.ccr.corp.intel.com> Message-ID: <9BAB5B7CAF57C3459E4636391F1071CECAC1DB@shsmsx102.ccr.corp.intel.com> Hi Al Thanks. I'm trying to solve the version discrepancies problem. From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Friday, August 24, 2018 10:59 PM To: Sun, Austin ; An, Ran1 ; Liu, Tao ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] add unit test to zuul You can use a similar trick in controllerconfig for fm_core Add something like this to controllerconfig/tests/__init__.py try: import fm_core except: import mock import sys sys.modules['fm_core'] = mock.Mock() The tox.ini paths also need to be adjusted. The tests themselves are failing due to version discrepencies (18.03 vs 18.04). I have not looked into this. For the rpm package, in Centos the rpm-python package needed to be yum installed and site-packages set to true in tox.ini. I don't have Ubuntu experience but from what I have read, Ubuntu uses Debian format instead of RPM so the code would likely need to be written differently to work in an Ubuntu env. Al From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Friday, August 24, 2018 10:31 AM To: Bailey, Henry Albert (Al); An, Ran1; Liu, Tao; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] add unit test to zuul Hi AI and All: Thanks. https://review.openstack.org/#/c/595352/ was merged, but An ran is working on controlconfig , which is different unit test with sysinv. Can it use similar fix for controlconfig ? BTW: I met "ImportError: No module named rpm" in Ubuntu platform for sysinv unit test. Do you know how to install python rpm package in Ubuntu ? I tried several ways , but none worked. Thanks. BR Austin Sun. From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Friday, August 24, 2018 9:19 PM To: An, Ran1 >; Liu, Tao >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] add unit test to zuul You should be able to run tox -e py27 now since this commit was merged https://github.com/openstack/stx-config/commit/b9ce2626ff452d3c9ffc58cdbcac3c0d85b13716 Al From: An, Ran1 [mailto:ran1.an at intel.com] Sent: Thursday, August 23, 2018 10:15 PM To: Liu, Tao; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] add unit test to zuul Hi: I'm looking to add exist unit tests of controlconfig(under project stx-config) to zuul but 2 of them are fail now. The error note that can't import module "fm_core" which seams be imported by commit https://git.openstack.org/cgit/openstack/stx-fault/commit/fm-api/fm_api/fm_api.py?id=c8159ea6cbace0a23a7639fc41d5c73619e70704. Does anyone is on the way to fixing this? Thanks Ran An -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Mon Aug 27 01:06:26 2018 From: yong.hu at intel.com (Hu, Yong) Date: Mon, 27 Aug 2018 01:06:26 +0000 Subject: [Starlingx-discuss] building starlingx image issue. In-Reply-To: <71AECFE5078153419EB7B8DBE0644B2638634CC2@shsmsx102.ccr.corp.intel.com> References: <71AECFE5078153419EB7B8DBE0644B2638634CC2@shsmsx102.ccr.corp.intel.com> Message-ID: <0AFBA892-368F-454D-A493-702EC4EF9258@intel.com> Libvirt and its related packages should be built out of “~/cgcs-root/stx/git/libvirt/”. If the build of this libvirt src tree was successful, you should see such a file: ./std/rpmbuild/RPMS/libvirt-devel-3.5.0-1.tis.2.x86_64.rpm, which would be used as the dependency for other packages build later. Looking through your log, it seemed “libvirt” build failed. And most likely that’s caused by other packages (required by libvirt building) missing in your mirror. My advice is you go back to check what packages were missing (meaning failed to download) by studying log files in your **mirror download container**, Specially watch out those files named “xx_missing.txt”. From: "Shang, Dehao" dehao.shang at intel.com Date: Sunday, 26 August 2018 at 11:09 PM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] building starlingx image issue. Hi, On 24/7/18, i cleaned up all pkgs and environments, clone the newest stx-tools and re-build image from the first step. When running “bash download_mirror.sh”, in order to ensure mirror completeness, i check all missing.lst and fails.lst content at $HOME/stx-tools/centos-mirror-tools/output folder. When running “generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/”, i also check miss.txt file, and manually download all missing packages. But now, “build-pkgs --serial” still show that some packages building fails. Build-std.log file show 259 pkgs building success, and still have some packages fails as following: Failed to build packages: vm-topology-1.0-1.tis.src.rpm sm-1.0.0-23.tis.src.rpm libvirt-python-3.5.0-1.tis.1.src.rpm libvirt-3.5.0-1.tis.2.src.rpm integrity-kmod-4.12-0.tis.5.src.rpm I check every std/results/user-starling-tis-r5-pike-std/xxxxx/root.log file. I think that one error is obvious, namely Error: No Package found for libvirt-devel >= 0.9.11 at std/results/user-starlingx-tis-r5-pike-std/libvirt-python-3.5.0-1.tis.1/root.log I try to manually download libvirt-devel-4.3.0-1.el7.x86_64.rpm, copy it into $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/x86_64/, create a link to /localdisk/designer/dehao/starlingx/cgcs-root/cgcs-centos-repo/Binary/, then update link using generate-cgcs-centos-repo.sh inside container. But, still fails. Whether still lack of some package? If yes, I should download which version, and which folder these missing package be placed to ? The following is some log information at /std/results/user-starlingx-tis-pike-std/ folder: ========================================== For vm-topology-1.0-1.tis DEBUG util.py:577: Executing command: ['/usr/bin/yum-builddep', '--installroot', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/', '--releasever', '7', '/localdisk/loadbuild/dehao/starlingx/std/mock/root//builddir/build/SRPMS/vm-topology-1.0-1.tis.src.rpm'] with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'LC_MESSAGES': 'C.UTF-8', 'WRS_GIT_BRANCH': 'HEAD', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PS1': ' \\s-\\v\\$ ', 'BUILD_BY': 'dehao'} and shell False DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for vm-topology-1.0-1.tis.src DEBUG util.py:491: --> Already installed : python-2.7.5-58.el7.tis.3.x86_64 DEBUG util.py:491: --> python-keyring-5.7.1-1.tis.2.noarch DEBUG util.py:491: --> python2-setuptools-38.5.1-1.el7.tis.0.noarch DEBUG util.py:491: Error: No Package found for libvirt DEBUG util.py:632: Child return code was: 1 DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 For sm-1.0.0-23.tis [dehao at 64407abf411b std]$ vm-topology-1.0-1.tis.src.rpm^C [dehao at 64407abf411b std]$ vim results/dehao-starlingx-tis-r5-pike-std/vm-topology-1.0-1.tis/root.log [dehao at 64407abf411b std]$ results/dehao-starlingx-tis-r5-pike-std/vm-topology-1.0-1.tis/root.log^C [dehao at 64407abf411b std]$ sm-1.0.0-23.tis.src.rpm^C [dehao at 64407abf411b std]$ vim results/dehao-starlingx-tis-r5-pike-std/sm-1.0.0-23.tis/root.log DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for sm-1.0.0-23.tis.src DEBUG util.py:491: --> Already installed : fm-common-dev-1.0-8.tis.x86_64 DEBUG util.py:491: --> Already installed : gcc-4.8.5-28.el7.x86_64 DEBUG util.py:491: --> Already installed : glib2-devel-2.50.3-3.el7.x86_64 DEBUG util.py:491: --> Already installed : glibc-2.17-222.el7.x86_64 DEBUG util.py:491: --> Already installed : json-c-0.11-4.el7_0.x86_64 DEBUG util.py:491: --> Already installed : json-c-devel-0.11-4.el7_0.x86_64 DEBUG util.py:491: --> Already installed : libuuid-devel-2.23.2-43.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : 1:openssl-devel-1.0.2k-12.el7.x86_64 DEBUG util.py:491: --> Already installed : sm-common-dev-1.0.0-19.tis.x86_64 DEBUG util.py:491: --> Already installed : sm-db-dev-1.0.0-25.tis.x86_64 DEBUG util.py:491: --> Already installed : sqlite-devel-3.7.17-8.el7.x86_64 DEBUG util.py:491: --> Already installed : systemd-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemd-devel-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/sm-1.0.0-23.tis/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True For libvirt-python-3.5.0-1.tis.1 DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for libvirt-python-3.5.0-1.tis.1.src DEBUG util.py:491: --> python-devel-2.7.5-58.el7.tis.3.x86_64 DEBUG util.py:491: --> python-lxml-3.2.1-4.el7.x86_64 DEBUG util.py:491: --> python-nose-1.3.7-7.el7.noarch DEBUG util.py:491: Error: No Package found for libvirt-devel >= 0.9.11 DEBUG util.py:632: Child return code was: 1 DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None For libvirt-3.5.0-1.tis.2 DEBUG util.py:491: --> Already installed : sanlock-devel-3.5.0-1.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : scrub-2.5.2-7.el7.x86_64 DEBUG util.py:491: --> Already installed : systemd-devel-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemd-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemtap-sdt-devel-3.1-3.el7.x86_64 DEBUG util.py:491: --> Already installed : util-linux-2.23.2-43.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : xhtml1-dtds-1.0-20020801.11.el7.noarch DEBUG util.py:491: --> Already installed : yajl-devel-2.0.4-4.el7.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/libvirt-3.5.0-1.tis.2/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '-l', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/dev/pts'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '-l', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/dev/shm'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None For integrity-kmod-4.12-0.tis.5.src.rpm DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for integrity-kmod-4.12-0.tis.5.src DEBUG util.py:491: --> Already installed : kernel-devel-3.10.0-862.6.3.el7.36.tis.x86_64 DEBUG util.py:491: --> Already installed : 1:openssl-1.0.2k-12.el7.x86_64 DEBUG util.py:491: --> Already installed : 4:perl-5.16.3-292.el7.x86_64 DEBUG util.py:491: --> Already installed : redhat-rpm-config-9.1.0-80.el7.centos.noarch DEBUG util.py:491: --> Already installed : tpm-kmod-symbols-4.12-0.tis.5.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/integrity-kmod-4.12-0.tis.5/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None Thanks Dehao -------------- next part -------------- An HTML attachment was scrubbed... URL: From mingyuan.qi at intel.com Mon Aug 27 01:28:26 2018 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Mon, 27 Aug 2018 01:28:26 +0000 Subject: [Starlingx-discuss] building starlingx image issue. In-Reply-To: <71AECFE5078153419EB7B8DBE0644B2638634CC2@shsmsx102.ccr.corp.intel.com> References: <71AECFE5078153419EB7B8DBE0644B2638634CC2@shsmsx102.ccr.corp.intel.com> Message-ID: Dehao, Libvirt is the one pkg you have to build out, not downloaded. >From root.log of libvirt, it seems the mock env is good. Please check build.log in the same folder to find more info. Other failure except sm may be related to the libvirt failure, which means you can focus on libvirt now. Thanks, Mingyuan From: Shang, Dehao [mailto:dehao.shang at intel.com] Sent: Sunday, August 26, 2018 22:37 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] building starlingx image issue. Hi, On 24/7/18, i cleaned up all pkgs and environments, clone the newest stx-tools and re-build image from the first step. When running "bash download_mirror.sh", in order to ensure mirror completeness, i check all missing.lst and fails.lst content at $HOME/stx-tools/centos-mirror-tools/output folder. When running "generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/", i also check miss.txt file, and manually download all missing packages. But now, "build-pkgs --serial" still show that some packages building fails. Build-std.log file show 259 pkgs building success, and still have some packages fails as following: Failed to build packages: vm-topology-1.0-1.tis.src.rpm sm-1.0.0-23.tis.src.rpm libvirt-python-3.5.0-1.tis.1.src.rpm libvirt-3.5.0-1.tis.2.src.rpm integrity-kmod-4.12-0.tis.5.src.rpm I check every std/results/user-starling-tis-r5-pike-std/xxxxx/root.log file. I think that one error is obvious, namely Error: No Package found for libvirt-devel >= 0.9.11 at std/results/user-starlingx-tis-r5-pike-std/libvirt-python-3.5.0-1.tis.1/root.log I try to manually download libvirt-devel-4.3.0-1.el7.x86_64.rpm, copy it into $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/x86_64/, create a link to /localdisk/designer/dehao/starlingx/cgcs-root/cgcs-centos-repo/Binary/, then update link using generate-cgcs-centos-repo.sh inside container. But, still fails. Whether still lack of some package? If yes, I should download which version, and which folder these missing package be placed to ? The following is some log information at /std/results/user-starlingx-tis-pike-std/ folder: ========================================== For vm-topology-1.0-1.tis DEBUG util.py:577: Executing command: ['/usr/bin/yum-builddep', '--installroot', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/', '--releasever', '7', '/localdisk/loadbuild/dehao/starlingx/std/mock/root//builddir/build/SRPMS/vm-topology-1.0-1.tis.src.rpm'] with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'LC_MESSAGES': 'C.UTF-8', 'WRS_GIT_BRANCH': 'HEAD', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PS1': ' \\s-\\v\\$ ', 'BUILD_BY': 'dehao'} and shell False DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for vm-topology-1.0-1.tis.src DEBUG util.py:491: --> Already installed : python-2.7.5-58.el7.tis.3.x86_64 DEBUG util.py:491: --> python-keyring-5.7.1-1.tis.2.noarch DEBUG util.py:491: --> python2-setuptools-38.5.1-1.el7.tis.0.noarch DEBUG util.py:491: Error: No Package found for libvirt DEBUG util.py:632: Child return code was: 1 DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 For sm-1.0.0-23.tis [dehao at 64407abf411b std]$ vm-topology-1.0-1.tis.src.rpm^C [dehao at 64407abf411b std]$ vim results/dehao-starlingx-tis-r5-pike-std/vm-topology-1.0-1.tis/root.log [dehao at 64407abf411b std]$ results/dehao-starlingx-tis-r5-pike-std/vm-topology-1.0-1.tis/root.log^C [dehao at 64407abf411b std]$ sm-1.0.0-23.tis.src.rpm^C [dehao at 64407abf411b std]$ vim results/dehao-starlingx-tis-r5-pike-std/sm-1.0.0-23.tis/root.log DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for sm-1.0.0-23.tis.src DEBUG util.py:491: --> Already installed : fm-common-dev-1.0-8.tis.x86_64 DEBUG util.py:491: --> Already installed : gcc-4.8.5-28.el7.x86_64 DEBUG util.py:491: --> Already installed : glib2-devel-2.50.3-3.el7.x86_64 DEBUG util.py:491: --> Already installed : glibc-2.17-222.el7.x86_64 DEBUG util.py:491: --> Already installed : json-c-0.11-4.el7_0.x86_64 DEBUG util.py:491: --> Already installed : json-c-devel-0.11-4.el7_0.x86_64 DEBUG util.py:491: --> Already installed : libuuid-devel-2.23.2-43.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : 1:openssl-devel-1.0.2k-12.el7.x86_64 DEBUG util.py:491: --> Already installed : sm-common-dev-1.0.0-19.tis.x86_64 DEBUG util.py:491: --> Already installed : sm-db-dev-1.0.0-25.tis.x86_64 DEBUG util.py:491: --> Already installed : sqlite-devel-3.7.17-8.el7.x86_64 DEBUG util.py:491: --> Already installed : systemd-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemd-devel-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/sm-1.0.0-23.tis/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True For libvirt-python-3.5.0-1.tis.1 DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for libvirt-python-3.5.0-1.tis.1.src DEBUG util.py:491: --> python-devel-2.7.5-58.el7.tis.3.x86_64 DEBUG util.py:491: --> python-lxml-3.2.1-4.el7.x86_64 DEBUG util.py:491: --> python-nose-1.3.7-7.el7.noarch DEBUG util.py:491: Error: No Package found for libvirt-devel >= 0.9.11 DEBUG util.py:632: Child return code was: 1 DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None For libvirt-3.5.0-1.tis.2 DEBUG util.py:491: --> Already installed : sanlock-devel-3.5.0-1.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : scrub-2.5.2-7.el7.x86_64 DEBUG util.py:491: --> Already installed : systemd-devel-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemd-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemtap-sdt-devel-3.1-3.el7.x86_64 DEBUG util.py:491: --> Already installed : util-linux-2.23.2-43.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : xhtml1-dtds-1.0-20020801.11.el7.noarch DEBUG util.py:491: --> Already installed : yajl-devel-2.0.4-4.el7.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/libvirt-3.5.0-1.tis.2/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '-l', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/dev/pts'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '-l', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/dev/shm'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None For integrity-kmod-4.12-0.tis.5.src.rpm DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for integrity-kmod-4.12-0.tis.5.src DEBUG util.py:491: --> Already installed : kernel-devel-3.10.0-862.6.3.el7.36.tis.x86_64 DEBUG util.py:491: --> Already installed : 1:openssl-1.0.2k-12.el7.x86_64 DEBUG util.py:491: --> Already installed : 4:perl-5.16.3-292.el7.x86_64 DEBUG util.py:491: --> Already installed : redhat-rpm-config-9.1.0-80.el7.centos.noarch DEBUG util.py:491: --> Already installed : tpm-kmod-symbols-4.12-0.tis.5.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/integrity-kmod-4.12-0.tis.5/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None Thanks Dehao -------------- next part -------------- An HTML attachment was scrubbed... URL: From dehao.shang at intel.com Mon Aug 27 01:43:26 2018 From: dehao.shang at intel.com (Shang, Dehao) Date: Mon, 27 Aug 2018 01:43:26 +0000 Subject: [Starlingx-discuss] building starlingx image issue. In-Reply-To: <0AFBA892-368F-454D-A493-702EC4EF9258@intel.com> References: <71AECFE5078153419EB7B8DBE0644B2638634CC2@shsmsx102.ccr.corp.intel.com> <0AFBA892-368F-454D-A493-702EC4EF9258@intel.com> Message-ID: <71AECFE5078153419EB7B8DBE0644B2638634D43@shsmsx102.ccr.corp.intel.com> Hi, yong: Thanks for your explanations. About your suggestions, i have some confuses. if the xx_missing.txt file and xx_fails.txt file at $HOME/stx-tools/centos-mirror-tools/output don’t have any contents, Can I think that all packages have been downloaded? Or how can I ensure that my mirror is completeness? Additional, when run “generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/” , it created missing.txt file which include some missing package. But I have manually downloaded these missing packages. Thanks Dehao From: Hu, Yong Sent: Monday, August 27, 2018 9:06 AM To: Shang, Dehao ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] building starlingx image issue. Libvirt and its related packages should be built out of “~/cgcs-root/stx/git/libvirt/”. If the build of this libvirt src tree was successful, you should see such a file: ./std/rpmbuild/RPMS/libvirt-devel-3.5.0-1.tis.2.x86_64.rpm, which would be used as the dependency for other packages build later. Looking through your log, it seemed “libvirt” build failed. And most likely that’s caused by other packages (required by libvirt building) missing in your mirror. My advice is you go back to check what packages were missing (meaning failed to download) by studying log files in your **mirror download container**, Specially watch out those files named “xx_missing.txt”. From: "Shang, Dehao" dehao.shang at intel.com Date: Sunday, 26 August 2018 at 11:09 PM To: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] building starlingx image issue. Hi, On 24/7/18, i cleaned up all pkgs and environments, clone the newest stx-tools and re-build image from the first step. When running “bash download_mirror.sh”, in order to ensure mirror completeness, i check all missing.lst and fails.lst content at $HOME/stx-tools/centos-mirror-tools/output folder. When running “generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/”, i also check miss.txt file, and manually download all missing packages. But now, “build-pkgs --serial” still show that some packages building fails. Build-std.log file show 259 pkgs building success, and still have some packages fails as following: Failed to build packages: vm-topology-1.0-1.tis.src.rpm sm-1.0.0-23.tis.src.rpm libvirt-python-3.5.0-1.tis.1.src.rpm libvirt-3.5.0-1.tis.2.src.rpm integrity-kmod-4.12-0.tis.5.src.rpm I check every std/results/user-starling-tis-r5-pike-std/xxxxx/root.log file. I think that one error is obvious, namely Error: No Package found for libvirt-devel >= 0.9.11 at std/results/user-starlingx-tis-r5-pike-std/libvirt-python-3.5.0-1.tis.1/root.log I try to manually download libvirt-devel-4.3.0-1.el7.x86_64.rpm, copy it into $HOME/starlingx/mirror/CentOS/stx-r1/CentOS/pike/Binary/x86_64/, create a link to /localdisk/designer/dehao/starlingx/cgcs-root/cgcs-centos-repo/Binary/, then update link using generate-cgcs-centos-repo.sh inside container. But, still fails. Whether still lack of some package? If yes, I should download which version, and which folder these missing package be placed to ? The following is some log information at /std/results/user-starlingx-tis-pike-std/ folder: ========================================== For vm-topology-1.0-1.tis DEBUG util.py:577: Executing command: ['/usr/bin/yum-builddep', '--installroot', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/', '--releasever', '7', '/localdisk/loadbuild/dehao/starlingx/std/mock/root//builddir/build/SRPMS/vm-topology-1.0-1.tis.src.rpm'] with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'LC_MESSAGES': 'C.UTF-8', 'WRS_GIT_BRANCH': 'HEAD', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PS1': ' \\s-\\v\\$ ', 'BUILD_BY': 'dehao'} and shell False DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for vm-topology-1.0-1.tis.src DEBUG util.py:491: --> Already installed : python-2.7.5-58.el7.tis.3.x86_64 DEBUG util.py:491: --> python-keyring-5.7.1-1.tis.2.noarch DEBUG util.py:491: --> python2-setuptools-38.5.1-1.el7.tis.0.noarch DEBUG util.py:491: Error: No Package found for libvirt DEBUG util.py:632: Child return code was: 1 DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 For sm-1.0.0-23.tis [dehao at 64407abf411b std]$ vm-topology-1.0-1.tis.src.rpm^C [dehao at 64407abf411b std]$ vim results/dehao-starlingx-tis-r5-pike-std/vm-topology-1.0-1.tis/root.log [dehao at 64407abf411b std]$ results/dehao-starlingx-tis-r5-pike-std/vm-topology-1.0-1.tis/root.log^C [dehao at 64407abf411b std]$ sm-1.0.0-23.tis.src.rpm^C [dehao at 64407abf411b std]$ vim results/dehao-starlingx-tis-r5-pike-std/sm-1.0.0-23.tis/root.log DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for sm-1.0.0-23.tis.src DEBUG util.py:491: --> Already installed : fm-common-dev-1.0-8.tis.x86_64 DEBUG util.py:491: --> Already installed : gcc-4.8.5-28.el7.x86_64 DEBUG util.py:491: --> Already installed : glib2-devel-2.50.3-3.el7.x86_64 DEBUG util.py:491: --> Already installed : glibc-2.17-222.el7.x86_64 DEBUG util.py:491: --> Already installed : json-c-0.11-4.el7_0.x86_64 DEBUG util.py:491: --> Already installed : json-c-devel-0.11-4.el7_0.x86_64 DEBUG util.py:491: --> Already installed : libuuid-devel-2.23.2-43.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : 1:openssl-devel-1.0.2k-12.el7.x86_64 DEBUG util.py:491: --> Already installed : sm-common-dev-1.0.0-19.tis.x86_64 DEBUG util.py:491: --> Already installed : sm-db-dev-1.0.0-25.tis.x86_64 DEBUG util.py:491: --> Already installed : sqlite-devel-3.7.17-8.el7.x86_64 DEBUG util.py:491: --> Already installed : systemd-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemd-devel-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/sm-1.0.0-23.tis/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True For libvirt-python-3.5.0-1.tis.1 DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for libvirt-python-3.5.0-1.tis.1.src DEBUG util.py:491: --> python-devel-2.7.5-58.el7.tis.3.x86_64 DEBUG util.py:491: --> python-lxml-3.2.1-4.el7.x86_64 DEBUG util.py:491: --> python-nose-1.3.7-7.el7.noarch DEBUG util.py:491: Error: No Package found for libvirt-devel >= 0.9.11 DEBUG util.py:632: Child return code was: 1 DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None For libvirt-3.5.0-1.tis.2 DEBUG util.py:491: --> Already installed : sanlock-devel-3.5.0-1.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : scrub-2.5.2-7.el7.x86_64 DEBUG util.py:491: --> Already installed : systemd-devel-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemd-219-42.el7_4.1.tis.10.x86_64 DEBUG util.py:491: --> Already installed : systemtap-sdt-devel-3.1-3.el7.x86_64 DEBUG util.py:491: --> Already installed : util-linux-2.23.2-43.el7.tis.3.x86_64 DEBUG util.py:491: --> Already installed : xhtml1-dtds-1.0-20020801.11.el7.noarch DEBUG util.py:491: --> Already installed : yajl-devel-2.0.4-4.el7.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/libvirt-3.5.0-1.tis.2/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:226: kill orphans DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/var/cache/yum/'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '-l', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/dev/pts'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: ['/bin/umount', '-n', '-l', '/localdisk/loadbuild/dehao/starlingx/std/mock/root/dev/shm'] with env {'LANG': 'en_US.UTF-8', 'TERM': 'vt100', 'SHELL': '/bin/sh', 'HOSTNAME': 'mock', 'HOME': '/builddir', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin'} and shell False DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None For integrity-kmod-4.12-0.tis.5.src.rpm DEBUG util.py:489: BUILDSTDERR: Failed to set locale, defaulting to C DEBUG util.py:491: http://127.0.0.1:8088/localdisk/loadbuild/dehao/starlingx/installer/rpmbuild/RPMS/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found DEBUG util.py:491: Trying other mirror. DEBUG util.py:491: To address this issue please refer to the below knowledge base article DEBUG util.py:491: https://access.redhat.com/articles/1320623 DEBUG util.py:491: If above article doesn't help to resolve this issue please create a bug on https://bugs.centos.org/ DEBUG util.py:491: Getting requirements for integrity-kmod-4.12-0.tis.5.src DEBUG util.py:491: --> Already installed : kernel-devel-3.10.0-862.6.3.el7.36.tis.x86_64 DEBUG util.py:491: --> Already installed : 1:openssl-1.0.2k-12.el7.x86_64 DEBUG util.py:491: --> Already installed : 4:perl-5.16.3-292.el7.x86_64 DEBUG util.py:491: --> Already installed : redhat-rpm-config-9.1.0-80.el7.centos.noarch DEBUG util.py:491: --> Already installed : tpm-kmod-symbols-4.12-0.tis.5.x86_64 DEBUG util.py:491: No uninstalled build requires DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None DEBUG util.py:577: Executing command: rpm -qa --root '/localdisk/loadbuild/dehao/starlingx/std/mock/root' --qf '%{nevra} %{buildtime} %{size} %{pkgid} installed\n' > /localdisk/loadbuild/dehao/starlingx/std/results/dehao-starlingx-tis-r5-pike-std/integrity-kmod-4.12-0.tis.5/installed_pkgs.log with env {'LANG': 'en_US.UTF-8', 'BUILD_DATE': '2018-08-26 04:52:14 +0000', 'TERM': 'vt100', 'SHELL': '/bin/bash', 'BUILD_BY': 'dehao', 'HOSTNAME': 'mock', 'REPO': '/localdisk/designer/dehao/starlingx/cgcs-root', 'CGCS_GIT_BRANCH': 'HEAD', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'HOME': '/builddir', 'PS1': ' \\s-\\v\\$ ', 'WRS_GIT_BRANCH': 'HEAD'} and shell True DEBUG util.py:632: Child return code was: 0 DEBUG util.py:651: child environment: None Thanks Dehao -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Mon Aug 27 02:41:58 2018 From: yong.hu at intel.com (Hu, Yong) Date: Mon, 27 Aug 2018 02:41:58 +0000 Subject: [Starlingx-discuss] building starlingx image issue. In-Reply-To: <71AECFE5078153419EB7B8DBE0644B2638634D43@shsmsx102.ccr.corp.intel.com> References: <71AECFE5078153419EB7B8DBE0644B2638634CC2@shsmsx102.ccr.corp.intel.com> <0AFBA892-368F-454D-A493-702EC4EF9258@intel.com> <71AECFE5078153419EB7B8DBE0644B2638634D43@shsmsx102.ccr.corp.intel.com> Message-ID: <6F418F04-1723-42D8-96BF-54C6CBECDE89@intel.com> >>> Or how can I ensure that my mirror is completeness? To confirm all RPMs/SRPMs named in these 3 lists indeed exist in your mirror: rpms_from_3rd_parties.lst rpms_from_centos_repo.lst rpms_from_centos_3rd_parties.lst From: "Shang, Dehao" Date: Monday, 27 August 2018 at 9:43 AM To: "Hu, Yong" , "starlingx-discuss at lists.starlingx.io" Subject: RE: [Starlingx-discuss] building starlingx image issue. Or how can I ensure that my mirror is completeness? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Mon Aug 27 13:48:13 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 27 Aug 2018 13:48:13 +0000 Subject: [Starlingx-discuss] Launchpad instance is live In-Reply-To: References: <9A85D2917C58154C960D95352B22818BAB57D369@fmsmsx115.amr.corp.intel.com> <62367bcc-f801-2ac2-22c3-a6e2d32857b9@linux.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA446501@ALA-MBD.corp.ad.wrs.com> Austin, We are using tags for the subprojects and the target releases as was done in Storyboard. Bruce already created the needed tags. Ghada -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Saturday, August 25, 2018 11:09 PM To: Sun, Austin Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Launchpad instance is live On Sat, Aug 25, 2018 at 8:29 PM, Sun, Austin wrote: > Can we add "Also affects project" for sub project Or we can only use tag to record it ? There 'project' refers to other Launchpad projects not StarlingX projects. Since we only have one Launchpad project we will likely not use that unless we have a bug that crosses to something else in Launchpad (possibly an OpenStack project that has not migrated to Storyboard?) dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From claire at openstack.org Mon Aug 27 17:30:59 2018 From: claire at openstack.org (Claire Massey) Date: Mon, 27 Aug 2018 12:30:59 -0500 Subject: [Starlingx-discuss] Invitation: Aug 30 Community Webinar: OSF Project Updates Message-ID: <6921AF8B-2699-4D37-AE6A-06ECE517FB3A@openstack.org> Hi StarlingX team, This Thursday, August 30, we’ll host an OSF community webinar at 8:00am PT (1500 UTC). This meeting is open to all community members so please join us! We’ll cover: —What’s new in OpenStack’s latest release - Rocky. Featured updates will include Ironic, TripleO and FFU —High level updates from new OSF projects: Airship, Kata Containers, StarlingX, and Zuul —What you can expect at the Berlin Summit in November This meeting will be run over Zoom and will be recorded, so if you can’t make the time, don’t panic! Details below. Thanks, Claire When: Aug 30, 2018 8:00 AM Pacific Time (US and Canada) Topic: OSF Community Meeting Please click the link below to join the webinar: https://zoom.us/j/551803657 Or iPhone one-tap : US: +16699006833,,551803657# or +16468769923,,551803657# Or Telephone: Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 Webinar ID: 551 803 657 International numbers available: https://zoom.us/u/bh2jVweqf -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Mon Aug 27 18:26:15 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 27 Aug 2018 18:26:15 +0000 Subject: [Starlingx-discuss] Networking Team -- Call for members Message-ID: <151EE31B9FCCA54397A757BC674650F0BA44A7E7@ALA-MBD.corp.ad.wrs.com> Hello all, I am volunteering to be the Project Lead for the starlingx networking team. There are a few members identified from Wind River already. Any others wish to join? I will be looking for a timeslot for a regular meeting once I gather the list of participants. Sub-team wiki: https://wiki.openstack.org/wiki/StarlingX/Networking Work Backlog: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.networking&project_group_id=86 I'm particularly interested in having someone work on the following for the stx.2018.10 release: https://storyboard.openstack.org/#!/story/2002946 - OVS LLDP inventory https://storyboard.openstack.org/#!/story/2002944 - OVS-DPDK firewall driver Thanks, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Mon Aug 27 18:32:25 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 27 Aug 2018 14:32:25 -0400 Subject: [Starlingx-discuss] Early Bird Pricing Ends Tomorrow - OpenStack Summit Berlin Message-ID: <16F5B044-B2D0-463E-BA67-D0750B2584C5@gmail.com> Hi everyone, Friendly reminder that the early bird ticket price deadline for the OpenStack Summit Berlin is tomorrow, August 28 at 11:59pm PT (August 29, 6:59 UTC). In Berlin, there will be sessions and workshops around open infrastructure use cases, including CI/CD, container infrastructure, edge computing, HPC / AI / GPUs, private & hybrid cloud, public cloud and NFV. In case you haven’t seen it, the agenda is now live and includes sessions and workshops from Ocado Technology, Metronom, Oerlikon, and more! In addition, make sure to check out the Edge Hackathon hosted by Open Telekom Cloud the weekend prior to the Summit. Register NOW before the price increases to $999 USD! Interested in sponsoring the Summit? Find out more here or email summit at openstack.org . Thanks and Best Regards, Ildikó -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Mon Aug 27 20:17:55 2018 From: scott.little at windriver.com (Scott Little) Date: Mon, 27 Aug 2018 16:17:55 -0400 Subject: [Starlingx-discuss] [Build] Build Avoidance Message-ID: <06fb400c-4e93-c765-6385-9e579e2b3ae7@windriver.com> Build Avoidance, a build tool improvement.* * *Purpose:*    Greatly reduce build times after a repo sync for designers working within a regional office.  For a new workspace, build-pkgs typically requires 3+ hours, build avoidance typically reduces this step to ~20min. *Limitations:*    Little or no benefit for designers who refresh a pre-existing workspace at least daily. (download_mirror.sh, repo sync, generate-cgcs-centos-repo.sh, build-pkgs, build-iso).    Not likely to be useful to solo designers, or teleworkers that wish to compile on there home computers.  WAN speeds are generally to slow. *Method (in brief):* 1) Reference builds    - A server performs a regular (daily?), automated builds using existing methods.  Call these the reference builds.    - The builds are timestamped, and preserved for some time. (weeks?)    - A build CONTEXT is captured, consisting of the SHA of each and every git that contributed to the build.    - For each package built, a file shall capture he md5sums of all the source code inputs to the build of that package.    - All these build products are accessible locally (e.g. a regional office) via rsync (other protocols can be added later) 2) Designers    - build-pkgs --build-avoidance   ... will request a build avoidance build.    - Additional arguments, and/or environment variables, and/or a config file unique to the regional office, are used to specify a URL to the reference builds.    - build-pkgs will:      = From newest to oldest, scan the CONTEXTs of the various reference builds.  Select the first (most recent) context which satisfies:  For every git, the SHA specified in the CONTEXT is present.      = The selected context might be slightly out of date, but not by more than a day (assuming daily reference builds).      = If the context has not been previously downloaded, then download it now.  Meaning download select portions of the reference build workspace into the designer's workspace.  This includes all the SRPMS, RPMS, MD5SUMS, and misc supporting files.  (~10 min over office LAN)      = The designer may have additional commits not present in the reference build, or uncommitted changes.   Affected packages will identified by the differing md5sum's, and the package is re-built. (5+ min, depending on what packages have changed) *Requirement:*    - The regional office implements an automated build that pulls the latest StarlingX software and builds it on a regular basis. e.g. a daily.  Perhaps implemented by Jenkins, cron, or similar tools.    - Each build is saved to a unique directory, and preserved for a time that is reflective of how long a designer might be expected to work on a private branch without syncronizing with the master branch.   e.g. 2 weeks.    - The MY_WORKSPACE directory for the build shall have a common root directory, and a leaf directory that is a time stamp of format YYYY-MM-DD_hh-mm-ss.  e.g. MY_WORKSPACE=/localdisk/loadbuild/jenkins/StarlingX/2018-07-19_11-30-21    - Designers can access all build products over the internal network of the regional office.  The current prototype employs rsync. Other protocols that can efficiently share/copy/transfer large directories of content can be added as needed. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hayde.martinez.landa at intel.com Mon Aug 27 21:34:02 2018 From: hayde.martinez.landa at intel.com (Martinez Landa, Hayde) Date: Mon, 27 Aug 2018 21:34:02 +0000 Subject: [Starlingx-discuss] Current build compile time error Message-ID: <96DDD44B-FBE9-41AB-AAAD-CE2E6CAFBB54@intel.com> Hi All, I downloaded the mirror and cloned the repo on Friday afternoon (around 5pm PDT), and I only had one issue while building packages Which was solved with this gerrit review https://review.openstack.org/#/c/596554/1 but other than that I had a successful build. Best Hayde On 8/24/18, 9:31 AM, "Xie, Cindy" wrote: Abraham, Try to revert the patch https://review.openstack.org/#/c/594795/ and have a try? Thx. - cindy -----Original Message----- From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] Sent: Friday, August 24, 2018 10:25 PM To: McKenna, Jason ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Current build compile time error > As a heads up, I'm seeing the current StarlingX build fail. Problem > seems to be introduced by a package being uprev'd yesterday in the > .lst files, but the dependencies for that package not being uprev'd. > I've got a fix on my system and am doing a test build now. I will post a gerrit review after I've validated. > > Did anyone else observe this in builds over the last ~ 10 hours? I have launched a build environment one hour ago, I will get back to you during the day with my findings including your patch cherry picked, please let me know how I can help. From Ghada.Khalil at windriver.com Mon Aug 27 22:13:49 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 27 Aug 2018 22:13:49 +0000 Subject: [Starlingx-discuss] Launchpad Queries created in wiki Message-ID: <151EE31B9FCCA54397A757BC674650F0BA44A97F@ALA-MBD.corp.ad.wrs.com> Hello all, With Launchpad now live (thanks Bruce!), I have created Launchpad queries on each sub-teams wiki page. You can also easily click on the tags in Launchpad to see the open bugs. Just a reminder that only bugs should be opened in Launchpad. If you are actively working on a feature/initiative that is already tracked in story board (and is still in development), open a task under the story instead of a Launchpad bug. I have also posted the Bug Template that was previously discussed on the list on the wiki (linked on the main page). Please use the template moving forward: https://wiki.openstack.org/wiki/StarlingX/BugTemplate A final reminder to link gerrit reviews to the corresponding story/task or Launchpad bug. Gerrit will automatically update the task/bug when the code merges. I've put this on the wiki as well: https://wiki.openstack.org/wiki/StarlingX/CodeSubmissionGuidelines Linking to StoryBoard Stories: Specify the story and task ID in the commit message as follows: Story: $story_id Task: $task_id Linking to Launchpad Bugs: Specify the Bug ID in the commit message as follows: Closes-Bug: $bug_id Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Mon Aug 27 22:32:05 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 27 Aug 2018 22:32:05 +0000 Subject: [Starlingx-discuss] stx-planning etherpad -- obsolete? Message-ID: <151EE31B9FCCA54397A757BC674650F0BA44A9A4@ALA-MBD.corp.ad.wrs.com> Hello all, Is anyone still using this etherpad for planning/tracking activities? https://etherpad.openstack.org/p/stx-planning It appears to have Chinese content and, therefore, is not readable by everybody. Note: It is still linked on the main StarlingX wiki page under the "Overall project planning pages" section. Thanks, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Aug 27 22:37:16 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 27 Aug 2018 22:37:16 +0000 Subject: [Starlingx-discuss] stx-planning etherpad -- obsolete? In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA44A9A4@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA44A9A4@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB581128@fmsmsx115.amr.corp.intel.com> That file is obsolete and can be removed from the wiki. I have seen several instances of etherpads being translated into various languages. I don't know why or how it is happening but. brucej From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Monday, August 27, 2018 3:32 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx-planning etherpad -- obsolete? Hello all, Is anyone still using this etherpad for planning/tracking activities? https://etherpad.openstack.org/p/stx-planning It appears to have Chinese content and, therefore, is not readable by everybody. Note: It is still linked on the main StarlingX wiki page under the "Overall project planning pages" section. Thanks, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Aug 27 22:41:09 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 27 Aug 2018 22:41:09 +0000 Subject: [Starlingx-discuss] stx-planning etherpad -- obsolete? In-Reply-To: <9A85D2917C58154C960D95352B22818BAB581128@fmsmsx115.amr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA44A9A4@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BAB581128@fmsmsx115.amr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB581141@fmsmsx115.amr.corp.intel.com> I just removed the link to that etherpad. If I knew how to delete an Etherpad I would. From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Monday, August 27, 2018 3:37 PM To: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] stx-planning etherpad -- obsolete? That file is obsolete and can be removed from the wiki. I have seen several instances of etherpads being translated into various languages. I don't know why or how it is happening but. brucej From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Monday, August 27, 2018 3:32 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx-planning etherpad -- obsolete? Hello all, Is anyone still using this etherpad for planning/tracking activities? https://etherpad.openstack.org/p/stx-planning It appears to have Chinese content and, therefore, is not readable by everybody. Note: It is still linked on the main StarlingX wiki page under the "Overall project planning pages" section. Thanks, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Mon Aug 27 22:59:18 2018 From: yong.hu at intel.com (Hu, Yong) Date: Mon, 27 Aug 2018 22:59:18 +0000 Subject: [Starlingx-discuss] stx-planning etherpad -- obsolete? Message-ID: The translation could happen if someone set the browser with enabling auto-translate (to his/her default system language, here that’s Chinese) and opened this page. Looking into these Chinese words, the translation was doing okay, though many words were obviously translated stiffly ☺ From: "Jones, Bruce E" Date: Tuesday, 28 August 2018 at 6:38 AM To: "Khalil, Ghada" , "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] stx-planning etherpad -- obsolete? That file is obsolete and can be removed from the wiki. I have seen several instances of etherpads being translated into various languages. I don’t know why or how it is happening but. brucej From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Monday, August 27, 2018 3:32 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx-planning etherpad -- obsolete? Hello all, Is anyone still using this etherpad for planning/tracking activities? https://etherpad.openstack.org/p/stx-planning It appears to have Chinese content and, therefore, is not readable by everybody. Note: It is still linked on the main StarlingX wiki page under the “Overall project planning pages” section. Thanks, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From forrest.zhao at intel.com Tue Aug 28 07:09:46 2018 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Tue, 28 Aug 2018 07:09:46 +0000 Subject: [Starlingx-discuss] Networking Team -- Call for members In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA44A7E7@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA44A7E7@ALA-MBD.corp.ad.wrs.com> Message-ID: <6345119E91D5C843A93D64F498ACFA13699B83D6@SHSMSX101.ccr.corp.intel.com> Hi Ghada, Good to see this mail that networking team is created :) My team will join in the networking team together with your team. We're working on up-streaming the patches in stx-neutron to Openstack Neutron. So far we analyzed 140+ patches in stx-neutron and categorized them into 20+ sub-functions. We've finished analyzing the patches for VLAN trunk/SRIOV/QoS/security group/firewall/custom settings/bindings and sent out the analysis report to starlingx-discuss mailing list. Also one analysis report for L2POP was sent out last week; the others are on the way. There're 3 major features to be up-streamed to Neutron in stx-neutron. We finished the initial version of BP/spec for these features proposal. They're attached. We hope WR can co-own them with us. We plan to upload their BP/spec before Neutron PTG in Denver this September. Then we can review them with PTL and core developers at PTG. - Provider network management - System host management - Fault management We also extracted 2 standalone sub-features after analyzing the above 3 major features; their BP/specs were uploaded to Neutron upstream for review. - Segment Range Management of Self-service Networks: https://review.openstack.org/579411 - Rescheduling of DHCP Servers and Routers: https://review.openstack.org/595978 As a joint effort, I recommend myself to be co-lead of the networking team with focus on upstreaming effort; recommend Ruijing, Guo to be tech lead together with Matt. Thanks, Forrest From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, August 28, 2018 2:26 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Networking Team -- Call for members Hello all, I am volunteering to be the Project Lead for the starlingx networking team. There are a few members identified from Wind River already. Any others wish to join? I will be looking for a timeslot for a regular meeting once I gather the list of participants. Sub-team wiki: https://wiki.openstack.org/wiki/StarlingX/Networking Work Backlog: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.networking&project_group_id=86 I'm particularly interested in having someone work on the following for the stx.2018.10 release: https://storyboard.openstack.org/#!/story/2002946 - OVS LLDP inventory https://storyboard.openstack.org/#!/story/2002944 - OVS-DPDK firewall driver Thanks, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Provider Network Management.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 9548 bytes Desc: Provider Network Management.docx URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: System Host Management.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 8669 bytes Desc: System Host Management.docx URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Fault Management.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 7130 bytes Desc: Fault Management.docx URL: From Ghada.Khalil at windriver.com Tue Aug 28 13:45:07 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Tue, 28 Aug 2018 13:45:07 +0000 Subject: [Starlingx-discuss] Networking Team -- Call for members In-Reply-To: <6345119E91D5C843A93D64F498ACFA13699B83D6@SHSMSX101.ccr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0BA44A7E7@ALA-MBD.corp.ad.wrs.com> <6345119E91D5C843A93D64F498ACFA13699B83D6@SHSMSX101.ccr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA44AB04@ALA-MBD.corp.ad.wrs.com> Sounds good. Thanks Forrest. Looking forward to our first meeting on Thursday. Ghada From: Zhao, Forrest [mailto:forrest.zhao at intel.com] Sent: Tuesday, August 28, 2018 3:10 AM To: Khalil, Ghada; Guo, Ruijing Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Networking Team -- Call for members Hi Ghada, Good to see this mail that networking team is created :) My team will join in the networking team together with your team. We're working on up-streaming the patches in stx-neutron to Openstack Neutron. So far we analyzed 140+ patches in stx-neutron and categorized them into 20+ sub-functions. We've finished analyzing the patches for VLAN trunk/SRIOV/QoS/security group/firewall/custom settings/bindings and sent out the analysis report to starlingx-discuss mailing list. Also one analysis report for L2POP was sent out last week; the others are on the way. There're 3 major features to be up-streamed to Neutron in stx-neutron. We finished the initial version of BP/spec for these features proposal. They're attached. We hope WR can co-own them with us. We plan to upload their BP/spec before Neutron PTG in Denver this September. Then we can review them with PTL and core developers at PTG. - Provider network management - System host management - Fault management We also extracted 2 standalone sub-features after analyzing the above 3 major features; their BP/specs were uploaded to Neutron upstream for review. - Segment Range Management of Self-service Networks: https://review.openstack.org/579411 - Rescheduling of DHCP Servers and Routers: https://review.openstack.org/595978 As a joint effort, I recommend myself to be co-lead of the networking team with focus on upstreaming effort; recommend Ruijing, Guo to be tech lead together with Matt. Thanks, Forrest From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, August 28, 2018 2:26 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Networking Team -- Call for members Hello all, I am volunteering to be the Project Lead for the starlingx networking team. There are a few members identified from Wind River already. Any others wish to join? I will be looking for a timeslot for a regular meeting once I gather the list of participants. Sub-team wiki: https://wiki.openstack.org/wiki/StarlingX/Networking Work Backlog: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.networking&project_group_id=86 I'm particularly interested in having someone work on the following for the stx.2018.10 release: https://storyboard.openstack.org/#!/story/2002946 - OVS LLDP inventory https://storyboard.openstack.org/#!/story/2002944 - OVS-DPDK firewall driver Thanks, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Tue Aug 28 15:32:08 2018 From: scott.little at windriver.com (Scott Little) Date: Tue, 28 Aug 2018 11:32:08 -0400 Subject: [Starlingx-discuss] breaking .lst files apart among repos In-Reply-To: References: Message-ID: I'd suggest an audit that looks for the same package being include in multiple repo's lst files, but with a different version. Based on https://storyboard.openstack.org/#!/story/2003462,  we would also want to compare RPM vs SRPM versions to ensure our compiled SRPMS are never masked. Scott. On 18-08-22 10:47 AM, McKenna, Jason wrote: > > Hi folks (especially Erich, Scott, Marcela, or anyone else working on > the download tools), > > We’d like to support per-repo .lst files used by download_mirrors.sh.  > This would decouple the stx-tools repo from all the code repos. > Changes (for example, to uprev a package) would only affect a single > repo rather than both the code repo and stx-tools. This would also > allow individual entities to integrate their own repos into products, > or to pick-and-choose versions of the different repos to build into > products.  My initial work would just be to support per-repo .lst > files as an option, with actually breaking the existing .lst files up > among the repos as a later task.  My initial thoughts would be that > .src.rpms would be in per-repo lst files.  Build-time and run time > requirement binary rpms are tougher to nail down, as they might be > used by different repos, and we don’t want one repo asking for one > version, and a second repo asking for a different version, etc. > > Before I start working too deep on this, does anyone have any > thoughts, or work in progress along these lines? > > -Jason > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From erich.cordoba.malibran at intel.com Tue Aug 28 16:00:57 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Tue, 28 Aug 2018 16:00:57 +0000 Subject: [Starlingx-discuss] Decoupling spec and makefiles Message-ID: <0532ac50f4c904aadd8260d7282926ba08f0cd81.camel@intel.com> Hi all, After building all the C/C++ projects by hand (isolated from the build system), I noticed that there's a direct dependency between the spec files and the makefiles. For example, on fm-common some files are installed in the install_non_bb target[1] (some yocto legacy name?), but two additional files are installed by the spec file[2]. There is another example, like this one on guest-comm[3] where all the files are installed by the spec file and not by the makefile. This is understandable as building these components outside the build system is not the common use case. However, decoupling the spec files and the makefile will be one of the first steps on the multi-OS support path. Fixing this issues is a low priority task but we would like to start planning to fix it, what do you think it would be the best way to track this issues in launchpad? Can we create an issue by repository or by project? Thank you in advance. -Erich [1] - http://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/Makefile#n28 [2] - http://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm-common.spec#n70 [3] - http://git.starlingx.io/cgit/stx-nfv/tree/guest-comm/centos/host-guest-comm.spec From Brent.Rowsell at windriver.com Tue Aug 28 16:25:01 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Tue, 28 Aug 2018 16:25:01 +0000 Subject: [Starlingx-discuss] Decoupling spec and makefiles In-Reply-To: <0532ac50f4c904aadd8260d7282926ba08f0cd81.camel@intel.com> References: <0532ac50f4c904aadd8260d7282926ba08f0cd81.camel@intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB24F66C@ALA-MBD.corp.ad.wrs.com> Multi-OS support will be discussed at the PTG in Denver including its priority. There are many items that will need to be worked through. Until there is an end to end strategy, [lease refrain from creating any stories. Thanks, Brent -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Tuesday, August 28, 2018 12:01 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Decoupling spec and makefiles Hi all, After building all the C/C++ projects by hand (isolated from the build system), I noticed that there's a direct dependency between the spec files and the makefiles. For example, on fm-common some files are installed in the install_non_bb target[1] (some yocto legacy name?), but two additional files are installed by the spec file[2]. There is another example, like this one on guest-comm[3] where all the files are installed by the spec file and not by the makefile. This is understandable as building these components outside the build system is not the common use case. However, decoupling the spec files and the makefile will be one of the first steps on the multi-OS support path. Fixing this issues is a low priority task but we would like to start planning to fix it, what do you think it would be the best way to track this issues in launchpad? Can we create an issue by repository or by project? Thank you in advance. -Erich [1] - http://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/Makefile#n28 [2] - http://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm-common.spec#n70 [3] - http://git.starlingx.io/cgit/stx-nfv/tree/guest-comm/centos/host-guest-comm.spec _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Tue Aug 28 16:25:39 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 28 Aug 2018 11:25:39 -0500 Subject: [Starlingx-discuss] Decoupling spec and makefiles In-Reply-To: <0532ac50f4c904aadd8260d7282926ba08f0cd81.camel@intel.com> References: <0532ac50f4c904aadd8260d7282926ba08f0cd81.camel@intel.com> Message-ID: On Tue, Aug 28, 2018 at 11:00 AM, Cordoba Malibran, Erich wrote: > For example, on fm-common some files are installed in the > install_non_bb target[1] (some yocto legacy name?), but two additional > files are installed by the spec file[2]. There is another example, like > this one on guest-comm[3] where all the files are installed by the spec > file and not by the makefile. I've already posted https://review.openstack.org/595852 for a similar issue that lets fm-common build on Ubuntu (my DevStack platform) and used by https://review.openstack.org/#/c/595865/. > Fixing this issues is a low priority task but we would like to start > planning to fix it, what do you think it would be the best way to track > this issues in launchpad? Can we create an issue by repository or by > project? If these are bugs, it's one per some unit in LP, if this is a feature it's a task under the feature in SB. I don't really have a preference... dt -- Dean Troyer dtroyer at gmail.com From bruce.e.jones at intel.com Tue Aug 28 17:10:39 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 28 Aug 2018 17:10:39 +0000 Subject: [Starlingx-discuss] Weekly call agenda open Message-ID: <9A85D2917C58154C960D95352B22818BAB58155D@fmsmsx115.amr.corp.intel.com> We have our weekly project call tomorrow. Please feel free to add topics to the agenda: https://etherpad.openstack.org/p/stx-status brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Aug 28 17:23:37 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 28 Aug 2018 17:23:37 +0000 Subject: [Starlingx-discuss] Decoupling spec and makefiles In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB24F66C@ALA-MBD.corp.ad.wrs.com> References: <0532ac50f4c904aadd8260d7282926ba08f0cd81.camel@intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB24F66C@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB5815A2@fmsmsx115.amr.corp.intel.com> Brent wrote: > Multi-OS support will be discussed at the PTG in Denver including its priority. > There are many items that will need to be worked through. > Until there is an end to end strategy, [lease refrain from creating any stories. I agree that we need an end to end strategy. But I don’t agree that we defer work toward the goal until the PTG. Please understand that this topic is becoming more and more urgent for us every day. I suggest that the Build team discuss the changes Erich is suggesting on their merits and prioritize accordingly. We don't need to wait for the overall strategy if these changes are otherwise correct or needed. brucej -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Tuesday, August 28, 2018 9:25 AM To: Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Decoupling spec and makefiles Multi-OS support will be discussed at the PTG in Denver including its priority. There are many items that will need to be worked through. Until there is an end to end strategy, [lease refrain from creating any stories. Thanks, Brent -----Original Message----- From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Tuesday, August 28, 2018 12:01 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Decoupling spec and makefiles Hi all, After building all the C/C++ projects by hand (isolated from the build system), I noticed that there's a direct dependency between the spec files and the makefiles. For example, on fm-common some files are installed in the install_non_bb target[1] (some yocto legacy name?), but two additional files are installed by the spec file[2]. There is another example, like this one on guest-comm[3] where all the files are installed by the spec file and not by the makefile. This is understandable as building these components outside the build system is not the common use case. However, decoupling the spec files and the makefile will be one of the first steps on the multi-OS support path. Fixing this issues is a low priority task but we would like to start planning to fix it, what do you think it would be the best way to track this issues in launchpad? Can we create an issue by repository or by project? Thank you in advance. -Erich [1] - http://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/Makefile#n28 [2] - http://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm-common.spec#n70 [3] - http://git.starlingx.io/cgit/stx-nfv/tree/guest-comm/centos/host-guest-comm.spec _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Tue Aug 28 19:40:04 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 28 Aug 2018 21:40:04 +0200 Subject: [Starlingx-discuss] Preparation for the Forum in Berlin starts now Message-ID: <1E35B105-1CBA-4E0A-AADE-63DA0E1FF301@gmail.com> Hi, In preparation to the OpenStack Summit in Berlin I would like to draw your attention to the Forum (https://wiki.openstack.org/wiki/Forum) that consists of a set of 40-minute long collaborative sessions in parallel to the conference. The Forum provides a feedback loop between developers and users and gives an opportunity to do strategic planning for upcoming releases of OSF projects. The participants use etherpad to plan the agenda and capture the notes from the sessions which have a format of open discussion. We are currently in the brainstorming phase for Forum topics. The purpose of this period is to identify areas for feedback or new requirements and collect ideas for cross-project discussions and planning that are also relevant for users. Please note that the more deep dive technical discussions are supposed to happen at the PTG, the purpose of the Forum is to involve users and operators as well in the conversations. As we have a limited number of slots it is very important to reduce duplications during the brainstorming phase. To collect ideas I created an etherpad: https://etherpad.openstack.org/p/StarlingXBerlinForumBrainstorming __The Forum session proposal period starts on September 12 and closes on September 26.__ Please let me know if you have any questions. Thanks and Best Regards, Ildikó From ildiko.vancsa at gmail.com Tue Aug 28 19:51:59 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 28 Aug 2018 21:51:59 +0200 Subject: [Starlingx-discuss] Berlin Hackathon: Hacking the Edge Message-ID: Hi, I would like to draw your attention to an Edge Hackathon that will be held the weekend prior to the OpenStack Summit in Berlin. You can read further details of the event in this mail: http://lists.openstack.org/pipermail/edge-computing/2018-August/000410.html The organizers are looking for participants as well as ideas for the two-day long event: Registration: https://openstack-hackathon-berlin.eventbrite.com/ Collected ideas/workpad: https://etherpad.openstack.org/p/hacking_the_edge_hackathon_berlin If you have any questions please respond to this thread or reach out to Frank (in cc) directly. Thanks and Best Regards, Ildikó From claire at openstack.org Tue Aug 28 20:22:02 2018 From: claire at openstack.org (Claire Massey) Date: Tue, 28 Aug 2018 15:22:02 -0500 Subject: [Starlingx-discuss] Call for One Volunteer - Forum Selection Committee - Berlin Summit Message-ID: <81BF8B47-F4FE-40B6-80A1-DB99D7837BE4@openstack.org> Hi everyone, This is a follow-up to the email that Ildiko just sent about preparing topics for the Forum at the Berlin Summit in November (http://lists.starlingx.io/pipermail/starlingx-discuss/2018-August/000847.html ). We’re also looking for *one* active member of the StarlingX team to volunteer to serve on the Selection Committee for the Forum. If you're interested in volunteering for this role, please reply to this thread by August 30. The Forum Selection Committee is comprised of representatives from the OSF staff, OpenStack TC and UC and one person each from new projects StarlingX, Airship, Zuul and Kata Containers. The Selection Committee will collaboratively select the topics and program the Forum agenda between September 26 - October 24. Once the StarlingX representative is confirmed, our team, led by Jimmy McArthur, will work closely with them on next steps. Full details about the Forum, including the planning timelines can be found at https://wiki.openstack.org/wiki/Forum . Thank you! Claire -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Tue Aug 28 20:34:40 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Tue, 28 Aug 2018 15:34:40 -0500 Subject: [Starlingx-discuss] Call for One Volunteer - Forum Selection Committee - Berlin Summit In-Reply-To: <81BF8B47-F4FE-40B6-80A1-DB99D7837BE4@openstack.org> References: <81BF8B47-F4FE-40B6-80A1-DB99D7837BE4@openstack.org> Message-ID: On Tue, Aug 28, 2018 at 3:22 PM, Claire Massey wrote: Claire, I've done this in the past and would be happy to serve again in this capacity if you want someone with a little experience. If you're looking for new people to involve that is good too. dt -- Dean Troyer dtroyer at gmail.com From hazzim.i.anaya.casas at intel.com Tue Aug 28 20:34:48 2018 From: hazzim.i.anaya.casas at intel.com (Anaya casas, Hazzim I) Date: Tue, 28 Aug 2018 20:34:48 +0000 Subject: [Starlingx-discuss] Call for One Volunteer - Forum Selection Committee - Berlin Summit In-Reply-To: <81BF8B47-F4FE-40B6-80A1-DB99D7837BE4@openstack.org> References: <81BF8B47-F4FE-40B6-80A1-DB99D7837BE4@openstack.org> Message-ID: <92BF6070-0A26-4D1F-9565-2C24C451FED2@intel.com> Hi Claire, I can help like a representative for the committee. Regards. On Aug 28, 2018, at 15:22, Claire Massey > wrote: Hi everyone, This is a follow-up to the email that Ildiko just sent about preparing topics for the Forum at the Berlin Summit in November (http://lists.starlingx.io/pipermail/starlingx-discuss/2018-August/000847.html). We’re also looking for *one* active member of the StarlingX team to volunteer to serve on the Selection Committee for the Forum. If you're interested in volunteering for this role, please reply to this thread by August 30. The Forum Selection Committee is comprised of representatives from the OSF staff, OpenStack TC and UC and one person each from new projects StarlingX, Airship, Zuul and Kata Containers. The Selection Committee will collaboratively select the topics and program the Forum agenda between September 26 - October 24. Once the StarlingX representative is confirmed, our team, led by Jimmy McArthur, will work closely with them on next steps. Full details about the Forum, including the planning timelines can be found at https://wiki.openstack.org/wiki/Forum. Thank you! Claire _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From jimmy at openstack.org Tue Aug 28 20:52:49 2018 From: jimmy at openstack.org (Jimmy McArthur) Date: Tue, 28 Aug 2018 15:52:49 -0500 Subject: [Starlingx-discuss] Call for One Volunteer - Forum Selection Committee - Berlin Summit In-Reply-To: <92BF6070-0A26-4D1F-9565-2C24C451FED2@intel.com> References: <81BF8B47-F4FE-40B6-80A1-DB99D7837BE4@openstack.org> <92BF6070-0A26-4D1F-9565-2C24C451FED2@intel.com> Message-ID: <5B85B621.7050802@openstack.org> Hi Dean / Anaya - thanks for jumping on this. We appreciate your enthusiasm :) I'll let you decide amongst yourselves as we only require one volunteer. Let me know who it is and I'll follow up with additional details. Cheers, Jimmy McArthur Anaya casas, Hazzim I wrote: > Hi Claire, I can help like a representative for the committee. > > Regards. > > > >> On Aug 28, 2018, at 15:22, Claire Massey > > wrote: >> >> Hi everyone, >> >> This is a follow-up to the email that Ildiko just sent about >> preparing topics for the Forum at the Berlin Summit in November >> (http://lists.starlingx.io/pipermail/starlingx-discuss/2018-August/000847.html). >> >> >> We’re also looking for **one** active member of the StarlingX team to >> volunteer to serve on the Selection Committee for the Forum. If >> you're interested in volunteering for this role, please reply to this >> thread by _*August 30*_. >> >> The Forum Selection Committee is comprised of representatives from >> the OSF staff, OpenStack TC and UC and one person each from new >> projects StarlingX, Airship, Zuul and Kata Containers. The Selection >> Committee will collaboratively select the topics and program the >> Forum agenda between September 26 - October 24. >> >> Once the StarlingX representative is confirmed, our team, led by >> Jimmy McArthur, will work closely with them on next steps. >> >> Full details about the Forum, including the planning timelines can be >> found at https://wiki.openstack.org/wiki/Forum. >> >> Thank you! >> Claire >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Wed Aug 29 11:01:53 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 29 Aug 2018 11:01:53 +0000 Subject: [Starlingx-discuss] Call for One Volunteer - Forum Selection Committee - Berlin Summit In-Reply-To: <81BF8B47-F4FE-40B6-80A1-DB99D7837BE4@openstack.org> References: <81BF8B47-F4FE-40B6-80A1-DB99D7837BE4@openstack.org> Message-ID: <910BFEFC-1BE7-4F91-90C8-C8945EF1CC63@windriver.com> Hey Claire, I will volunteer from starlingx team. Greg. From: Claire Massey Date: Tuesday, August 28, 2018 at 4:22 PM To: "starlingx-discuss at lists.starlingx.io" Cc: Jimmy McArthur Subject: [Starlingx-discuss] Call for One Volunteer - Forum Selection Committee - Berlin Summit Hi everyone, This is a follow-up to the email that Ildiko just sent about preparing topics for the Forum at the Berlin Summit in November (http://lists.starlingx.io/pipermail/starlingx-discuss/2018-August/000847.html). We’re also looking for *one* active member of the StarlingX team to volunteer to serve on the Selection Committee for the Forum. If you're interested in volunteering for this role, please reply to this thread by August 30. The Forum Selection Committee is comprised of representatives from the OSF staff, OpenStack TC and UC and one person each from new projects StarlingX, Airship, Zuul and Kata Containers. The Selection Committee will collaboratively select the topics and program the Forum agenda between September 26 - October 24. Once the StarlingX representative is confirmed, our team, led by Jimmy McArthur, will work closely with them on next steps. Full details about the Forum, including the planning timelines can be found at https://wiki.openstack.org/wiki/Forum. Thank you! Claire -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Wed Aug 29 14:58:27 2018 From: claire at openstack.org (Claire Massey) Date: Wed, 29 Aug 2018 09:58:27 -0500 Subject: [Starlingx-discuss] Call for One Volunteer - Forum Selection Committee - Berlin Summit In-Reply-To: <910BFEFC-1BE7-4F91-90C8-C8945EF1CC63@windriver.com> References: <81BF8B47-F4FE-40B6-80A1-DB99D7837BE4@openstack.org> <910BFEFC-1BE7-4F91-90C8-C8945EF1CC63@windriver.com> Message-ID: Thanks, Greg. Jimmy will be in touch with you on next steps. > On Aug 29, 2018, at 6:01 AM, Waines, Greg wrote: > > Hey Claire, > I will volunteer from starlingx team. > Greg. > > From: Claire Massey > Date: Tuesday, August 28, 2018 at 4:22 PM > To: "starlingx-discuss at lists.starlingx.io" > Cc: Jimmy McArthur > Subject: [Starlingx-discuss] Call for One Volunteer - Forum Selection Committee - Berlin Summit > > Hi everyone, > > This is a follow-up to the email that Ildiko just sent about preparing topics for the Forum at the Berlin Summit in November (http://lists.starlingx.io/pipermail/starlingx-discuss/2018-August/000847.html ). > > We’re also looking for *one* active member of the StarlingX team to volunteer to serve on the Selection Committee for the Forum. If you're interested in volunteering for this role, please reply to this thread by August 30. > > The Forum Selection Committee is comprised of representatives from the OSF staff, OpenStack TC and UC and one person each from new projects StarlingX, Airship, Zuul and Kata Containers. The Selection Committee will collaboratively select the topics and program the Forum agenda between September 26 - October 24. > > Once the StarlingX representative is confirmed, our team, led by Jimmy McArthur, will work closely with them on next steps. > > Full details about the Forum, including the planning timelines can be found at https://wiki.openstack.org/wiki/Forum . > > Thank you! > Claire > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 29 17:21:52 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 29 Aug 2018 17:21:52 +0000 Subject: [Starlingx-discuss] StarlingX slides for Vancouver LinuxCon demos Message-ID: <9A85D2917C58154C960D95352B22818BAB582311@fmsmsx115.amr.corp.intel.com> Here is a slide deck put together for the demos at LinuxCon this week. It's based on the deck currently linked off the project home page and originally authored by Wind River. I would happily acknowledge the author(s) and thank them, but I don't know who to thank. :) The changes include use of the new StarlingX logos, re-formatting of text within boxes and notices on marks that are owned by 3rd parties. And some QR codes for easy access to our web resources. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: LinuxCon Vancouver 2018 StarlingX 2018-08-29.pptx Type: application/vnd.openxmlformats-officedocument.presentationml.presentation Size: 3629831 bytes Desc: LinuxCon Vancouver 2018 StarlingX 2018-08-29.pptx URL: From bruce.e.jones at intel.com Wed Aug 29 17:26:10 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 29 Aug 2018 17:26:10 +0000 Subject: [Starlingx-discuss] Agenda and notes from our project call Aug 29 2018 Message-ID: <9A85D2917C58154C960D95352B22818BAB582328@fmsmsx115.amr.corp.intel.com> Agenda and notes for the 8/29 meeting * LinuxCon Vancouver demo happening today - Bruce to share deck (Done) * Multi-OS support update o Intel team to accelerate a Clear MVP - early prototype o Python2->3 work should continue, Cindy to update existing stories as to which will be targeted for October release o Glance & Cinder (Pike) are not Python3 compatible yet. AR Bruce to get with Vivian. * CentOS 7.5 upgrade update o Upgrade should continue for Oct o Test build expected EOW o Out of tree drivers & Ceph can remain at current * Release plans - teams should be locked and loaded - issues/blockers? o Docs team meeting today to finalize plans o Devstack team status? o Networking team meeting tomorrow * Bug handling - how is LP working for us? * Sanity test result reporting - update o daily results to be posted to the wiki each day, logs to be posted in a separate repo. We do not have a healthy ISO - build is failing in a yet to be debugged way. Yesterday's build was sucessful we should have results today. * stx-gui update o WR has some devs working on commits into stx-gui and another dev helping with integration. We need to ramp people up on the Intel side who can address configuration issues. * Relevance of "?_tis_dist" and "tis_patch_ver" variables in RPM spec files. o These would be the last items to remove once all patches are cleared. o What is the reason for these changes in packages that don't have patches? Share a list on email please so it can be discussed. * Updates from teams o Documentation - Bruce to update in email later today o Security - had our first meeting Monday morning. Discussed process with Jeremy from OpenStack. Short term focusing on how to handle embargoed (pre-patch) issue discussions. Please treat any possible security issue as private and embargoed. o Build ? Working to make the build more efficient - both automation and documentation. Scott's spec for improvements is a great model for how to document a new feature (spec). o Test * PTG planning - ildikov o Edge Computing Group draft agenda: https://etherpad.openstack.org/p/EdgeComputingGroupPTG4 o Suggestion to put timelines on STX PTG agenda and send out to broader list (openstack dev and edge computing) o Is there interest in enabling remote participation in the STX PTG meeting? * Forum Selection Committee - ildikov - Greg to be our volunteer -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Wed Aug 29 19:39:42 2018 From: claire at openstack.org (Claire Massey) Date: Wed, 29 Aug 2018 14:39:42 -0500 Subject: [Starlingx-discuss] Deadline AUGUST 30 - Berlin Summit Travel Support References: <48D19CD6-6A56-46EA-A33F-E955E2471735@openstack.org> Message-ID: <706AA622-352D-41C8-BC66-120A07BE6466@openstack.org> Hi everyone, OSF covers travel costs (hotel and/or flight) for a limited quantity of applicants for each Summit. The deadline to apply for Travel Support to the November Berlin Summit closes TOMORROW, Thursday, August 30 at 11:59pm PT. If your company will not cover costs for you to attend the Berlin Summit , then you’re welcome to APPLY HERE . The Travel Support Program's aim is to facilitate participation of active community members to the Summit by covering the costs for their travel and accommodation. If you are a key contributor to a project managed by the OpenStack Foundation, and your company does not cover the costs of your travel and accommodation to Berlin, you can apply for the Travel Support Program. Please email summit at openstack.org with any questions. Thanks, Claire -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 29 22:18:26 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 29 Aug 2018 22:18:26 +0000 Subject: [Starlingx-discuss] Docs team meeting notes 8/29/18 Message-ID: <9A85D2917C58154C960D95352B22818BAB585051@fmsmsx115.amr.corp.intel.com> Summary: We reviewed two main topics today. One is our release readiness. We are on track. Note that many of the documentation stories will result in wiki documents and thus aren't blocking to the code freeze. The second topic is an on-going discussion about how to share test execution results, with the team leaning toward creating (somehow) a web accessible dashboard that anyone looking for test results can access. Agenda and notes for the 8/29 call * Review status of stories for the Oct Release and finalize comitted content: o https://storyboard.openstack.org/#!/story/list?status=active&project_group_id=86&tags=stx.docs&tags=stx.2018.10 * Process "Validate Inputs" support based on API Reference o Email sent to Cindy Xie and team with html and rst version Michael and team has created. o Waiting for any other request. * Validation Results o "Launchpad" proposal ? Some ideas around it: * 1. Most immediate implementation * 2. Using a platform to "share code, bug reports, translations and ideas" to share results * 3. Publish automated but Launchpad API research is needed ? High level steps: * 1. Create the "script" to upload html files via Launchpad API * 2. Implement the "script" under our Test infrastructure o "OpenStack" proposal ? Some ideas around it: * 1. Not sure if current OpenStack infrastructure is meant to handle results since it uses Gerrit as a trigger to build and publish, but if everyone is ok: * 2. Midterm implementation doc.starlingx.io not in place yet * 3. Reuse of existing OpenStack Documentation infrastructure, using doc/ directory * 4. Some effort on translating Xml to Rst, pandoc our friend? https://pandoc.org/ * 5. Publish automated based on Gerrit, same way as Doc, Api-Ref, Api-Guide and Release Notes ? High level steps for implementation: * 1. Review pandoc project to see if it helps in the translation, if not create a XML to RST translator * 2. Implement the "script" to translate into RST format and check in into stx-docs: stx-docs/doc/source/results/... * 3. Enable doc.starlingx.io infrastructure o "Own" Infrastructure ? How to automatically put html files into doc.starlingx.io? * results.starlingx.io * doc.starlingx.io/results -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Aug 29 22:29:02 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 29 Aug 2018 22:29:02 +0000 Subject: [Starlingx-discuss] RFC - Draft PTG agenda Message-ID: <9A85D2917C58154C960D95352B22818BAB5850AB@fmsmsx115.amr.corp.intel.com> I've updated the draft PTG agenda [1] to add time slots, with my best guess as to how long each topic might need. Please review and update as needed. Thanks! Brucej [1] https://etherpad.openstack.org/p/stx-PTG-agenda -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Wed Aug 29 23:03:21 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Wed, 29 Aug 2018 23:03:21 +0000 Subject: [Starlingx-discuss] [Build] Build Avoidance In-Reply-To: <06fb400c-4e93-c765-6385-9e579e2b3ae7@windriver.com> References: <06fb400c-4e93-c765-6385-9e579e2b3ae7@windriver.com> Message-ID: Thanks Scott! > Build Avoidance, a build tool improvement. > > Greatly reduce build times after a repo sync for designers working within a > regional office. For a new workspace, build-pkgs typically requires 3+ hours, > build avoidance typically reduces this step to ~20min. > > Method (in brief): > > 1) Reference builds Regional Office could be the results of 2 components: - Reference Mirror - As designer, I do not want to download packages but compile - This is already enabled [0] by Jason and team - It is being implemented at our office in Mexico, we will send our findings - Reference Build - Described here > 2) Designers > - build-pkgs --build-avoidance ... will request a build avoidance build. > - Additional arguments, and/or environment variables, and/or a config file > unique to the regional office, are used to specify a URL to the reference > builds. Do we need changes at the Build System level? We will run a proof of concept here just let us know how to get started. > - build-pkgs will: > = From newest to oldest, scan the CONTEXTs of the various reference > builds. Select the first (most recent) context which satisfies: For every git, the > SHA specified in the CONTEXT is present. > = The selected context might be slightly out of date, but not by more than a > day (assuming daily reference builds). > = If the context has not been previously downloaded, then download it > now. Meaning download select portions of the reference build workspace > into the designer's workspace. This includes all the SRPMS, RPMS, > MD5SUMS, and misc supporting files. (~10 min over office LAN) Can it take a look at our Reference Mirror? [0] https://review.openstack.org/590781 From shuicheng.lin at intel.com Thu Aug 30 02:51:46 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 30 Aug 2018 02:51:46 +0000 Subject: [Starlingx-discuss] python-eventlet upgrade issue Message-ID: <9700A18779F35F49AF027300A49E7C7655355AEF@SHSMSX101.ccr.corp.intel.com> Hi, Anyone familiar with python-eventlet? I meet a problem when upgrade python-eventlet from "0.18.4-2" to "0.20.1-2", when do CentOS7.5 upgrade. Eventlet will hook/patch some basic python function, such as "call"/"check_call" from subprocess.py The issue is that, after upgrade to 0.20.1-2 version, these functions don't work as expected. For the pass log (with eventlet 0.18.4), below log is as expected: 2018-08-28 05:38:33.816 21142 INFO sysinv.agent.pci [-] Could not determine DPDK support for NIC (vendor 0x8086 device: 0x100e), defaulting to False But for the fail log(with eventlet 0.20.1), it seems the exception is not handled correctly: 2018-08-28 03:48:21.687 21302 ERROR sysinv.openstack.common.periodic_task [-] Error during AgentManager._agent_audit: Command '['query_pci_id', '-v 0x8086', '-d 0x100e']' returned non-zero exit status 1 Is there any code style change with new eventlet? Thanks. Pass log: 2018-08-28 05:38:33.477 21142 INFO sysinv.openstack.common.rpc.common [-] Connected to AMQP server on localhost:5672 2018-08-28 05:38:33.485 21142 INFO sysinv.agent.manager [-] Sysinv Agent audit running inv_get_and_report. 2018-08-28 05:38:33.552 21177 INFO migrate.versioning.api [-] 0 -> 1... 2018-08-28 05:38:33.750 21177 INFO migrate.versioning.api [-] done 2018-08-28 05:38:33.750 21177 INFO migrate.versioning.api [-] 1 -> 2... 2018-08-28 05:38:33.816 21142 INFO sysinv.agent.pci [-] Could not determine DPDK support for NIC (vendor 0x8086 device: 0x100e), defaulting to False 2018-08-28 05:38:34.032 21142 INFO sysinv.agent.pci [-] Could not determine DPDK support for NIC (vendor 0x8086 device: 0x100e), defaulting to False 2018-08-28 05:38:34.033 21142 WARNING sysinv.agent.pci [-] Enabling device enp0s8 to query link speed 2018-08-28 05:38:34.036 21142 WARNING sysinv.agent.pci [-] ATTR speed unknown for: enp0s8 (flags: 0x1002) 2018-08-28 05:38:34.036 21142 WARNING sysinv.agent.pci [-] Disabling device enp0s8 after querying link speed Fail log: 2018-08-28 03:48:21.373 21302 INFO sysinv.openstack.common.rpc.common [-] Connected to AMQP server on localhost:5672 2018-08-28 03:48:21.382 21302 INFO sysinv.agent.manager [-] Sysinv Agent audit running inv_get_and_report. 2018-08-28 03:48:21.463 21337 INFO migrate.versioning.api [-] 0 -> 1... 2018-08-28 03:48:21.631 21337 INFO migrate.versioning.api [-] done 2018-08-28 03:48:21.631 21337 INFO migrate.versioning.api [-] 1 -> 2... 2018-08-28 03:48:21.687 21302 ERROR sysinv.openstack.common.periodic_task [-] Error during AgentManager._agent_audit: Command '['query_pci_id', '-v 0x8086', '-d 0x100e']' returned non-zero exit status 1 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task Traceback (most recent call last): 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/periodic_task.py", line 182, in run_periodic_tasks 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task task(self, context) 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/agent/manager.py", line 1010, in _agent_audit 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task force_updates=None) 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/agent/manager.py", line 1028, in agent_audit 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task self.ihost_inv_get_and_report(icontext) 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/agent/manager.py", line 575, in ihost_inv_get_and_report 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task pci_net_array = self._ipci_operator.pci_get_net_attrs(inic.pciaddr) 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/agent/pci.py", line 470, in pci_get_net_attrs 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task stdout=fnull, stderr=fnull) 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task raise CalledProcessError(retcode, cmd) 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task CalledProcessError: Command '['query_pci_id', '-v 0x8086', '-d 0x100e']' returned non-zero exit status 1 2018-08-28 03:48:21.687 21302 TRACE sysinv.openstack.common.periodic_task 2018-08-28 03:48:21.929 21337 INFO migrate.versioning.api [-] done Best Regards Shuicheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Thu Aug 30 04:08:15 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 30 Aug 2018 04:08:15 +0000 Subject: [Starlingx-discuss] Notes: StarlingX non-OpenStack Distro meeting, 8/29 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B32E24A@SHSMSX104.ccr.corp.intel.com> Agenda & notes for 8/29 meetings: * Review Team objectives & prioritize (Cindy) Review team goal. Concern about the % patch reduction by 50% and 90% by mid of 2019 and end of 2019. Saul: define a more realistic # after patch review. * CentOS 7.5 upgrade status (Cindy) most of sRPM upgrade patches has been merged into f/centos 7.5 branch; 5 to be merged: 2 patches is under review, 1 in progress. 2 patches causes basic deployment issues: lighttpd (https://review.openstack.org/#/c/596263/) & python-eventlet-0.20.1-2.el7.src.rpm (patch# TBD); plan is to keep the SRPM in 7.4 w/ other sRPM for a test build to GDC and WR. Trending: end of Friday China time for test build; ISO to GDC, build instructions (including pending patches if needs to be cherry-picked), branch-name. Shuicheng will send this out. * OS kernel version upgrade: can we upgrade to another kernel (to be planned). Code freeze for 2018.10: Sep 26th. AR: Shuicheng file a story for OS kernel upgrade into 7.5 kernel. * Out of tree kernel driver is part of the scope? AR: Shuicheng to scope what is the kernel drivers needs to be upgraded it's OK if we cannot hit Sep 26 code freeze date. * Ceph package upgrade (Vivian Zhu is working) - part of storage team. AR: Cindy to invite Vivian into this meeting to provide the status update. recommendation from Brent: very high risk of upgrading Ceph to hit Sep 26th code freeze date. * libvirt and qemu upgrade status (Ghada) Jim has a basic build for libvirt w/ patch rebase. Finishing Qemu trending end of this week. will be off next week. Patch will not be merged before WR internal testing, will be merged before Sep 20th? * non-OpenStack patches review status (Saul) master xls: https://docs.google.com/spreadsheets/d/1nKnkweuxcqvVOoRcpnTYMVUUv1RoAugOWXMjB7VIrfc/edit?usp=sharing Intel or gmail address to Bruce if you'd like to get access. Goal: to get 1st pass of analysis results by PTG (Sep 12th) * 2018.10 release content review (Ghada) * Active all storyboard: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&project_group_id=86 * Tag 2018.10: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&tags=stx.2018.10&project_group_id=86 * OS independent topics (Saul) - defered EOM -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Thu Aug 30 04:35:37 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 30 Aug 2018 04:35:37 +0000 Subject: [Starlingx-discuss] Notes: StarlingX non-OpenStack Distro meeting, 8/29 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B32E041@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B32E041@SHSMSX104.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C7655355B1E@SHSMSX101.ccr.corp.intel.com> Hi all, Two story is created as below for my AR. [Feature] Upgrade kernel to CentOS 7.5 version [Feature] Check Out of tree kernel driver upgrade Best Regards Shuicheng _____________________________________________ From: Xie, Cindy Sent: Thursday, August 30, 2018 11:17 AM To: Khalil, Ghada ; Wold, Saul ; Rowsell, Brent ; Sun, Austin ; Wang, Yi C ; Lin, Shuicheng ; Chen, Yan ; Somerville, Jim ; 'Ildiko Vancsa' ; starlingx-discuss at lists.starlingx.io Cc: Gomez, Juan P ; Perez, Ricardo O ; Perez Rodriguez, Humberto I ; Hu, Yong ; Zhu, Vivian ; 'Chen, Jacky' ; 'Leo Xu' ; 'Waines, Greg' ; 'Eslimi, Dariush' ; 'Komiyama, Takeo' ; Martinez Monroy, Elio ; Jones, Bruce E ; Hernandez Gonzalez, Fernando ; Hu, Wei W ; Qi, Mingyuan ; 'Young, Ken' ; Arce Moreno, Abraham ; 'Seiler, Glenn' Subject: Notes: StarlingX non-OpenStack Distro meeting, 8/29 Agenda & notes for 8/29 meetings: * Review Team objectives & prioritize (Cindy) Review team goal. Concern about the % patch reduction by 50% and 90% by mid of 2019 and end of 2019. Saul: define a more realistic # after patch review. * CentOS 7.5 upgrade status (Cindy) most of sRPM upgrade patches has been merged into f/centos 7.5 branch; 5 to be merged: 2 patches is under review, 1 in progress. 2 patches causes basic deployment issues: lighttpd (https://review.openstack.org/#/c/596263/) & python-eventlet-0.20.1-2.el7.src.rpm (patch# TBD); plan is to keep the SRPM in 7.4 w/ other sRPM for a test build to GDC and WR. Trending: end of Friday China time for test build; ISO to GDC, build instructions (including pending patches if needs to be cherry-picked), branch-name. Shuicheng will send this out. * OS kernel version upgrade: can we upgrade to another kernel (to be planned). Code freeze for 2018.10: Sep 26th. AR: Shuicheng file a story for OS kernel upgrade into 7.5 kernel. * Out of tree kernel driver is part of the scope? AR: Shuicheng to scope what is the kernel drivers needs to be upgraded it's OK if we cannot hit Sep 26 code freeze date. * Ceph package upgrade (Vivian Zhu is working) - part of storage team. AR: Cindy to invite Vivian into this meeting to provide the status update. recommendation from Brent: very high risk of upgrading Ceph to hit Sep 26th code freeze date. * libvirt and qemu upgrade status (Ghada) Jim has a basic build for libvirt w/ patch rebase. Finishing Qemu trending end of this week. will be off next week. Patch will not be merged before WR internal testing, will be merged before Sep 20th? * non-OpenStack patches review status (Saul) master xls: https://docs.google.com/spreadsheets/d/1nKnkweuxcqvVOoRcpnTYMVUUv1RoAugOWXMjB7VIrfc/edit?usp=sharing Intel or gmail address to Bruce if you'd like to get access. Goal: to get 1st pass of analysis results by PTG (Sep 12th) * 2018.10 release content review (Ghada) o Active all storyboard: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&project_group_id=86 o Tag 2018.10: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&tags=stx.2018.10&project_group_id=86 * OS independent topics (Saul) - defered -----Original Appointment----- From: Xie, Cindy Sent: Friday, August 24, 2018 9:38 AM To: Xie, Cindy; Khalil, Ghada; Wold, Saul; Rowsell, Brent; Sun, Austin; Wang, Yi C; Lin, Shuicheng; Chen, Yan; Somerville, Jim; 'Ildiko Vancsa'; starlingx-discuss at lists.starlingx.io Cc: Gomez, Juan P; Perez, Ricardo O; Perez Rodriguez, Humberto I; Hu, Yong; Zhu, Vivian; 'Chen, Jacky'; 'Leo Xu'; 'Waines, Greg'; 'Eslimi, Dariush'; 'Komiyama, Takeo'; Martinez Monroy, Elio; Jones, Bruce E; Hernandez Gonzalez, Fernando; Hu, Wei W; Qi, Mingyuan; 'Young, Ken'; Arce Moreno, Abraham; Seiler, Glenn Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, August 29, 2018 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 * Cadence and time slot: o Wednesday 9AM EDT (9PM China time) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Thu Aug 30 12:47:03 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Thu, 30 Aug 2018 12:47:03 +0000 Subject: [Starlingx-discuss] Notes: StarlingX non-OpenStack Distro meeting, 8/29 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B32E041@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B32E041@SHSMSX104.ccr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB253276@ALA-MBD.corp.ad.wrs.com> Thanks Cindy. I opened https://storyboard.openstack.org/#!/story/2003605 to track the CEPH upgrade. We had also discussed increasing the patch reduction target for 2018, we should add that to the 1st action below. Thanks, Brent From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, August 29, 2018 11:17 PM To: Khalil, Ghada ; Wold, Saul ; Rowsell, Brent ; Sun, Austin ; Wang, Yi C ; Lin, Shuicheng ; Chen, Yan ; Somerville, Jim ; 'Ildiko Vancsa' ; starlingx-discuss at lists.starlingx.io Cc: Gomez, Juan P ; Perez, Ricardo O ; Perez Rodriguez, Humberto I ; Hu, Yong ; Zhu, Vivian ; Chen, Jacky ; 'Leo Xu' ; Waines, Greg ; Eslimi, Dariush ; Komiyama, Takeo ; Martinez Monroy, Elio ; Jones, Bruce E ; Hernandez Gonzalez, Fernando ; Hu, Wei W ; Qi, Mingyuan ; Young, Ken ; Arce Moreno, Abraham ; Seiler, Glenn Subject: Notes: StarlingX non-OpenStack Distro meeting, 8/29 Agenda & notes for 8/29 meetings: * Review Team objectives & prioritize (Cindy) Review team goal. Concern about the % patch reduction by 50% and 90% by mid of 2019 and end of 2019. Saul: define a more realistic # after patch review. * CentOS 7.5 upgrade status (Cindy) most of sRPM upgrade patches has been merged into f/centos 7.5 branch; 5 to be merged: 2 patches is under review, 1 in progress. 2 patches causes basic deployment issues: lighttpd (https://review.openstack.org/#/c/596263/) & python-eventlet-0.20.1-2.el7.src.rpm (patch# TBD); plan is to keep the SRPM in 7.4 w/ other sRPM for a test build to GDC and WR. Trending: end of Friday China time for test build; ISO to GDC, build instructions (including pending patches if needs to be cherry-picked), branch-name. Shuicheng will send this out. * OS kernel version upgrade: can we upgrade to another kernel (to be planned). Code freeze for 2018.10: Sep 26th. AR: Shuicheng file a story for OS kernel upgrade into 7.5 kernel. * Out of tree kernel driver is part of the scope? AR: Shuicheng to scope what is the kernel drivers needs to be upgraded it's OK if we cannot hit Sep 26 code freeze date. * Ceph package upgrade (Vivian Zhu is working) - part of storage team. AR: Cindy to invite Vivian into this meeting to provide the status update. recommendation from Brent: very high risk of upgrading Ceph to hit Sep 26th code freeze date. * libvirt and qemu upgrade status (Ghada) Jim has a basic build for libvirt w/ patch rebase. Finishing Qemu trending end of this week. will be off next week. Patch will not be merged before WR internal testing, will be merged before Sep 20th? * non-OpenStack patches review status (Saul) master xls: https://docs.google.com/spreadsheets/d/1nKnkweuxcqvVOoRcpnTYMVUUv1RoAugOWXMjB7VIrfc/edit?usp=sharing Intel or gmail address to Bruce if you'd like to get access. Goal: to get 1st pass of analysis results by PTG (Sep 12th) * 2018.10 release content review (Ghada) * Active all storyboard: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&project_group_id=86 * Tag 2018.10: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&tags=stx.2018.10&project_group_id=86 * OS independent topics (Saul) - defered -----Original Appointment----- From: Xie, Cindy Sent: Friday, August 24, 2018 9:38 AM To: Xie, Cindy; Khalil, Ghada; Wold, Saul; Rowsell, Brent; Sun, Austin; Wang, Yi C; Lin, Shuicheng; Chen, Yan; Somerville, Jim; 'Ildiko Vancsa'; starlingx-discuss at lists.starlingx.io Cc: Gomez, Juan P; Perez, Ricardo O; Perez Rodriguez, Humberto I; Hu, Yong; Zhu, Vivian; 'Chen, Jacky'; 'Leo Xu'; 'Waines, Greg'; 'Eslimi, Dariush'; 'Komiyama, Takeo'; Martinez Monroy, Elio; Jones, Bruce E; Hernandez Gonzalez, Fernando; Hu, Wei W; Qi, Mingyuan; 'Young, Ken'; Arce Moreno, Abraham; Seiler, Glenn Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, August 29, 2018 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 * Cadence and time slot: * Wednesday 9AM EDT (9PM China time) * Call Details: * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: * https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Thu Aug 30 12:54:17 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 30 Aug 2018 12:54:17 +0000 Subject: [Starlingx-discuss] Notes: StarlingX non-OpenStack Distro meeting, 8/29 In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB253276@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B32E041@SHSMSX104.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB253276@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B32EC75@SHSMSX104.ccr.corp.intel.com> Thanks Brent. I will add have Saul to report out the patch reduction revised goal next time. @ Vivian, I cannot find your name in StoryBoard, thus I assigned it to Shane. Please take over this story. Thx. - cindy From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Thursday, August 30, 2018 8:47 PM To: Xie, Cindy ; Khalil, Ghada ; Wold, Saul ; Sun, Austin ; Wang, Yi C ; Lin, Shuicheng ; Chen, Yan ; Somerville, Jim ; 'Ildiko Vancsa' ; starlingx-discuss at lists.starlingx.io Cc: Gomez, Juan P ; Perez, Ricardo O ; Perez Rodriguez, Humberto I ; Hu, Yong ; Zhu, Vivian ; Chen, Jacky ; 'Leo Xu' ; Waines, Greg ; Eslimi, Dariush ; Komiyama, Takeo ; Martinez Monroy, Elio ; Jones, Bruce E ; Hernandez Gonzalez, Fernando ; Hu, Wei W ; Qi, Mingyuan ; Young, Ken ; Arce Moreno, Abraham ; Seiler, Glenn Subject: RE: Notes: StarlingX non-OpenStack Distro meeting, 8/29 Thanks Cindy. I opened https://storyboard.openstack.org/#!/story/2003605 to track the CEPH upgrade. We had also discussed increasing the patch reduction target for 2018, we should add that to the 1st action below. Thanks, Brent From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, August 29, 2018 11:17 PM To: Khalil, Ghada >; Wold, Saul >; Rowsell, Brent >; Sun, Austin >; Wang, Yi C >; Lin, Shuicheng >; Chen, Yan >; Somerville, Jim >; 'Ildiko Vancsa' >; starlingx-discuss at lists.starlingx.io Cc: Gomez, Juan P >; Perez, Ricardo O >; Perez Rodriguez, Humberto I >; Hu, Yong >; Zhu, Vivian >; Chen, Jacky >; 'Leo Xu' >; Waines, Greg >; Eslimi, Dariush >; Komiyama, Takeo >; Martinez Monroy, Elio >; Jones, Bruce E >; Hernandez Gonzalez, Fernando >; Hu, Wei W >; Qi, Mingyuan >; Young, Ken >; Arce Moreno, Abraham >; Seiler, Glenn > Subject: Notes: StarlingX non-OpenStack Distro meeting, 8/29 Agenda & notes for 8/29 meetings: * Review Team objectives & prioritize (Cindy) Review team goal. Concern about the % patch reduction by 50% and 90% by mid of 2019 and end of 2019. Saul: define a more realistic # after patch review. * CentOS 7.5 upgrade status (Cindy) most of sRPM upgrade patches has been merged into f/centos 7.5 branch; 5 to be merged: 2 patches is under review, 1 in progress. 2 patches causes basic deployment issues: lighttpd (https://review.openstack.org/#/c/596263/) & python-eventlet-0.20.1-2.el7.src.rpm (patch# TBD); plan is to keep the SRPM in 7.4 w/ other sRPM for a test build to GDC and WR. Trending: end of Friday China time for test build; ISO to GDC, build instructions (including pending patches if needs to be cherry-picked), branch-name. Shuicheng will send this out. * OS kernel version upgrade: can we upgrade to another kernel (to be planned). Code freeze for 2018.10: Sep 26th. AR: Shuicheng file a story for OS kernel upgrade into 7.5 kernel. * Out of tree kernel driver is part of the scope? AR: Shuicheng to scope what is the kernel drivers needs to be upgraded it's OK if we cannot hit Sep 26 code freeze date. * Ceph package upgrade (Vivian Zhu is working) - part of storage team. AR: Cindy to invite Vivian into this meeting to provide the status update. recommendation from Brent: very high risk of upgrading Ceph to hit Sep 26th code freeze date. * libvirt and qemu upgrade status (Ghada) Jim has a basic build for libvirt w/ patch rebase. Finishing Qemu trending end of this week. will be off next week. Patch will not be merged before WR internal testing, will be merged before Sep 20th? * non-OpenStack patches review status (Saul) master xls: https://docs.google.com/spreadsheets/d/1nKnkweuxcqvVOoRcpnTYMVUUv1RoAugOWXMjB7VIrfc/edit?usp=sharing Intel or gmail address to Bruce if you'd like to get access. Goal: to get 1st pass of analysis results by PTG (Sep 12th) * 2018.10 release content review (Ghada) * Active all storyboard: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&project_group_id=86 * Tag 2018.10: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&tags=stx.2018.10&project_group_id=86 * OS independent topics (Saul) - defered -----Original Appointment----- From: Xie, Cindy Sent: Friday, August 24, 2018 9:38 AM To: Xie, Cindy; Khalil, Ghada; Wold, Saul; Rowsell, Brent; Sun, Austin; Wang, Yi C; Lin, Shuicheng; Chen, Yan; Somerville, Jim; 'Ildiko Vancsa'; starlingx-discuss at lists.starlingx.io Cc: Gomez, Juan P; Perez, Ricardo O; Perez Rodriguez, Humberto I; Hu, Yong; Zhu, Vivian; 'Chen, Jacky'; 'Leo Xu'; 'Waines, Greg'; 'Eslimi, Dariush'; 'Komiyama, Takeo'; Martinez Monroy, Elio; Jones, Bruce E; Hernandez Gonzalez, Fernando; Hu, Wei W; Qi, Mingyuan; 'Young, Ken'; Arce Moreno, Abraham; Seiler, Glenn Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, August 29, 2018 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 * Cadence and time slot: * Wednesday 9AM EDT (9PM China time) * Call Details: * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: * https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Aug 30 14:28:07 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 30 Aug 2018 14:28:07 +0000 Subject: [Starlingx-discuss] Etherpads and auto-translate Message-ID: <9A85D2917C58154C960D95352B22818BAB5853CE@fmsmsx115.amr.corp.intel.com> We have been using Etherpads as one of our main documentation tools, and we're continuing to find instances of the Etherpads getting auto-translated into other languages. We ask everyone to please be careful with auto-tranlatation. If you do translate an Etherpad, please undo the changes. Meanwhile, teams that want to protect documents from translation should move them to the wiki. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Thu Aug 30 16:36:30 2018 From: scott.little at windriver.com (Scott Little) Date: Thu, 30 Aug 2018 12:36:30 -0400 Subject: [Starlingx-discuss] [Build] Build Avoidance In-Reply-To: References: <06fb400c-4e93-c765-6385-9e579e2b3ae7@windriver.com> Message-ID: <5d7bc3d1-84ff-93db-5041-e7706db2edea@windriver.com> Yes, Build tool changes are required. 1) We must now trigger rebuilds based on changed checksum, rather than time stamps.  The timestamps are not reliable when copying files between different build environments. 2) build-pkgs must capture the git context of the build 3) 'build-pkgs --avoidance' is requested. It must now compare the state of your git tree vs the available reference builds, and download the correct one if available. I have these changes under test now. I will be posting reviews in the next few days. Scott On 18-08-29 07:03 PM, Arce Moreno, Abraham wrote: > Thanks Scott! > >> Build Avoidance, a build tool improvement. >> >> Greatly reduce build times after a repo sync for designers working within a >> regional office. For a new workspace, build-pkgs typically requires 3+ hours, >> build avoidance typically reduces this step to ~20min. >> >> Method (in brief): >> >> 1) Reference builds > Regional Office could be the results of 2 components: > > - Reference Mirror > - As designer, I do not want to download packages but compile > - This is already enabled [0] by Jason and team > - It is being implemented at our office in Mexico, we will send our findings > - Reference Build > - Described here > >> 2) Designers >> - build-pkgs --build-avoidance ... will request a build avoidance build. >> - Additional arguments, and/or environment variables, and/or a config file >> unique to the regional office, are used to specify a URL to the reference >> builds. > Do we need changes at the Build System level? We will run a proof of concept > here just let us know how to get started. > >> - build-pkgs will: >> = From newest to oldest, scan the CONTEXTs of the various reference >> builds. Select the first (most recent) context which satisfies: For every git, the >> SHA specified in the CONTEXT is present. >> = The selected context might be slightly out of date, but not by more than a >> day (assuming daily reference builds). >> = If the context has not been previously downloaded, then download it >> now. Meaning download select portions of the reference build workspace >> into the designer's workspace. This includes all the SRPMS, RPMS, >> MD5SUMS, and misc supporting files. (~10 min over office LAN) > Can it take a look at our Reference Mirror? > > [0] https://review.openstack.org/590781 From chris.friesen at windriver.com Thu Aug 30 19:33:23 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 30 Aug 2018 13:33:23 -0600 Subject: [Starlingx-discuss] python-eventlet upgrade issue In-Reply-To: <9700A18779F35F49AF027300A49E7C7655355AEF@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C7655355AEF@SHSMSX101.ccr.corp.intel.com> Message-ID: <5B884683.5080402@windriver.com> On 08/29/2018 08:51 PM, Lin, Shuicheng wrote: > Hi, > > Anyone familiar with python-eventlet? I meet a problem when upgrade > python-eventlet from “0.18.4-2” to “0.20.1-2”, when do CentOS7.5 upgrade. > > Eventlet will hook/patch some basic python function, such as “call”/”check_call” > from subprocess.py > > The issue is that, after upgrade to 0.20.1-2 version, these functions don’t work > as expected. > > For the pass log (with eventlet 0.18.4), below log is as expected: > > 2018-08-28 05:38:33.816 21142 INFO sysinv.agent.pci [-] Could not determine DPDK > support for NIC (vendor 0x8086 device: 0x100e), defaulting to False > > But for the fail log(with eventlet 0.20.1), it seems the exception is not > handled correctly: > > 2018-08-28 03:48:21.687 21302 ERROR sysinv.openstack.common.periodic_task [-] > Error during AgentManager._agent_audit: Command '['query_pci_id', '-v 0x8086', > '-d 0x100e']' returned non-zero exit status 1 > > Is there any code style change with new eventlet? There are a couple odd things going on here. First, you found a bug. The check at http://git.openstack.org/cgit/openstack/stx-config/tree/sysinv/sysinv/sysinv/sysinv/agent/pci.py#n477 currently reads if e.returncode == '1' when it should be if e.returncode == 1 This explains why you were getting the "Could not determine DPDK support..." message" before, when really you should have been getting "DPDK does support NIC..." I can't explain why the CalledProcessError was caught by sysinv.openstack.common.periodic_task when it should have been caught (and swallowed) by http://git.openstack.org/cgit/openstack/stx-config/tree/sysinv/sysinv/sysinv/sysinv/agent/pci.py#n475 Chris From cesar.lara at intel.com Thu Aug 30 19:34:14 2018 From: cesar.lara at intel.com (Lara, Cesar) Date: Thu, 30 Aug 2018 19:34:14 +0000 Subject: [Starlingx-discuss] [Build] Build Avoidance In-Reply-To: <5d7bc3d1-84ff-93db-5041-e7706db2edea@windriver.com> References: <06fb400c-4e93-c765-6385-9e579e2b3ae7@windriver.com> <5d7bc3d1-84ff-93db-5041-e7706db2edea@windriver.com> Message-ID: <0B566C62EC792145B40E29EFEBF1AB47104F9CED@fmsmsx104.amr.corp.intel.com> So is there anything that you need us to test? Are you modifying the scripts? Can you point us to these changes? I want to make sure we are tracking these changes in storyboard, shall I go ahead and create those stories? Regards Cesar Lara -----Original Message----- From: Scott Little [mailto:scott.little at windriver.com] Sent: Thursday, August 30, 2018 11:37 AM To: Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Build] Build Avoidance Yes, Build tool changes are required. 1) We must now trigger rebuilds based on changed checksum, rather than time stamps.  The timestamps are not reliable when copying files between different build environments. 2) build-pkgs must capture the git context of the build 3) 'build-pkgs --avoidance' is requested. It must now compare the state of your git tree vs the available reference builds, and download the correct one if available. I have these changes under test now. I will be posting reviews in the next few days. Scott On 18-08-29 07:03 PM, Arce Moreno, Abraham wrote: > Thanks Scott! > >> Build Avoidance, a build tool improvement. >> >> Greatly reduce build times after a repo sync for designers >> working within a regional office. For a new workspace, build-pkgs >> typically requires 3+ hours, build avoidance typically reduces this step to ~20min. >> >> Method (in brief): >> >> 1) Reference builds > Regional Office could be the results of 2 components: > > - Reference Mirror > - As designer, I do not want to download packages but compile > - This is already enabled [0] by Jason and team > - It is being implemented at our office in Mexico, we will send our > findings > - Reference Build > - Described here > >> 2) Designers >> - build-pkgs --build-avoidance ... will request a build avoidance build. >> - Additional arguments, and/or environment variables, and/or a >> config file unique to the regional office, are used to specify a URL >> to the reference builds. > Do we need changes at the Build System level? We will run a proof of > concept here just let us know how to get started. > >> - build-pkgs will: >> = From newest to oldest, scan the CONTEXTs of the various >> reference builds. Select the first (most recent) context which >> satisfies: For every git, the SHA specified in the CONTEXT is present. >> = The selected context might be slightly out of date, but not >> by more than a day (assuming daily reference builds). >> = If the context has not been previously downloaded, then >> download it now. Meaning download select portions of the reference >> build workspace into the designer's workspace. This includes all the >> SRPMS, RPMS, MD5SUMS, and misc supporting files. (~10 min over >> office LAN) > Can it take a look at our Reference Mirror? > > [0] https://review.openstack.org/590781 _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Thu Aug 30 22:51:11 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 30 Aug 2018 22:51:11 +0000 Subject: [Starlingx-discuss] [Build] Build Avoidance In-Reply-To: <0B566C62EC792145B40E29EFEBF1AB47104F9CED@fmsmsx104.amr.corp.intel.com> References: <06fb400c-4e93-c765-6385-9e579e2b3ae7@windriver.com> <5d7bc3d1-84ff-93db-5041-e7706db2edea@windriver.com> <0B566C62EC792145B40E29EFEBF1AB47104F9CED@fmsmsx104.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA44CCA7@ALA-MBD.corp.ad.wrs.com> There is a story for this already: https://storyboard.openstack.org/#!/story/2002835 I've added Scott's proposal as a comment in the story. -----Original Message----- From: Lara, Cesar [mailto:cesar.lara at intel.com] Sent: Thursday, August 30, 2018 3:34 PM To: Little, Scott; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Build] Build Avoidance So is there anything that you need us to test? Are you modifying the scripts? Can you point us to these changes? I want to make sure we are tracking these changes in storyboard, shall I go ahead and create those stories? Regards Cesar Lara -----Original Message----- From: Scott Little [mailto:scott.little at windriver.com] Sent: Thursday, August 30, 2018 11:37 AM To: Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Build] Build Avoidance Yes, Build tool changes are required. 1) We must now trigger rebuilds based on changed checksum, rather than time stamps.  The timestamps are not reliable when copying files between different build environments. 2) build-pkgs must capture the git context of the build 3) 'build-pkgs --avoidance' is requested. It must now compare the state of your git tree vs the available reference builds, and download the correct one if available. I have these changes under test now. I will be posting reviews in the next few days. Scott On 18-08-29 07:03 PM, Arce Moreno, Abraham wrote: > Thanks Scott! > >> Build Avoidance, a build tool improvement. >> >> Greatly reduce build times after a repo sync for designers >> working within a regional office. For a new workspace, build-pkgs >> typically requires 3+ hours, build avoidance typically reduces this step to ~20min. >> >> Method (in brief): >> >> 1) Reference builds > Regional Office could be the results of 2 components: > > - Reference Mirror > - As designer, I do not want to download packages but compile > - This is already enabled [0] by Jason and team > - It is being implemented at our office in Mexico, we will send our > findings > - Reference Build > - Described here > >> 2) Designers >> - build-pkgs --build-avoidance ... will request a build avoidance build. >> - Additional arguments, and/or environment variables, and/or a >> config file unique to the regional office, are used to specify a URL >> to the reference builds. > Do we need changes at the Build System level? We will run a proof of > concept here just let us know how to get started. > >> - build-pkgs will: >> = From newest to oldest, scan the CONTEXTs of the various >> reference builds. Select the first (most recent) context which >> satisfies: For every git, the SHA specified in the CONTEXT is present. >> = The selected context might be slightly out of date, but not >> by more than a day (assuming daily reference builds). >> = If the context has not been previously downloaded, then >> download it now. Meaning download select portions of the reference >> build workspace into the designer's workspace. This includes all the >> SRPMS, RPMS, MD5SUMS, and misc supporting files. (~10 min over >> office LAN) > Can it take a look at our Reference Mirror? > > [0] https://review.openstack.org/590781 _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Thu Aug 30 23:18:06 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 30 Aug 2018 23:18:06 +0000 Subject: [Starlingx-discuss] [Release] October Release Prep Kick-Off Meeting -- Call for participants Message-ID: <151EE31B9FCCA54397A757BC674650F0BA44CCD4@ALA-MBD.corp.ad.wrs.com> Hello all, I am looking to schedule a one hour meeting sometime next week (prior to the PTG) to discuss prep items for the starlingx October release. For this meeting, I would like to have representatives from the build, docs, test and releases team. I would also like for Dean, Saul and Brent to attend if they can spare the time. Of course, all are welcome to participate. Note: Release content/priorities will not be discussed in this meeting; only mechanics and delivery mechanism. If you are interested in participating, please let me know and indicate your time preference. Time Options: - Tuesday Sept 4 3:00pm Eastern - Wednesday Sept 5 4:00pm / 5:00pm / 6:00pm Eastern >> we can do an evening meeting if that works for everyone - Thursday Sept 6 2:00pm Eastern - Friday Sept 7 9:00am Eastern Proposed Agenda: (I will create an etherpad and post the agenda) - Align on release deliverables o We have something related to this documented on the wiki already: https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Definition o Anything else to add/update ...any implications related to - Mechanics for creating the release branch/tags - Test activities/plan once release branch is created - Bug Management o Use Bug template o Guideline: Always source in master first o Criteria for which bugs gate the release and require a cherry-pick to the release branch. Who is responsible for cherry-picking? - Documentation o Content to include: ? Release Notes * High level list of features * Limitations/Caveats ? Standard Documentation - Developer Guide / API / others o Format ? I believe doc team have a plan for this already Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Aug 31 00:40:29 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 31 Aug 2018 00:40:29 +0000 Subject: [Starlingx-discuss] [Release] October Release Prep Kick-Off Meeting -- Call for participants In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA44CCD4@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA44CCD4@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B32F9F1@SHSMSX104.ccr.corp.intel.com> Preferred option for me: * Friday Sept 7 9:00am Eastern I can do Wed Sep5 6pm Eastern as well if needed. Thx. - cindy From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Friday, August 31, 2018 7:18 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Release] October Release Prep Kick-Off Meeting -- Call for participants Hello all, I am looking to schedule a one hour meeting sometime next week (prior to the PTG) to discuss prep items for the starlingx October release. For this meeting, I would like to have representatives from the build, docs, test and releases team. I would also like for Dean, Saul and Brent to attend if they can spare the time. Of course, all are welcome to participate. Note: Release content/priorities will not be discussed in this meeting; only mechanics and delivery mechanism. If you are interested in participating, please let me know and indicate your time preference. Time Options: - Tuesday Sept 4 3:00pm Eastern - Wednesday Sept 5 4:00pm / 5:00pm / 6:00pm Eastern >> we can do an evening meeting if that works for everyone - Thursday Sept 6 2:00pm Eastern - Friday Sept 7 9:00am Eastern Proposed Agenda: (I will create an etherpad and post the agenda) - Align on release deliverables o We have something related to this documented on the wiki already: https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Definition o Anything else to add/update ...any implications related to - Mechanics for creating the release branch/tags - Test activities/plan once release branch is created - Bug Management o Use Bug template o Guideline: Always source in master first o Criteria for which bugs gate the release and require a cherry-pick to the release branch. Who is responsible for cherry-picking? - Documentation o Content to include: ? Release Notes * High level list of features * Limitations/Caveats ? Standard Documentation - Developer Guide / API / others o Format ? I believe doc team have a plan for this already Regards, Ghada Ghada Khalil, Manager, Titanium Cloud, Wind River direct 613.270.2273 skype ghada.khalil.ottawa 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From vivian.zhu at intel.com Fri Aug 31 00:54:24 2018 From: vivian.zhu at intel.com (Zhu, Vivian) Date: Fri, 31 Aug 2018 00:54:24 +0000 Subject: [Starlingx-discuss] Notes: StarlingX non-OpenStack Distro meeting, 8/29 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B32EC75@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B32E041@SHSMSX104.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB253276@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F2B32EC75@SHSMSX104.ccr.corp.intel.com> Message-ID: <371DF9A763E9F44F924F4A821FC070264C490876@SHSMSX104.ccr.corp.intel.com> Cindy, my name showing on StoryBoard is Zhuweiwei. I have assigned the storyboard to myself. Thanks! - Vivian SSG OTC NST Storage Tel: (8621)61167437 From: Xie, Cindy Sent: Thursday, August 30, 2018 8:54 PM To: Rowsell, Brent ; Wang, Shane ; Zhu, Vivian ; starlingx-discuss at lists.starlingx.io Subject: RE: Notes: StarlingX non-OpenStack Distro meeting, 8/29 Thanks Brent. I will add have Saul to report out the patch reduction revised goal next time. @ Vivian, I cannot find your name in StoryBoard, thus I assigned it to Shane. Please take over this story. Thx. - cindy From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Thursday, August 30, 2018 8:47 PM To: Xie, Cindy >; Khalil, Ghada >; Wold, Saul >; Sun, Austin >; Wang, Yi C >; Lin, Shuicheng >; Chen, Yan >; Somerville, Jim >; 'Ildiko Vancsa' >; starlingx-discuss at lists.starlingx.io Cc: Gomez, Juan P >; Perez, Ricardo O >; Perez Rodriguez, Humberto I >; Hu, Yong >; Zhu, Vivian >; Chen, Jacky >; 'Leo Xu' >; Waines, Greg >; Eslimi, Dariush >; Komiyama, Takeo >; Martinez Monroy, Elio >; Jones, Bruce E >; Hernandez Gonzalez, Fernando >; Hu, Wei W >; Qi, Mingyuan >; Young, Ken >; Arce Moreno, Abraham >; Seiler, Glenn > Subject: RE: Notes: StarlingX non-OpenStack Distro meeting, 8/29 Thanks Cindy. I opened https://storyboard.openstack.org/#!/story/2003605 to track the CEPH upgrade. We had also discussed increasing the patch reduction target for 2018, we should add that to the 1st action below. Thanks, Brent From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, August 29, 2018 11:17 PM To: Khalil, Ghada >; Wold, Saul >; Rowsell, Brent >; Sun, Austin >; Wang, Yi C >; Lin, Shuicheng >; Chen, Yan >; Somerville, Jim >; 'Ildiko Vancsa' >; starlingx-discuss at lists.starlingx.io Cc: Gomez, Juan P >; Perez, Ricardo O >; Perez Rodriguez, Humberto I >; Hu, Yong >; Zhu, Vivian >; Chen, Jacky >; 'Leo Xu' >; Waines, Greg >; Eslimi, Dariush >; Komiyama, Takeo >; Martinez Monroy, Elio >; Jones, Bruce E >; Hernandez Gonzalez, Fernando >; Hu, Wei W >; Qi, Mingyuan >; Young, Ken >; Arce Moreno, Abraham >; Seiler, Glenn > Subject: Notes: StarlingX non-OpenStack Distro meeting, 8/29 Agenda & notes for 8/29 meetings: * Review Team objectives & prioritize (Cindy) Review team goal. Concern about the % patch reduction by 50% and 90% by mid of 2019 and end of 2019. Saul: define a more realistic # after patch review. * CentOS 7.5 upgrade status (Cindy) most of sRPM upgrade patches has been merged into f/centos 7.5 branch; 5 to be merged: 2 patches is under review, 1 in progress. 2 patches causes basic deployment issues: lighttpd (https://review.openstack.org/#/c/596263/) & python-eventlet-0.20.1-2.el7.src.rpm (patch# TBD); plan is to keep the SRPM in 7.4 w/ other sRPM for a test build to GDC and WR. Trending: end of Friday China time for test build; ISO to GDC, build instructions (including pending patches if needs to be cherry-picked), branch-name. Shuicheng will send this out. * OS kernel version upgrade: can we upgrade to another kernel (to be planned). Code freeze for 2018.10: Sep 26th. AR: Shuicheng file a story for OS kernel upgrade into 7.5 kernel. * Out of tree kernel driver is part of the scope? AR: Shuicheng to scope what is the kernel drivers needs to be upgraded it's OK if we cannot hit Sep 26 code freeze date. * Ceph package upgrade (Vivian Zhu is working) - part of storage team. AR: Cindy to invite Vivian into this meeting to provide the status update. recommendation from Brent: very high risk of upgrading Ceph to hit Sep 26th code freeze date. * libvirt and qemu upgrade status (Ghada) Jim has a basic build for libvirt w/ patch rebase. Finishing Qemu trending end of this week. will be off next week. Patch will not be merged before WR internal testing, will be merged before Sep 20th? * non-OpenStack patches review status (Saul) master xls: https://docs.google.com/spreadsheets/d/1nKnkweuxcqvVOoRcpnTYMVUUv1RoAugOWXMjB7VIrfc/edit?usp=sharing Intel or gmail address to Bruce if you'd like to get access. Goal: to get 1st pass of analysis results by PTG (Sep 12th) * 2018.10 release content review (Ghada) * Active all storyboard: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&project_group_id=86 * Tag 2018.10: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&tags=stx.2018.10&project_group_id=86 * OS independent topics (Saul) - defered -----Original Appointment----- From: Xie, Cindy Sent: Friday, August 24, 2018 9:38 AM To: Xie, Cindy; Khalil, Ghada; Wold, Saul; Rowsell, Brent; Sun, Austin; Wang, Yi C; Lin, Shuicheng; Chen, Yan; Somerville, Jim; 'Ildiko Vancsa'; starlingx-discuss at lists.starlingx.io Cc: Gomez, Juan P; Perez, Ricardo O; Perez Rodriguez, Humberto I; Hu, Yong; Zhu, Vivian; 'Chen, Jacky'; 'Leo Xu'; 'Waines, Greg'; 'Eslimi, Dariush'; 'Komiyama, Takeo'; Martinez Monroy, Elio; Jones, Bruce E; Hernandez Gonzalez, Fernando; Hu, Wei W; Qi, Mingyuan; 'Young, Ken'; Arce Moreno, Abraham; Seiler, Glenn Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, August 29, 2018 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 * Cadence and time slot: * Wednesday 9AM EDT (9PM China time) * Call Details: * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: * Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 * Meeting ID: 342 730 236 * International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: * https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Aug 31 09:47:10 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 31 Aug 2018 09:47:10 +0000 Subject: [Starlingx-discuss] mem run our in StarlingX Simplex bare metal install Message-ID: <2FD5DDB5A04D264C80D42CA35194914F2B33075F@SHSMSX104.ccr.corp.intel.com> All, 99Cloud reported that they encountered mem run out issue after StarlingX being installed in their Simplex bare metal. Below are some info provided by them: HW config: - CPU: 56c - Memory: 188GB - Hard Disck: 2400G SAS RAID1*1/480G SSD RAID1*1 - OS: bootimage0727.iso - NIC (see below picture) [cid:image001.png at 01D44152.A33A2ED0] Issue: once the system being installed on bare metal server (Simplex), check the mem usage, they see the issue below: [cid:image002.png at 01D44152.A33A2ED0] They didn't have chance to create any tenant VM yet due to mem is almost fully occupied (170GB). @Li Kai, please add additional info if the above is not complete. Thanks. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 44411 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 655405 bytes Desc: image002.png URL: From shuicheng.lin at intel.com Fri Aug 31 13:11:01 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Fri, 31 Aug 2018 13:11:01 +0000 Subject: [Starlingx-discuss] CentOS 7.5 upgrade status update Message-ID: <9700A18779F35F49AF027300A49E7C7655356001@SHSMSX101.ccr.corp.intel.com> Hi all, For CentOS7.5 upgrade, now we have 38 out of 42 src rpm upgrade merged to f/centos75 branch. While 3 src rpm (python-eventlet/lighttpd/puppet-haproxy) is under debug due to have deploy issue, and 1 src rpm (openstack-aodh) is still under rebase. Please go ahead to have a try and verify it. I expect there should be some issue due to we just did limited deploy test yet. Please help report any issue you find. Thanks. Here is the build instruction: 1. Switch to f/centos75 branch for stx-tools/stx-integ/stx-upstream. 2. Run mirror script to get the upgraded src rpm and related rpm. 3. Try to build packages and ISO. For the detail status of CentOS7.5 upgrade, please check below story: https://storyboard.openstack.org/#!/story/2003389 Best Regards Shuicheng From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Thursday, August 30, 2018 12:08 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Notes: StarlingX non-OpenStack Distro meeting, 8/29 Agenda & notes for 8/29 meetings: * Review Team objectives & prioritize (Cindy) Review team goal. Concern about the % patch reduction by 50% and 90% by mid of 2019 and end of 2019. Saul: define a more realistic # after patch review. * CentOS 7.5 upgrade status (Cindy) most of sRPM upgrade patches has been merged into f/centos 7.5 branch; 5 to be merged: 2 patches is under review, 1 in progress. 2 patches causes basic deployment issues: lighttpd (https://review.openstack.org/#/c/596263/) & python-eventlet-0.20.1-2.el7.src.rpm (patch# TBD); plan is to keep the SRPM in 7.4 w/ other sRPM for a test build to GDC and WR. Trending: end of Friday China time for test build; ISO to GDC, build instructions (including pending patches if needs to be cherry-picked), branch-name. Shuicheng will send this out. * OS kernel version upgrade: can we upgrade to another kernel (to be planned). Code freeze for 2018.10: Sep 26th. AR: Shuicheng file a story for OS kernel upgrade into 7.5 kernel. * Out of tree kernel driver is part of the scope? AR: Shuicheng to scope what is the kernel drivers needs to be upgraded it's OK if we cannot hit Sep 26 code freeze date. * Ceph package upgrade (Vivian Zhu is working) - part of storage team. AR: Cindy to invite Vivian into this meeting to provide the status update. recommendation from Brent: very high risk of upgrading Ceph to hit Sep 26th code freeze date. * libvirt and qemu upgrade status (Ghada) Jim has a basic build for libvirt w/ patch rebase. Finishing Qemu trending end of this week. will be off next week. Patch will not be merged before WR internal testing, will be merged before Sep 20th? * non-OpenStack patches review status (Saul) master xls: https://docs.google.com/spreadsheets/d/1nKnkweuxcqvVOoRcpnTYMVUUv1RoAugOWXMjB7VIrfc/edit?usp=sharing Intel or gmail address to Bruce if you'd like to get access. Goal: to get 1st pass of analysis results by PTG (Sep 12th) * 2018.10 release content review (Ghada) * Active all storyboard: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&project_group_id=86 * Tag 2018.10: https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&tags=stx.2018.10&project_group_id=86 * OS independent topics (Saul) - defered EOM -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Fri Aug 31 14:28:10 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 31 Aug 2018 09:28:10 -0500 Subject: [Starlingx-discuss] [Release] October Release Prep Kick-Off Meeting -- Call for participants In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA44CCD4@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA44CCD4@ALA-MBD.corp.ad.wrs.com> Message-ID: On Thu, Aug 30, 2018 at 6:18 PM, Khalil, Ghada wrote: > If you are interested in participating, please let me know and indicate your > time preference. > > Time Options: > > - Tuesday Sept 4 3:00pm Eastern > > - Wednesday Sept 5 4:00pm / 5:00pm / 6:00pm Eastern >> we can do an > evening meeting if that works for everyone > > - Thursday Sept 6 2:00pm Eastern > > - Friday Sept 7 9:00am Eastern I think we have a regular meeting at the Wed 4:00 EDT, and other than Thursday I am available. > - Align on release deliverables > o We have something related to this documented on the wiki already: > https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Definition FWIW the bulk of that was written assuming STX is a source-only project (officially). I'd like to validate that assumption, could be done beforehand too. > - Mechanics for creating the release branch/tags Included here is how we want to handle an RC period. I didn't see that mentioned so I suppose we should first decide if we do want an RC period. FWIW OpenStack RCs are actually re-tagged for the release if there are no changes at release time so it is actually more of a timing thing... dt -- Dean Troyer dtroyer at gmail.com From Frank.Miller at windriver.com Fri Aug 31 15:04:23 2018 From: Frank.Miller at windriver.com (Miller, Frank) Date: Fri, 31 Aug 2018 15:04:23 +0000 Subject: [Starlingx-discuss] Heads Up: sw-patch command is not working with latest builds Message-ID: Folks: It looks like the sw-patch command is not working which is preventing developers from testing their commits via a designer patch. This bug has been opened: https://bugs.launchpad.net/starlingx/+bug/1790166 We'll ask Yan to revert this commit today to enable development to continue: https://review.openstack.org/#/c/598077/ As it may be too late in Yan's timezone to see this request, I ask a core reviewer for stx-update to revert it if Yan cannot do this in the next 30 minutes or so. Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Fri Aug 31 15:13:45 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 31 Aug 2018 15:13:45 +0000 Subject: [Starlingx-discuss] mem run our in StarlingX Simplex bare metal install In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F2B33075F@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F2B33075F@SHSMSX104.ccr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BAB586085@fmsmsx115.amr.corp.intel.com> Does this issue reproduce? Cindy, can you file a bug in Launchpad please? brucej From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Friday, August 31, 2018 2:47 AM To: starlingx-discuss at lists.starlingx.io; li, Subject: [Starlingx-discuss] mem run our in StarlingX Simplex bare metal install All, 99Cloud reported that they encountered mem run out issue after StarlingX being installed in their Simplex bare metal. Below are some info provided by them: HW config: - CPU: 56c - Memory: 188GB - Hard Disck: 2400G SAS RAID1*1/480G SSD RAID1*1 - OS: bootimage0727.iso - NIC (see below picture) [cid:image001.png at 01D440FF.4AE05140] Issue: once the system being installed on bare metal server (Simplex), check the mem usage, they see the issue below: [cid:image002.png at 01D440FF.4AE05140] They didn't have chance to create any tenant VM yet due to mem is almost fully occupied (170GB). @Li Kai, please add additional info if the above is not complete. Thanks. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 44411 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 655405 bytes Desc: image002.png URL: From dtroyer at gmail.com Fri Aug 31 15:25:53 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 31 Aug 2018 10:25:53 -0500 Subject: [Starlingx-discuss] Heads Up: sw-patch command is not working with latest builds In-Reply-To: References: Message-ID: On Fri, Aug 31, 2018 at 10:04 AM, Miller, Frank wrote: > It looks like the sw-patch command is not working which is preventing > developers from testing their commits via a designer patch. This bug has > been opened: https://bugs.launchpad.net/starlingx/+bug/1790166 > > We’ll ask Yan to revert this commit today to enable development to continue: > https://review.openstack.org/#/c/598077/ As it may be too late in Yan’s > timezone to see this request, I ask a core reviewer for stx-update to revert > it if Yan cannot do this in the next 30 minutes or so. Let me know if cores are scarce today. And let's make sure Yan sees Al's comment in the original review for a follow-up. dt -- Dean Troyer dtroyer at gmail.com From Don.Penney at windriver.com Fri Aug 31 15:33:44 2018 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 31 Aug 2018 15:33:44 +0000 Subject: [Starlingx-discuss] Heads Up: sw-patch command is not working with latest builds In-Reply-To: References: Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA3A0CC0@ALA-MBD.corp.ad.wrs.com> I've started the revert: https://review.openstack.org/#/c/599007/ -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Friday, August 31, 2018 11:26 AM To: Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Heads Up: sw-patch command is not working with latest builds On Fri, Aug 31, 2018 at 10:04 AM, Miller, Frank wrote: > It looks like the sw-patch command is not working which is preventing > developers from testing their commits via a designer patch. This bug has > been opened: https://bugs.launchpad.net/starlingx/+bug/1790166 > > We’ll ask Yan to revert this commit today to enable development to continue: > https://review.openstack.org/#/c/598077/ As it may be too late in Yan’s > timezone to see this request, I ask a core reviewer for stx-update to revert > it if Yan cannot do this in the next 30 minutes or so. Let me know if cores are scarce today. And let's make sure Yan sees Al's comment in the original review for a follow-up. dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scottk at optimcloud.com Fri Aug 31 15:38:21 2018 From: scottk at optimcloud.com (scottk at optimcloud.com) Date: Fri, 31 Aug 2018 15:38:21 +0000 Subject: [Starlingx-discuss] cant complete build-iso Message-ID: <97b00b5fa39ba783e2da24777357c96e@optimcloud.com> so i thought id take a run at it, the github documentation appears a bit lacking however the openstack guide appears more well put together, however im having an issue at this point, any insight is appreciated, also curious why you dont publish a built iso for users to test drive with, aside from that looks like it could be a good "distribution" with major telco benefits.... build-iso 10:17:22 10:17:22 ************************* 10:17:22 Create Titanium Cloud/CentOS Boot CD 10:17:22 ************************* 10:17:22 10:17:22 Finding cgcs-root 10:17:22 Checking $MY_REPO (value "/localdisk/designer/dingo/starlingx/cgcs-root") 10:17:22 Found! 10:17:22 10:17:22 Checking that we can access /localdisk/designer/dingo/starlingx/cgcs-root/cgcs-centos-repo/Binary 10:17:22 10:17:22 Okay, input looks fine... 10:17:22 10:17:22 Creating output directory /localdisk/loadbuild/dingo/starlingx/export/dist 10:17:22 Cleaning... 10:17:22 Done 10:17:22 10:17:22 Creating base output directory in /localdisk/loadbuild/dingo/starlingx/export/dist 10:17:22 Preparing package lists 10:17:22 Copying base files 10:17:22 Installing startup files 10:17:22 The image file /localdisk/loadbuild/dingo/starlingx/export/efiboot.img does not exist 10:17:22 *** Error: script update-efiboot-image does not exist *** -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Fri Aug 31 16:08:42 2018 From: scott.little at windriver.com (Scott Little) Date: Fri, 31 Aug 2018 12:08:42 -0400 Subject: [Starlingx-discuss] cant complete build-iso In-Reply-To: <97b00b5fa39ba783e2da24777357c96e@optimcloud.com> References: <97b00b5fa39ba783e2da24777357c96e@optimcloud.com> Message-ID: Can you confirm update-efiboot-image exists and is executable. ls -al $MY_REPO/build-tools/update-efiboot-image On 18-08-31 11:38 AM, scottk at optimcloud.com wrote: > so i thought id take a run at it, the github documentation appears a > bit lacking however the openstack guide appears more well put > together, however im having an issue at this point, any insight is > appreciated, also curious why you dont publish a built iso for users > to test drive with, aside from that looks like it could be a good > "distribution" with major telco benefits.... > > > build-iso > 10:17:22 > 10:17:22 ************************* > 10:17:22 Create Titanium Cloud/CentOS Boot CD > 10:17:22 ************************* > 10:17:22 > 10:17:22 Finding cgcs-root > 10:17:22 Checking $MY_REPO (value > "/localdisk/designer/dingo/starlingx/cgcs-root") > 10:17:22 Found! > 10:17:22 > 10:17:22 Checking that we can access > /localdisk/designer/dingo/starlingx/cgcs-root/cgcs-centos-repo/Binary > 10:17:22 > 10:17:22 Okay, input looks fine... > 10:17:22 > 10:17:22 Creating output directory > /localdisk/loadbuild/dingo/starlingx/export/dist > 10:17:22 Cleaning... > 10:17:22 Done > 10:17:22 > 10:17:22 Creating base output directory in > /localdisk/loadbuild/dingo/starlingx/export/dist > 10:17:22 Preparing package lists > 10:17:22 Copying base files > 10:17:22 Installing startup files > 10:17:22 The image file > /localdisk/loadbuild/dingo/starlingx/export/efiboot.img does not exist > 10:17:22 *** Error: script update-efiboot-image does not exist *** > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Fri Aug 31 19:23:37 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 31 Aug 2018 14:23:37 -0500 Subject: [Starlingx-discuss] [docs] Setting up the docs.starlingx.io website Message-ID: I've started the process of getting the docs.starlingx.io website set up, one of the things required is defining the Zuul publish jobs as this is how the content gets from the build job out to the webserver. I'd like to confirm some things and ask for a bit of information: 1. The stx-docs repo will contain most of the docs.starlingx.io site? 2. The stx-specs repo also needs to be published, should it go to docs.starlingx.io/specs? 3. Release notes also need to be published, should they go in docs.starlingx.io/releasenotes/? 4. Does anyone feel like there is a need to not use a Sphinx-generated page as the root page? Given this is a Friday afternoon before a holiday weekend (for me anyway) I am proceeding with default answers of 1=yes and 4=no to get the process rolling, that can be changed and the rest addressed in follow-ups. dt -- Dean Troyer dtroyer at gmail.com