From vm.rod25 at gmail.com Mon Apr 1 02:45:47 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Sun, 31 Mar 2019 20:45:47 -0600 Subject: [Starlingx-discuss] [meetings][multios] Multi-OS team meeting Agenda for 03/25/2019 In-Reply-To: <0B566C62EC792145B40E29EFEBF1AB471061F165@fmsmsx104.amr.corp.intel.com> References: <0B566C62EC792145B40E29EFEBF1AB471061F165@fmsmsx104.amr.corp.intel.com> Message-ID: Hi Cesar and community: In tomorrow's multiOS meeting we would like to give the following updates: 1) Ubuntu Docker container for building STX in Ubuntu 2) First patches to STX flock services to make the Ubuntu build easier 3) After a week in the exploration phase of the ISO build tools supported by the Ubuntu community, we came to an architecture proposal (open for feedback) described in slides 3/4: https://docs.google.com/presentation/d/1ck7vGH50AIAjUx9GNrIGtowG5qg7OYUBNdJyY-5ZvDc/edit?usp=sharing As always, we are happy for any kind of feedback. Regards Victor Rodriguez On Fri, Mar 22, 2019 at 5:10 PM Lara, Cesar wrote: > > Multi-OS team meeting > > > > Agenda for 3/25/2019 > > > > - Update on Ubuntu build > > - Git repo with code access and readme documentation for this PoC > > - Update on Stx in a box > > - Opens > > > > > > Regards > > > > Cesar Lara > > Software Engineering Manager > > OpenSource Technology Center > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From kyle.oh95 at gmail.com Mon Apr 1 05:25:52 2019 From: kyle.oh95 at gmail.com (=?UTF-8?B?7Jik7J6s7Jqx?=) Date: Mon, 1 Apr 2019 14:25:52 +0900 Subject: [Starlingx-discuss] [starlingx-discuss] [unable to add compute node] "compute-0: Reject attempt to configure with invalid personality=compute" Message-ID: Hello StarlingX Team, As I deployed 'all-in-one duplex' completely, I'm now trying to add compute-0 node. However, it seems impossible to update new host. 
Please have a look at below result. [wrsroot at controller-0 ~(keystone_admin)]$ system host-update 3 personality=compute hostname=compute-0 compute-0: Reject attempt to configure with invalid personality=compute I think I installed 'all-in-one duplex extended'.. Is there any way to add new compute node? Thanks for any helps in advance. Best Regards, Jaewook ================================================ Jaewook Oh (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea -------------- next part -------------- An HTML attachment was scrubbed... URL: From erich.cm.lists at yandex.com Mon Apr 1 05:30:49 2019 From: erich.cm.lists at yandex.com (Erich Cordoba) Date: Sun, 31 Mar 2019 22:30:49 -0700 Subject: [Starlingx-discuss] [starlingx-discuss] [unable to add compute node] "compute-0: Reject attempt to configure with invalid personality=compute" In-Reply-To: References: Message-ID: <9436101554096649@myt3-2475c4d2af83.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From jwoh95 at dcn.ssu.ac.kr Mon Apr 1 05:32:10 2019 From: jwoh95 at dcn.ssu.ac.kr (Jaewook Oh) Date: Mon, 1 Apr 2019 14:32:10 +0900 Subject: [Starlingx-discuss] [starlingx-discuss] [unable to add compute node] "compute-0: Reject attempt to configure with invalid personality=compute" In-Reply-To: References: Message-ID: Hi Chenjie (I'm not sure about your first name, sorry for my mistake if I wrote your name wrongly TT) Thanks for advice, I think that command is correct one. It works well now! BR, Jaewook. 2019년 4월 1일 (월) 오후 2:30, Xu, Chenjie 님이 작성: > Hi Jaewook, > > Can you try following command: > > system host-update 3 personality=worker hostname=compute-0 > > The personality of compute node has changed from compute to woker. > > > > Best Regards, > > Xu, Chenjie > > > > *From:* 오재욱 [mailto:kyle.oh95 at gmail.com] > *Sent:* Monday, April 1, 2019 1:26 PM > *To:* starlingx-discuss > *Subject:* [Starlingx-discuss] [starlingx-discuss] [unable to add compute > node] "compute-0: Reject attempt to configure with invalid > personality=compute" > > > > Hello StarlingX Team, > > > > As I deployed 'all-in-one duplex' completely, I'm now trying to add > compute-0 node. > > However, it seems impossible to update new host. > > > > Please have a look at below result. > > > > [wrsroot at controller-0 ~(keystone_admin)]$ system host-update 3 > personality=compute hostname=compute-0 > > compute-0: Reject attempt to configure with invalid personality=compute > > > > I think I installed 'all-in-one duplex extended'.. Is there any way to add > new compute node? > > > > Thanks for any helps in advance. > > > > Best Regards, > > Jaewook > > ================================================ > *Jaewook Oh* (오재욱) > IISTRC - Internet Infra System Technology Research Center > 369 Sangdo-ro, Dongjak-gu, > 06978, Seoul, Republic of Korea > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -- ================================================ *Jaewook Oh* (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea Tel : +82-2-820-0841 | Mobile : +82-10-9924-2618 E-mail : jwoh95 at dcn.ssu.ac.kr ================================================ -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chenjie.xu at intel.com Mon Apr 1 05:38:30 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Mon, 1 Apr 2019 05:38:30 +0000 Subject: [Starlingx-discuss] [starlingx-discuss] [unable to add compute node] "compute-0: Reject attempt to configure with invalid personality=compute" In-Reply-To: References: Message-ID: Hi Jaewook, You are welcome! Best Regards, Xu, Chenjie From: Jaewook Oh [mailto:jwoh95 at dcn.ssu.ac.kr] Sent: Monday, April 1, 2019 1:32 PM To: Xu, Chenjie Cc: kyle.oh95 at gmail.com; starlingx-discuss Subject: Re: [Starlingx-discuss] [starlingx-discuss] [unable to add compute node] "compute-0: Reject attempt to configure with invalid personality=compute" Hi Chenjie (I'm not sure about your first name, sorry for my mistake if I wrote your name wrongly TT) Thanks for advice, I think that command is correct one. It works well now! BR, Jaewook. 2019년 4월 1일 (월) 오후 2:30, Xu, Chenjie >님이 작성: Hi Jaewook, Can you try following command: system host-update 3 personality=worker hostname=compute-0 The personality of compute node has changed from compute to woker. Best Regards, Xu, Chenjie From: 오재욱 [mailto:kyle.oh95 at gmail.com] Sent: Monday, April 1, 2019 1:26 PM To: starlingx-discuss > Subject: [Starlingx-discuss] [starlingx-discuss] [unable to add compute node] "compute-0: Reject attempt to configure with invalid personality=compute" Hello StarlingX Team, As I deployed 'all-in-one duplex' completely, I'm now trying to add compute-0 node. However, it seems impossible to update new host. Please have a look at below result. [wrsroot at controller-0 ~(keystone_admin)]$ system host-update 3 personality=compute hostname=compute-0 compute-0: Reject attempt to configure with invalid personality=compute I think I installed 'all-in-one duplex extended'.. Is there any way to add new compute node? Thanks for any helps in advance. Best Regards, Jaewook ================================================ Jaewook Oh (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- ================================================ Jaewook Oh (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea Tel : +82-2-820-0841 | Mobile : +82-10-9924-2618 E-mail : jwoh95 at dcn.ssu.ac.kr ================================================ -------------- next part -------------- An HTML attachment was scrubbed... URL: From Volker.Hoesslin at swsn.de Mon Apr 1 07:48:33 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Mon, 1 Apr 2019 07:48:33 +0000 Subject: [Starlingx-discuss] pre-stable version bevor next release? In-Reply-To: <1553872175.13803.23.camel@windriver.com> References: <1553872175.13803.23.camel@windriver.com> Message-ID: Ok, but in the end, if i get an "stable" install with last positiv-sanity-test release I have to complete reinstall the official release in 2 month?! There is no chance to have an update to keep in process with latest version? Volker... -----Ursprüngliche Nachricht----- Von: Michel Thebeau [mailto:michel.thebeau at windriver.com] Gesendet: Freitag, 29. März 2019 16:10 An: von Hoesslin, Volker; 'starlingx-discuss at lists.starlingx.io' Betreff: Re: [Starlingx-discuss] pre-stable version bevor next release? 
Hi Volker, I expect your requirements are not compatible with the "latest development branch".  However, an approach you can take is: look for "Sanity Test" emails on this list and grab one that isnt' "fail, fail, fail, fail, ..." Here is a quick summary: 20190328  fail 20190327  simplex fail, dedicated storage fail, Duplex/Standard mostly pass 20190325  mostly pass, simplex fail M On Fri, 2019-03-29 at 14:53 +0000, von Hoesslin, Volker wrote: > Hi anybody, > i realy love this project but I have some reason to deploy now an > working stack. Is there currently an working version or have to wait > until new release date? The current stable version isn’t working in > all details for me, look at discuss “Unrecognized attribute(s) > 'port_security_enabled'”. >   > Thx, > Volker… > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From michel.thebeau at windriver.com Mon Apr 1 11:10:42 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Mon, 1 Apr 2019 07:10:42 -0400 Subject: [Starlingx-discuss] pre-stable version bevor next release? In-Reply-To: References: <1553872175.13803.23.camel@windriver.com> Message-ID: <1554117042.5226.1.camel@windriver.com> Hi Volker, I expect that your requirements are not satisfied by the latest development branch.  If you would like to ask the community about any plans for a patching procedure, then I recommend new tread with that title. M On Mon, 2019-04-01 at 07:48 +0000, von Hoesslin, Volker wrote: > Ok, > but in the end, if i get an "stable" install with last positiv- > sanity-test release I have to complete reinstall the official release > in 2 month?! There is no chance to have an update to keep in process > with latest version? > > Volker... > > -----Ursprüngliche Nachricht----- > Von: Michel Thebeau [mailto:michel.thebeau at windriver.com]  > Gesendet: Freitag, 29. März 2019 16:10 > An: von Hoesslin, Volker; 'starlingx-discuss at lists.starlingx.io' > Betreff: Re: [Starlingx-discuss] pre-stable version bevor next > release? > > Hi Volker, > > I expect your requirements are not compatible with the "latest > development branch".  However, an approach you can take is: look for > "Sanity Test" emails on this list and grab one that isnt' "fail, > fail, > fail, fail, ..." > > Here is a quick summary: > 20190328  fail > 20190327  simplex fail, dedicated storage fail, Duplex/Standard > mostly > pass > 20190325  mostly pass, simplex fail > > > M > > > On Fri, 2019-03-29 at 14:53 +0000, von Hoesslin, Volker wrote: > > > > Hi anybody, > > i realy love this project but I have some reason to deploy now an > > working stack. Is there currently an working version or have to > > wait > > until new release date? The current stable version isn’t working in > > all details for me, look at discuss “Unrecognized attribute(s) > > 'port_security_enabled'”. 
> >   > > Thx, > > Volker… > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus > > s From Frank.Miller at windriver.com Mon Apr 1 13:15:44 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 1 Apr 2019 13:15:44 +0000 Subject: [Starlingx-discuss] StarlingX Weekly Containerization Meeting Message-ID: For those contributing to or interested in the Containerization subproject, the plan is to meet weekly until the containerization StoryBoards are completed. Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2362 bytes Desc: not available URL: From Frank.Miller at windriver.com Mon Apr 1 13:18:11 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 1 Apr 2019 13:18:11 +0000 Subject: [Starlingx-discuss] April 1 Agenda for StarlingX Weekly Containerization Meeting Message-ID: Please add to the agenda if you have additional topics to discuss: https://etherpad.openstack.org/p/stx-containerization 1. Sanity status: RED - Simplex BM and Virtual - application-apply fails at 95% ceilometer pod: https://bugs.launchpad.net/starlingx/+bug/1820928 - Standard Dedicated Storage BM and Virtual - Could not create VMs: https://bugs.launchpad.net/starlingx/+bug/1821841 and https://bugs.launchpad.net/starlingx/+bug/1822116 - All Configurations BM and Virtual - python-heatclient package is not currently installed on ISO: https://bugs.launchpad.net/starlingx/+bug/1822200 - All Configurations BM and Virtual - Console of instances in Horizon is not available: https://bugs.launchpad.net/starlingx/+bug/1822212 2. StoryBoard status: - Plan status and updates: https://docs.google.com/spreadsheets/d/1lMMclUmLMPTuk_a5URMMoWrJR4MbeA_UINnBliumg2Y/edit#gid=991138079 3. Other topics: Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From Volker.Hoesslin at swsn.de Mon Apr 1 14:40:06 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Mon, 1 Apr 2019 14:40:06 +0000 Subject: [Starlingx-discuss] port-security Message-ID: Ok, this is an very intressting point! I would prefere to add port-security maybe an system switch to change this behavior in runtime (of cource, it need an re-provisining). Is there an releation with my other problem? One Instance with multiple Networks and for every Network an floating IP -> only one floating IP is working all other are without any response? Also port-forwarding in the router are broken and do not word … Von: Curtis [mailto:serverascode at gmail.com] Gesendet: Freitag, 29. März 2019 19:44 An: von Hoesslin, Volker Cc: starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] pre-stable version bevor next release? 
On Fri, Mar 29, 2019 at 10:55 AM von Hoesslin, Volker > wrote: Hi anybody, i realy love this project but I have some reason to deploy now an working stack. Is there currently an working version or have to wait until new release date? The current stable version isn’t working in all details for me, look at discuss “Unrecognized attribute(s) 'port_security_enabled'”. With regards to port security, I tried to write this email a couple times, it's tough b/c I don't know the history, but here are my thoughts: - Security groups are effectively disabled in stx (noop driver), at least in my deployment from an ISO from last week - This is probably for performance reasons, ie. iptables, but I'm not sure of the history - Maybe it's time to revisit security groups? eg. k8s is there and uses iptables, or maybe openflow based driver would be an option...or other? - Likely we (the project) just need to make sure it gets properly documented, if it's not already Maybe some others with more history will chip in. :) Thanks, Curtis Thx, Volker… _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Mon Apr 1 17:00:27 2019 From: serverascode at gmail.com (Curtis) Date: Mon, 1 Apr 2019 13:00:27 -0400 Subject: [Starlingx-discuss] port-security In-Reply-To: References: Message-ID: On Mon, Apr 1, 2019 at 10:40 AM von Hoesslin, Volker < Volker.Hoesslin at swsn.de> wrote: > Ok, this is an very intressting point! I would prefere to add > port-security maybe an system switch to change this behavior in runtime (of > cource, it need an re-provisining). > I should know this better, but I believe if you'd like to request a feature you could go through this process: https://wiki.openstack.org/wiki/StarlingX/Feature_Development_Process If that's not the process we're following for the project hopefully someone on the list will correct me. Once it's there it could be discussed. :) > Is there an releation with my other problem? One Instance with multiple > Networks and for every Network an floating IP -> only one floating IP is > working all other are without any response? Also port-forwarding in the > router are broken and do not word … > > > Is there any chance it's just a routing problem? ie. reply packets for the non-working interfaces are going out the working interface b/c it has the single default gw? Something like that? Thanks, Curtis > *Von:* Curtis [mailto:serverascode at gmail.com] > *Gesendet:* Freitag, 29. März 2019 19:44 > *An:* von Hoesslin, Volker > *Cc:* starlingx-discuss at lists.starlingx.io > *Betreff:* Re: [Starlingx-discuss] pre-stable version bevor next release? > > > > On Fri, Mar 29, 2019 at 10:55 AM von Hoesslin, Volker < > Volker.Hoesslin at swsn.de> wrote: > > Hi anybody, > > i realy love this project but I have some reason to deploy now an working > stack. Is there currently an working version or have to wait until new > release date? The current stable version isn’t working in all details for > me, look at discuss “Unrecognized attribute(s) 'port_security_enabled'”. 
> > > > > > With regards to port security, I tried to write this email a couple times, > it's tough b/c I don't know the history, but here are my thoughts: > > > > - Security groups are effectively disabled in stx (noop driver), at least > in my deployment from an ISO from last week > > - This is probably for performance reasons, ie. iptables, but I'm not sure > of the history > > - Maybe it's time to revisit security groups? eg. k8s is there and uses > iptables, or maybe openflow based driver would be an option...or other? > > - Likely we (the project) just need to make sure it gets properly > documented, if it's not already > > > > Maybe some others with more history will chip in. :) > > > > Thanks, > > Curtis > > > > > > > > Thx, > > Volker… > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > -- > > Blog: serverascode.com > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Apr 1 17:08:49 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 1 Apr 2019 17:08:49 +0000 Subject: [Starlingx-discuss] port-security In-Reply-To: References: Message-ID: <9A85D2917C58154C960D95352B22818BD070BA9F@fmsmsx123.amr.corp.intel.com> Volker, please follow the process Curtis mentioned below and submit a StoryBoard Story. Then I’d suggest you send the story link out to the mailing list and ask the Networking sub-project to work with you to fill in any additional details needed. Meanwhile Curtis can you add this to the ethercalc as an item for the next release? brucej From: Curtis [mailto:serverascode at gmail.com] Sent: Monday, April 1, 2019 10:00 AM To: von Hoesslin, Volker Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] port-security On Mon, Apr 1, 2019 at 10:40 AM von Hoesslin, Volker > wrote: Ok, this is an very intressting point! I would prefere to add port-security maybe an system switch to change this behavior in runtime (of cource, it need an re-provisining). I should know this better, but I believe if you'd like to request a feature you could go through this process: https://wiki.openstack.org/wiki/StarlingX/Feature_Development_Process If that's not the process we're following for the project hopefully someone on the list will correct me. Once it's there it could be discussed. :) Is there an releation with my other problem? One Instance with multiple Networks and for every Network an floating IP -> only one floating IP is working all other are without any response? Also port-forwarding in the router are broken and do not word … Is there any chance it's just a routing problem? ie. reply packets for the non-working interfaces are going out the working interface b/c it has the single default gw? Something like that? Thanks, Curtis Von: Curtis [mailto:serverascode at gmail.com] Gesendet: Freitag, 29. März 2019 19:44 An: von Hoesslin, Volker Cc: starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] pre-stable version bevor next release? On Fri, Mar 29, 2019 at 10:55 AM von Hoesslin, Volker > wrote: Hi anybody, i realy love this project but I have some reason to deploy now an working stack. Is there currently an working version or have to wait until new release date? The current stable version isn’t working in all details for me, look at discuss “Unrecognized attribute(s) 'port_security_enabled'”. 
With regards to port security, I tried to write this email a couple times, it's tough b/c I don't know the history, but here are my thoughts: - Security groups are effectively disabled in stx (noop driver), at least in my deployment from an ISO from last week - This is probably for performance reasons, ie. iptables, but I'm not sure of the history - Maybe it's time to revisit security groups? eg. k8s is there and uses iptables, or maybe openflow based driver would be an option...or other? - Likely we (the project) just need to make sure it gets properly documented, if it's not already Maybe some others with more history will chip in. :) Thanks, Curtis Thx, Volker… _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Blog: serverascode.com -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Mon Apr 1 17:51:19 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Mon, 1 Apr 2019 17:51:19 +0000 Subject: [Starlingx-discuss] Error when add a new host Message-ID: <8557B550001AFB46A43A0CCC314BF85153CAFA6F@FMSMSX108.amr.corp.intel.com> Hi, There is an intermittent issue during STX provisioning when add a new host (controller, compute or storage). During provisioning, when add a new host: $ system host-add -n ${host_name} -p ${personality} -m ${mac_address} Got the following error: 'Maintenance has returned with a status of fail, reason: no response, recommended action: retry' This issue is intermittent. After it failed, try to add the host again but got: 'error: Host already exists' When check the hosts available can see host installed correctly: $ system host-list Then, got an error when added a new host, got an error when retry to add the host because it was correctly installed. This issue sometimes breaks our test execution. I already open a Launchpad: https://bugs.launchpad.net/starlingx/+bug/1822657 Did someone faced this issue before? Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Mon Apr 1 18:10:39 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 1 Apr 2019 18:10:39 +0000 Subject: [Starlingx-discuss] Weekly StarlingX Test meeting - 9:00 PDT Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDB1B2E@FMSMSX114.amr.corp.intel.com> Weekly meetings on Tuesdays at 9am PDT / 1600 UTC * Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes: https://etherpad.openstack.org/p/stx-test -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: not available Type: text/calendar Size: 2001 bytes Desc: not available URL: From serverascode at gmail.com Mon Apr 1 18:21:39 2019 From: serverascode at gmail.com (Curtis) Date: Mon, 1 Apr 2019 14:21:39 -0400 Subject: [Starlingx-discuss] port-security In-Reply-To: <9A85D2917C58154C960D95352B22818BD070BA9F@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD070BA9F@fmsmsx123.amr.corp.intel.com> Message-ID: On Mon, Apr 1, 2019 at 1:08 PM Jones, Bruce E wrote: > Volker, please follow the process Curtis mentioned below and submit a > StoryBoard Story. Then I’d suggest you send the story link out to the > mailing list and ask the Networking sub-project to work with you to fill in > any additional details needed. > > > > Meanwhile Curtis can you add this to the ethercalc as an item for the next > release? > > > I added it into the ethercalc. Thank, Curtis > brucej > > > > *From:* Curtis [mailto:serverascode at gmail.com] > *Sent:* Monday, April 1, 2019 10:00 AM > *To:* von Hoesslin, Volker > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] port-security > > > > On Mon, Apr 1, 2019 at 10:40 AM von Hoesslin, Volker < > Volker.Hoesslin at swsn.de> wrote: > > Ok, this is an very intressting point! I would prefere to add > port-security maybe an system switch to change this behavior in runtime (of > cource, it need an re-provisining). > > > > I should know this better, but I believe if you'd like to request a > feature you could go through this process: > > > > https://wiki.openstack.org/wiki/StarlingX/Feature_Development_Process > > > > If that's not the process we're following for the project hopefully > someone on the list will correct me. > > > > Once it's there it could be discussed. :) > > > > > > Is there an releation with my other problem? One Instance with multiple > Networks and for every Network an floating IP -> only one floating IP is > working all other are without any response? Also port-forwarding in the > router are broken and do not word … > > > > > > Is there any chance it's just a routing problem? ie. reply packets for the > non-working interfaces are going out the working interface b/c it has the > single default gw? Something like that? > > > > Thanks, > > Curtis > > > > > > *Von:* Curtis [mailto:serverascode at gmail.com] > *Gesendet:* Freitag, 29. März 2019 19:44 > *An:* von Hoesslin, Volker > *Cc:* starlingx-discuss at lists.starlingx.io > *Betreff:* Re: [Starlingx-discuss] pre-stable version bevor next release? > > > > On Fri, Mar 29, 2019 at 10:55 AM von Hoesslin, Volker < > Volker.Hoesslin at swsn.de> wrote: > > Hi anybody, > > i realy love this project but I have some reason to deploy now an working > stack. Is there currently an working version or have to wait until new > release date? The current stable version isn’t working in all details for > me, look at discuss “Unrecognized attribute(s) 'port_security_enabled'”. > > > > > > With regards to port security, I tried to write this email a couple times, > it's tough b/c I don't know the history, but here are my thoughts: > > > > - Security groups are effectively disabled in stx (noop driver), at least > in my deployment from an ISO from last week > > - This is probably for performance reasons, ie. iptables, but I'm not sure > of the history > > - Maybe it's time to revisit security groups? eg. k8s is there and uses > iptables, or maybe openflow based driver would be an option...or other? 
> > - Likely we (the project) just need to make sure it gets properly > documented, if it's not already > > > > Maybe some others with more history will chip in. :) > > > > Thanks, > > Curtis > > > > > > > > Thx, > > Volker… > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > -- > > Blog: serverascode.com > > > > -- > > Blog: serverascode.com > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Mon Apr 1 11:40:49 2019 From: Greg.Waines at windriver.com (Waines, Greg) Date: Mon, 1 Apr 2019 11:40:49 +0000 Subject: [Starlingx-discuss] Remote access to APIs, Horizon In-Reply-To: References: Message-ID: <612D7491-88EE-4345-BBC3-448D6028D4EA@windriver.com> I don’t believe this is documented yet. I believe the approach will be: * All the OpenStack external endpoints will be behind an nginx ingress controller * i.e. all openstack service api endpoints and horizon * I know horizon is currently on a host port, but will be moved behind nginx eventually * Nginx is on ports 80 and 443 (http and https) * And NOTE uses the FQDN of the destination IP to route to the specific service * i.e. remote client MUST use FQDN * This also IMPLIES that the domain of the platform’s coredns server must be uniquely configured for the cloud, and the DNS server that you are using remotely must either delegate this domain to the platform’s coredns server or have these FQDNs configured on its own. Here’s the pic I drew: [cid:image001.png at 01D4E85E.385C2600] Greg. From: Curtis Date: Friday, March 29, 2019 at 12:54 PM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Remote access to APIs, Horizon Hi All, Are there docs or what is the current thinking around remote access of APIs and Horizon for OpenStack deployments in STX? I looked around a bit but might of missed it if there are some. I see endpoints with cluster ips and a nodeport for Horizon on port 31000, after that I'm outta the loop. :) Thanks, Curtis -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 166006 bytes Desc: image001.png URL: From serverascode at gmail.com Mon Apr 1 17:01:46 2019 From: serverascode at gmail.com (Curtis) Date: Mon, 1 Apr 2019 13:01:46 -0400 Subject: [Starlingx-discuss] Remote access to APIs, Horizon In-Reply-To: <612D7491-88EE-4345-BBC3-448D6028D4EA@windriver.com> References: <612D7491-88EE-4345-BBC3-448D6028D4EA@windriver.com> Message-ID: On Mon, Apr 1, 2019 at 7:41 AM Waines, Greg wrote: > I don’t believe this is documented yet. > > > > I believe the approach will be: > > - All the OpenStack external endpoints will be behind an nginx ingress > controller > - i.e. all openstack service api endpoints and horizon > - I know horizon is currently on a host port, but will be moved > behind nginx eventually > - Nginx is on ports 80 and 443 (http and https) > - And NOTE uses the FQDN of the destination IP to route to the > specific service > - i.e. 
remote client MUST use FQDN > - This also IMPLIES that the domain of the platform’s coredns > server must be uniquely configured for the cloud, and > the DNS server that you are using remotely must either delegate > this domain to the platform’s coredns server or have these FQDNs configured > on its own. > > > Ok thanks Greg, that is helpful. Thanks, Curtis > Here’s the pic I drew: > > > > Greg. > > > > *From: *Curtis > *Date: *Friday, March 29, 2019 at 12:54 PM > *To: *"starlingx-discuss at lists.starlingx.io" < > starlingx-discuss at lists.starlingx.io> > *Subject: *[Starlingx-discuss] Remote access to APIs, Horizon > > > > Hi All, > > > > Are there docs or what is the current thinking around remote access of > APIs and Horizon for OpenStack deployments in STX? I looked around a bit > but might of missed it if there are some. > > > > I see endpoints with cluster ips and a nodeport for Horizon on port 31000, > after that I'm outta the loop. :) > > > > Thanks, > > Curtis > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 166006 bytes Desc: not available URL: From ada.cabrales at intel.com Mon Apr 1 21:19:06 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 1 Apr 2019 21:19:06 +0000 Subject: [Starlingx-discuss] [ Test ] meeting agenda - 4/2/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDB1ECD@FMSMSX114.amr.corp.intel.com> Agenda for 4/2 1. Regression tests submission - all 2. Feature testing: OVS-DPDK upversion - Elio Containerized OVS DPDK firewall OVS process monitoring SDN enabling Ceph upgrade - Fernando Containerized OpenStack services - Jose, Numan OpenStack path elimination - JC, Numan Ansible bootstrap deployment - Numan Collectd infra - Numan Barbican for keystone - Numan Distributed cloud - Numan 3. Performance - Victor 4. Opens From Ghada.Khalil at windriver.com Mon Apr 1 22:52:07 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 1 Apr 2019 22:52:07 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> Hi Chenjie, Can you let us know the model of your server? Does it have skylake processors? Can you also double-check that you have Sub-NUMA Clustering disabled in the BIOS? On a wolfpass, the BIOS setting is at: Enter Setup > Advanced > Memory Configuration > Memory RAS and Performance Configuration Sub-NUMA Cluster Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Wednesday, March 27, 2019 9:59 PM To: Martinez Monroy, Elio; Peters, Matt; Khalil, Ghada; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, I run the following commands on compute-0. The outputs of commands have been attached. 
sudo /usr/sbin/dmidecode > dmidecode.txt virsh nodeinfo > nodeinfo.txt /usr/bin/topology > topology.txt grep -i numa /var/log/dmesg > dmesg.txt And the StarlingX is installed with standard 0322 ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190322T174230Z/outputs/iso/ Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Mon Apr 1 23:43:32 2019 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Mon, 1 Apr 2019 23:43:32 +0000 Subject: [Starlingx-discuss] Edge Computing Use Case, Deployment Advice Needed In-Reply-To: References: Message-ID: > I added some points/questions inline. Thanks Curtis for your time! > > We are integrating this demo in our spare time to ramp up in cloud > > technologies and one of its imperatives is a working solution. It started as a use > > case proposal around unmanned aerial systems [0], then decided to avoid some > > of the complexity involved in flying the drones, and finally landed it as a use case > > around home automation / smart cities at the network edge. > First off, I'd like to let people know that we are planning on doing some kind of > "edge" proof-of-concept with Packet.com resources, so perhaps the project you > discuss could fit in with that. I'm sure we'll chat about it at some point here. > > At the next TSC meeting we'll discuss how to get the packet projects off the > ground, so feel free to attend. :) Awesome! We will be paying attention to community communications about this topic. > > This demo has currently integrated the following acceleration resources: > > - GPU > > - VPU (Movidius NCS) > I would not expect a USB device like the Movidius NCS to be available in most > STX deployments, but maybe? Maybe, Movidius NCS seems to be one of one those exploration paths to offload some workloads, and where budget could make a difference in comparison with FPGAs. > > [ StarlingX Deployment ] [ Offload ] > > What would be the preferred way to deploy this use case proposal in > > StarlingX? We understand the following options are available including its > > preference: > > > > 1. Via Kubernetes (Not Preferred) > > 2. Via Virtual Machine (Preferred) > > 3. Via Bare Metal (Preferred) > > > > Are the above options and their preference, correct? If not, can you > > please give us some hints behind your answer. > From my standpoint, I think #3 would be the least common option. #2 would be > a good place to start, but I don't think #1 is "not preferred", I guess it depends > on where these preferences are coming from. Understood, we think it is worth to try option 2 initially at least for the core applications of the use case. > > [ StarlingX Deployment ] [ Provisioning ] > > > > As mentioned at the beginning, another of our imperatives, is to > > exercise zero touch provisioning. > > > > Does it makes sense to split the provisioning in 2 parts based in the > > required time for the demo components to live? > > > > - The core applications 100% uptime > > - Services on demand / 100 uptime in some cases > By zero touch provisioning do you just mean automation using IaaS APIs? eg. > the docker compose file you link to? Or something else? We understand the term from its definition but that "something else" is not in our knowledge yet. From our current understanding, that zero touch provisioning will allow us to deploy with one single instruction: - The core applications part of the use case (e.g. 
access to the different dashboards) - The services part of the use case: the start and stop of X service (e.g. face recognition, object recognition, etc.) for each of the wanted video streams. We will appreciate if you can share any online resource where we can learn more about this zero touch concept in a practical way (e.g. whitepaper, use case) so we can land into our use case. Again, thank you Curtis for your time and help to answer our questions. From Ghada.Khalil at windriver.com Tue Apr 2 00:18:54 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Tue, 2 Apr 2019 00:18:54 +0000 Subject: [Starlingx-discuss] DRAFT release policy In-Reply-To: <9A85D2917C58154C960D95352B22818BD070A293@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD0709720@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A26A@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A293@fmsmsx123.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D3E16@ALA-MBD.corp.ad.wrs.com> Hi Bruce, I have a comment regarding this point: The severity and number of bugs open against the release Proposal: No open Critical or High severity bugs against the release candidate. Or maybe 1-3 Highs if we have a clear resolution plan (and a plan to release a patch against the release?) (ghada) We use an explicit tag to identify which bugs gate a particular release (regardless of severity). The whole list will have to be reviewed and scrubbed prior to reaching the release milestone. I don't feel it is sufficient to only review Critical / Major issues. [Example: On April 1/2019, there are 85 release gating bugs: only 13 are Critical/High. Yet it wouldn't be sufficient to only fix those 13 to ensure a quality release]. In the Release Planning wiki[1] , we have previously stated this policy: All release gating issues are addressed or reviewed/accepted for deferral I feel we need to keep this. [1] https://wiki.openstack.org/wiki/StarlingX/Release_Plan -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, March 29, 2019 2:10 PM To: Jones, Bruce E; Dean Troyer; Seiler, Glenn Cc: starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy I've updated the etherpad with changes that reflect the current feedback. Please review and add any additional feedback there. Thank you! https://etherpad.openstack.org/p/stx-release-policy-draft brucej -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, March 29, 2019 10:06 AM To: Dean Troyer ; Seiler, Glenn Cc: starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy Dean, Glenn - thank you for the feedback. I agree with it. There is also some feedback in the etherpad. I'm going to respond to both sets in the etherpad and try to improve the policy and the wording. https://etherpad.openstack.org/p/stx-release-policy-draft Thanks! brucej -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Thursday, March 28, 2019 7:29 PM To: Seiler, Glenn Cc: Jones, Bruce E ; starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy On Thu, Mar 28, 2019 at 6:39 PM Seiler, Glenn wrote: > 1- We need to move away from time-based releases > 2- We need to do twice a year releases. This stmt, by definition, implies a time-gated release. Maybe it isn’t a specific date, but it is still time-gated. The wording does need work, yes. 
After a short conversation with Bruce this afternoon (Bruce, correct me if I'm wrong here) I came away with the intention being more of increasing the lag from OpenStack releases rather than separating completely from the OpenStack release cycle which is likely to stay at approx 6 months for a while (that's a rabbit hole under the bike shed I'd like to avoid just now). > As a nascent project, I think we need to show gradual and consistent progress. ++ > I did listen to much of the release team meeting today, and realize the trade-offs between big-rocks and timing are very difficult. > > Given the difficult choice of functionality versus timing, I personally think we need to show progress in getting to Stein and a container based distribution as major milestones in 1H and perhaps defer the Distributed Cloud capability to a 2H release. > > I don’t see anything intrinsically wrong with moving a specific date out; it happens all the time. But I also think a release should have some gate; i.e. we don’t move out of 1H. And if some functionality isn’t ready, then we move the functionality to another release in 2H. We took a stab at estimating and missed, making adjustments now is normal and to be expected. I agree with considering pushing distcloud to the next release because it is a) new functionality, and b) devs overlap with the container work and I think making the k8s infrastructure rock solid is much more important. If we are too far off with system stability the ramifications will be harder to overcome than delaying a new feature. > Anyway, that would be my vote, if I have one. You totally have a voice as part of the community, I would like to hear from more folks here... dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From maria.g.perez.ibarra at intel.com Tue Apr 2 02:58:42 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 2 Apr 2019 02:58:42 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190331 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Mar-31 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 55 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 63 TCs FAIL ] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] | 01 TCs [FAIL] Sanity Platform 04 TCs [PASS] | 01 TCs [FAIL] TOTAL: [ 56 TCs PASS | 02 TCs FAIL ] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 16 TCs [PASS] | 36 TCs [FAIL] Sanity Platform 04 TCs [PASS] | 01 TCs [FAIL] TOTAL: [ 21 TCs PASS | 37 TCs FAIL ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 62 TCs PASS ] Standard - Dedicated Storage 
Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 62 TCs PASS ] ------------------------------------------------------------------ Simplex BM and Virtual - application-apply fails at 95% ceilometer pod Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1820928 Standard Dedicated Storage BM and Virtual - Could not create VMs Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1821841 ------------------------------------------------------------------ This is the list of test cases executed: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests Regards. Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From wei.w.hu at intel.com Tue Apr 2 03:19:54 2019 From: wei.w.hu at intel.com (Hu, Wei W) Date: Tue, 2 Apr 2019 03:19:54 +0000 Subject: [Starlingx-discuss] StarlingX @Open Source Hackathon Event in PRC Shenzhen on 4.18~4.20 Message-ID: Hi, StarlingXers: The 9th Open Source Hackathon Event will be held from April 18 to 20, 2019, we are very pleased to invite you to attend the Event in Shenzhen. Hackathon has always been a community activity that focuses on "Engineer Output", provides a face-to-face exchange platform for engineers in the community and core and maintainers. The 9th Hackathon Event in 2019 is jointly held by Intel, HUAWEI, Tencent and CESI, covering 12 open source projects including StarlingX. All engineers could do mission critical bug fixes, hands-on training and new feature development in 3 days. We sincerely look forward to your participation. More info see below links: https://etherpad.openstack.org/p/OpenSource-Hackathon-9-Shenzhen If any questions you can reach me or Shuquan(cced), we will help. -Wei Hu -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Apr 2 11:43:32 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 2 Apr 2019 11:43:32 +0000 Subject: [Starlingx-discuss] Community Call (April 3, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A2306B@ALA-MBD.corp.ad.wrs.com> Reminder of tomorrow's Community call - please feel free to add to the agenda at [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1500_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190403T1400 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Apr 2 13:13:17 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 2 Apr 2019 13:13:17 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack Distro meeting, 4/3 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35EF58BC@SHSMSX104.ccr.corp.intel.com> Agenda for 4/3 meeting: - Ceph upgrade update 1. generate PR for stx-Ceph on staging (Changcheng) 2. patch submitted to StarlingX repos (Daniel) 3. 
System testing prepration (Fernando) - DevStack update (Dean/Yi) - Libvirt/qemu patch removal: SB#2005212 (Jim) - Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Waheed, Numan; Hu, Yong; starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Shang, Dehao; Wold, Saul; Lin, Shuicheng; Zhu, Vivian; Somerville, Jim; Sun, Austin; 'Khalil, Ghada'; Jones, Bruce E; Troyer, Dean; 'Rowsell, Brent' Cc: 'Poncea, Ovidiu'; Gomez, Juan P; Lara, Cesar; Fang, Liang A; Cobbley, David A; 'Chen, Jacky'; Perez Rodriguez, Humberto I; Martinez Monroy, Elio; 'Seiler, Glenn'; Arce Moreno, Abraham; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Shuquan Huang; 'Young, Ken'; Perez Carranza, Jose; Hu, Wei W; Martinez Landa, Hayde; Armstrong, Robert H Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, April 3, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From fernando.hernandez.gonzalez at intel.com Tue Apr 2 16:30:10 2019 From: fernando.hernandez.gonzalez at intel.com (Hernandez Gonzalez, Fernando) Date: Tue, 2 Apr 2019 16:30:10 +0000 Subject: [Starlingx-discuss] Horizon Patch elimination Test Case development. In-Reply-To: <03D458D5BAFF6041973594B00B4E58CE5D6969D0@CRSMSX104.amr.corp.intel.com> References: <03D458D5BAFF6041973594B00B4E58CE5D6969D0@CRSMSX104.amr.corp.intel.com> Message-ID: <03D458D5BAFF6041973594B00B4E58CE5D696D46@CRSMSX104.amr.corp.intel.com> Hi all/Kristine, I was assigned to develop the test cases for horizon patch elimination Link. Could you please help me out with following questions/comments, please see yellow highlighted. # Type Task Name Project Prime Group Prime / Dev Lead May Release Forecast Backport Candidate Next Step 1 Feature Move Items to stx-gui Horizon StarlingX Kristine Bujold Likely N/A Fernando_4_01_19: @Kristine, could you please share the list of the tabs to verify and how to get them? on Horizon:8080 I went through Admin --> Host inventory and I did not find Port Forwarding tab. Regarding services tab, are you talking about API services? Feb 19 (bz): in progress, some work to be done after Stein cutover (re: port forwarding); it'll be done by mid-March, it's not upstreaming, so no risk 2 Feature Move Branding Extensions to stx-gui Horizon StarlingX Kristine Bujold Done N/A Fernando_4_01_19: @Kristine, this is more like aesthetic validation, do we need to test this every build or just check one time if the Starlingx theme/Logo/Style were applied correctly? Against what I can check the Starlingx theme? Where are the images themes? Per Brent, this is done. 3 Feature Refactor FM Panel to Use AngularJS Horizon StarlingX Kristine Bujold Done N/A Fernando_4_01_19: @Kristine, could you please provide more information regarding this requirement description is generic not sure what to test. Merged, just needs testing post branch cut-over Thanks in advance. 
Fernando Hernandez Gonzalez
Software Engineer
Avenida del Bosque #1001 Col, El Bajío
Zapopan, Jalisco MX, 45019
____________________________________
Office: +52.33.16.45.01.34 inet 86450134

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From bruce.e.jones at intel.com Tue Apr 2 17:02:07 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Tue, 2 Apr 2019 17:02:07 +0000
Subject: [Starlingx-discuss] DRAFT release policy
In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4D3E16@ALA-MBD.corp.ad.wrs.com>
References: <9A85D2917C58154C960D95352B22818BD0709720@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A26A@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A293@fmsmsx123.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3E16@ALA-MBD.corp.ad.wrs.com>
Message-ID: <9A85D2917C58154C960D95352B22818BD070C76C@fmsmsx123.amr.corp.intel.com>

Good feedback, Ghada, thank you. I have updated the document accordingly. I've also made a few other changes, like adding test case readiness as a release criterion and proposing a method for handling anchor features that aren't completed by MS-3. I've removed the previously resolved (and much appreciated) feedback in the interests of readability. Link and updated text below.

Brucej

This file is: https://etherpad.openstack.org/p/stx-release-policy-draft

This is a draft document for release planning. Comments and feedback welcomed!

OpenStack Release Policy
====================
The StarlingX project follows the release model defined in https://docs.openstack.org/project-team-guide/release-management.html using the "Trailing the common cycle" model due to our dependency on upstream OpenStack projects.

Release Planning
==============
Initial release planning starts at the Open Infrastructure PTG meetings, where the TSC and community members discuss candidate features for the next release. The TSC then will review and approve a feature list for the release, will identify any release gating "anchor" features, and set a target date. The recommended target date is the date of the next OpenStack release plus 6 weeks. The PTG meeting is also an opportunity to review the community's goals and to define goals for the release.

The overall Release Plan is created and managed by the Release sub-team by combining the TSC's input on content and target dates with input from the feature developers in the community and the Test team. The plan will include a standard set of milestones as per the usual OpenStack release management process. The Release team will actively manage the plan over the course of the release, recommending any adjustments in content and dates to the community and to the TSC for approval.

We recognize that we are a new community working in a highly dynamic technology and that changes in our plans over time are normal and expected. We will work as a community to be open and transparent about our release process, and to minimize change from the original plan.

Open issue: We should consider changing our release naming convention to something that isn't a date.

Defect Tracking
============
The release team shall review active and incoming bug reports and make an initial call as to whether or not the bug needs to be fixed in the next release. If so, the bugs shall be tagged and tracked as the work on the release progresses. The list of release gating bugs will be actively managed, reviewed and scrubbed by the Release team to ensure that bugs are properly categorized as release gating.

Release Policy
===========
The Release team, together with the Test team TLs/PLs, shall make a recommendation to the community and TSC that a release is ready to go. Upon TSC approval, the release branches are tagged and the release documented. That recommendation should be based on:

* Whether or not all anchor features in the release are complete, as per the input of the team(s) implementing the features and the results of Test team testing of the features
  * Proposal: All features identified as anchor features for a release need to be completed by the feature freeze milestone (MS-3). In the event that an anchor feature is not complete before the release feature freeze milestone, the Release team will make a recommendation to the TSC to extend the milestone date or to defer the feature to the next release.
* Whether or not all test cases planned for the release are complete and ready to run
  * Proposal: All planned test cases shall be ready before the start of formal release candidate testing (RC1 milestone)
* The completion and results of formal Testing performed by the Test team, measured by the percentage of planned tests attempted and the test pass rate
  * Proposal: 100% test cases attempted and 95% test cases passing in all configurations
* The status of release gating bugs
  * Proposal: All release gating bugs must be fixed prior to a StarlingX release, ideally before the RC1 milestone but certainly before the final release.

Bugfix Releases
=============
The Release team, in conjunction with the community, can create a plan for a bug fix release. This would be an update to a previous release to address important defects that are impacting our users. The content would be fixes backported from master to the release branch, and would be based on both community and developer input regarding which fixes should be included. Testing of a bug fix release should include at least verification testing of the fixes and any additional testing needed as determined by the Test team.

-----Original Message-----
From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com]
Sent: Monday, April 1, 2019 5:19 PM
To: Jones, Bruce E ; Dean Troyer ; Seiler, Glenn
Cc: starlingx-discuss
Subject: RE: [Starlingx-discuss] DRAFT release policy

Hi Bruce,
I have a comment regarding this point:
    The severity and number of bugs open against the release
    Proposal: No open Critical or High severity bugs against the release candidate. Or maybe 1-3 Highs if we have a clear resolution plan (and a plan to release a patch against the release?)
(ghada) We use an explicit tag to identify which bugs gate a particular release (regardless of severity). The whole list will have to be reviewed and scrubbed prior to reaching the release milestone. I don't feel it is sufficient to only review Critical / Major issues. [Example: On April 1/2019, there are 85 release gating bugs: only 13 are Critical/High. Yet it wouldn't be sufficient to only fix those 13 to ensure a quality release].

In the Release Planning wiki[1], we have previously stated this policy:
    All release gating issues are addressed or reviewed/accepted for deferral
I feel we need to keep this.

[1] https://wiki.openstack.org/wiki/StarlingX/Release_Plan

-----Original Message-----
From: Jones, Bruce E [mailto:bruce.e.jones at intel.com]
Sent: Friday, March 29, 2019 2:10 PM
To: Jones, Bruce E; Dean Troyer; Seiler, Glenn
Cc: starlingx-discuss
Subject: Re: [Starlingx-discuss] DRAFT release policy

I've updated the etherpad with changes that reflect the current feedback.
Please review and add any additional feedback there. Thank you! https://etherpad.openstack.org/p/stx-release-policy-draft brucej -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, March 29, 2019 10:06 AM To: Dean Troyer ; Seiler, Glenn Cc: starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy Dean, Glenn - thank you for the feedback. I agree with it. There is also some feedback in the etherpad. I'm going to respond to both sets in the etherpad and try to improve the policy and the wording. https://etherpad.openstack.org/p/stx-release-policy-draft Thanks! brucej -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Thursday, March 28, 2019 7:29 PM To: Seiler, Glenn Cc: Jones, Bruce E ; starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy On Thu, Mar 28, 2019 at 6:39 PM Seiler, Glenn wrote: > 1- We need to move away from time-based releases > 2- We need to do twice a year releases. This stmt, by definition, implies a time-gated release. Maybe it isn’t a specific date, but it is still time-gated. The wording does need work, yes. After a short conversation with Bruce this afternoon (Bruce, correct me if I'm wrong here) I came away with the intention being more of increasing the lag from OpenStack releases rather than separating completely from the OpenStack release cycle which is likely to stay at approx 6 months for a while (that's a rabbit hole under the bike shed I'd like to avoid just now). > As a nascent project, I think we need to show gradual and consistent progress. ++ > I did listen to much of the release team meeting today, and realize the trade-offs between big-rocks and timing are very difficult. > > Given the difficult choice of functionality versus timing, I personally think we need to show progress in getting to Stein and a container based distribution as major milestones in 1H and perhaps defer the Distributed Cloud capability to a 2H release. > > I don’t see anything intrinsically wrong with moving a specific date out; it happens all the time. But I also think a release should have some gate; i.e. we don’t move out of 1H. And if some functionality isn’t ready, then we move the functionality to another release in 2H. We took a stab at estimating and missed, making adjustments now is normal and to be expected. I agree with considering pushing distcloud to the next release because it is a) new functionality, and b) devs overlap with the container work and I think making the k8s infrastructure rock solid is much more important. If we are too far off with system stability the ramifications will be harder to overcome than delaying a new feature. > Anyway, that would be my vote, if I have one. You totally have a voice as part of the community, I would like to hear from more folks here... dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From tyler.smith at windriver.com Tue Apr 2 17:28:00 2019 From: tyler.smith at windriver.com (Smith, Tyler) Date: Tue, 2 Apr 2019 17:28:00 +0000 Subject: [Starlingx-discuss] Horizon Patch elimination Test Case development. 
In-Reply-To: <03D458D5BAFF6041973594B00B4E58CE5D696D46@CRSMSX104.amr.corp.intel.com> References: <03D458D5BAFF6041973594B00B4E58CE5D6969D0@CRSMSX104.amr.corp.intel.com> <03D458D5BAFF6041973594B00B4E58CE5D696D46@CRSMSX104.amr.corp.intel.com> Message-ID: Hi, I responded to 1 & 2 inline Tyler From: Hernandez Gonzalez, Fernando [mailto:fernando.hernandez.gonzalez at intel.com] Sent: Tuesday, April 2, 2019 12:30 PM To: Waheed, Numan ; Bujold, Kristine ; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Horizon Patch elimination Test Case development. Hi all/Kristine, I was assigned to develop the test cases for horizon patch elimination Link. Could you please help me out with following questions/comments, please see yellow highlighted. # Type Task Name Project Prime Group Prime / Dev Lead May Release Forecast Backport Candidate Next Step 1 Feature Move Items to stx-gui Horizon StarlingX Kristine Bujold Likely N/A Fernando_4_01_19: @Kristine, could you please share the list of the tabs to verify and how to get them? on Horizon:8080 I went through Admin --> Host inventory and I did not find Port Forwarding tab. Regarding services tab, are you talking about API services? [TS] Ended up not having to port these additions to the gui, other than the controller services tab which David did (It can be found under admin->system info on the platform horizon) Feb 19 (bz): in progress, some work to be done after Stein cutover (re: port forwarding); it'll be done by mid-March, it's not upstreaming, so no risk 2 Feature Move Branding Extensions to stx-gui Horizon StarlingX Kristine Bujold Done N/A Fernando_4_01_19: @Kristine, this is more like aesthetic validation, do we need to test this every build or just check one time if the Starlingx theme/Logo/Style were applied correctly? Against what I can check the Starlingx theme? Where are the images themes? [TS] This was a behind-the-scenes change, I recently did more work on this too with the rebase to stein. I would think that a one-time test would be sufficient. As for how to test I suppose you could compare it to an earlier release from before this work was submitted. Note that some things were intentionally changed, such as the alarm banner positioning and lack of time display Per Brent, this is done. 3 Feature Refactor FM Panel to Use AngularJS Horizon StarlingX Kristine Bujold Done N/A Fernando_4_01_19: @Kristine, could you please provide more information regarding this requirement description is generic not sure what to test. Merged, just needs testing post branch cut-over Thanks in advance. Fernando Hernandez Gonzalez Software Engineer Avenida del Bosque #1001 Col, El Bajío Zapopan, Jalisco MX, 45019 ____________________________________ Office: +52.33.16.45.01.34 inet 86450134 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Kristine.Bujold at windriver.com Tue Apr 2 17:35:21 2019 From: Kristine.Bujold at windriver.com (Bujold, Kristine) Date: Tue, 2 Apr 2019 17:35:21 +0000 Subject: [Starlingx-discuss] Horizon Patch elimination Test Case development. In-Reply-To: <03D458D5BAFF6041973594B00B4E58CE5D696D46@CRSMSX104.amr.corp.intel.com> References: <03D458D5BAFF6041973594B00B4E58CE5D6969D0@CRSMSX104.amr.corp.intel.com> <03D458D5BAFF6041973594B00B4E58CE5D696D46@CRSMSX104.amr.corp.intel.com> Message-ID: <5ECD8395442B0C4FB807F9737625BB6761BB4D7B@ALA-MBD.corp.ad.wrs.com> Hi Fernando, The FaultManagement Panel page was originally build using Django framework. It was refactored to use the AngularJS framework. 
This means all GUI pages regarding FM panels should be re-tested. The "Related Alarms" tabs under "Admin/Platform/Data Network Topology" still use Django, but users will be redirected to the new FM Active Alarms panel if an alarm is clicked on for more details.

Thanks,
Kristine

From: Gonzalez, Fernando [mailto:fernando.hernandez.gonzalez at intel.com]
Sent: Tuesday, April 2, 2019 12:30 PM
To: Waheed, Numan ; Bujold, Kristine ; starlingx-discuss at lists.starlingx.io
Subject: Horizon Patch elimination Test Case development.

Hi all/Kristine,

I was assigned to develop the test cases for Horizon patch elimination (Link). Could you please help me out with the following questions/comments? Please see my notes below.

# | Type | Task Name | Project | Prime Group | Prime / Dev Lead | May Release Forecast | Backport Candidate | Next Step

1 - Feature - Move Items to stx-gui - Horizon - StarlingX - Kristine Bujold - Likely - N/A
    Fernando_4_01_19: @Kristine, could you please share the list of the tabs to verify and how to get them? On Horizon:8080 I went through Admin --> Host Inventory and I did not find the Port Forwarding tab. Regarding the Services tab, are you talking about API services?
    Feb 19 (bz): in progress, some work to be done after Stein cutover (re: port forwarding); it'll be done by mid-March, it's not upstreaming, so no risk

2 - Feature - Move Branding Extensions to stx-gui - Horizon - StarlingX - Kristine Bujold - Done - N/A
    Fernando_4_01_19: @Kristine, this is more like aesthetic validation; do we need to test this every build or just check one time whether the StarlingX theme/logo/style was applied correctly? Against what can I check the StarlingX theme? Where are the theme images?
    Per Brent, this is done.

3 - Feature - Refactor FM Panel to Use AngularJS - Horizon - StarlingX - Kristine Bujold - Done - N/A
    Fernando_4_01_19: @Kristine, could you please provide more information? The description of this requirement is generic and I am not sure what to test.
    Merged, just needs testing post branch cut-over

Thanks in advance.

Fernando Hernandez Gonzalez
Software Engineer
Avenida del Bosque #1001 Col, El Bajío
Zapopan, Jalisco MX, 45019
____________________________________
Office: +52.33.16.45.01.34 inet 86450134

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Bill.Zvonar at windriver.com Tue Apr 2 18:23:43 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Tue, 2 Apr 2019 18:23:43 +0000
Subject: [Starlingx-discuss] DRAFT release policy
In-Reply-To: <9A85D2917C58154C960D95352B22818BD070C76C@fmsmsx123.amr.corp.intel.com>
References: <9A85D2917C58154C960D95352B22818BD0709720@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A26A@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A293@fmsmsx123.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3E16@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BD070C76C@fmsmsx123.amr.corp.intel.com>
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A23418@ALA-MBD.corp.ad.wrs.com>

Hi Bruce,

If none of the anchor features will be ready by MS-3, then we have no choice but to reforecast. Then we get into the question of by how much, and which, if any, anchor features can be excluded (which doesn't totally make sense to me - why would we have called it an anchor in the first place).

This Release Churn is another heading that we need to add to the release policy, and whatever we come up with, we'll need to get it approved by the TSC, I think.

Bill...
-----Original Message----- From: Jones, Bruce E Sent: Tuesday, April 2, 2019 1:02 PM To: Khalil, Ghada Cc: starlingx-discuss ; Dean Troyer ; Seiler, Glenn Subject: Re: [Starlingx-discuss] DRAFT release policy Good feedback, Ghada, thank you. I have updated the document accordingly. I've also made a few other changes like adding test case readiness as a release criteria and proposed a method for handing anchor features that aren't completed by MS-3. I've removed the previously resolved (and much appreciated) feedback in the interests of readability. Link and updated text below. Brucej This file is: https://etherpad.openstack.org/p/stx-release-policy-draft This is a draft document for release planning. Comments and feedback welcomed! Openstack Release Policy ==================== The StarlingX project follows the release model defined in https://docs.openstack.org/project-team-guide/release-management.html using the "Trailing the common cycle" due to our dependency on upstream OpenStack projects. Release Planning ============== Initial release planning starts at the Open Infrastructure PTG meetings, where the TSC and community members discuss candidate features for the next release. The TSC then will review and approve a feature list for the release, will identify any release gating "anchor" features, and set a target date. The recommended target date is the date of the next OpenStack release plus 6 weeks. The PTG meeting is also an opportunity to review the community's goals and to define goals for the release. The overall Release Plan is created and managed by the Release sub-team by combining the TSC's input on content and target dates with input from the feature developers in the community and the Test team. The plan will include a standard set of milestones as per the usual OpenStack release management process. The Release team will actively manage the plan over the course of the release, recommending any adjustments in content and dates to the community and to the TSC for approval. We recognize that we are a new community working in a highly dynamic technology and that changes in our plans over time are normal and expected. We will work as a community to be open and transparent about our release process, and to minimize change from the original plan. Open issue: We should consider changing our release naming convention to something that isn't a date. Defect Tracking ============ The release team shall review active and incoming bug reports and make an initial call as to whether or not the bug needs to be fixed in the next release. If so, the bugs shall be tagged and tracked as the work on the release progresses. The list of release gating bugs will be actively managed, reviewed and scrubbed by the Release team to ensure that bugs are properly categorized as release gating. Release Policy =========== The Release team, together with the Test team TLs/PLs, shall make a recommendation to the community and TSC that a release is ready to go. Upon TSC approval, the release branches are tagged and the release documented. That recommendation should be based on: * Whether or not all anchor features in the release are complete, as per the input of the team(s) implementing the features and the results of Test team testing of the features * Proposal: All features identified as anchor features for a release need to be completed by the feature freeze milestone (MS-3). 
In the event that an anchor feature is not complete before the release feature freeze milestone, the Release team will make a recommendation to the TSC to extend the milestone date or to defer the feature to the next release. * Whether or not all test cases planned for the release are complete and ready to run * Proposal: All planned test cases shall be ready before the start of formal release candidate testing (RC1 milestone) * The completion and results of formal Testing performed by the Test team, measured by the percentage of planned tests attempted and the test pass rate * Proposal: 100% test cases attempted and 95% test cases passing in all configurations * The status of release gating bugs * Proposal: All release gating bugs must be fixed prior to a StarlingX release, ideally before the RC1 milestone but certainly before the final release. Bugfix Releases ============= The Release team, in conjunction with the community, can create a plan for a bug fix release. This would be an update to a previous release to address important defects that are impacting our users. The content would be fixes backported from master to the release branch, and would be based on both community and developer input regarding which fixes should be included. Testing of a bug fix release should include at least verification testing of the fixes and any additional testing needed as determined by the Test team. -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Monday, April 1, 2019 5:19 PM To: Jones, Bruce E ; Dean Troyer ; Seiler, Glenn Cc: starlingx-discuss Subject: RE: [Starlingx-discuss] DRAFT release policy Hi Bruce, I have a comment regarding this point: The severity and number of bugs open against the release Proposal: No open Critical or High severity bugs against the release candidate. Or maybe 1-3 Highs if we have a clear resolution plan (and a plan to release a patch against the release?) (ghada) We use an explicit tag to identify which bugs gate a particular release (regardless of severity). The whole list will have to be reviewed and scrubbed prior to reaching the release milestone. I don't feel it is sufficient to only review Critical / Major issues. [Example: On April 1/2019, there are 85 release gating bugs: only 13 are Critical/High. Yet it wouldn't be sufficient to only fix those 13 to ensure a quality release]. In the Release Planning wiki[1] , we have previously stated this policy: All release gating issues are addressed or reviewed/accepted for deferral I feel we need to keep this. [1] https://wiki.openstack.org/wiki/StarlingX/Release_Plan -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, March 29, 2019 2:10 PM To: Jones, Bruce E; Dean Troyer; Seiler, Glenn Cc: starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy I've updated the etherpad with changes that reflect the current feedback. Please review and add any additional feedback there. Thank you! https://etherpad.openstack.org/p/stx-release-policy-draft brucej -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, March 29, 2019 10:06 AM To: Dean Troyer ; Seiler, Glenn Cc: starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy Dean, Glenn - thank you for the feedback. I agree with it. There is also some feedback in the etherpad. I'm going to respond to both sets in the etherpad and try to improve the policy and the wording. 
https://etherpad.openstack.org/p/stx-release-policy-draft Thanks! brucej -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Thursday, March 28, 2019 7:29 PM To: Seiler, Glenn Cc: Jones, Bruce E ; starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy On Thu, Mar 28, 2019 at 6:39 PM Seiler, Glenn wrote: > 1- We need to move away from time-based releases > 2- We need to do twice a year releases. This stmt, by definition, implies a time-gated release. Maybe it isn’t a specific date, but it is still time-gated. The wording does need work, yes. After a short conversation with Bruce this afternoon (Bruce, correct me if I'm wrong here) I came away with the intention being more of increasing the lag from OpenStack releases rather than separating completely from the OpenStack release cycle which is likely to stay at approx 6 months for a while (that's a rabbit hole under the bike shed I'd like to avoid just now). > As a nascent project, I think we need to show gradual and consistent progress. ++ > I did listen to much of the release team meeting today, and realize the trade-offs between big-rocks and timing are very difficult. > > Given the difficult choice of functionality versus timing, I personally think we need to show progress in getting to Stein and a container based distribution as major milestones in 1H and perhaps defer the Distributed Cloud capability to a 2H release. > > I don’t see anything intrinsically wrong with moving a specific date out; it happens all the time. But I also think a release should have some gate; i.e. we don’t move out of 1H. And if some functionality isn’t ready, then we move the functionality to another release in 2H. We took a stab at estimating and missed, making adjustments now is normal and to be expected. I agree with considering pushing distcloud to the next release because it is a) new functionality, and b) devs overlap with the container work and I think making the k8s infrastructure rock solid is much more important. If we are too far off with system stability the ramifications will be harder to overcome than delaying a new feature. > Anyway, that would be my vote, if I have one. You totally have a voice as part of the community, I would like to hear from more folks here... 
dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Tue Apr 2 19:37:34 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 2 Apr 2019 19:37:34 +0000 Subject: [Starlingx-discuss] DRAFT release policy In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC0A23418@ALA-MBD.corp.ad.wrs.com> References: <9A85D2917C58154C960D95352B22818BD0709720@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A26A@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A293@fmsmsx123.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3E16@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BD070C76C@fmsmsx123.amr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC0A23418@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BD070C8D8@fmsmsx123.amr.corp.intel.com> Bill, yes if all anchor features are unready, there will be a delay for MS-3. Regarding the how much, and which features, and the churn you describe, I think (hope!) that the topic is covered already in the draft. The proposal is that it is managed by the Release team who makes recommendations to the TSC. brucej -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, April 2, 2019 11:24 AM To: Jones, Bruce E ; Khalil, Ghada Cc: starlingx-discuss ; Dean Troyer ; Seiler, Glenn Subject: RE: [Starlingx-discuss] DRAFT release policy Hi Bruce, If none of the anchor features will be ready by MS-3, then we have no choice but to reforecast. Then we get into the question of by how much, and which, if any anchor features can be excluded (which doesn't totally make sense to me - why would we have called it an anchor in the first place). This Release Churn is another heading that we need to add to the release policy, and whatever we come up with, we'll need to get it approved by the TSC, I think. Bill... -----Original Message----- From: Jones, Bruce E Sent: Tuesday, April 2, 2019 1:02 PM To: Khalil, Ghada Cc: starlingx-discuss ; Dean Troyer ; Seiler, Glenn Subject: Re: [Starlingx-discuss] DRAFT release policy Good feedback, Ghada, thank you. I have updated the document accordingly. I've also made a few other changes like adding test case readiness as a release criteria and proposed a method for handing anchor features that aren't completed by MS-3. I've removed the previously resolved (and much appreciated) feedback in the interests of readability. Link and updated text below. Brucej This file is: https://etherpad.openstack.org/p/stx-release-policy-draft This is a draft document for release planning. Comments and feedback welcomed! Openstack Release Policy ==================== The StarlingX project follows the release model defined in https://docs.openstack.org/project-team-guide/release-management.html using the "Trailing the common cycle" due to our dependency on upstream OpenStack projects. 
Release Planning ============== Initial release planning starts at the Open Infrastructure PTG meetings, where the TSC and community members discuss candidate features for the next release. The TSC then will review and approve a feature list for the release, will identify any release gating "anchor" features, and set a target date. The recommended target date is the date of the next OpenStack release plus 6 weeks. The PTG meeting is also an opportunity to review the community's goals and to define goals for the release. The overall Release Plan is created and managed by the Release sub-team by combining the TSC's input on content and target dates with input from the feature developers in the community and the Test team. The plan will include a standard set of milestones as per the usual OpenStack release management process. The Release team will actively manage the plan over the course of the release, recommending any adjustments in content and dates to the community and to the TSC for approval. We recognize that we are a new community working in a highly dynamic technology and that changes in our plans over time are normal and expected. We will work as a community to be open and transparent about our release process, and to minimize change from the original plan. Open issue: We should consider changing our release naming convention to something that isn't a date. Defect Tracking ============ The release team shall review active and incoming bug reports and make an initial call as to whether or not the bug needs to be fixed in the next release. If so, the bugs shall be tagged and tracked as the work on the release progresses. The list of release gating bugs will be actively managed, reviewed and scrubbed by the Release team to ensure that bugs are properly categorized as release gating. Release Policy =========== The Release team, together with the Test team TLs/PLs, shall make a recommendation to the community and TSC that a release is ready to go. Upon TSC approval, the release branches are tagged and the release documented. That recommendation should be based on: * Whether or not all anchor features in the release are complete, as per the input of the team(s) implementing the features and the results of Test team testing of the features * Proposal: All features identified as anchor features for a release need to be completed by the feature freeze milestone (MS-3). In the event that an anchor feature is not complete before the release feature freeze milestone, the Release team will make a recommendation to the TSC to extend the milestone date or to defer the feature to the next release. * Whether or not all test cases planned for the release are complete and ready to run * Proposal: All planned test cases shall be ready before the start of formal release candidate testing (RC1 milestone) * The completion and results of formal Testing performed by the Test team, measured by the percentage of planned tests attempted and the test pass rate * Proposal: 100% test cases attempted and 95% test cases passing in all configurations * The status of release gating bugs * Proposal: All release gating bugs must be fixed prior to a StarlingX release, ideally before the RC1 milestone but certainly before the final release. Bugfix Releases ============= The Release team, in conjunction with the community, can create a plan for a bug fix release. This would be an update to a previous release to address important defects that are impacting our users. 
The content would be fixes backported from master to the release branch, and would be based on both community and developer input regarding which fixes should be included. Testing of a bug fix release should include at least verification testing of the fixes and any additional testing needed as determined by the Test team. -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Monday, April 1, 2019 5:19 PM To: Jones, Bruce E ; Dean Troyer ; Seiler, Glenn Cc: starlingx-discuss Subject: RE: [Starlingx-discuss] DRAFT release policy Hi Bruce, I have a comment regarding this point: The severity and number of bugs open against the release Proposal: No open Critical or High severity bugs against the release candidate. Or maybe 1-3 Highs if we have a clear resolution plan (and a plan to release a patch against the release?) (ghada) We use an explicit tag to identify which bugs gate a particular release (regardless of severity). The whole list will have to be reviewed and scrubbed prior to reaching the release milestone. I don't feel it is sufficient to only review Critical / Major issues. [Example: On April 1/2019, there are 85 release gating bugs: only 13 are Critical/High. Yet it wouldn't be sufficient to only fix those 13 to ensure a quality release]. In the Release Planning wiki[1] , we have previously stated this policy: All release gating issues are addressed or reviewed/accepted for deferral I feel we need to keep this. [1] https://wiki.openstack.org/wiki/StarlingX/Release_Plan -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, March 29, 2019 2:10 PM To: Jones, Bruce E; Dean Troyer; Seiler, Glenn Cc: starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy I've updated the etherpad with changes that reflect the current feedback. Please review and add any additional feedback there. Thank you! https://etherpad.openstack.org/p/stx-release-policy-draft brucej -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, March 29, 2019 10:06 AM To: Dean Troyer ; Seiler, Glenn Cc: starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy Dean, Glenn - thank you for the feedback. I agree with it. There is also some feedback in the etherpad. I'm going to respond to both sets in the etherpad and try to improve the policy and the wording. https://etherpad.openstack.org/p/stx-release-policy-draft Thanks! brucej -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Thursday, March 28, 2019 7:29 PM To: Seiler, Glenn Cc: Jones, Bruce E ; starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy On Thu, Mar 28, 2019 at 6:39 PM Seiler, Glenn wrote: > 1- We need to move away from time-based releases > 2- We need to do twice a year releases. This stmt, by definition, implies a time-gated release. Maybe it isn’t a specific date, but it is still time-gated. The wording does need work, yes. After a short conversation with Bruce this afternoon (Bruce, correct me if I'm wrong here) I came away with the intention being more of increasing the lag from OpenStack releases rather than separating completely from the OpenStack release cycle which is likely to stay at approx 6 months for a while (that's a rabbit hole under the bike shed I'd like to avoid just now). > As a nascent project, I think we need to show gradual and consistent progress. 
++ > I did listen to much of the release team meeting today, and realize the trade-offs between big-rocks and timing are very difficult. > > Given the difficult choice of functionality versus timing, I personally think we need to show progress in getting to Stein and a container based distribution as major milestones in 1H and perhaps defer the Distributed Cloud capability to a 2H release. > > I don’t see anything intrinsically wrong with moving a specific date out; it happens all the time. But I also think a release should have some gate; i.e. we don’t move out of 1H. And if some functionality isn’t ready, then we move the functionality to another release in 2H. We took a stab at estimating and missed, making adjustments now is normal and to be expected. I agree with considering pushing distcloud to the next release because it is a) new functionality, and b) devs overlap with the container work and I think making the k8s infrastructure rock solid is much more important. If we are too far off with system stability the ramifications will be harder to overcome than delaying a new feature. > Anyway, that would be my vote, if I have one. You totally have a voice as part of the community, I would like to hear from more folks here... dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ada.cabrales at intel.com Tue Apr 2 19:38:41 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 2 Apr 2019 19:38:41 +0000 Subject: [Starlingx-discuss] [ Test ] meeting notes - 4/2/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDB2922@FMSMSX114.amr.corp.intel.com> Agenda for 4/2 Attendees: Jose, Numan, Cristopher, JC, Saul, Ada, Fernando, Mawrer, Maria P, Bruce, JP, Elio, Bill 1. Regression tests submission - all Note: include people interested for reviewing (both teams) Nova - working on formatting - not merged yet. Abraham working on it. System inventory - waiting for review System tests - writing of test cases WIP - partial submit expected by EOW (Include Nimalini in the reviewers) Networking - NUMA, SRIOV, Cluster WIP. (Include Chris Winnicki) - for questions include Chris and Numan. 2. Feature testing: ** For all the feature testing owners - get in contact with the development team responsible for the feature and ask for delivery dates. Also, deliver the proposed testing plan by EOD tomorrow. ** OVS-DPDK upversion - Elio Testing WIP - some bugs (4) we found cannot be reproduced by WR. Info requested to be sent today. We have reproduced 3 of them. A report of status will be sent tomorrow (networking team + community). Include the results for each test and the configurations used. Please send the test plan to Numan by email. Containerized OVS DPDK firewall OVS process monitoring SDN enabling Ceph upgrade - Fernando Fernando took this ownership. Work on the dry-run (deployment of the ISO). Let us know when you estimate to finish the config setup. Closing details for the test plan. 
Containerized OpenStack services - Jose, Numan Reviewing the containers plan. An email asking for details on the new features will be sent soon. Numan - some test cases created (~25) and sent for reviewing and waiting for feedback. Setup information is missing and waiting for it. Test plan WIP - to be done EOW OpenStack patch elimination - JC, Numan JC - first round of test cases defined. Reviewing info on the features. Fernando - working on Horizon test cases - send questions to the mailing list. Numan - working on https (that was recently released). DHCP l3 - not ready yet numa pinning - not ready Ansible bootstrap deployment - Numan Testing in progress: Simplex. Some issues found and reported. Rest of the configs not available yet. Collectd infra - Numan Recently done - starting testing this week. Barbican for keystone - Numan Recently done - testing in progress. Distributed cloud - Numan Keystone - development will be done soon. Then testing will begin. QAT upgrade - Ricardo Plan to be done this Week. Automation - Networking: 48 tests (new) + 21 fixes (previous batch) creating steps for test cases with no content. Please share the list of test cases done. Build a report and place it in the test-docs folder. 3. Performance - Victor First approach is to include footprint metrics into the daily sanity. github.com/starling-stagin/stx-contrib Trying to setup yardstick in a plain openstack setup. Having problems with deployment. Ask Victor to provide an update tomorrow. 4. Opens Numan - about sanity - failures seen on 2+2+2 (bug reviewed) Work on a plan for having the test code out in the repo - Ada Numan to setup a meeting with Ada for testing alignment Regards Ada From maria.g.perez.ibarra at intel.com Wed Apr 3 00:04:31 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 3 Apr 2019 00:04:31 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190402 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-01 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 55 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 63 TCs FAIL ] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 58 TCs PASS ] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 58 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 58 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 55 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 63 TCs FAIL ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 62 TCs PASS ] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 62 TCs PASS ] ------------------------------------------------------------------ Simplex 
BM and Virtual - application-apply fails at 95% ceilometer pod Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1820928 ------------------------------------------------------------------ This is the list of test cases executed: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests Regards. Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricardo.o.perez at intel.com Mon Apr 1 23:33:42 2019 From: ricardo.o.perez at intel.com (Perez, Ricardo O) Date: Mon, 1 Apr 2019 23:33:42 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <0483622846A57742B81A944248DD69042FC28FFD@fmsmsx101.amr.corp.intel.com> Message-ID: Hi Chenjie, We have dumped the VM xml file, and as you mentioned we haven’t found the NUMA sections either. So we are going to follow the workaround described in your launchpad and share the results as soon as we have it. Regards -Ricardo From: Xu, Chenjie Sent: Thursday, March 28, 2019 11:32 PM To: Gomez, Juan P ; Martinez Monroy, Elio ; Peters, Matt ; Khalil, Ghada ; Lin, Shuicheng ; Cabrales, Ada ; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io' ; Zhao, Forrest ; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi JP, Could you please run following commands on your compute node to check if the problem in our environment is caused by the same cause? sudo bash virsh list virsh dumpxml $vm_num > vm.xml In my environment, following sections can’t be found in vm.xml and this is the root cause of the problem: I have a workaround for this problem and it may be useful to you. You can find my workaround in the comment #7 in following bug report: https://bugs.launchpad.net/starlingx/+bug/1820378 Best Regards, Xu, Chenjie From: Gomez, Juan P Sent: Friday, March 29, 2019 7:01 AM To: Xu, Chenjie >; Martinez Monroy, Elio >; Peters, Matt >; Khalil, Ghada >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Ghada, Ricardo and I were working in different systems ( Duplex and Standard Controller 2+2 ) and We were able to reproduced the issues, bugs have been updated Also We have run the Sanity Test on both configurations with no issues Matt, Also We have attached the logs from both configurations Best Regards, JP From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Wednesday, March 27, 2019 7:59 PM To: Martinez Monroy, Elio >; Peters, Matt >; Khalil, Ghada >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, I run the following commands on compute-0. The outputs of commands have been attached. 
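For reference, the "sections" discussed in this thread are the guest NUMA topology elements of the libvirt domain XML. A minimal sketch of one way to check for them on a compute host is shown below; the instance name and the element names shown in the comments are general libvirt behaviour and illustrative assumptions, not data taken from the affected systems.

# List the instances and dump the domain XML of one of them
# (assumes a libvirt/KVM compute host; <instance-name> is a placeholder).
sudo virsh list --all
sudo virsh dumpxml <instance-name> > vm.xml

# A guest with a defined NUMA topology normally carries <numa><cell .../>
# elements under <cpu>, and optionally a <numatune> element. No matches
# here is consistent with the "missing NUMA sections" reported above.
grep -A 4 "<numa>" vm.xml
grep -A 2 "<numatune>" vm.xml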
sudo /usr/sbin/dmidecode > dmidecode.txt virsh nodeinfo > nodeinfo.txt /usr/bin/topology > topology.txt grep -i numa /var/log/dmesg > dmesg.txt And the StarlingX is installed with standard 0322 ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190322T174230Z/outputs/iso/ Best Regards, Xu, Chenjie From: Martinez Monroy, Elio Sent: Thursday, March 28, 2019 3:36 AM To: Peters, Matt >; Xu, Chenjie >; Khalil, Ghada >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing So, in order to develop our NUMA test cases, we should configure that topology in our VM’s? How can we do it? Do you have the steps? BR Elio From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Wednesday, March 27, 2019 1:19 PM To: Martinez Monroy, Elio >; Xu, Chenjie >; Khalil, Ghada >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hello, From the output below, this data is taken from a virtual system. In a QEMU/KVM environment, it is not expected to have a NUMA topology unless it has been explicitly configured in the domain configuration of the emulated host. I’m assuming the tests being performed by Chenjie are in a HW lab. This can be confirmed by providing the same set of data for those test systems. Thanks, Matt From: "Martinez Monroy, Elio" > Date: Wednesday, March 27, 2019 at 2:47 PM To: "Peters, Matt" >, "Xu, Chenjie" >, Ghada Khalil >, "Lin, Shuicheng" >, "Cabrales, Ada" >, "Perez, Ricardo O" > Cc: "'starlingx-discuss at lists.starlingx.io'" >, "Zhao, Forrest" >, Brent Rowsell > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt Just exercising same stuff, please check my outputs from compute-0, please let me know if something else is needed sudo /usr/sbin/dmdecode Password: sudo: /usr/sbin/dmdecode: command not found compute-0:~$ sudo /usr/sbin/dmidecode # dmidecode 3.1 Getting SMBIOS data from sysfs. SMBIOS 2.8 present. 10 structures occupying 419 bytes. Table at 0x000F6460. 
Handle 0x0000, DMI type 0, 24 bytes BIOS Information Vendor: SeaBIOS Version: Ubuntu-1.8.2-1ubuntu1 Release Date: 04/01/2014 Address: 0xE8000 Runtime Size: 96 kB ROM Size: 64 kB Characteristics: BIOS characteristics not supported Targeted content distribution is supported BIOS Revision: 0.0 Handle 0x0100, DMI type 1, 27 bytes System Information Manufacturer: QEMU Product Name: Standard PC (i440FX + PIIX, 1996) Version: pc-i440fx-2.5 Serial Number: Not Specified UUID: 71d54a5d-b569-4e1c-83e3-9babf169e824 Wake-up Type: Power Switch SKU Number: Not Specified Family: Not Specified Handle 0x0300, DMI type 3, 21 bytes Chassis Information Manufacturer: QEMU Type: Other Lock: Not Present Version: pc-i440fx-2.5 Serial Number: Not Specified Asset Tag: Not Specified Boot-up State: Safe Power Supply State: Safe Thermal State: Safe Security Status: Unknown OEM Information: 0x00000000 Height: Unspecified Number Of Power Cords: Unspecified Contained Elements: 0 Handle 0x0400, DMI type 4, 42 bytes Processor Information Socket Designation: CPU 0 Type: Central Processor Family: Other Manufacturer: QEMU ID: A3 06 01 00 FF FB 8B 07 Version: pc-i440fx-2.5 Voltage: Unknown External Clock: Unknown Max Speed: 2000 MHz Current Speed: 2000 MHz Status: Populated, Enabled Upgrade: Other L1 Cache Handle: Not Provided L2 Cache Handle: Not Provided L3 Cache Handle: Not Provided Serial Number: Not Specified Asset Tag: Not Specified Part Number: Not Specified Core Count: 6 Core Enabled: 6 Thread Count: 1 Characteristics: None Handle 0x1000, DMI type 16, 23 bytes Physical Memory Array Location: Other Use: System Memory Error Correction Type: Multi-bit ECC Maximum Capacity: 10 GB Error Information Handle: Not Provided Number Of Devices: 1 Handle 0x1100, DMI type 17, 40 bytes Memory Device Array Handle: 0x1000 Error Information Handle: Not Provided Total Width: Unknown Data Width: Unknown Size: 10240 MB Form Factor: DIMM Set: None Locator: DIMM 0 Bank Locator: Not Specified Type: RAM Type Detail: Other Speed: Unknown Manufacturer: QEMU Serial Number: Not Specified Asset Tag: Not Specified Part Number: Not Specified Rank: Unknown Configured Clock Speed: Unknown Minimum Voltage: Unknown Maximum Voltage: Unknown Configured Voltage: Unknown Handle 0x1300, DMI type 19, 31 bytes Memory Array Mapped Address Starting Address: 0x00000000000 Ending Address: 0x000BFFFFFFF Range Size: 3 GB Physical Array Handle: 0x1000 Partition Width: 1 Handle 0x1301, DMI type 19, 31 bytes Memory Array Mapped Address Starting Address: 0x00100000000 Ending Address: 0x002BFFFFFFF Range Size: 7 GB Physical Array Handle: 0x1000 Partition Width: 1 Handle 0x2000, DMI type 32, 11 bytes System Boot Information Status: No errors detected Handle 0x7F00, DMI type 127, 4 bytes End Of Table compute-0:~$ virsh nodeinfo CPU model: x86_64 CPU(s): 6 CPU frequency: 3792 MHz CPU socket(s): 1 Core(s) per socket: 6 Thread(s) per core: 1 NUMA cell(s): 1 Memory size: 10485236 KiB compute-0:~$ /usr/bin/topology TOPOLOGY: logical cpus : 6 sockets : 1 cores_per_pkg : 6 threads_per_core : 1 numa_nodes : 1 total_memory : 9.61 GiB memory_per_node : 10.00 GiB LOGICAL CPU TOPOLOGY: cpu_id : 0 1 2 3 4 5 socket_id : 0 0 0 0 0 0 core_id : 0 1 2 3 4 5 thread_id : 0 0 0 0 0 0 CORE TOPOLOGY: cpu_id socket_id core_id thread_id affinity 0 0 0 0 0x1 1 0 1 0 0x2 2 0 2 0 0x4 3 0 3 0 0x8 4 0 4 0 0x10 5 0 5 0 0x20 compute-0:~$ grep -i numa /var/log/dmesg [ 0.000000] No NUMA configuration found From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Wednesday, March 27, 2019 6:10 AM To: 
Xu, Chenjie >; Khalil, Ghada >; Martinez Monroy, Elio >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hello Chenjie, Can you run the following set of commands on your test system compute hosts and provide the output for each? sudo /usr/sbin/dmidecode virsh nodeinfo /usr/bin/topology grep -i numa /var/log/dmesg From: "Xu, Chenjie" > Date: Wednesday, March 27, 2019 at 5:45 AM To: "Peters, Matt" >, Ghada Khalil >, "Martinez Monroy, Elio" >, "Lin, Shuicheng" >, "Cabrales, Ada" >, "Perez, Ricardo O" > Cc: "'starlingx-discuss at lists.starlingx.io'" >, "Zhao, Forrest" > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, I tried standard 0322 ISO image on 4 bare metals and “NUMA sections are still missing”. The docker images versions and build baseline have been attached. The ISO image link is following: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190322T174230Z/outputs/iso/ Best Regards, Xu, Chenjie From: Xu, Chenjie Sent: Wednesday, March 27, 2019 10:38 AM To: 'Peters, Matt' >; Khalil, Ghada >; Martinez Monroy, Elio >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, Sorry for misleading! I mean “NUMA sections are still missing”. I have multiple systems experiencing this issue: 1. 0306 ISO image for OVSDPDK Upgrade Testing on 4 bare metals 2. 0322 ISO image for OVSDPDK Upgrade Testing on 4 bare metals 3. 0305 ISO image on 4 bare metals 4. 0315 ISO image one 1 bare metals The item 1, 2 and 3 use same bare metals. The item 4 use a separate bare metal. I will try 0322 ISO image on 4 bare metals. Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Tuesday, March 26, 2019 11:10 PM To: Khalil, Ghada >; Xu, Chenjie >; Martinez Monroy, Elio >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest > Subject: Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hello Chenjie, Can you clarify this statement? “NUMA sections missing bug still exits”. Did you mean to say that the missing NUMA sections in the domain XML are now present, but you still can’t ping over the vhost port, or are you saying the NUMA sections are still missing? Do you have multiple systems that are experiencing this issue, or is it isolated to a single system that you are having problems with the NUMA topology? I believe you mentioned that you saw the same connectivity issues on a standard load from the mirror, so that still points to a test system / environment issue. As Ghada mentioned, I think the next steps are to get results from the sanity systems with the same load baseline and to test with your custom load on a system that has been proven to work on the standard load. -Matt From: Ghada Khalil > Date: Tuesday, March 26, 2019 at 9:28 AM To: "Xu, Chenjie" >, "Peters, Matt" >, "Martinez Monroy, Elio" >, "Lin, Shuicheng" >, "Cabrales, Ada" >, "Perez, Ricardo O" > Cc: "'starlingx-discuss at lists.starlingx.io'" >, "Zhao, Forrest" > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Has Elio or Rich from the test team tried the new load as well on their systems? Do they see the same issues you reported? I think we need some data for another system to compare. 
Ada, As part of this thread, I also requested that a VM connectivity test be added to regular sanity so that there is data from the daily cengn builds w/o the new version of ovs-dpdk. Has the sanity suite been updated?

Thanks,
Ghada

From: Xu, Chenjie [mailto:chenjie.xu at intel.com]
Sent: Tuesday, March 26, 2019 7:09 AM
To: Peters, Matt; Martinez Monroy, Elio; Khalil, Ghada; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O
Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest
Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing

Hi Matt,
I installed StarlingX using the new 0322 ISO image for OVSDPDK Upgrade Testing on 4 bare metals. The NUMA sections missing bug still exists: https://bugs.launchpad.net/starlingx/+bug/1820378
This time I updated the docker images to the latest before installing StarlingX. The docker image versions and build baseline have been attached. Could you please help review the docker image versions?
Best Regards,
Xu, Chenjie

From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Friday, March 22, 2019 8:31 PM
To: Xu, Chenjie >; Martinez Monroy, Elio >; Khalil, Ghada >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O >
Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >
Subject: Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing

Hello Chenjie,
Comparing the image list to our labs shows a difference in the image IDs (sha), but I'm not sure if that is just a difference in build baselines or an actual difference in the images. Since some of them align, I think you might be running a different set of images. Can you confirm what the build baseline is that you pulled the images from? I have attached a sample list from our labs for comparison.
-Matt

From: "Xu, Chenjie" >
Date: Friday, March 22, 2019 at 7:25 AM
To: "Peters, Matt" >, "Martinez Monroy, Elio" >, Ghada Khalil >, "Lin, Shuicheng" >, "Cabrales, Ada" >, "Perez, Ricardo O" >
Cc: "'starlingx-discuss at lists.starlingx.io'" >, "Zhao, Forrest" >
Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing

Hi Matt,
The docker image versions in my environment and helm-charts-manifest-no-tests.tgz have been attached. The helm-charts-manifest-no-tests.tgz is built by Shuicheng for the OVSDPDK Upgrade Testing ISO image.
Best Regards,
Xu, Chenjie

From: Martinez Monroy, Elio [mailto:elio.martinez.monroy at intel.com]
Sent: Thursday, March 21, 2019 11:20 PM
To: Khalil, Ghada >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O >
Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >
Subject: Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing

Perfect, I will be checking my mail for that new ISO. Adding Richo, he is the one that is going to execute the tests.
Thanks
BR
Elio

From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com]
Sent: Thursday, March 21, 2019 9:15 AM
To: Lin, Shuicheng >; Cabrales, Ada >; Martinez Monroy, Elio >
Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >
Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing

Hi Shuicheng,
We discussed this in the networking bi-weekly team meeting today. Please re-enable the mlx pmd – re-apply the following patch:
https://github.com/openstack/stx-integ/blob/master/networking/openvswitch/centos/meta_patches/0005-enable-mlx-pmds.patch
Then please rebase to the latest master and build a new load for Ada’s team to start testing.

Hi Elio,
Once a new build is available, please start testing as per our previous plans. You need to run a networking regression on various NICs – including the mellanox.
Testing was planned for about two weeks with the target for the code to merge after testing is complete by April 5. Please provide regular updates. Please also verify if you see the 3 bugs that Chenjie reports below (please test on both a cengn load as well as the load with the ovs-dpdk upgrade). Please note that none of these issues are reproducible in WR labs: https://bugs.launchpad.net/starlingx/+bug/1821150 https://bugs.launchpad.net/starlingx/+bug/1821135 https://bugs.launchpad.net/starlingx/+bug/1820378 Thanks, Ghada From: Khalil, Ghada Sent: Thursday, March 21, 2019 9:12 AM To: 'Lin, Shuicheng'; Xu, Chenjie; Liu, ZhipengS; Xie, Cindy; Cabrales, Ada Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Peters, Matt; Richard, Joseph; Winnicki, Chris; Peng, Peng; Martinez Monroy, Elio; Jones, Bruce E; Qin, Kailun; Guo, Ruijing; Le, Huifeng Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Shuicheng, There is still testing planned by Ada’s team before merging the code related to networking regression. This will cover the mellanox NIC as well. Can you confirm that the load being used has the mellanox pmd enabled in it? Thanks, Ghada From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Thursday, March 21, 2019 9:03 AM To: Xu, Chenjie; Khalil, Ghada; Liu, ZhipengS; Xie, Cindy; Cabrales, Ada Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Peters, Matt; Richard, Joseph; Winnicki, Chris; Peng, Peng; Martinez Monroy, Elio; Jones, Bruce E; Qin, Kailun; Guo, Ruijing; Le, Huifeng Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Thanks Chenjie for the testing. Hi all, Does it mean greenlight for code merge, or any other test is needed? Here is the patch list: https://review.openstack.org/642672 https://review.openstack.org/642673 Best Regards Shuicheng From: Xu, Chenjie Sent: Thursday, March 21, 2019 5:46 PM To: Khalil, Ghada >; Liu, ZhipengS >; Xie, Cindy >; Cabrales, Ada > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Peters, Matt >; Richard, Joseph >; Winnicki, Chris >; Lin, Shuicheng >; Peng, Peng >; Martinez Monroy, Elio >; Jones, Bruce E >; Qin, Kailun >; Guo, Ruijing >; Le, Huifeng > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi team, I have finished the Basic Functional Testing for OVSDPDK Upgrade. Please check following summaries: Bug Summary: No bug related to OVSDPDK Upgrade. 3 bugs related to cutting over to container and configuration. https://bugs.launchpad.net/starlingx/+bug/1821150 https://bugs.launchpad.net/starlingx/+bug/1821135 https://bugs.launchpad.net/starlingx/+bug/1820378 The following features have been tested: N/S traffic for FLAT/VLAN network E/W traffic for FLAT/VLAN/VXLAN network (E/W traffic within same host and across hosts) QOS Security Group MTU Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Tue Apr 2 15:41:53 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 2 Apr 2019 15:41:53 +0000 Subject: [Starlingx-discuss] Distro.openstack meeting notes for Apr 2 Message-ID: <9A85D2917C58154C960D95352B22818BD070C5D4@fmsmsx123.amr.corp.intel.com> Meeting notes and agenda for the 4/2 meeting Any feedback or reaction from the Nova community regarding Dean's email? No update. Did it get sent? * PCI Affinity dependency on Nova NUMA topology - Zhipeng * This affinity agent could not get both pci_device info and numa info of server from nova. 
* In nova stage version, we added below two for server. * server["wrs-res:topology"] * server["wrs-res:pci_devices"] * In nova master, no these attributions for server. * For topology, I can see that there is a patch of adding numa topology pending for merge * https://review.openstack.org/#/c/621476 Add server sub-resource topology API * Next step - Zhipeng to investigate alternative implementations that don't have dependencies on Nova. * Help with test items: * Replace SR-IOV/PT best effort scheduling policy with Queens feature ? https://review.openstack.org/#/c/555000/3..3/specs/queens/implemented/share-pci-between-numa-nodes.rst ? https://storyboard.openstack.org/#!/story/2004888 * Replace vswitch affinity with Rocky feature ? https://review.openstack.org/#/c/541290/18/specs/rocky/approved/numaaware-vswitches.rst ? https://storyboard.openstack.org/#!/story/2004889 * DB purge ? https://blueprints.launchpad.net/nova/+spec/purge-db * Discuss Horizon configurable page refresh * Please socialize this feature with the Horizon community for discussion at the PTG as a new feature. Let's see if we can get a positive discussion going in that community. * Please add Gerry Kopec to pending / new Nova reviews * Review tracking sheet * Discuss expected schedule of upstream work and report back to Release team * Review open stories and cleanup * https://storyboard.openstack.org/#!/story/2003108 - please have a lead accept the feature -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Tue Apr 2 15:50:27 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Tue, 2 Apr 2019 15:50:27 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> Thanks Chenjie. Can you please run the following on your compute nodes and attach the output? sudo lscpu sudo virsh capabilities Richardo / Juan P, Please provide the cpu models and the above output from your two hardware systems as well. Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Tuesday, April 02, 2019 2:55 AM To: Khalil, Ghada; Martinez Monroy, Elio; Peters, Matt; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Ghada, Compute-0 and compute-1 don’t have skylake processors. The processors both are: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz Compute-0 and compute-1’s server model both are: Manufacturer: Intel Corporation Product Name: S2600JF Sub-NUMA Clustering doesn’t exist in my BIOS. Only NUMA Optimized exists. An image for “Memory RAS and Performance Configuration” has been attached. 
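For convenience, the host details requested above can be captured in one pass with a short script along these lines (a sketch only, assuming sudo access on each compute node; numactl may not be present on every load, hence the fallback):

  #!/bin/bash
  # Gather CPU/NUMA details for the OVS-DPDK upgrade investigation
  OUT=/tmp/numa-info-$(hostname)
  mkdir -p "$OUT"
  sudo lscpu                       > "$OUT/lscpu.txt"
  sudo virsh capabilities          > "$OUT/virsh-capabilities.xml"
  sudo dmidecode -t processor      > "$OUT/dmidecode-processor.txt"
  sudo dmidecode -t system         > "$OUT/dmidecode-system.txt"
  sudo grep -i numa /var/log/dmesg > "$OUT/dmesg-numa.txt"
  numactl --hardware               > "$OUT/numactl.txt" 2>/dev/null || true
  tar -czf "$OUT.tgz" -C /tmp "$(basename "$OUT")"
  echo "Attach $OUT.tgz to the Launchpad bug"

One tarball per compute node gives a single artifact to attach to the bug instead of a handful of separate text files.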
Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 6:52 AM To: Xu, Chenjie >; Martinez Monroy, Elio >; Peters, Matt >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Chenjie, Can you let us know the model of your server? Does it have skylake processors? Can you also double-check that you have Sub-NUMA Clustering disabled in the BIOS? On a wolfpass, the BIOS setting is at: Enter Setup > Advanced > Memory Configuration > Memory RAS and Performance Configuration Sub-NUMA Cluster Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Wednesday, March 27, 2019 9:59 PM To: Martinez Monroy, Elio; Peters, Matt; Khalil, Ghada; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, I run the following commands on compute-0. The outputs of commands have been attached. sudo /usr/sbin/dmidecode > dmidecode.txt virsh nodeinfo > nodeinfo.txt /usr/bin/topology > topology.txt grep -i numa /var/log/dmesg > dmesg.txt And the StarlingX is installed with standard 0322 ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190322T174230Z/outputs/iso/ Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Tue Apr 2 06:55:13 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Tue, 2 Apr 2019 06:55:13 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Ghada, Compute-0 and compute-1 don’t have skylake processors. The processors both are: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz Compute-0 and compute-1’s server model both are: Manufacturer: Intel Corporation Product Name: S2600JF Sub-NUMA Clustering doesn’t exist in my BIOS. Only NUMA Optimized exists. An image for “Memory RAS and Performance Configuration” has been attached. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 6:52 AM To: Xu, Chenjie ; Martinez Monroy, Elio ; Peters, Matt ; Lin, Shuicheng ; Cabrales, Ada ; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io' ; Zhao, Forrest ; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Chenjie, Can you let us know the model of your server? Does it have skylake processors? Can you also double-check that you have Sub-NUMA Clustering disabled in the BIOS? 
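A quick way to double-check this from the running host, without rebooting into the BIOS, is to compare the socket and NUMA node counts; if lscpu reports more NUMA nodes than sockets, Sub-NUMA Clustering (or Cluster-on-Die) is probably enabled (numactl is only needed for the second command and may not be installed on every load):

  # More NUMA nodes than sockets usually means SNC/Cluster-on-Die is on
  lscpu | grep -E '^(Socket\(s\)|NUMA node)'

  # Per-node CPU and memory layout, useful to attach to the bug as well
  numactl --hardware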
On a wolfpass, the BIOS setting is at: Enter Setup > Advanced > Memory Configuration > Memory RAS and Performance Configuration Sub-NUMA Cluster Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Wednesday, March 27, 2019 9:59 PM To: Martinez Monroy, Elio; Peters, Matt; Khalil, Ghada; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, I run the following commands on compute-0. The outputs of commands have been attached. sudo /usr/sbin/dmidecode > dmidecode.txt virsh nodeinfo > nodeinfo.txt /usr/bin/topology > topology.txt grep -i numa /var/log/dmesg > dmesg.txt And the StarlingX is installed with standard 0322 ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190322T174230Z/outputs/iso/ Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: BIOS.jpg Type: image/jpeg Size: 84248 bytes Desc: BIOS.jpg URL: From chenjie.xu at intel.com Wed Apr 3 03:48:22 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Wed, 3 Apr 2019 03:48:22 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Matt, Zhipeng is working on "PCI Affinity dependency on Nova NUMA topology". This may relate to BUG "VM can't send packet through vhostuser port due to missing numa settings in domain xml": https://bugs.launchpad.net/starlingx/+bug/1820378 According to him, after cutting over to nova master, the NUMA topology and PCI device info has been removed from nova. Before nova master, StarlingX uses nova stage which has NUMA topology. Some detailed information are listed below: * PCI Affinity dependency on Nova NUMA topology - Zhipeng o This affinity agent could not get both pci_device info and numa info of server from nova. o In nova stage version, we added below two for server. o server["wrs-res:topology"] o server["wrs-res:pci_devices"] o In nova master, no these attributions for server. o For topology, I can see that there is a patch of adding numa topology pending for merge o https://review.openstack.org/#/c/621476 Add server sub-resource topology API o Next step - Zhipeng to investigate alternative implementations that don't have dependencies on Nova. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 11:50 PM To: Xu, Chenjie ; Martinez Monroy, Elio ; Peters, Matt ; Lin, Shuicheng ; Cabrales, Ada ; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io' ; Zhao, Forrest ; Rowsell, Brent ; Gauld, James Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Thanks Chenjie. Can you please run the following on your compute nodes and attach the output? 
sudo lscpu sudo virsh capabilities Richardo / Juan P, Please provide the cpu models and the above output from your two hardware systems as well. Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Tuesday, April 02, 2019 2:55 AM To: Khalil, Ghada; Martinez Monroy, Elio; Peters, Matt; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Ghada, Compute-0 and compute-1 don't have skylake processors. The processors both are: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz Compute-0 and compute-1's server model both are: Manufacturer: Intel Corporation Product Name: S2600JF Sub-NUMA Clustering doesn't exist in my BIOS. Only NUMA Optimized exists. An image for "Memory RAS and Performance Configuration" has been attached. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 6:52 AM To: Xu, Chenjie >; Martinez Monroy, Elio >; Peters, Matt >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Chenjie, Can you let us know the model of your server? Does it have skylake processors? Can you also double-check that you have Sub-NUMA Clustering disabled in the BIOS? On a wolfpass, the BIOS setting is at: Enter Setup > Advanced > Memory Configuration > Memory RAS and Performance Configuration Sub-NUMA Cluster Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Wednesday, March 27, 2019 9:59 PM To: Martinez Monroy, Elio; Peters, Matt; Khalil, Ghada; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, I run the following commands on compute-0. The outputs of commands have been attached. sudo /usr/sbin/dmidecode > dmidecode.txt virsh nodeinfo > nodeinfo.txt /usr/bin/topology > topology.txt grep -i numa /var/log/dmesg > dmesg.txt And the StarlingX is installed with standard 0322 ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190322T174230Z/outputs/iso/ Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded message was scrubbed... From: "Jones, Bruce E" Subject: [Starlingx-discuss] Distro.openstack meeting notes for Apr 2 Date: Tue, 2 Apr 2019 15:41:53 +0000 Size: 77348 URL: From serverascode at gmail.com Wed Apr 3 13:20:50 2019 From: serverascode at gmail.com (Curtis) Date: Wed, 3 Apr 2019 09:20:50 -0400 Subject: [Starlingx-discuss] Edge Computing Use Case, Deployment Advice Needed In-Reply-To: References: Message-ID: On Mon, Apr 1, 2019 at 7:43 PM Arce Moreno, Abraham < abraham.arce.moreno at intel.com> wrote: > > I added some points/questions inline. > > Thanks Curtis for your time! > > > > We are integrating this demo in our spare time to ramp up in cloud > > > technologies and one of its imperatives is a working solution. It > started as a use > > > case proposal around unmanned aerial systems [0], then decided to > avoid some > > > of the complexity involved in flying the drones, and finally landed it > as a use case > > > around home automation / smart cities at the network edge. 
> > > First off, I'd like to let people know that we are planning on doing > some kind of > > "edge" proof-of-concept with Packet.com resources, so perhaps the > project you > > discuss could fit in with that. I'm sure we'll chat about it at some > point here. > > > > At the next TSC meeting we'll discuss how to get the packet projects off > the > > ground, so feel free to attend. :) > > Awesome! We will be paying attention to community communications about > this topic. > > > > This demo has currently integrated the following acceleration > resources: > > > - GPU > > > - VPU (Movidius NCS) > > > I would not expect a USB device like the Movidius NCS to be available in > most > > STX deployments, but maybe? > > Maybe, Movidius NCS seems to be one of one those exploration paths to > offload some workloads, and where budget could make a difference in > comparison with FPGAs. > Oh for sure, cost effective. I see what you mean. > > > [ StarlingX Deployment ] [ Offload ] > > > What would be the preferred way to deploy this use case proposal in > > > StarlingX? We understand the following options are available including > its > > > preference: > > > > > > 1. Via Kubernetes (Not Preferred) > > > 2. Via Virtual Machine (Preferred) > > > 3. Via Bare Metal (Preferred) > > > > > > Are the above options and their preference, correct? If not, can you > > > please give us some hints behind your answer. > > > From my standpoint, I think #3 would be the least common option. #2 > would be > > a good place to start, but I don't think #1 is "not preferred", I guess > it depends > > on where these preferences are coming from. > > Understood, we think it is worth to try option 2 initially at least for > the core applications of the use case. > > > > [ StarlingX Deployment ] [ Provisioning ] > > > > > > As mentioned at the beginning, another of our imperatives, is to > > > exercise zero touch provisioning. > > > > > > Does it makes sense to split the provisioning in 2 parts based in the > > > required time for the demo components to live? > > > > > > - The core applications 100% uptime > > > - Services on demand / 100 uptime in some cases > > > By zero touch provisioning do you just mean automation using IaaS APIs? > eg. > > the docker compose file you link to? Or something else? > > We understand the term from its definition but that "something else" is > not in our knowledge yet. From our current understanding, that zero touch > provisioning will allow us to deploy with one single instruction: > > - The core applications part of the use case (e.g. access to the different > dashboards) > - The services part of the use case: the start and stop of X service (e.g. > face recognition, object recognition, etc.) for each of the wanted video > streams. > > We will appreciate if you can share any online resource where we can learn > more about this zero touch concept in a practical way (e.g. whitepaper, use > case) so we can land into our use case. > I'm interested in "zero touch" and I'll be doing some research over the next while. This is also potentially something that can benefit stx. This is just me talking, but I think there is a difference between zero touch and automation. To me the canonical example of ZT would be turning on a device, typically physical, and that device starts up, registers, and then is scheduled and takes on some kind of personality for whatever workload is scheduled to it, all without any human intervention. 
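Purely as an illustration of that flow (everything below is hypothetical: the registration URL, the JSON fields and the returned personality are invented for the example and are not an existing StarlingX or Packet API), a zero-touch device would run something like this from a first-boot service, with no operator kicking it off:

  #!/bin/bash
  # Hypothetical first-boot hook: the device announces itself and asks what to become
  SERIAL=$(cat /sys/class/dmi/id/product_serial 2>/dev/null || hostname)
  REGISTRAR="https://registrar.example.net/api/v1/devices"   # invented endpoint
  PERSONALITY=$(curl -sf -X POST "$REGISTRAR" \
    -H "Content-Type: application/json" \
    -d "{\"serial\": \"$SERIAL\"}" \
    | python -c 'import sys, json; print(json.load(sys.stdin)["personality"])')
  # The returned personality (e.g. worker, storage) then drives whatever
  # configuration tooling is in place, still with no human in the loop
  echo "Assigned personality: ${PERSONALITY:-unknown}"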
Manually initiating an automation workflow, like say a docker compose run, doesn't feel like ZT to me, but again I'm still working to define it for myself. :) > > Again, thank you Curtis for your time and help to answer our questions. > No thank you, I think this is great. :) Are you going to be doing your work in the public, like in a public git repo? Thanks, Curtis -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Wed Apr 3 13:33:17 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 3 Apr 2019 13:33:17 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 4/3 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35EF7B59@SHSMSX104.ccr.corp.intel.com> Agenda & notes for 4/3 meeting: - Ceph upgrade update 1. generate PR for stx-Ceph on staging (Changcheng) all old patches abandoned. rebase stx-Ceph on staging: offical branch https://github.com/starlingx-staging/stx-ceph/tree/stx/v13.2.2 - done 2. patch submitted to StarlingX repos (Daniel) path uploaded using topic "ceph-mimic-upgrade", please review: https://review.openstack.org/#/q/topic:ceph-mimic-upgrade+(status:open+OR+status:merged) still WIP not passing full testing yet. But collecting code review comments. Will abandon the old Gerrit reviews. All patches Daniel has all uploaded. all Zuul testing has been passing and they are ready for review. squash the commit message or not? Dean's comments is to keep the commit seperately with the commit message. For cases like 5 changes in 1 file, merge them into one so that it is easier to review. Have logical split into several reviews but not to merged them into a huge one. 3. System testing prepration (Fernando) AR: Daniel/Yong to build ISO to Fernando dev_build. Using master and cherry-pick the pending reviews. Tingjie: test cases review w/ Ada's team, proposed functional cases to Fernando. Goal is not have regressions, Openstack services needs to work on top of new Ceph. Openstack deployment and other storage related Cinder, Glance etc needs to work. Daniel to do: to remove the helm-chart for ceph jewel (Ceph 10 version). AR: Tingjie to send the test case design thread to the community to provide feedback. 4. release notes for Ceph upgrade changes: AR: Tingjie send changes on release notes to Bruce & community regarding Ceph version changes. - DevStack update (Dean/Yi) https://review.openstack.org/#/q/status:open+branch:master+topic:devstack 2 stx-nfv and 3 stx-fault are to fix the dependencies broken but now it's fixed. Shuicheng will upload another review for stx-ha after Dean's patch merged. - Libvirt/qemu patch removal: SB#2005212 (Jim) will check status offline - Opens (all) - None -----Original Message----- From: Xie, Cindy Sent: Tuesday, April 2, 2019 9:13 PM To: starlingx-discuss at lists.starlingx.io Subject: Agenda: Weekly StarlingX non-OpenStack Distro meeting, 4/3 Agenda for 4/3 meeting: - Ceph upgrade update 1. generate PR for stx-Ceph on staging (Changcheng) 2. patch submitted to StarlingX repos (Daniel) 3. 
System testing prepration (Fernando) - DevStack update (Dean/Yi) - Libvirt/qemu patch removal: SB#2005212 (Jim) - Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Waheed, Numan; Hu, Yong; starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Shang, Dehao; Wold, Saul; Lin, Shuicheng; Zhu, Vivian; Somerville, Jim; Sun, Austin; 'Khalil, Ghada'; Jones, Bruce E; Troyer, Dean; 'Rowsell, Brent' Cc: 'Poncea, Ovidiu'; Gomez, Juan P; Lara, Cesar; Fang, Liang A; Cobbley, David A; 'Chen, Jacky'; Perez Rodriguez, Humberto I; Martinez Monroy, Elio; 'Seiler, Glenn'; Arce Moreno, Abraham; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Shuquan Huang; 'Young, Ken'; Perez Carranza, Jose; Hu, Wei W; Martinez Landa, Hayde; Armstrong, Robert H Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, April 3, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From Frank.Miller at windriver.com Wed Apr 3 14:48:27 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Apr 2019 14:48:27 +0000 Subject: [Starlingx-discuss] Sanity updates Message-ID: Folks: I took an action on the containers community call to send out an update on the current sanity issues. 1. AIO-SX: This configuration should now be ready for use. * Bart Wensley solved LP 1820928 which turned out to be a bug in kubelet where it was hitting a limit of 250 http2 streams in a single connection. 2. Other multi-server configs: An intermittent issue still exists when launching VMs resulting in the VMs failing to be scheduled. Tracked under LPs 1821841 & 1822116 * Gerry Kopec continues to investigate intermittent issues with the nova-placement pod. When issue occurs VMs cannot be launched. * Issue is with nova-compute unable to get requests processed to one of the nova-placement pods running on each controller. * Current theory is this is related to our docker images using OpenStack master and a recent nova commit in the nova placement area is impacting the placement pod. Gerry expects to prove or disprove the theory later today. Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Apr 3 15:37:45 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 3 Apr 2019 15:37:45 +0000 Subject: [Starlingx-discuss] Community Call (April 3, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A23836@ALA-MBD.corp.ad.wrs.com> Notes from the April 3rd call. Bill... 
Performance Footprint Metrics Demo (Victor) - Code: https://github.com/starlingx-staging/tools-contrib/tree/master/stx-metrics/footprint - Example Output: https://gist.github.com/VictorRodriguez/b9260ad223b176c323363ac06d0afdde - 3 phase plan, Victor starting on phase 0 now Release Naming (Ghada) - For tagging, move from a naming convention that includes a date given the discussion around content-based releases - stx.2018.10 >> stx.R1 / stx.Rel1 / stx.rel1 / stx.1.0 - stx.2019.05 >> stx.R2 / stx.Rel2 / stx.rel2 / stx.2.0 - Leave internal SW_VERSION in software as is - currently: 19.01 - Branch naming: ?? (we can decide this later at the time of RC1) - If agreed, bulk update StoryBoard, Launchpad and wiki references - we converged on this... - Release Name: 1.0, 2.0 >> 2.1 (First stable release after Intial Release) - - Launchpad/StoryBoard Tags: stx.1.0 (was stx.2018.10), stx.2.0 (was stx.2019.05), stx.2.0.rc1, stx.2.1 - SW_VERSION: it'll remain in the same format as now Reminder of repo renaming for OpenDev on 19 Apr 2019 (Dean) - https://etherpad.openstack.org/p/stx-opendev - expecting the downtime to be small, Dean expects they'll publish the timing next week Bugs for Openstack core components (Ghada) - Propose to use the same stx launchpad and add the affected openstack component (ex: nova, neutron) as "Affects Project". - This will make the bug visible in StarlingX as well as the affected openstack project. - Allows us to track progress of the bugs directly in our own queries without the need to have 2 bugs (one in STX and one in the openstack project) - Example: https://bugs.launchpad.net/starlingx/+bug/1821938 Bug Tagging (Ghada) - Problem Description: Launchpad does not have a distinction between "Fix Resolved" and "Fix Verified". - Once code merges in master, the bug is automatically updated to "Fix Released" and considered Closed. - This doesn't provide a way to query bugs that need to be explicitly retested by the reporter - Proposal: - New optional tag: stx.retestNeeded which is added by the screener at the time the bug is screened - (or the reporter at the time the bug is created) - To find bugs that need verification, query: Status = "Fix Released" AND labels has "stx.retestNeeded" - Once the bug is verified by the reporter/test team, a note is added to the bug and the label is removed by the tester. We didn't get to the sub-project updates, will start with those next week. ------------------------------------------------------------------------------------------------------------------------------------- From: Zvonar, Bill Sent: Tuesday, April 2, 2019 7:44 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Community Call (April 3, 2019) Reminder of tomorrow's Community call - please feel free to add to the agenda at [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1500_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190403T1400 From Ghada.Khalil at windriver.com Wed Apr 3 15:40:42 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 3 Apr 2019 15:40:42 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190402 In-Reply-To: References: Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D46D9@ALA-MBD.corp.ad.wrs.com> Hi Maria, Thank you for the sanity report. 
Can the link to the ISO (included in the report) be updated to point to the explicit path of the ISO used in the sanity? The current link points to latest_build which is a symlink that gets updates daily with new builds. So if someone uses this link, they won't necessarily get the same ISO used in sanity. So, instead of: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/bootimage.iso The link would be: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190401T233000Z/outputs/iso/ Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Tuesday, April 02, 2019 8:05 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190402 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-01 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 55 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 63 TCs FAIL ] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 58 TCs PASS ] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 58 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 58 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 55 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 63 TCs FAIL ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 62 TCs PASS ] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 62 TCs PASS ] ------------------------------------------------------------------ Simplex BM and Virtual - application-apply fails at 95% ceilometer pod Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1820928 ------------------------------------------------------------------ This is the list of test cases executed: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests Regards. Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chris.friesen at windriver.com Wed Apr 3 15:53:32 2019 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 3 Apr 2019 09:53:32 -0600 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> Message-ID: <822d14b1-65a0-d9f0-93b2-ea05495f6fb7@windriver.com> On 4/2/2019 9:50 AM, Khalil, Ghada wrote: > > Thanks Chenjie. > > Can you please run the following on your compute nodes and attach the > output? > > sudo lscpu > sudo virsh capabilities > > Richardo / Juan P, > > Please provide the cpu models and the above output from your two > hardware systems as well. > In addition to the above, can you please provide the logs from one of the nova-compute pods on an affected system? You can get the logs by running : POD=`kubectl -n openstack get pod -l application=nova,component=compute \ -o=jsonpath='{.items[0].metadata.name'}` kubectl -n openstack logs -c nova-compute $POD Thanks, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel.thebeau at windriver.com Wed Apr 3 19:28:49 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Wed, 3 Apr 2019 15:28:49 -0400 Subject: [Starlingx-discuss] Error when add a new host In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153CAFA6F@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153CAFA6F@FMSMSX108.amr.corp.intel.com> Message-ID: <1554319729.27123.13.camel@windriver.com> Hi Juan Carlos, It is the sort of thing we'd expect to see in the sanities, particularly if that host-add command is used... it sounds like that is what you are referring to "sometimes breaks our test execution". If that's the case, you could mention the sanities in the bug report. Sometimes that's a good source of information for triage and debugging. M On Mon, 2019-04-01 at 17:51 +0000, Alonso, Juan Carlos wrote: > Hi, >   > There is an intermittent issue during STX provisioning when add a new > host (controller, compute or storage). >   > During provisioning, when add a new host: > $ system host-add -n ${host_name} -p ${personality} -m ${mac_address} > > Got the following error: > 'Maintenance has returned with a status of fail, reason: no response, > recommended action: retry' > > This issue is intermittent. After it failed, try to add the host > again but got: > 'error: Host already exists' > > When check the hosts available can see host installed correctly: > $ system host-list > > Then, got an error when added a new host, got an error when retry to > add the host because it was correctly installed. > This issue sometimes breaks our test execution. > > I already open a Launchpad: https://bugs.launchpad.net/starlingx/+bug > /1822657 > Did someone faced this issue before? >   > Regards. 
> Juan Carlos Alonso > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Al.Bailey at windriver.com Wed Apr 3 20:09:35 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Wed, 3 Apr 2019 20:09:35 +0000 Subject: [Starlingx-discuss] Newer versions of kubernetes, tiller, helm, calico merging shortly Message-ID: As part of StoryBoard https://storyboard.openstack.org/#!/story/2005198 several thirdparty components related to containers are about to merge. These include Kubernetes 1.13.5 (was previously 1.12.3) Helm/Tiller 2.13.1 (was previously 2.12.1 Python-kubernetes 8.0.0 (was previously 6.0.0) Docker-ce 18.06 (was previously 18.03) Calico 3.6.1 (was previously 3.1.4) CoreDNS 1.2.6 (was previously 1.2.2) Sanity has run for AIO-SX, AIO-DX, and Standard (2+3) systems. No new issues were identified. Please notify right away if any new problems arise. Al -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Wed Apr 3 20:18:51 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 3 Apr 2019 20:18:51 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 4/3/2019 Message-ID: <3808363B39586544A6839C76CF81445EA1ADE3D4@ORSMSX104.amr.corp.intel.com> For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings. -- Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Wed Apr 3 21:27:31 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 3 Apr 2019 21:27:31 +0000 Subject: [Starlingx-discuss] Debug ideas/enhancements for containers Message-ID: Folks: I'd like to get a list of ideas going to add tools, logs, techniques for debugging in our new containers world. I've heard and seen a few challenges lately and would like to capture a list. Then we can prioritize the list and work on the items over time. I've decided to split the list into 3 major categories: Collect Tool, Containers Platform, OpenStack App I've started the list here: https://etherpad.openstack.org/p/stx-containerization-debug Please take a look and add your ideas. Thanks. Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Apr 3 21:42:35 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 3 Apr 2019 21:42:35 +0000 Subject: [Starlingx-discuss] Newer versions of kubernetes, tiller, helm, calico merging shortly In-Reply-To: References: Message-ID: <9A85D2917C58154C960D95352B22818BD070D996@fmsmsx123.amr.corp.intel.com> Nice to see this happening. Are these the versions we plan to include in the release? brucej From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Wednesday, April 3, 2019 1:10 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Newer versions of kubernetes, tiller, helm, calico merging shortly As part of StoryBoard https://storyboard.openstack.org/#!/story/2005198 several thirdparty components related to containers are about to merge. 
These include Kubernetes 1.13.5 (was previously 1.12.3) Helm/Tiller 2.13.1 (was previously 2.12.1 Python-kubernetes 8.0.0 (was previously 6.0.0) Docker-ce 18.06 (was previously 18.03) Calico 3.6.1 (was previously 3.1.4) CoreDNS 1.2.6 (was previously 1.2.2) Sanity has run for AIO-SX, AIO-DX, and Standard (2+3) systems. No new issues were identified. Please notify right away if any new problems arise. Al -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricardo.o.perez at intel.com Wed Apr 3 22:28:51 2019 From: ricardo.o.perez at intel.com (Perez, Ricardo O) Date: Wed, 3 Apr 2019 22:28:51 +0000 Subject: [Starlingx-discuss] Keystone command question Message-ID: Hi StarlingXers, Does anyone know what is the current command under StarlingX to perform this action?: keystone tenant-get Thanks in advance -Richo -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Wed Apr 3 23:04:52 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 3 Apr 2019 18:04:52 -0500 Subject: [Starlingx-discuss] Keystone command question In-Reply-To: References: Message-ID: On Wed, Apr 3, 2019 at 5:29 PM Perez, Ricardo O wrote: > Does anyone know what is the current command under StarlingX to perform this action?: > > keystone tenant-get That's a bit of a flashback... the keystone command has been gone for a long time... Look at 'openstack project show'. In general, OSC uses 'project' in place of 'tenants'. dt -- Dean Troyer dtroyer at gmail.com From cindy.xie at intel.com Wed Apr 3 23:12:51 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 3 Apr 2019 23:12:51 +0000 Subject: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up In-Reply-To: <6594B51DBE477C48AAE23675314E6C466459F616@fmsmsx107.amr.corp.intel.com> References: <6594B51DBE477C48AAE23675314E6C466459EDD6@fmsmsx107.amr.corp.intel.com>, <2FD5DDB5A04D264C80D42CA35194914F35ED90C1@SHSMSX104.ccr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C466459F616@fmsmsx107.amr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35EF86FB@SHSMSX104.ccr.corp.intel.com> Hi, Mario, I see that you made very good progress in uploading several patches against SB#2004008 - anything needs help for the remaining 3 tasks so far? Thx. - cindy -----Original Message----- From: Arevalo, Mario Alfredo C Sent: Wednesday, March 27, 2019 3:38 AM To: Xie, Cindy ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi Cindy, The first version of the patches will take around 2 weeks, after that, a validation step will start. In this step I am going to update the patches according to the feedback received from the community and Luis Botello will help to validate the functionality of the patches. As final step, I would like to execute the sanity when all patches are reviewed by the community an they are ready to be merged. This final step could vary around 2-3 weeks, it will depend on the response time from the community and the complexity of the required updates, in addition to the validation tasks. Best regards. Mario. 
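(Circling back to the keystone tenant-get question above: the standalone keystone CLI has been gone for a while, and python-openstackclient uses "project" where the old client said "tenant". Both commands below are standard OSC and should work as-is once admin credentials are sourced, e.g. via an openrc file:)

  # old: keystone tenant-get <tenant>
  openstack project show <name-or-id>

  # old: keystone tenant-list
  openstack project list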
________________________________________ From: Xie, Cindy Sent: Monday, March 25, 2019 5:44 PM To: Arevalo, Mario Alfredo C; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Cc: Tao Liu; Frank.Miller at windriver.com Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Mario, Nice to know that you're getting all information and having better understanding for the tasks. We probably needs to get a little bit more detail granularity of your plan, for each task in the storyboard: - when the patches will be uploaded for review; - what tests you're planning to do? Any support required from Ada's team? and when... - when you expect the patch review comments can be addressed and patch merged to master. Thanks. - cindy -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Tuesday, March 26, 2019 8:38 AM To: Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com Subject: Re: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi team, Thank you for your feedback from our last meeting, and this is my update. I am checking all points described in this thread. Actually I have got progress in the topic related to snmp and the relation with oamcontroller, sysinv and cgts-client. I plan to send a PR with information about findings/architecture to the stx-fault/doc in a future. I think, it is not necessary another meeting as it was mentioned, I think I have enough information to continue and I am going to update the current reviews and send news according to the points discussed until today, and contact Tao for specific questions. Thanks Tao, Abraham and Frank. Best regards. Mario. ________________________________________ From: Arce Moreno, Abraham Sent: Friday, March 22, 2019 10:37 AM To: starlingx-discuss at lists.starlingx.io Cc: Arevalo, Mario Alfredo C; Tao Liu Subject: Fault Management Containerization (SB 2004008) Follow Up Thanks Frank for setting this up. Thanks everyone for your attendance to this meeting, here you have high level notes and ToDos based in the topics covered. In Summary - The presentation Stx-Fault/Containers is located at [0]. - Tao will kindly update the Fault Management architecture diagram, slide 8. - Mario will send an email no later than Monday afternoon with the latest findings / questions based in his 5 ToDos. - We will meet again on Tuesday to finalize on tasks and implementation details. If we are forgetting about any key point in this email, please do not hesitate to reply. StarlingX Architecture - 2 instances for each of the following projects: - Keystone - Horizon - Barbican - Fault Management will have 2 instances as well. Fault Management Architecture - [ToDo] [Tao] to modify the Fault Management architecture (Slide 8) Thanks Tao! - fm-api runs in compute node, snmp provide interfaces - [ToDo] [Mario] to check these statements Fault Management REST API - [ToDo] [Mario] to write the next level of details for REST API mapping / implementation, consider to include PUT to Event Log. Fault Management Architecture - python-fmclient is a wrapper to fm_cli / fm_api - [ToDo] [Mario] to understand more about fm_cli as a wrapper and how does it interact and affects fault management containerized strategy. FM Proposal - Remove mysql, fm-api, fm-common - [ToDo] [Mario] to understand about the removal of fm-api and fm-common from the containerized instance. 
- Dependency to cgts-client - [ToDo] [Mario] to understand what is cgtc-client and how does it interacts with fault management and the new containerized instance. OpenStack Applications The following 2 projects will make use of the Fault Management containerized: - starlingx-dashboard - stx-nfv [0] https://docs.google.com/presentation/d/1_vG83aHTToXlIdJxaJpVL-MHWfRGnxLuyEdFDt-nfwo/edit?usp=sharing _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cesar.lara at intel.com Wed Apr 3 23:34:08 2019 From: cesar.lara at intel.com (Lara, Cesar) Date: Wed, 3 Apr 2019 23:34:08 +0000 Subject: [Starlingx-discuss] [multios][meetings] Multi-OS Team meeting minutes 4/1/2019 Message-ID: <0B566C62EC792145B40E29EFEBF1AB4710FE412B@fmsmsx123.amr.corp.intel.com> Multi-OS team meeting Agenda 4/1/2019 Ubuntu PoC update -Docker -patches for STX in Ubuntu https://review.openstack.org/#/c/647619 / https://review.openstack.org/#/c/647616/ Tools for ISO creation STX in a box update Notes Ubuntu PoC update: Docker image containing all the tools and scripts to create a development machine has been created to facilitate contributors to test drive the build for Debian based StarlingX. Two patches were created for fm-api and fm-rest-api, add to setup.py to facilitate the creation of deb packages for the STX-Fault service This phase of the PoC based in the build tools is about to be completed, next phase will be focused on emulating the base platform for StarlingX, taking the modified Ubuntu image start integrating a K8 cluster and build an OpenStack instance on top of that. This work is going to be planned and steps will be reflected in storyboards. Tools for ISO creation the set of tools to be used as part of the PoC are being picked based on the tools widely adopted by the Ubuntu community, we now have a defined set of tools and that's why we can mark the 1st phase of this Ubuntu PoC completed. STX in a box update We are trying to automate the flow of an QCOW image as part of the CVE scan for StarlingX, this will be implemented as well as part of the stx-in-a-box effort since to run this scan we require a fully installed, contained instance of StarlingX. Stx-in-a-box then will be based on this effort and we are just working on the final automation steps around this since our current scripts are user interactive and require input from a user to have a fully deployed instance. Regards Cesar Lara Software Engineering Manager OpenSource Technology Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Thu Apr 4 00:56:51 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Thu, 4 Apr 2019 00:56:51 +0000 Subject: [Starlingx-discuss] Newer versions of kubernetes, tiller, helm, calico merging shortly In-Reply-To: <9A85D2917C58154C960D95352B22818BD070D996@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD070D996@fmsmsx123.amr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB4455EE@ALA-MBD.corp.ad.wrs.com> Bruce, There is likely one more rebase before the end of the release. Brent From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, April 3, 2019 5:43 PM To: Bailey, Henry Albert (Al) ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Newer versions of kubernetes, tiller, helm, calico merging shortly Nice to see this happening. 
Are these the versions we plan to include in the release? brucej From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Wednesday, April 3, 2019 1:10 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Newer versions of kubernetes, tiller, helm, calico merging shortly As part of StoryBoard https://storyboard.openstack.org/#!/story/2005198 several thirdparty components related to containers are about to merge. These include Kubernetes 1.13.5 (was previously 1.12.3) Helm/Tiller 2.13.1 (was previously 2.12.1 Python-kubernetes 8.0.0 (was previously 6.0.0) Docker-ce 18.06 (was previously 18.03) Calico 3.6.1 (was previously 3.1.4) CoreDNS 1.2.6 (was previously 1.2.2) Sanity has run for AIO-SX, AIO-DX, and Standard (2+3) systems. No new issues were identified. Please notify right away if any new problems arise. Al -------------- next part -------------- An HTML attachment was scrubbed... URL: From mario.alfredo.c.arevalo at intel.com Thu Apr 4 02:59:40 2019 From: mario.alfredo.c.arevalo at intel.com (Arevalo, Mario Alfredo C) Date: Thu, 4 Apr 2019 02:59:40 +0000 Subject: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35EF86FB@SHSMSX104.ccr.corp.intel.com> References: <6594B51DBE477C48AAE23675314E6C466459EDD6@fmsmsx107.amr.corp.intel.com>, <2FD5DDB5A04D264C80D42CA35194914F35ED90C1@SHSMSX104.ccr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C466459F616@fmsmsx107.amr.corp.intel.com>, <2FD5DDB5A04D264C80D42CA35194914F35EF86FB@SHSMSX104.ccr.corp.intel.com> Message-ID: <6594B51DBE477C48AAE23675314E6C46645A3E23@fmsmsx107.amr.corp.intel.com> Hi Cindy, Actually, Luis and me have had some issues related to the integration of the FM chart with armada system in some local tests, I have been working on some patches updates to solve this. Right now I am creating an ISO image from scratch with these patches in order to test them in a clean environment. At this moment I would like to focus on this issue during the rest of the week and I will continue with the other patches related to horizon and another one about the implementation of the PUT method for the FM restful API.. At this moment my progress in the pending patches is research, however if there are someone interested about these pending patches, let me know. Thank you for your attention. Best regards. Mario. ________________________________________ From: Xie, Cindy Sent: Wednesday, April 03, 2019 4:12 PM To: Arevalo, Mario Alfredo C; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Cc: Tao Liu; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi, Mario, I see that you made very good progress in uploading several patches against SB#2004008 - anything needs help for the remaining 3 tasks so far? Thx. - cindy -----Original Message----- From: Arevalo, Mario Alfredo C Sent: Wednesday, March 27, 2019 3:38 AM To: Xie, Cindy ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi Cindy, The first version of the patches will take around 2 weeks, after that, a validation step will start. In this step I am going to update the patches according to the feedback received from the community and Luis Botello will help to validate the functionality of the patches. 
As final step, I would like to execute the sanity when all patches are reviewed by the community an they are ready to be merged. This final step could vary around 2-3 weeks, it will depend on the response time from the community and the complexity of the required updates, in addition to the validation tasks. Best regards. Mario. ________________________________________ From: Xie, Cindy Sent: Monday, March 25, 2019 5:44 PM To: Arevalo, Mario Alfredo C; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Cc: Tao Liu; Frank.Miller at windriver.com Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Mario, Nice to know that you're getting all information and having better understanding for the tasks. We probably needs to get a little bit more detail granularity of your plan, for each task in the storyboard: - when the patches will be uploaded for review; - what tests you're planning to do? Any support required from Ada's team? and when... - when you expect the patch review comments can be addressed and patch merged to master. Thanks. - cindy -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Tuesday, March 26, 2019 8:38 AM To: Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com Subject: Re: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi team, Thank you for your feedback from our last meeting, and this is my update. I am checking all points described in this thread. Actually I have got progress in the topic related to snmp and the relation with oamcontroller, sysinv and cgts-client. I plan to send a PR with information about findings/architecture to the stx-fault/doc in a future. I think, it is not necessary another meeting as it was mentioned, I think I have enough information to continue and I am going to update the current reviews and send news according to the points discussed until today, and contact Tao for specific questions. Thanks Tao, Abraham and Frank. Best regards. Mario. ________________________________________ From: Arce Moreno, Abraham Sent: Friday, March 22, 2019 10:37 AM To: starlingx-discuss at lists.starlingx.io Cc: Arevalo, Mario Alfredo C; Tao Liu Subject: Fault Management Containerization (SB 2004008) Follow Up Thanks Frank for setting this up. Thanks everyone for your attendance to this meeting, here you have high level notes and ToDos based in the topics covered. In Summary - The presentation Stx-Fault/Containers is located at [0]. - Tao will kindly update the Fault Management architecture diagram, slide 8. - Mario will send an email no later than Monday afternoon with the latest findings / questions based in his 5 ToDos. - We will meet again on Tuesday to finalize on tasks and implementation details. If we are forgetting about any key point in this email, please do not hesitate to reply. StarlingX Architecture - 2 instances for each of the following projects: - Keystone - Horizon - Barbican - Fault Management will have 2 instances as well. Fault Management Architecture - [ToDo] [Tao] to modify the Fault Management architecture (Slide 8) Thanks Tao! - fm-api runs in compute node, snmp provide interfaces - [ToDo] [Mario] to check these statements Fault Management REST API - [ToDo] [Mario] to write the next level of details for REST API mapping / implementation, consider to include PUT to Event Log. 
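To make the PUT-to-Event-Log item above a bit more concrete, the call could end up looking something like the sketch below (a hypothetical example only: the service hostname, port and payload fields are assumptions for discussion, not the actual fm-rest-api contract, which is exactly what this ToDo is meant to pin down):

  # Hypothetical PUT against a containerized FM REST API endpoint
  TOKEN=$(openstack token issue -f value -c id)
  curl -s -X PUT "http://fm-rest-api.openstack.svc.cluster.local:18002/v1/event_log/<uuid>" \
    -H "X-Auth-Token: $TOKEN" \
    -H "Content-Type: application/json" \
    -d '{"state": "acknowledged"}'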
Fault Management Architecture - python-fmclient is a wrapper to fm_cli / fm_api - [ToDo] [Mario] to understand more about fm_cli as a wrapper and how does it interact and affects fault management containerized strategy. FM Proposal - Remove mysql, fm-api, fm-common - [ToDo] [Mario] to understand about the removal of fm-api and fm-common from the containerized instance. - Dependency to cgts-client - [ToDo] [Mario] to understand what is cgtc-client and how does it interacts with fault management and the new containerized instance. OpenStack Applications The following 2 projects will make use of the Fault Management containerized: - starlingx-dashboard - stx-nfv [0] https://docs.google.com/presentation/d/1_vG83aHTToXlIdJxaJpVL-MHWfRGnxLuyEdFDt-nfwo/edit?usp=sharing _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From maria.g.perez.ibarra at intel.com Thu Apr 4 04:29:24 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 4 Apr 2019 04:29:24 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190403 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-03 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 Sanity Platform 07 TCs [FAIL] | 03 TOTAL: 56 [PASS : 43] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 6 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 46] [Fail : 7] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 50] [Fail : 11] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] | 1 FAIL Sanity OpenStack 52 TCs [PASS] | 10 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 49] [Fail : 12] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] | 2 FAIL Sanity Platform 05 TCs [PASS] TOTAL: 61 [PASS : 59] [Fail : 2] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] | 33 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 61 [PASS : 27] [Fail : 34] ------------------------------------------------------------------ Since the bug https://bugs.launchpad.net/starlingx/+bug/1822657 came up, I had to add the hosts manually "2+2+2 Bare Metal" and then I was able to continue with the automated execution. Standard Dedicated Storage BM and Virtual - Could not create VMs Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1821841 ------------------------------------------------------------------ Regards. Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Matt.Peters at windriver.com Thu Apr 4 11:39:36 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Thu, 4 Apr 2019 11:39:36 +0000 Subject: [Starlingx-discuss] port-security In-Reply-To: References: <9A85D2917C58154C960D95352B22818BD070BA9F@fmsmsx123.amr.corp.intel.com> Message-ID: <14259A4D-50DA-4B9A-854E-28BFD7DC438E@windriver.com> The Neutron security group support is already being tracked and in development under the following storyboard. https://storyboard.openstack.org/#!/story/2002944 A Storyboard is not required to enable/disable the port_security extension. It can be configured through Helm overrides for the neutron service. You would use the “system helm-override-xxxx” commands to show and modify the configuration overrides. For the port security extension, you would issue the following: system helm-override-update neutron openstack --set conf.plugins.ml2_conf.ml2.extension_drivers="port_security" From: Curtis Date: Monday, April 1, 2019 at 2:22 PM To: "Jones, Bruce E" Cc: "von Hoesslin, Volker" , "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] port-security On Mon, Apr 1, 2019 at 1:08 PM Jones, Bruce E > wrote: Volker, please follow the process Curtis mentioned below and submit a StoryBoard Story. Then I’d suggest you send the story link out to the mailing list and ask the Networking sub-project to work with you to fill in any additional details needed. Meanwhile Curtis can you add this to the ethercalc as an item for the next release? I added it into the ethercalc. Thank, Curtis brucej From: Curtis [mailto:serverascode at gmail.com] Sent: Monday, April 1, 2019 10:00 AM To: von Hoesslin, Volker > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] port-security On Mon, Apr 1, 2019 at 10:40 AM von Hoesslin, Volker > wrote: Ok, this is an very intressting point! I would prefere to add port-security maybe an system switch to change this behavior in runtime (of cource, it need an re-provisining). I should know this better, but I believe if you'd like to request a feature you could go through this process: https://wiki.openstack.org/wiki/StarlingX/Feature_Development_Process If that's not the process we're following for the project hopefully someone on the list will correct me. Once it's there it could be discussed. :) Is there an releation with my other problem? One Instance with multiple Networks and for every Network an floating IP -> only one floating IP is working all other are without any response? Also port-forwarding in the router are broken and do not word … Is there any chance it's just a routing problem? ie. reply packets for the non-working interfaces are going out the working interface b/c it has the single default gw? Something like that? Thanks, Curtis Von: Curtis [mailto:serverascode at gmail.com] Gesendet: Freitag, 29. März 2019 19:44 An: von Hoesslin, Volker Cc: starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] pre-stable version bevor next release? On Fri, Mar 29, 2019 at 10:55 AM von Hoesslin, Volker > wrote: Hi anybody, i realy love this project but I have some reason to deploy now an working stack. Is there currently an working version or have to wait until new release date? The current stable version isn’t working in all details for me, look at discuss “Unrecognized attribute(s) 'port_security_enabled'”. 
With regards to port security, I tried to write this email a couple times, it's tough b/c I don't know the history, but here are my thoughts: - Security groups are effectively disabled in stx (noop driver), at least in my deployment from an ISO from last week - This is probably for performance reasons, ie. iptables, but I'm not sure of the history - Maybe it's time to revisit security groups? eg. k8s is there and uses iptables, or maybe openflow based driver would be an option...or other? - Likely we (the project) just need to make sure it gets properly documented, if it's not already Maybe some others with more history will chip in. :) Thanks, Curtis Thx, Volker… _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Blog: serverascode.com -- Blog: serverascode.com -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Thu Apr 4 12:48:40 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Thu, 4 Apr 2019 12:48:40 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190403 In-Reply-To: References: Message-ID: Maria: Can you indicate which 13 TCS failed for the AIO - Simplex and what LP bug is tracking the cause of those failures? The LPs at the end of the report don't look to be related to the AIO - Simplex configuration. Frank From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, April 04, 2019 12:29 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190403 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-03 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 Sanity Platform 07 TCs [FAIL] | 03 TOTAL: 56 [PASS : 43] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 6 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 46] [Fail : 7] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 50] [Fail : 11] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] | 1 FAIL Sanity OpenStack 52 TCs [PASS] | 10 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 49] [Fail : 12] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] | 2 FAIL Sanity Platform 05 TCs [PASS] TOTAL: 61 [PASS : 59] [Fail : 2] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] | 33 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 61 [PASS : 27] [Fail : 34] ------------------------------------------------------------------ Since the bug 
https://bugs.launchpad.net/starlingx/+bug/1822657 came up, I had to add the hosts manually "2+2+2 Bare Metal" and then I was able to continue with the automated execution. Standard Dedicated Storage BM and Virtual - Could not create VMs Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1821841 ------------------------------------------------------------------ Regards. Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Volker.Hoesslin at swsn.de Thu Apr 4 13:22:03 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Thu, 4 Apr 2019 13:22:03 +0000 Subject: [Starlingx-discuss] port-security In-Reply-To: <14259A4D-50DA-4B9A-854E-28BFD7DC438E@windriver.com> References: <9A85D2917C58154C960D95352B22818BD070BA9F@fmsmsx123.amr.corp.intel.com> <14259A4D-50DA-4B9A-854E-28BFD7DC438E@windriver.com> Message-ID: WOW! This is realy great stuff! I hope the new official release will come in short time! Volker… Von: Peters, Matt [mailto:Matt.Peters at windriver.com] Gesendet: Donnerstag, 4. April 2019 13:40 An: Curtis; Jones, Bruce E Cc: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] port-security The Neutron security group support is already being tracked and in development under the following storyboard. https://storyboard.openstack.org/#!/story/2002944 A Storyboard is not required to enable/disable the port_security extension. It can be configured through Helm overrides for the neutron service. You would use the “system helm-override-xxxx” commands to show and modify the configuration overrides. For the port security extension, you would issue the following: system helm-override-update neutron openstack --set conf.plugins.ml2_conf.ml2.extension_drivers="port_security" From: Curtis Date: Monday, April 1, 2019 at 2:22 PM To: "Jones, Bruce E" Cc: "von Hoesslin, Volker" , "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] port-security On Mon, Apr 1, 2019 at 1:08 PM Jones, Bruce E > wrote: Volker, please follow the process Curtis mentioned below and submit a StoryBoard Story. Then I’d suggest you send the story link out to the mailing list and ask the Networking sub-project to work with you to fill in any additional details needed. Meanwhile Curtis can you add this to the ethercalc as an item for the next release? I added it into the ethercalc. Thank, Curtis brucej From: Curtis [mailto:serverascode at gmail.com] Sent: Monday, April 1, 2019 10:00 AM To: von Hoesslin, Volker > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] port-security On Mon, Apr 1, 2019 at 10:40 AM von Hoesslin, Volker > wrote: Ok, this is an very intressting point! I would prefere to add port-security maybe an system switch to change this behavior in runtime (of cource, it need an re-provisining). I should know this better, but I believe if you'd like to request a feature you could go through this process: https://wiki.openstack.org/wiki/StarlingX/Feature_Development_Process If that's not the process we're following for the project hopefully someone on the list will correct me. Once it's there it could be discussed. :) Is there an releation with my other problem? One Instance with multiple Networks and for every Network an floating IP -> only one floating IP is working all other are without any response? Also port-forwarding in the router are broken and do not word … Is there any chance it's just a routing problem? ie. 
reply packets for the non-working interfaces are going out the working interface b/c it has the single default gw? Something like that? Thanks, Curtis Von: Curtis [mailto:serverascode at gmail.com] Gesendet: Freitag, 29. März 2019 19:44 An: von Hoesslin, Volker Cc: starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] pre-stable version bevor next release? On Fri, Mar 29, 2019 at 10:55 AM von Hoesslin, Volker > wrote: Hi anybody, i realy love this project but I have some reason to deploy now an working stack. Is there currently an working version or have to wait until new release date? The current stable version isn’t working in all details for me, look at discuss “Unrecognized attribute(s) 'port_security_enabled'”. With regards to port security, I tried to write this email a couple times, it's tough b/c I don't know the history, but here are my thoughts: - Security groups are effectively disabled in stx (noop driver), at least in my deployment from an ISO from last week - This is probably for performance reasons, ie. iptables, but I'm not sure of the history - Maybe it's time to revisit security groups? eg. k8s is there and uses iptables, or maybe openflow based driver would be an option...or other? - Likely we (the project) just need to make sure it gets properly documented, if it's not already Maybe some others with more history will chip in. :) Thanks, Curtis Thx, Volker… _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Blog: serverascode.com -- Blog: serverascode.com -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Thu Apr 4 13:56:49 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Thu, 4 Apr 2019 13:56:49 +0000 Subject: [Starlingx-discuss] DRAFT release policy In-Reply-To: <9A85D2917C58154C960D95352B22818BD070C8D8@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD0709720@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A26A@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A293@fmsmsx123.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3E16@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BD070C76C@fmsmsx123.amr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC0A23418@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BD070C8D8@fmsmsx123.amr.corp.intel.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A23DE3@ALA-MBD.corp.ad.wrs.com> Hi Bruce - I guess I'm saying that we need to consider a wider churn policy than just what to do if stuff isn't ready at MS-3. -----Original Message----- From: Jones, Bruce E Sent: Tuesday, April 2, 2019 3:38 PM To: Zvonar, Bill ; Khalil, Ghada Cc: starlingx-discuss ; Dean Troyer ; Seiler, Glenn Subject: RE: [Starlingx-discuss] DRAFT release policy Bill, yes if all anchor features are unready, there will be a delay for MS-3. Regarding the how much, and which features, and the churn you describe, I think (hope!) that the topic is covered already in the draft. The proposal is that it is managed by the Release team who makes recommendations to the TSC. 
brucej -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, April 2, 2019 11:24 AM To: Jones, Bruce E ; Khalil, Ghada Cc: starlingx-discuss ; Dean Troyer ; Seiler, Glenn Subject: RE: [Starlingx-discuss] DRAFT release policy Hi Bruce, If none of the anchor features will be ready by MS-3, then we have no choice but to reforecast. Then we get into the question of by how much, and which, if any anchor features can be excluded (which doesn't totally make sense to me - why would we have called it an anchor in the first place). This Release Churn is another heading that we need to add to the release policy, and whatever we come up with, we'll need to get it approved by the TSC, I think. Bill... -----Original Message----- From: Jones, Bruce E Sent: Tuesday, April 2, 2019 1:02 PM To: Khalil, Ghada Cc: starlingx-discuss ; Dean Troyer ; Seiler, Glenn Subject: Re: [Starlingx-discuss] DRAFT release policy Good feedback, Ghada, thank you. I have updated the document accordingly. I've also made a few other changes like adding test case readiness as a release criteria and proposed a method for handing anchor features that aren't completed by MS-3. I've removed the previously resolved (and much appreciated) feedback in the interests of readability. Link and updated text below. Brucej This file is: https://etherpad.openstack.org/p/stx-release-policy-draft This is a draft document for release planning. Comments and feedback welcomed! Openstack Release Policy ==================== The StarlingX project follows the release model defined in https://docs.openstack.org/project-team-guide/release-management.html using the "Trailing the common cycle" due to our dependency on upstream OpenStack projects. Release Planning ============== Initial release planning starts at the Open Infrastructure PTG meetings, where the TSC and community members discuss candidate features for the next release. The TSC then will review and approve a feature list for the release, will identify any release gating "anchor" features, and set a target date. The recommended target date is the date of the next OpenStack release plus 6 weeks. The PTG meeting is also an opportunity to review the community's goals and to define goals for the release. The overall Release Plan is created and managed by the Release sub-team by combining the TSC's input on content and target dates with input from the feature developers in the community and the Test team. The plan will include a standard set of milestones as per the usual OpenStack release management process. The Release team will actively manage the plan over the course of the release, recommending any adjustments in content and dates to the community and to the TSC for approval. We recognize that we are a new community working in a highly dynamic technology and that changes in our plans over time are normal and expected. We will work as a community to be open and transparent about our release process, and to minimize change from the original plan. Open issue: We should consider changing our release naming convention to something that isn't a date. Defect Tracking ============ The release team shall review active and incoming bug reports and make an initial call as to whether or not the bug needs to be fixed in the next release. If so, the bugs shall be tagged and tracked as the work on the release progresses. 
The list of release gating bugs will be actively managed, reviewed and scrubbed by the Release team to ensure that bugs are properly categorized as release gating. Release Policy =========== The Release team, together with the Test team TLs/PLs, shall make a recommendation to the community and TSC that a release is ready to go. Upon TSC approval, the release branches are tagged and the release documented. That recommendation should be based on: * Whether or not all anchor features in the release are complete, as per the input of the team(s) implementing the features and the results of Test team testing of the features * Proposal: All features identified as anchor features for a release need to be completed by the feature freeze milestone (MS-3). In the event that an anchor feature is not complete before the release feature freeze milestone, the Release team will make a recommendation to the TSC to extend the milestone date or to defer the feature to the next release. * Whether or not all test cases planned for the release are complete and ready to run * Proposal: All planned test cases shall be ready before the start of formal release candidate testing (RC1 milestone) * The completion and results of formal Testing performed by the Test team, measured by the percentage of planned tests attempted and the test pass rate * Proposal: 100% test cases attempted and 95% test cases passing in all configurations * The status of release gating bugs * Proposal: All release gating bugs must be fixed prior to a StarlingX release, ideally before the RC1 milestone but certainly before the final release. Bugfix Releases ============= The Release team, in conjunction with the community, can create a plan for a bug fix release. This would be an update to a previous release to address important defects that are impacting our users. The content would be fixes backported from master to the release branch, and would be based on both community and developer input regarding which fixes should be included. Testing of a bug fix release should include at least verification testing of the fixes and any additional testing needed as determined by the Test team. -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Monday, April 1, 2019 5:19 PM To: Jones, Bruce E ; Dean Troyer ; Seiler, Glenn Cc: starlingx-discuss Subject: RE: [Starlingx-discuss] DRAFT release policy Hi Bruce, I have a comment regarding this point: The severity and number of bugs open against the release Proposal: No open Critical or High severity bugs against the release candidate. Or maybe 1-3 Highs if we have a clear resolution plan (and a plan to release a patch against the release?) (ghada) We use an explicit tag to identify which bugs gate a particular release (regardless of severity). The whole list will have to be reviewed and scrubbed prior to reaching the release milestone. I don't feel it is sufficient to only review Critical / Major issues. [Example: On April 1/2019, there are 85 release gating bugs: only 13 are Critical/High. Yet it wouldn't be sufficient to only fix those 13 to ensure a quality release]. In the Release Planning wiki[1] , we have previously stated this policy: All release gating issues are addressed or reviewed/accepted for deferral I feel we need to keep this. 
[1] https://wiki.openstack.org/wiki/StarlingX/Release_Plan -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, March 29, 2019 2:10 PM To: Jones, Bruce E; Dean Troyer; Seiler, Glenn Cc: starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy I've updated the etherpad with changes that reflect the current feedback. Please review and add any additional feedback there. Thank you! https://etherpad.openstack.org/p/stx-release-policy-draft brucej -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, March 29, 2019 10:06 AM To: Dean Troyer ; Seiler, Glenn Cc: starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy Dean, Glenn - thank you for the feedback. I agree with it. There is also some feedback in the etherpad. I'm going to respond to both sets in the etherpad and try to improve the policy and the wording. https://etherpad.openstack.org/p/stx-release-policy-draft Thanks! brucej -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Thursday, March 28, 2019 7:29 PM To: Seiler, Glenn Cc: Jones, Bruce E ; starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy On Thu, Mar 28, 2019 at 6:39 PM Seiler, Glenn wrote: > 1- We need to move away from time-based releases > 2- We need to do twice a year releases. This stmt, by definition, implies a time-gated release. Maybe it isn’t a specific date, but it is still time-gated. The wording does need work, yes. After a short conversation with Bruce this afternoon (Bruce, correct me if I'm wrong here) I came away with the intention being more of increasing the lag from OpenStack releases rather than separating completely from the OpenStack release cycle which is likely to stay at approx 6 months for a while (that's a rabbit hole under the bike shed I'd like to avoid just now). > As a nascent project, I think we need to show gradual and consistent progress. ++ > I did listen to much of the release team meeting today, and realize the trade-offs between big-rocks and timing are very difficult. > > Given the difficult choice of functionality versus timing, I personally think we need to show progress in getting to Stein and a container based distribution as major milestones in 1H and perhaps defer the Distributed Cloud capability to a 2H release. > > I don’t see anything intrinsically wrong with moving a specific date out; it happens all the time. But I also think a release should have some gate; i.e. we don’t move out of 1H. And if some functionality isn’t ready, then we move the functionality to another release in 2H. We took a stab at estimating and missed, making adjustments now is normal and to be expected. I agree with considering pushing distcloud to the next release because it is a) new functionality, and b) devs overlap with the container work and I think making the k8s infrastructure rock solid is much more important. If we are too far off with system stability the ramifications will be harder to overcome than delaying a new feature. > Anyway, that would be my vote, if I have one. You totally have a voice as part of the community, I would like to hear from more folks here... 
dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From chenjie.xu at intel.com Thu Apr 4 14:16:22 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Thu, 4 Apr 2019 14:16:22 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: <822d14b1-65a0-d9f0-93b2-ea05495f6fb7@windriver.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> <822d14b1-65a0-d9f0-93b2-ea05495f6fb7@windriver.com> Message-ID: Hi Chris, I have reinstalled StarlingX on the 4 bare metals but meet some problems. So I can’t execute your commands now. As soon as I deploy StarlingX successfully, I will execute those command and let you know the result. Best Regards, Xu, Chenjie From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: Wednesday, April 3, 2019 11:54 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing On 4/2/2019 9:50 AM, Khalil, Ghada wrote: Thanks Chenjie. Can you please run the following on your compute nodes and attach the output? sudo lscpu sudo virsh capabilities Richardo / Juan P, Please provide the cpu models and the above output from your two hardware systems as well. In addition to the above, can you please provide the logs from one of the nova-compute pods on an affected system? You can get the logs by running : POD=`kubectl -n openstack get pod -l application=nova,component=compute \ -o=jsonpath='{.items[0].metadata.name'}` kubectl -n openstack logs -c nova-compute $POD Thanks, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Thu Apr 4 15:34:57 2019 From: serverascode at gmail.com (Curtis) Date: Thu, 4 Apr 2019 11:34:57 -0400 Subject: [Starlingx-discuss] port-security In-Reply-To: <14259A4D-50DA-4B9A-854E-28BFD7DC438E@windriver.com> References: <9A85D2917C58154C960D95352B22818BD070BA9F@fmsmsx123.amr.corp.intel.com> <14259A4D-50DA-4B9A-854E-28BFD7DC438E@windriver.com> Message-ID: On Thu, Apr 4, 2019 at 7:40 AM Peters, Matt wrote: > The Neutron security group support is already being tracked and in > development under the following storyboard. > > https://storyboard.openstack.org/#!/story/2002944 > > > > A Storyboard is not required to enable/disable the port_security > extension. It can be configured through Helm overrides for the neutron > service. > > You would use the “system helm-override-xxxx” commands to show and modify > the configuration overrides. 
> > > > For the port security extension, you would issue the following: > > system helm-override-update neutron openstack --set > conf.plugins.ml2_conf.ml2.extension_drivers="port_security" > Thanks Matt, that's great to know. Is the idea that security groups will be enabled by default? Thanks, Curtis > > > *From: *Curtis > *Date: *Monday, April 1, 2019 at 2:22 PM > *To: *"Jones, Bruce E" > *Cc: *"von Hoesslin, Volker" , " > starlingx-discuss at lists.starlingx.io" < > starlingx-discuss at lists.starlingx.io> > *Subject: *Re: [Starlingx-discuss] port-security > > > > > > > > On Mon, Apr 1, 2019 at 1:08 PM Jones, Bruce E > wrote: > > Volker, please follow the process Curtis mentioned below and submit a > StoryBoard Story. Then I’d suggest you send the story link out to the > mailing list and ask the Networking sub-project to work with you to fill in > any additional details needed. > > > > Meanwhile Curtis can you add this to the ethercalc as an item for the next > release? > > > > > > I added it into the ethercalc. > > > > Thank, > > Curtis > > > > > > brucej > > > > *From:* Curtis [mailto:serverascode at gmail.com] > *Sent:* Monday, April 1, 2019 10:00 AM > *To:* von Hoesslin, Volker > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] port-security > > > > On Mon, Apr 1, 2019 at 10:40 AM von Hoesslin, Volker < > Volker.Hoesslin at swsn.de> wrote: > > Ok, this is an very intressting point! I would prefere to add > port-security maybe an system switch to change this behavior in runtime (of > cource, it need an re-provisining). > > > > I should know this better, but I believe if you'd like to request a > feature you could go through this process: > > > > https://wiki.openstack.org/wiki/StarlingX/Feature_Development_Process > > > > If that's not the process we're following for the project hopefully > someone on the list will correct me. > > > > Once it's there it could be discussed. :) > > > > > > Is there an releation with my other problem? One Instance with multiple > Networks and for every Network an floating IP -> only one floating IP is > working all other are without any response? Also port-forwarding in the > router are broken and do not word … > > > > > > Is there any chance it's just a routing problem? ie. reply packets for the > non-working interfaces are going out the working interface b/c it has the > single default gw? Something like that? > > > > Thanks, > > Curtis > > > > > > *Von:* Curtis [mailto:serverascode at gmail.com] > *Gesendet:* Freitag, 29. März 2019 19:44 > *An:* von Hoesslin, Volker > *Cc:* starlingx-discuss at lists.starlingx.io > *Betreff:* Re: [Starlingx-discuss] pre-stable version bevor next release? > > > > On Fri, Mar 29, 2019 at 10:55 AM von Hoesslin, Volker < > Volker.Hoesslin at swsn.de> wrote: > > Hi anybody, > > i realy love this project but I have some reason to deploy now an working > stack. Is there currently an working version or have to wait until new > release date? The current stable version isn’t working in all details for > me, look at discuss “Unrecognized attribute(s) 'port_security_enabled'”. > > > > > > With regards to port security, I tried to write this email a couple times, > it's tough b/c I don't know the history, but here are my thoughts: > > > > - Security groups are effectively disabled in stx (noop driver), at least > in my deployment from an ISO from last week > > - This is probably for performance reasons, ie. 
iptables, but I'm not sure > of the history > > - Maybe it's time to revisit security groups? eg. k8s is there and uses > iptables, or maybe openflow based driver would be an option...or other? > > - Likely we (the project) just need to make sure it gets properly > documented, if it's not already > > > > Maybe some others with more history will chip in. :) > > > > Thanks, > > Curtis > > > > > > > > Thx, > > Volker… > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > -- > > Blog: serverascode.com > > > > -- > > Blog: serverascode.com > > > > -- > > Blog: serverascode.com > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Matt.Peters at windriver.com Thu Apr 4 15:44:02 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Thu, 4 Apr 2019 15:44:02 +0000 Subject: [Starlingx-discuss] port-security In-Reply-To: References: <9A85D2917C58154C960D95352B22818BD070BA9F@fmsmsx123.amr.corp.intel.com> <14259A4D-50DA-4B9A-854E-28BFD7DC438E@windriver.com> Message-ID: <53BABE5D-3032-4B53-8C0B-BEC8FE25B3B4@windriver.com> Hi Curtis, Yes, this Story is to enable the security groups by default. They can be disabled via Helm overrides to change it globally (override to noop), or via port_security extension to enable/disable it on a per-network basis. Regards, Matt From: Curtis Date: Thursday, April 4, 2019 at 11:35 AM To: "Peters, Matt" Cc: "Jones, Bruce E" , "von Hoesslin, Volker" , "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] port-security On Thu, Apr 4, 2019 at 7:40 AM Peters, Matt > wrote: The Neutron security group support is already being tracked and in development under the following storyboard. https://storyboard.openstack.org/#!/story/2002944 A Storyboard is not required to enable/disable the port_security extension. It can be configured through Helm overrides for the neutron service. You would use the “system helm-override-xxxx” commands to show and modify the configuration overrides. For the port security extension, you would issue the following: system helm-override-update neutron openstack --set conf.plugins.ml2_conf.ml2.extension_drivers="port_security" Thanks Matt, that's great to know. Is the idea that security groups will be enabled by default? Thanks, Curtis From: Curtis > Date: Monday, April 1, 2019 at 2:22 PM To: "Jones, Bruce E" > Cc: "von Hoesslin, Volker" >, "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] port-security On Mon, Apr 1, 2019 at 1:08 PM Jones, Bruce E > wrote: Volker, please follow the process Curtis mentioned below and submit a StoryBoard Story. Then I’d suggest you send the story link out to the mailing list and ask the Networking sub-project to work with you to fill in any additional details needed. Meanwhile Curtis can you add this to the ethercalc as an item for the next release? I added it into the ethercalc. Thank, Curtis brucej From: Curtis [mailto:serverascode at gmail.com] Sent: Monday, April 1, 2019 10:00 AM To: von Hoesslin, Volker > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] port-security On Mon, Apr 1, 2019 at 10:40 AM von Hoesslin, Volker > wrote: Ok, this is an very intressting point! I would prefere to add port-security maybe an system switch to change this behavior in runtime (of cource, it need an re-provisining). 
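(For the per-network case mentioned above: once the port_security extension driver is loaded, standard OpenStack CLI calls can toggle it at runtime without re-provisioning. A sketch with placeholder names, not verified on this build.)

# Disable or enable port security (and with it security groups) on one network
openstack network set --disable-port-security <network>
openstack network set --enable-port-security <network>

# The same toggle exists per port
openstack port set --disable-port-security <port-id>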
I should know this better, but I believe if you'd like to request a feature you could go through this process: https://wiki.openstack.org/wiki/StarlingX/Feature_Development_Process If that's not the process we're following for the project hopefully someone on the list will correct me. Once it's there it could be discussed. :) Is there an releation with my other problem? One Instance with multiple Networks and for every Network an floating IP -> only one floating IP is working all other are without any response? Also port-forwarding in the router are broken and do not word … Is there any chance it's just a routing problem? ie. reply packets for the non-working interfaces are going out the working interface b/c it has the single default gw? Something like that? Thanks, Curtis Von: Curtis [mailto:serverascode at gmail.com] Gesendet: Freitag, 29. März 2019 19:44 An: von Hoesslin, Volker Cc: starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] pre-stable version bevor next release? On Fri, Mar 29, 2019 at 10:55 AM von Hoesslin, Volker > wrote: Hi anybody, i realy love this project but I have some reason to deploy now an working stack. Is there currently an working version or have to wait until new release date? The current stable version isn’t working in all details for me, look at discuss “Unrecognized attribute(s) 'port_security_enabled'”. With regards to port security, I tried to write this email a couple times, it's tough b/c I don't know the history, but here are my thoughts: - Security groups are effectively disabled in stx (noop driver), at least in my deployment from an ISO from last week - This is probably for performance reasons, ie. iptables, but I'm not sure of the history - Maybe it's time to revisit security groups? eg. k8s is there and uses iptables, or maybe openflow based driver would be an option...or other? - Likely we (the project) just need to make sure it gets properly documented, if it's not already Maybe some others with more history will chip in. :) Thanks, Curtis Thx, Volker… _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Blog: serverascode.com -- Blog: serverascode.com -- Blog: serverascode.com -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Apr 4 16:49:17 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 4 Apr 2019 16:49:17 +0000 Subject: [Starlingx-discuss] DRAFT release policy In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC0A23DE3@ALA-MBD.corp.ad.wrs.com> References: <9A85D2917C58154C960D95352B22818BD0709720@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A26A@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BD070A293@fmsmsx123.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3E16@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BD070C76C@fmsmsx123.amr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC0A23418@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BD070C8D8@fmsmsx123.amr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC0A23DE3@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BD070E166@fmsmsx123.amr.corp.intel.com> OK, I understand now. Let's discuss at our meeting later today which other milestones we might want to include. 
brucej -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Thursday, April 4, 2019 6:57 AM To: Jones, Bruce E ; Khalil, Ghada Cc: starlingx-discuss ; Dean Troyer ; Seiler, Glenn Subject: RE: [Starlingx-discuss] DRAFT release policy Hi Bruce - I guess I'm saying that we need to consider a wider churn policy than just what to do if stuff isn't ready at MS-3. -----Original Message----- From: Jones, Bruce E Sent: Tuesday, April 2, 2019 3:38 PM To: Zvonar, Bill ; Khalil, Ghada Cc: starlingx-discuss ; Dean Troyer ; Seiler, Glenn Subject: RE: [Starlingx-discuss] DRAFT release policy Bill, yes if all anchor features are unready, there will be a delay for MS-3. Regarding the how much, and which features, and the churn you describe, I think (hope!) that the topic is covered already in the draft. The proposal is that it is managed by the Release team who makes recommendations to the TSC. brucej -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, April 2, 2019 11:24 AM To: Jones, Bruce E ; Khalil, Ghada Cc: starlingx-discuss ; Dean Troyer ; Seiler, Glenn Subject: RE: [Starlingx-discuss] DRAFT release policy Hi Bruce, If none of the anchor features will be ready by MS-3, then we have no choice but to reforecast. Then we get into the question of by how much, and which, if any anchor features can be excluded (which doesn't totally make sense to me - why would we have called it an anchor in the first place). This Release Churn is another heading that we need to add to the release policy, and whatever we come up with, we'll need to get it approved by the TSC, I think. Bill... -----Original Message----- From: Jones, Bruce E Sent: Tuesday, April 2, 2019 1:02 PM To: Khalil, Ghada Cc: starlingx-discuss ; Dean Troyer ; Seiler, Glenn Subject: Re: [Starlingx-discuss] DRAFT release policy Good feedback, Ghada, thank you. I have updated the document accordingly. I've also made a few other changes like adding test case readiness as a release criteria and proposed a method for handing anchor features that aren't completed by MS-3. I've removed the previously resolved (and much appreciated) feedback in the interests of readability. Link and updated text below. Brucej This file is: https://etherpad.openstack.org/p/stx-release-policy-draft This is a draft document for release planning. Comments and feedback welcomed! Openstack Release Policy ==================== The StarlingX project follows the release model defined in https://docs.openstack.org/project-team-guide/release-management.html using the "Trailing the common cycle" due to our dependency on upstream OpenStack projects. Release Planning ============== Initial release planning starts at the Open Infrastructure PTG meetings, where the TSC and community members discuss candidate features for the next release. The TSC then will review and approve a feature list for the release, will identify any release gating "anchor" features, and set a target date. The recommended target date is the date of the next OpenStack release plus 6 weeks. The PTG meeting is also an opportunity to review the community's goals and to define goals for the release. The overall Release Plan is created and managed by the Release sub-team by combining the TSC's input on content and target dates with input from the feature developers in the community and the Test team. The plan will include a standard set of milestones as per the usual OpenStack release management process. 
The Release team will actively manage the plan over the course of the release, recommending any adjustments in content and dates to the community and to the TSC for approval. We recognize that we are a new community working in a highly dynamic technology and that changes in our plans over time are normal and expected. We will work as a community to be open and transparent about our release process, and to minimize change from the original plan. Open issue: We should consider changing our release naming convention to something that isn't a date. Defect Tracking ============ The release team shall review active and incoming bug reports and make an initial call as to whether or not the bug needs to be fixed in the next release. If so, the bugs shall be tagged and tracked as the work on the release progresses. The list of release gating bugs will be actively managed, reviewed and scrubbed by the Release team to ensure that bugs are properly categorized as release gating. Release Policy =========== The Release team, together with the Test team TLs/PLs, shall make a recommendation to the community and TSC that a release is ready to go. Upon TSC approval, the release branches are tagged and the release documented. That recommendation should be based on: * Whether or not all anchor features in the release are complete, as per the input of the team(s) implementing the features and the results of Test team testing of the features * Proposal: All features identified as anchor features for a release need to be completed by the feature freeze milestone (MS-3). In the event that an anchor feature is not complete before the release feature freeze milestone, the Release team will make a recommendation to the TSC to extend the milestone date or to defer the feature to the next release. * Whether or not all test cases planned for the release are complete and ready to run * Proposal: All planned test cases shall be ready before the start of formal release candidate testing (RC1 milestone) * The completion and results of formal Testing performed by the Test team, measured by the percentage of planned tests attempted and the test pass rate * Proposal: 100% test cases attempted and 95% test cases passing in all configurations * The status of release gating bugs * Proposal: All release gating bugs must be fixed prior to a StarlingX release, ideally before the RC1 milestone but certainly before the final release. Bugfix Releases ============= The Release team, in conjunction with the community, can create a plan for a bug fix release. This would be an update to a previous release to address important defects that are impacting our users. The content would be fixes backported from master to the release branch, and would be based on both community and developer input regarding which fixes should be included. Testing of a bug fix release should include at least verification testing of the fixes and any additional testing needed as determined by the Test team. -----Original Message----- From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Monday, April 1, 2019 5:19 PM To: Jones, Bruce E ; Dean Troyer ; Seiler, Glenn Cc: starlingx-discuss Subject: RE: [Starlingx-discuss] DRAFT release policy Hi Bruce, I have a comment regarding this point: The severity and number of bugs open against the release Proposal: No open Critical or High severity bugs against the release candidate. Or maybe 1-3 Highs if we have a clear resolution plan (and a plan to release a patch against the release?) 
(ghada) We use an explicit tag to identify which bugs gate a particular release (regardless of severity). The whole list will have to be reviewed and scrubbed prior to reaching the release milestone. I don't feel it is sufficient to only review Critical / Major issues. [Example: On April 1/2019, there are 85 release gating bugs: only 13 are Critical/High. Yet it wouldn't be sufficient to only fix those 13 to ensure a quality release]. In the Release Planning wiki[1] , we have previously stated this policy: All release gating issues are addressed or reviewed/accepted for deferral I feel we need to keep this. [1] https://wiki.openstack.org/wiki/StarlingX/Release_Plan -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, March 29, 2019 2:10 PM To: Jones, Bruce E; Dean Troyer; Seiler, Glenn Cc: starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy I've updated the etherpad with changes that reflect the current feedback. Please review and add any additional feedback there. Thank you! https://etherpad.openstack.org/p/stx-release-policy-draft brucej -----Original Message----- From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, March 29, 2019 10:06 AM To: Dean Troyer ; Seiler, Glenn Cc: starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy Dean, Glenn - thank you for the feedback. I agree with it. There is also some feedback in the etherpad. I'm going to respond to both sets in the etherpad and try to improve the policy and the wording. https://etherpad.openstack.org/p/stx-release-policy-draft Thanks! brucej -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Thursday, March 28, 2019 7:29 PM To: Seiler, Glenn Cc: Jones, Bruce E ; starlingx-discuss Subject: Re: [Starlingx-discuss] DRAFT release policy On Thu, Mar 28, 2019 at 6:39 PM Seiler, Glenn wrote: > 1- We need to move away from time-based releases > 2- We need to do twice a year releases. This stmt, by definition, implies a time-gated release. Maybe it isn’t a specific date, but it is still time-gated. The wording does need work, yes. After a short conversation with Bruce this afternoon (Bruce, correct me if I'm wrong here) I came away with the intention being more of increasing the lag from OpenStack releases rather than separating completely from the OpenStack release cycle which is likely to stay at approx 6 months for a while (that's a rabbit hole under the bike shed I'd like to avoid just now). > As a nascent project, I think we need to show gradual and consistent progress. ++ > I did listen to much of the release team meeting today, and realize the trade-offs between big-rocks and timing are very difficult. > > Given the difficult choice of functionality versus timing, I personally think we need to show progress in getting to Stein and a container based distribution as major milestones in 1H and perhaps defer the Distributed Cloud capability to a 2H release. > > I don’t see anything intrinsically wrong with moving a specific date out; it happens all the time. But I also think a release should have some gate; i.e. we don’t move out of 1H. And if some functionality isn’t ready, then we move the functionality to another release in 2H. We took a stab at estimating and missed, making adjustments now is normal and to be expected. 
I agree with considering pushing distcloud to the next release because it is a) new functionality, and b) devs overlap with the container work and I think making the k8s infrastructure rock solid is much more important. If we are too far off with system stability the ramifications will be harder to overcome than delaying a new feature. > Anyway, that would be my vote, if I have one. You totally have a voice as part of the community, I would like to hear from more folks here... dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Thu Apr 4 17:01:09 2019 From: scott.little at windriver.com (Scott Little) Date: Thu, 4 Apr 2019 13:01:09 -0400 Subject: [Starlingx-discuss] Publish container image list? In-Reply-To: <610643d3-79d7-ba67-cf44-395b62eae8ff@windriver.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA46BAB1@ALA-MBD.corp.ad.wrs.com> <610643d3-79d7-ba67-cf44-395b62eae8ff@windriver.com> Message-ID: In addition a list of the most current images will be posted http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/images---latest.lst http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/images---versioned.lst e.g. images-centos-dev-latest.lst: docker.io/starlingx/stx-nova-api-proxy:master-centos-dev-latest     docker.io/starlingx/stx-aodh:master-centos-dev-latest     docker.io/starlingx/stx-ironic:master-centos-dev-latest    ... images-centos-dev-versioned.lst docker.io/starlingx/stx-nova-api-proxy:master-centos-dev-20190327T013001Z    docker.io/starlingx/stx-aodh:master-centos-dev-20190327T013001Z docker.io/starlingx/stx-ironic:master-centos-dev-20190327T013001Z    ... Scott On 2019-03-27 12:52 p.m., Scott Little wrote: > Starting soon, for cengn builds that include docker images, you can > expect to see something like .... > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos//outputs/docker-images/--.rpmlst > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos//outputs/docker-images/--.piplst > > stx-aodh-centos-dev.piplst: > Babel==2.6.0 > Jinja2==2.10 > Mako==1.0.7 > ... > > stx-aodh-centos-dev.rpmlst: > acl-2.2.51-14.el7.x86_64 > apr-1.4.8-3.el7_4.1.x86_64 > apr-util-1.5.2-6.el7.x86_64 > ... > > If you build your own images, the files are found here ... > >    $MY_WORKSPACE/std/build-images/ > > Scott > > > On 2019-03-26 2:09 p.m., Penney, Don wrote: >> >> Hi Curtis, >> >> That work is in progress now. We’ve updated the image build tool to >> record versions, which will be an input to building the application >> charts. Additionally, an upcoming work item will be to publish these >> records as part of the CENGN build. >> >> Cheers, >> >> Don. >> >> *From:*Curtis [mailto:serverascode at gmail.com] >> *Sent:* Tuesday, March 26, 2019 2:07 PM >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* [Starlingx-discuss] Publish container image list? 
>> >> Hi All, >> >> If we are not already (which maybe we are, this is a big project) I >> think it would be useful to publish a list of containers that match >> up with a particular release. Otherwise I have to build it on my own >> anyways (for example for building out a repeatable workshop setup). >> >> I know we publish an ISO, RPMs, and Helm charts, but it's be nice to >> have a list of image and tags to use to be able to populate local >> docker registries. Or a way to generate that list from existing >> artifacts...if it doesn't exist already that is. >> >> Also, right now my list has many of the stx images as >> "dev-centos-master-latest" and I'd like to firm that up with versions >> if possible. >> >> Thanks, >> >> Curtis >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Thu Apr 4 18:06:09 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 4 Apr 2019 18:06:09 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190403 In-Reply-To: References: Message-ID: Hello Frank, We did not raise a bug for simplex because we are double checking if is not a false result, we are using the new iso. The problem is that Cirrus Instance can't be created. Regards Maria G. From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Thursday, April 4, 2019 6:49 AM To: Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190403 Maria: Can you indicate which 13 TCS failed for the AIO - Simplex and what LP bug is tracking the cause of those failures? The LPs at the end of the report don't look to be related to the AIO - Simplex configuration. 
Frank From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, April 04, 2019 12:29 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190403 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-03 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 Sanity Platform 07 TCs [FAIL] | 03 TOTAL: 56 [PASS : 43] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 6 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 46] [Fail : 7] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 50] [Fail : 11] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] | 1 FAIL Sanity OpenStack 52 TCs [PASS] | 10 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 49] [Fail : 12] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] | 2 FAIL Sanity Platform 05 TCs [PASS] TOTAL: 61 [PASS : 59] [Fail : 2] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] | 33 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 61 [PASS : 27] [Fail : 34] ------------------------------------------------------------------ Since the bug https://bugs.launchpad.net/starlingx/+bug/1822657 came up, I had to add the hosts manually "2+2+2 Bare Metal" and then I was able to continue with the automated execution. Standard Dedicated Storage BM and Virtual - Could not create VMs Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1821841 ------------------------------------------------------------------ Regards. Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Thu Apr 4 19:15:39 2019 From: chris.friesen at windriver.com (Chris Friesen) Date: Thu, 4 Apr 2019 13:15:39 -0600 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> <822d14b1-65a0-d9f0-93b2-ea05495f6fb7@windriver.com> Message-ID: Given that we think the networking issue is likely due to using 4KB pages instead of hugepages, it would probably make sense to try specifying hugepages first. If that doesn't work, then we can start looking at the nova logs. 
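(For reference, hugepage-backed guest memory is normally requested through flavor extra specs. This is standard Nova behaviour, shown with an example flavor name rather than anything specific to this lab.)

# Back the guest with hugepages of whatever size the host provides
openstack flavor set m1.small --property hw:mem_page_size=large

# Or request an explicit page size in KiB, e.g. 2MB pages
openstack flavor set m1.small --property hw:mem_page_size=2048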
Chris On 4/4/2019 8:16 AM, Xu, Chenjie wrote: > Hi Chris, > > I have reinstalled StarlingX on the 4 bare metals but meet some > problems. So I can’t execute your commands now. As soon as I deploy > StarlingX successfully, I will execute those command and let you know > the result. > > Best Regards, > > Xu, Chenjie > > *From:*Chris Friesen [mailto:chris.friesen at windriver.com] > *Sent:* Wednesday, April 3, 2019 11:54 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing > > On 4/2/2019 9:50 AM, Khalil, Ghada wrote: > > Thanks Chenjie. > > Can you please run the following on your compute nodes and attach > the output? > > sudo lscpu > sudo virsh capabilities > > Richardo / Juan P, > > Please provide the cpu models and the above output from your two > hardware systems as well. > > In addition to the above, can you please provide the logs from one of > the nova-compute pods on an affected system? > > You can get the logs by running : > > POD=`kubectl -n openstack get pod -l application=nova,component=compute \ > > -o=jsonpath='{.items[0].metadata.name'}` > > kubectl -n openstack logs -c nova-compute $POD > > Thanks, > > Chris > From vm.rod25 at gmail.com Thu Apr 4 19:24:21 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Thu, 4 Apr 2019 13:24:21 -0600 Subject: [Starlingx-discuss] Performance Footprint Metrics phase 0 (update) Message-ID: Hi STX team This is the follow-up mail to share with the community the test case presented yesterday on the community call meeting. This test case responds to phase 0 on our goal to have a test framework (using existing tools ) to measure performance metrics on StarlingX project. This very basic script only has the scope o measure the footprint of STX host system ( users can decide if it is a controller or a node ) This test case was presented yesterday, here are the slides: https://drive.google.com/open?id=1Nr12zDRXf34kpjiA0LsFLU8GIpMY8Y-H2zmiC96CD4A The source code of the test has been updated in tools-contrib repository: https://github.com/starlingx-staging/tools-contrib The list of TODO things for this test case ( and framework ) is here: Please feel free to add as many requests as we have as a community. 
https://etherpad.openstack.org/p/stx_performance_feedback This test case also add the tools to save the data recorded on an InfluxDB database The way to run this script as described on the README is: python metrics.py Details of elapsed time and sampling time are described in README as well By default the script does not send any data, only print on the console the values: Example Output: https://gist.github.com/VictorRodriguez/b9260ad223b176c323363ac06d0afdde If we want to send the data to an influxDB: python metrics.py --send_data Will send data to the InlfuxDB server configured in the file: $ server.conf INFLUX_SERVER= INFLUX_PORT=PORT INFLUX_PASS=PASS INFLUX_USER=USER DB_NAME=starlingx Since yesterday meeting I left the script running on my ubuntu machine and here are the metrics of DB size: Size in data points per table: name: boottime time count count_1 count_2 ---- ----- ------- ------- 0 1571 name: cpu_footprint time count count_1 count_2 ---- ----- ------- ------- 0 1562 name: hd_footprint time count count_1 count_2 ---- ----- ------- ------- 0 3132 name: memory_footprint time count count_1 count_2 ---- ----- ------- ------- 0 3132 Size in HD: du -sh /var/lib/influxdb/data/starlingx/ 28K /var/lib/influxdb/data/starlingx/ As we can see in a total of 7827 data points we are only using 28K of the HD ( not much ) Curtis and will be working on the set up of this infrastructure on packet infra, as soon as we have information we will let you know. This perf framework is designed with the idea that anyone can use it to track their version of STX that you prefer. If you have any question how to set up on your infra please let me know Regards Victor Rodriguez -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Thu Apr 4 20:11:54 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Thu, 4 Apr 2019 20:11:54 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190403 In-Reply-To: References: Message-ID: Thanks Maria for the update. I'll await your next results. Frank From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, April 04, 2019 2:06 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190403 Hello Frank, We did not raise a bug for simplex because we are double checking if is not a false result, we are using the new iso. The problem is that Cirrus Instance can't be created. Regards Maria G. From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Thursday, April 4, 2019 6:49 AM To: Perez Ibarra, Maria G >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190403 Maria: Can you indicate which 13 TCS failed for the AIO - Simplex and what LP bug is tracking the cause of those failures? The LPs at the end of the report don't look to be related to the AIO - Simplex configuration. 
Frank From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, April 04, 2019 12:29 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190403 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-03 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 Sanity Platform 07 TCs [FAIL] | 03 TOTAL: 56 [PASS : 43] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 6 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 46] [Fail : 7] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 50] [Fail : 11] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] | 1 FAIL Sanity OpenStack 52 TCs [PASS] | 10 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 58 [PASS : 49] [Fail : 12] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] | 2 FAIL Sanity Platform 05 TCs [PASS] TOTAL: 61 [PASS : 59] [Fail : 2] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] | 33 FAIL Sanity Platform 05 TCs [PASS] | 1 FAIL TOTAL: 61 [PASS : 27] [Fail : 34] ------------------------------------------------------------------ Since the bug https://bugs.launchpad.net/starlingx/+bug/1822657 came up, I had to add the hosts manually "2+2+2 Bare Metal" and then I was able to continue with the automated execution. Standard Dedicated Storage BM and Virtual - Could not create VMs Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1821841 ------------------------------------------------------------------ Regards. Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Apr 4 20:30:20 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 4 Apr 2019 16:30:20 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 69 - Failure! 
Message-ID: <546977966.64.1554409821103.JavaMail.javamailuser@localhost> Project: STX_build_docker_flock_images Build #: 69 Status: Failure Timestamp: 20190404T202224Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190401T233000Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190401T233000Z OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root BASE_VERSION: master-stable-20190401T233000Z PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190401T233000Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca LATEST_PREFIX: master PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190401T233000Z/logs PUBLISH_TIMESTAMP: 20190401T233000Z FLOCK_VERSION: master-centos-stable-20190401T233000Z PREFIX: master TIMESTAMP: 20190401T233000Z BUILD_STREAM: stable REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190401T233000Z/outputs REGISTRY: docker.io From build.starlingx at gmail.com Thu Apr 4 20:30:23 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 4 Apr 2019 16:30:23 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 70 - Failure! Message-ID: <824113714.67.1554409824649.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 70 Status: Failure Timestamp: 20190404T200736Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190401T233000Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190401T233000Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190401T233000Z/logs MASTER_BUILD_NUMBER: 45 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190401T233000Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos PUBLISH_TIMESTAMP: 20190401T233000Z DOCKER_BUILD_ID: jenkins-master-20190401T233000Z-builder TIMESTAMP: 20190401T233000Z OS_VERSION: 7.6.1810 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190401T233000Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190401T233000Z/outputs From serverascode at gmail.com Thu Apr 4 20:51:03 2019 From: serverascode at gmail.com (Curtis) Date: Thu, 4 Apr 2019 16:51:03 -0400 Subject: [Starlingx-discuss] Performance Footprint Metrics phase 0 (update) In-Reply-To: References: Message-ID: Thanks for sending this out Victor! Overall I think this is a great place to start. :) On Thu, Apr 4, 2019 at 3:25 PM Victor Rodriguez wrote: > Hi STX team > > This is the follow-up mail to share with the community the test case > presented yesterday on the community call meeting. This test case responds > to phase 0 on our goal to have a test framework (using existing tools ) to > measure performance metrics on StarlingX project. 
> > This very basic script only has the scope o measure the footprint of STX > host system ( users can decide if it is a controller or a node ) > > This test case was presented yesterday, here are the slides: > > > https://drive.google.com/open?id=1Nr12zDRXf34kpjiA0LsFLU8GIpMY8Y-H2zmiC96CD4A > > The source code of the test has been updated in tools-contrib repository: > > https://github.com/starlingx-staging/tools-contrib > > It's great to see that repo already in use. There are some things in there that I can already use from the autodeploy work. Very helpful. > The list of TODO things for this test case ( and framework ) is here: > > Please feel free to add as many requests as we have as a community. > https://etherpad.openstack.org/p/stx_performance_feedback > I added a couple comments to that etherpad. Thanks, Curtis > This test case also add the tools to save the data recorded on an InfluxDB > database > > The way to run this script as described on the README is: > > python metrics.py > > Details of elapsed time and sampling time are described in README as well > > By default the script does not send any data, only print on the console > the values: > > Example Output: > https://gist.github.com/VictorRodriguez/b9260ad223b176c323363ac06d0afdde > > If we want to send the data to an influxDB: > > python metrics.py --send_data > > Will send data to the InlfuxDB server configured in the file: > > $ server.conf > INFLUX_SERVER= under test> > INFLUX_PORT=PORT > INFLUX_PASS=PASS > INFLUX_USER=USER > DB_NAME=starlingx > > > Since yesterday meeting I left the script running on my ubuntu machine and > here are the metrics of DB size: > > Size in data points per table: > > name: boottime > time count count_1 count_2 > ---- ----- ------- ------- > 0 1571 > > name: cpu_footprint > time count count_1 count_2 > ---- ----- ------- ------- > 0 1562 > > name: hd_footprint > time count count_1 count_2 > ---- ----- ------- ------- > 0 3132 > > name: memory_footprint > time count count_1 count_2 > ---- ----- ------- ------- > 0 3132 > > > Size in HD: > du -sh /var/lib/influxdb/data/starlingx/ > 28K /var/lib/influxdb/data/starlingx/ > > As we can see in a total of 7827 data points we are only using 28K of the > HD ( not much ) > > Curtis and will be working on the set up of this infrastructure on packet > infra, as soon as we have information we will let you know. > > This perf framework is designed with the idea that anyone can use it to > track their version of STX that you prefer. If you have any question how to > set up on your infra please let me know > > Regards > > Victor Rodriguez > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Frank.Miller at windriver.com Thu Apr 4 20:52:53 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Thu, 4 Apr 2019 20:52:53 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 3/27 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35EDC25F@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35EDC25F@SHSMSX104.ccr.corp.intel.com> Message-ID: Cindy: Apologies for the delay but you can close my AR for "create a separate task under a container SB for adding support of multiple CEPH tiers": I created task 30351 under the HELM SB 2003909 Frank -----Original Message----- From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, March 27, 2019 9:54 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 3/27 Agenda & Notes for 3/27 meeting: - Ceph upgrade update (Ovidiu/Daniel, Tingjie/Yong) 1. Feature developed (2+2+2 work as expected) - 3/8 (Done as of 3/13) common code base: https://github.com/oponcea?tab=repositories 1.1 mon_max_pg_per_osd needs to be increased - done w/ PR submitted (updated max pg per osd to 2048) 1.2 prevent rados-gw being started by system- issue solved (PR?) 2. Rebase to master w/ container & CentOS7.6 - 3/15 (Done as of 3/20) both Intel & WR built & deploy successfully on AIO-SX/DX and multi-nodes (2+2+2). ISO provided to Ada/Ricardo - AR: Yong to provide the test ISO to Ricardo for engineering testing dry-run. Node re-install the storage/computer node is one of the test cases we should done, AR: Yong follow-up w/ Daniel to make sure re-install controller/compute/storage nodes. mimic release notes review - AR: Daniel/Tingjie work together to review the release notes from mimic and understand the changes. 3. Code ready for review - 3/20 (actual?) Squashing the commits - Daniel WIP Daniel is working on debugging the mimic support in Openstack helm to provide support multiple-tiers of functional support. AR: Frank to treate a seperate task under container SB to track the debug tasks. Patch submitted against master - ETA (Daniel - 4/1 to upload; Changcheng - abandon all the patches on old-feature-branch & master) Rebase starlingx-staging/stx-ceph/tree/stx/v13.2.2 (Changcheng/Dean) - ETA (Changcheng - 4/1) 2 options: option#1 - Dean to remove patches on stx-ceph on starlingx-staging, then cherry-pick your patches to the staging; Clean git history. option#2 - Changcheng refactor all patches to latest code on starling-staging. agreed to do Option#1: Dean to delete the stx/v13.2.2 branch and create a new fresh one from upstream, then Changcheng will do PR on top of new branch. Goal is to generate dev build from master with the pending reviews and rebased stx-Ceph (4/2) 4. System testing - 3/20 ~ 4/5 (actual?) Test case in discussion: https://docs.google.com/spreadsheets/d/1O2zWn-R83Wj1SqmeUtxCP59_DsSM0UNZW0gRALnDn6w/edit#gid=0 The official test cycle with master build next Tuesday (4/2) 5. Patch merge (Ovidiu/Daniel) - 4/5 (actual?) - trending at least 2 wks delay. 4/19 - DevStack update (Dean/Yi) 1. stx-metal: one final review pending (https://review.openstack.org/#/c/641597/) 2. stx-fault: one review pending (https://review.openstack.org/#/c/639501/) Martin will submit another patch against fault. 3. stx-ha: WIP (Shuicheng to provide update) Shuicheng: enabled the ha services running, still checking the status correctness. More debug/testing required before the patch can be upload. 
ETA: patch upload by next week (4/5) 4. Story#2005285 added to finalize Devstack job structure: Rename/relocate the base StarlingX DevStack job to stx-integ (the least-worst existing home) and make all jobs voting and run on more than just devstack/* files. restructure devstack job definition, moved the base task from stx-fault to stx-integ. change to ha and fault still open. Once the review merged, Devstack will be configured as Zuul voting. Question from Frank: how difficult it is for devs to understand the debug message from Zuul job failure? Devs will debug the issue but if any questions sending to the mailing list for help. - Opens (all) None -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Waheed, Numan; Hu, Yong; starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Shang, Dehao; Wold, Saul; Lin, Shuicheng; Zhu, Vivian; Somerville, Jim; Sun, Austin; 'Khalil, Ghada'; Jones, Bruce E; Troyer, Dean; 'Rowsell, Brent' Cc: 'Poncea, Ovidiu'; Gomez, Juan P; Lara, Cesar; Fang, Liang A; Cobbley, David A; 'Chen, Jacky'; Perez Rodriguez, Humberto I; Martinez Monroy, Elio; 'Seiler, Glenn'; Arce Moreno, Abraham; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Shuquan Huang; 'Young, Ken'; Perez Carranza, Jose; Hu, Wei W; Martinez Landa, Hayde; Armstrong, Robert H Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, March 27, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Apr 4 21:00:59 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 4 Apr 2019 17:00:59 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_retag_docker_images - Build # 11 - Still Failing! 
In-Reply-To: <1274854111.57.1553869347759.JavaMail.javamailuser@localhost> References: <1274854111.57.1553869347759.JavaMail.javamailuser@localhost> Message-ID: <2053489428.72.1554411660222.JavaMail.javamailuser@localhost> Project: STX_retag_docker_images Build #: 11 Status: Still Failing Timestamp: 20190404T210056Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190401T233000Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 OLD_LATEST_PREFIX: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190401T233000Z OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root BASE_VERSION: master-stable-20190401T233000Z PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190401T233000Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca LATEST_PREFIX: master PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190401T233000Z/logs RETAG_IMAGE_LIST: "stx-fm-rest-api stx-libvirt stx-mariadb" FLOCK_VERSION: master-centos-stable-20190401T233000Z PREFIX: master TIMESTAMP: 20190401T233000Z BUILD_STREAM: dev REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190401T233000Z/outputs REGISTRY: docker.io OLD_BUILD_STREAM: stable From Ghada.Khalil at windriver.com Thu Apr 4 22:17:34 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 4 Apr 2019 22:17:34 +0000 Subject: [Starlingx-discuss] StoryBoard / Launchpad Re-tagging Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D4EAE@ALA-MBD.corp.ad.wrs.com> Hello all, As per the community call on Wednesday, we agreed to change the release tags in StoryBoard and Launchpad as follows: stx.2018.10 >> stx.1.0 stx.2019.05 >> stx.2.0 The tags will be bulk updated tomorrow. I will send another email once the updates are done. You will need to update your personal queries accordingly. I will update references on the wikis. Thanks, Ghada -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Apr 5 00:09:19 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 5 Apr 2019 00:09:19 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 3/27 In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F35EDC25F@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35EFD4E5@SHSMSX104.ccr.corp.intel.com> Thanks Frank – seems like Daniel is still the guy working on that, and I will let you to track that in containerization sub-project. Thx. - cindy From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Friday, April 5, 2019 4:53 AM To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 3/27 Cindy: Apologies for the delay but you can close my AR for "create a separate task under a container SB for adding support of multiple CEPH tiers": I created task 30351 under the HELM SB 2003909 Frank -----Original Message----- From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, March 27, 2019 9:54 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 3/27 Agenda & Notes for 3/27 meeting: - Ceph upgrade update (Ovidiu/Daniel, Tingjie/Yong) 1. 
Feature developed (2+2+2 work as expected) - 3/8 (Done as of 3/13) common code base: https://github.com/oponcea?tab=repositories 1.1 mon_max_pg_per_osd needs to be increased - done w/ PR submitted (updated max pg per osd to 2048) 1.2 prevent rados-gw being started by system- issue solved (PR?) 2. Rebase to master w/ container & CentOS7.6 - 3/15 (Done as of 3/20) both Intel & WR built & deploy successfully on AIO-SX/DX and multi-nodes (2+2+2). ISO provided to Ada/Ricardo - AR: Yong to provide the test ISO to Ricardo for engineering testing dry-run. Node re-install the storage/computer node is one of the test cases we should done, AR: Yong follow-up w/ Daniel to make sure re-install controller/compute/storage nodes. mimic release notes review - AR: Daniel/Tingjie work together to review the release notes from mimic and understand the changes. 3. Code ready for review - 3/20 (actual?) Squashing the commits - Daniel WIP Daniel is working on debugging the mimic support in Openstack helm to provide support multiple-tiers of functional support. AR: Frank to treate a seperate task under container SB to track the debug tasks. Patch submitted against master - ETA (Daniel - 4/1 to upload; Changcheng - abandon all the patches on old-feature-branch & master) Rebase starlingx-staging/stx-ceph/tree/stx/v13.2.2 (Changcheng/Dean) - ETA (Changcheng - 4/1) 2 options: option#1 - Dean to remove patches on stx-ceph on starlingx-staging, then cherry-pick your patches to the staging; Clean git history. option#2 - Changcheng refactor all patches to latest code on starling-staging. agreed to do Option#1: Dean to delete the stx/v13.2.2 branch and create a new fresh one from upstream, then Changcheng will do PR on top of new branch. Goal is to generate dev build from master with the pending reviews and rebased stx-Ceph (4/2) 4. System testing - 3/20 ~ 4/5 (actual?) Test case in discussion: https://docs.google.com/spreadsheets/d/1O2zWn-R83Wj1SqmeUtxCP59_DsSM0UNZW0gRALnDn6w/edit#gid=0 The official test cycle with master build next Tuesday (4/2) 5. Patch merge (Ovidiu/Daniel) - 4/5 (actual?) - trending at least 2 wks delay. 4/19 - DevStack update (Dean/Yi) 1. stx-metal: one final review pending (https://review.openstack.org/#/c/641597/) 2. stx-fault: one review pending (https://review.openstack.org/#/c/639501/) Martin will submit another patch against fault. 3. stx-ha: WIP (Shuicheng to provide update) Shuicheng: enabled the ha services running, still checking the status correctness. More debug/testing required before the patch can be upload. ETA: patch upload by next week (4/5) 4. Story#2005285 added to finalize Devstack job structure: Rename/relocate the base StarlingX DevStack job to stx-integ (the least-worst existing home) and make all jobs voting and run on more than just devstack/* files. restructure devstack job definition, moved the base task from stx-fault to stx-integ. change to ha and fault still open. Once the review merged, Devstack will be configured as Zuul voting. Question from Frank: how difficult it is for devs to understand the debug message from Zuul job failure? Devs will debug the issue but if any questions sending to the mailing list for help. 
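  For debugging: a Zuul DevStack job failure can usually be reproduced locally by enabling the same StarlingX plugins in a DevStack local.conf and re-running stack.sh, then checking the same service logs the job collects. The plugin list and repo URLs below are only an illustration and may not match the actual job definitions:
    [[local|localrc]]
    enable_plugin stx-fault https://git.openstack.org/openstack/stx-fault
    enable_plugin stx-metal https://git.openstack.org/openstack/stx-metal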
- Opens (all) None -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Waheed, Numan; Hu, Yong; starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Shang, Dehao; Wold, Saul; Lin, Shuicheng; Zhu, Vivian; Somerville, Jim; Sun, Austin; 'Khalil, Ghada'; Jones, Bruce E; Troyer, Dean; 'Rowsell, Brent' Cc: 'Poncea, Ovidiu'; Gomez, Juan P; Lara, Cesar; Fang, Liang A; Cobbley, David A; 'Chen, Jacky'; Perez Rodriguez, Humberto I; Martinez Monroy, Elio; 'Seiler, Glenn'; Arce Moreno, Abraham; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Shuquan Huang; 'Young, Ken'; Perez Carranza, Jose; Hu, Wei W; Martinez Landa, Hayde; Armstrong, Robert H Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, March 27, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Apr 5 00:15:57 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 5 Apr 2019 00:15:57 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 04/04 Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D4F7B@ALA-MBD.corp.ad.wrs.com> Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking Next Meeting: 04/11 << agreed to add another meeting next week to work thru the actions related to test strategy/definition/planning. Team Meeting Agenda/Notes - Apr 4/2019 - Networking Testing Status - Welcome! - Elio is the networking test prime for stx - with help from Juan Pablo / Richo for execution - Chris Winnicki is the networking test prime in WR - Test Environment / Strategy - OVS testing will be done predominantly in virtual env. However, still need a subset of TCs run on baremetal for NIC coverage - Regression must accommodate this, identifying a subset of test-cases to be run on h/w with OVS - OVS-DPDK testing will be done on baremetal; not supported in virtual env. - NIC Types: 520, CX4 (10G?), CX5 (40G?), X710 (10G), 10G baseboard NICs - Due to lack of hardware, the team is re-configuring the switches to enable different configurations >> may be susceptible to config issues - Servers: wolfpass (?) - Need to add cluster networks to the lab configurations -- Action: Elio & team - Recommended Configurations: - No explicit cluster network << already covered in current config shared by Elio - Separate interface from mgmt network - A VLAN on the mgmt network - Shared on the mgmt network (multi-netting) - Recommend this is done on a number of systems so that you can get implicit coverage just by using the system. 
- Traffic Generation Capability - No current current traffic generation; just using ping - Good tools to use are netperf, pktgen (for DPDK-supported guests) - Action: Elio & team to investigate and propose traffic capability - Reporting Issues - There are a set of data that should be collected for connectivity issues - Action: Numan to share with the Intel team - No changes to neturon conf files directly, use helm overrides - Networking coverage during sanity - Basic networking test-cases will be added to sanity -- Time-Frame: 2-3wks (end of April) - Action: Elio to add list of proposed sanity TCs here for team to review - Networking Regression Testing - Test-case definition is still in progress. - The initial set chosen is around 150 test-cases. Action: Elio to make the test-cases visible to the team - Areas of Focus for Regression - Action: Matt & team to add areas of focus priority here - Add short justification for chosen priority - Networking Feature Testing - Currently testing OVS-DPDK Upversion (subset of regression TCs?) - Test-cases were sent by email. - Test-cases were not run on the standard load, so we don't have baseline results. - Given the time crunch, agreed for Elio to continue with testing with the new ovs-dpdk version. When needed, he can do specific checks against the standard load. - Test-case References - https://git.openstack.org/cgit/openstack/stx-test/tree/doc/source/manual_tests/networking - Structure and organization in the repo is hard to follow; suggest to re-organize this based on input from the team of the regression focus areas - General Action: Networking team will start a wiki to capture common commands / useful info - Containerized OVS Integration - Code merged as of March 23 - Remaining item: Follow-up on version of ovs used in the container image. - Right now, the default docker image (with ovs 2.8.0) is used. 
- openvswitch package Upversion - Code Merge Plan: Apr 19 (pending test progress) - Issue with vhostuser is now understood: https://bugs.launchpad.net/starlingx/+bug/1820378 - waiting for Chenjie to re-test and confirm - OVS-DPDK Firewall - Code Merge Plan: Apr 5 - https://review.openstack.org/#/c/645054/ - OVS process monitoring and alarming - Code Merge Plan: Apr 12 - https://review.openstack.org/#/c/648330/ - https://review.openstack.org/#/c/648367/ From maria.g.perez.ibarra at intel.com Fri Apr 5 00:51:01 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 5 Apr 2019 00:51:01 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190404 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-04 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] ------------------------------------------------------------------ An issue was found across all BareMetal configurations, it looks like nova services takes several minutes to stabilize. https://bugs.launchpad.net/starlingx/+bug/1823275 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cindy.xie at intel.com Fri Apr 5 02:21:19 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 5 Apr 2019 02:21:19 +0000 Subject: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up In-Reply-To: <6594B51DBE477C48AAE23675314E6C46645A3E23@fmsmsx107.amr.corp.intel.com> References: <6594B51DBE477C48AAE23675314E6C466459EDD6@fmsmsx107.amr.corp.intel.com>, <2FD5DDB5A04D264C80D42CA35194914F35ED90C1@SHSMSX104.ccr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C466459F616@fmsmsx107.amr.corp.intel.com>, <2FD5DDB5A04D264C80D42CA35194914F35EF86FB@SHSMSX104.ccr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C46645A3E23@fmsmsx107.amr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35EFD71A@SHSMSX104.ccr.corp.intel.com> Thanks Mario for the update. Please continue the integration & testing for the FM chart w/ Armada system for those pending patches. You can share the test cases to the community as well so we can have a review. For the tasks still "todo", when you think we can upload initial patches? Or you are not working on those for now? Just need to know the ETA for those. Mingyuan is interested but he is still working on Ironic so we may still need to rely on you for FM at this moment. Thanks. - cindy -----Original Message----- From: Arevalo, Mario Alfredo C Sent: Thursday, April 4, 2019 11:00 AM To: Xie, Cindy ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi Cindy, Actually, Luis and me have had some issues related to the integration of the FM chart with armada system in some local tests, I have been working on some patches updates to solve this. Right now I am creating an ISO image from scratch with these patches in order to test them in a clean environment. At this moment I would like to focus on this issue during the rest of the week and I will continue with the other patches related to horizon and another one about the implementation of the PUT method for the FM restful API.. At this moment my progress in the pending patches is research, however if there are someone interested about these pending patches, let me know. Thank you for your attention. Best regards. Mario. ________________________________________ From: Xie, Cindy Sent: Wednesday, April 03, 2019 4:12 PM To: Arevalo, Mario Alfredo C; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Cc: Tao Liu; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi, Mario, I see that you made very good progress in uploading several patches against SB#2004008 - anything needs help for the remaining 3 tasks so far? Thx. - cindy -----Original Message----- From: Arevalo, Mario Alfredo C Sent: Wednesday, March 27, 2019 3:38 AM To: Xie, Cindy ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi Cindy, The first version of the patches will take around 2 weeks, after that, a validation step will start. In this step I am going to update the patches according to the feedback received from the community and Luis Botello will help to validate the functionality of the patches. 
As final step, I would like to execute the sanity when all patches are reviewed by the community an they are ready to be merged. This final step could vary around 2-3 weeks, it will depend on the response time from the community and the complexity of the required updates, in addition to the validation tasks. Best regards. Mario. ________________________________________ From: Xie, Cindy Sent: Monday, March 25, 2019 5:44 PM To: Arevalo, Mario Alfredo C; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Cc: Tao Liu; Frank.Miller at windriver.com Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Mario, Nice to know that you're getting all information and having better understanding for the tasks. We probably needs to get a little bit more detail granularity of your plan, for each task in the storyboard: - when the patches will be uploaded for review; - what tests you're planning to do? Any support required from Ada's team? and when... - when you expect the patch review comments can be addressed and patch merged to master. Thanks. - cindy -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Tuesday, March 26, 2019 8:38 AM To: Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com Subject: Re: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi team, Thank you for your feedback from our last meeting, and this is my update. I am checking all points described in this thread. Actually I have got progress in the topic related to snmp and the relation with oamcontroller, sysinv and cgts-client. I plan to send a PR with information about findings/architecture to the stx-fault/doc in a future. I think, it is not necessary another meeting as it was mentioned, I think I have enough information to continue and I am going to update the current reviews and send news according to the points discussed until today, and contact Tao for specific questions. Thanks Tao, Abraham and Frank. Best regards. Mario. ________________________________________ From: Arce Moreno, Abraham Sent: Friday, March 22, 2019 10:37 AM To: starlingx-discuss at lists.starlingx.io Cc: Arevalo, Mario Alfredo C; Tao Liu Subject: Fault Management Containerization (SB 2004008) Follow Up Thanks Frank for setting this up. Thanks everyone for your attendance to this meeting, here you have high level notes and ToDos based in the topics covered. In Summary - The presentation Stx-Fault/Containers is located at [0]. - Tao will kindly update the Fault Management architecture diagram, slide 8. - Mario will send an email no later than Monday afternoon with the latest findings / questions based in his 5 ToDos. - We will meet again on Tuesday to finalize on tasks and implementation details. If we are forgetting about any key point in this email, please do not hesitate to reply. StarlingX Architecture - 2 instances for each of the following projects: - Keystone - Horizon - Barbican - Fault Management will have 2 instances as well. Fault Management Architecture - [ToDo] [Tao] to modify the Fault Management architecture (Slide 8) Thanks Tao! - fm-api runs in compute node, snmp provide interfaces - [ToDo] [Mario] to check these statements Fault Management REST API - [ToDo] [Mario] to write the next level of details for REST API mapping / implementation, consider to include PUT to Event Log. 
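- For reference while mapping the API, the existing fm-rest-api endpoints can be exercised directly with curl; the port (18002 on a default install) and the /v1/alarms and /v1/event_log paths are assumptions to verify against the running service:
    TOKEN=$(openstack token issue -f value -c id)
    curl -s -H "X-Auth-Token: $TOKEN" http://<fm-api-address>:18002/v1/alarms
    curl -s -H "X-Auth-Token: $TOKEN" http://<fm-api-address>:18002/v1/event_log
- A PUT to /v1/event_log would follow the same pattern once that method is implemented.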
Fault Management Architecture - python-fmclient is a wrapper to fm_cli / fm_api - [ToDo] [Mario] to understand more about fm_cli as a wrapper and how does it interact and affects fault management containerized strategy. FM Proposal - Remove mysql, fm-api, fm-common - [ToDo] [Mario] to understand about the removal of fm-api and fm-common from the containerized instance. - Dependency to cgts-client - [ToDo] [Mario] to understand what is cgtc-client and how does it interacts with fault management and the new containerized instance. OpenStack Applications The following 2 projects will make use of the Fault Management containerized: - starlingx-dashboard - stx-nfv [0] https://docs.google.com/presentation/d/1_vG83aHTToXlIdJxaJpVL-MHWfRGnxLuyEdFDt-nfwo/edit?usp=sharing _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Brent.Rowsell at windriver.com Fri Apr 5 02:51:53 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Fri, 5 Apr 2019 02:51:53 +0000 Subject: [Starlingx-discuss] Release 3 Feature Candidates Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB449DFA@ALA-MBD.corp.ad.wrs.com> Folks, We have started an etherpad, https://etherpad.openstack.org/p/stx-ptg-denver, with the current list of feature candidates for the next release. This list was constructed based on discussions during the community meeting in January plus additional input from the TSC. If you have any input/comments on the existing items or have new feature candidates please update the etherpad. The list will be reviewed and prioritized at the upcoming PTG with the outcome being the draft content list for the next release. Thanks, Brent -------------- next part -------------- An HTML attachment was scrubbed... URL: From Matt.Peters at windriver.com Fri Apr 5 14:36:18 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Fri, 5 Apr 2019 14:36:18 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 04/04 Message-ID: <31402D29-CC10-425F-AAC6-2B48CD553D82@windriver.com> Hello Folks, I have added the initial set of recommend test areas and configurations for the network regression testing (re: Focus Areas for Regression). Please feel free to add additional comments and focus areas so that we can work to prioritize the list and functionality to be covered. There are many more functional test cases to cover for a full regression, but we want to capture the highest priority items to ensure we have adequate coverage for the first pass. https://etherpad.openstack.org/p/stx-networking Let me know if you have any questions about any of the items captured. Thanks, Matt On 2019-04-04, 8:16 PM, "Khalil, Ghada" wrote: Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking Next Meeting: 04/11 << agreed to add another meeting next week to work thru the actions related to test strategy/definition/planning. Team Meeting Agenda/Notes - Apr 4/2019 - Networking Testing Status - Welcome! - Elio is the networking test prime for stx - with help from Juan Pablo / Richo for execution - Chris Winnicki is the networking test prime in WR - Test Environment / Strategy - OVS testing will be done predominantly in virtual env. 
However, still need a subset of TCs run on baremetal for NIC coverage - Regression must accommodate this, identifying a subset of test-cases to be run on h/w with OVS - OVS-DPDK testing will be done on baremetal; not supported in virtual env. - NIC Types: 520, CX4 (10G?), CX5 (40G?), X710 (10G), 10G baseboard NICs - Due to lack of hardware, the team is re-configuring the switches to enable different configurations >> may be susceptible to config issues - Servers: wolfpass (?) - Need to add cluster networks to the lab configurations -- Action: Elio & team - Recommended Configurations: - No explicit cluster network << already covered in current config shared by Elio - Separate interface from mgmt network - A VLAN on the mgmt network - Shared on the mgmt network (multi-netting) - Recommend this is done on a number of systems so that you can get implicit coverage just by using the system. - Traffic Generation Capability - No current current traffic generation; just using ping - Good tools to use are netperf, pktgen (for DPDK-supported guests) - Action: Elio & team to investigate and propose traffic capability - Reporting Issues - There are a set of data that should be collected for connectivity issues - Action: Numan to share with the Intel team - No changes to neturon conf files directly, use helm overrides - Networking coverage during sanity - Basic networking test-cases will be added to sanity -- Time-Frame: 2-3wks (end of April) - Action: Elio to add list of proposed sanity TCs here for team to review - Networking Regression Testing - Test-case definition is still in progress. - The initial set chosen is around 150 test-cases. Action: Elio to make the test-cases visible to the team - Areas of Focus for Regression - Action: Matt & team to add areas of focus priority here - Add short justification for chosen priority - Networking Feature Testing - Currently testing OVS-DPDK Upversion (subset of regression TCs?) - Test-cases were sent by email. - Test-cases were not run on the standard load, so we don't have baseline results. - Given the time crunch, agreed for Elio to continue with testing with the new ovs-dpdk version. When needed, he can do specific checks against the standard load. - Test-case References - https://git.openstack.org/cgit/openstack/stx-test/tree/doc/source/manual_tests/networking - Structure and organization in the repo is hard to follow; suggest to re-organize this based on input from the team of the regression focus areas - General Action: Networking team will start a wiki to capture common commands / useful info - Containerized OVS Integration - Code merged as of March 23 - Remaining item: Follow-up on version of ovs used in the container image. - Right now, the default docker image (with ovs 2.8.0) is used. 
- openvswitch package Upversion - Code Merge Plan: Apr 19 (pending test progress) - Issue with vhostuser is now understood: https://bugs.launchpad.net/starlingx/+bug/1820378 - waiting for Chenjie to re-test and confirm - OVS-DPDK Firewall - Code Merge Plan: Apr 5 - https://review.openstack.org/#/c/645054/ - OVS process monitoring and alarming - Code Merge Plan: Apr 12 - https://review.openstack.org/#/c/648330/ - https://review.openstack.org/#/c/648367/ _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From elio.martinez.monroy at intel.com Fri Apr 5 15:21:41 2019 From: elio.martinez.monroy at intel.com (Martinez Monroy, Elio) Date: Fri, 5 Apr 2019 15:21:41 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 04/04 In-Reply-To: <31402D29-CC10-425F-AAC6-2B48CD553D82@windriver.com> References: <31402D29-CC10-425F-AAC6-2B48CD553D82@windriver.com> Message-ID: <1466AF2176E6F040BD63860D0A241BBD46CB55AA@FMSMSX109.amr.corp.intel.com> Thanks Matt, I will take a look and compare what we have already, my idea is to upload all the progress that we had done until today and expand our scope -----Original Message----- From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, April 5, 2019 8:36 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 04/04 Hello Folks, I have added the initial set of recommend test areas and configurations for the network regression testing (re: Focus Areas for Regression). Please feel free to add additional comments and focus areas so that we can work to prioritize the list and functionality to be covered. There are many more functional test cases to cover for a full regression, but we want to capture the highest priority items to ensure we have adequate coverage for the first pass. https://etherpad.openstack.org/p/stx-networking Let me know if you have any questions about any of the items captured. Thanks, Matt On 2019-04-04, 8:16 PM, "Khalil, Ghada" wrote: Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking Next Meeting: 04/11 << agreed to add another meeting next week to work thru the actions related to test strategy/definition/planning. Team Meeting Agenda/Notes - Apr 4/2019 - Networking Testing Status - Welcome! - Elio is the networking test prime for stx - with help from Juan Pablo / Richo for execution - Chris Winnicki is the networking test prime in WR - Test Environment / Strategy - OVS testing will be done predominantly in virtual env. However, still need a subset of TCs run on baremetal for NIC coverage - Regression must accommodate this, identifying a subset of test-cases to be run on h/w with OVS - OVS-DPDK testing will be done on baremetal; not supported in virtual env. - NIC Types: 520, CX4 (10G?), CX5 (40G?), X710 (10G), 10G baseboard NICs - Due to lack of hardware, the team is re-configuring the switches to enable different configurations >> may be susceptible to config issues - Servers: wolfpass (?) 
- Need to add cluster networks to the lab configurations -- Action: Elio & team - Recommended Configurations: - No explicit cluster network << already covered in current config shared by Elio - Separate interface from mgmt network - A VLAN on the mgmt network - Shared on the mgmt network (multi-netting) - Recommend this is done on a number of systems so that you can get implicit coverage just by using the system. - Traffic Generation Capability - No current current traffic generation; just using ping - Good tools to use are netperf, pktgen (for DPDK-supported guests) - Action: Elio & team to investigate and propose traffic capability - Reporting Issues - There are a set of data that should be collected for connectivity issues - Action: Numan to share with the Intel team - No changes to neturon conf files directly, use helm overrides - Networking coverage during sanity - Basic networking test-cases will be added to sanity -- Time-Frame: 2-3wks (end of April) - Action: Elio to add list of proposed sanity TCs here for team to review - Networking Regression Testing - Test-case definition is still in progress. - The initial set chosen is around 150 test-cases. Action: Elio to make the test-cases visible to the team - Areas of Focus for Regression - Action: Matt & team to add areas of focus priority here - Add short justification for chosen priority - Networking Feature Testing - Currently testing OVS-DPDK Upversion (subset of regression TCs?) - Test-cases were sent by email. - Test-cases were not run on the standard load, so we don't have baseline results. - Given the time crunch, agreed for Elio to continue with testing with the new ovs-dpdk version. When needed, he can do specific checks against the standard load. - Test-case References - https://git.openstack.org/cgit/openstack/stx-test/tree/doc/source/manual_tests/networking - Structure and organization in the repo is hard to follow; suggest to re-organize this based on input from the team of the regression focus areas - General Action: Networking team will start a wiki to capture common commands / useful info - Containerized OVS Integration - Code merged as of March 23 - Remaining item: Follow-up on version of ovs used in the container image. - Right now, the default docker image (with ovs 2.8.0) is used. - openvswitch package Upversion - Code Merge Plan: Apr 19 (pending test progress) - Issue with vhostuser is now understood: https://bugs.launchpad.net/starlingx/+bug/1820378 - waiting for Chenjie to re-test and confirm - OVS-DPDK Firewall - Code Merge Plan: Apr 5 - https://review.openstack.org/#/c/645054/ - OVS process monitoring and alarming - Code Merge Plan: Apr 12 - https://review.openstack.org/#/c/648330/ - https://review.openstack.org/#/c/648367/ _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Fri Apr 5 15:28:43 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Fri, 5 Apr 2019 15:28:43 +0000 Subject: [Starlingx-discuss] Sanity updates Message-ID: An update on the 2nd issue where VMs fail to launch: Gerry has confirmed the issue is due to using a nova docker image from OpenStack master. 
Don Penney is updating the docker image builds to use the OpenStack Stein branches. After re-testing confirms these docker images are sane, he will work to switch the CENGN builds over to the Stein branches. Frank From: Miller, Frank Sent: Wednesday, April 03, 2019 10:48 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Sanity updates Folks: I took an action on the containers community call to send out an update on the current sanity issues. 1. AIO-SX: This configuration should now be ready for use. * Bart Wensley solved LP 1820928 which turned out to be a bug in kubelet where it was hitting a limit of 250 http2 streams in a single connection. 2. Other multi-server configs: An intermittent issue still exists when launching VMs resulting in the VMs failing to be scheduled. Tracked under LPs 1821841 & 1822116 * Gerry Kopec continues to investigate intermittent issues with the nova-placement pod. When issue occurs VMs cannot be launched. * Issue is with nova-compute unable to get requests processed to one of the nova-placement pods running on each controller. * Current theory is this is related to our docker images using OpenStack master and a recent nova commit in the nova placement area is impacting the placement pod. Gerry expects to prove or disprove the theory later today. Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Fri Apr 5 15:46:29 2019 From: scott.little at windriver.com (Scott Little) Date: Fri, 5 Apr 2019 11:46:29 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 70 - Failure! In-Reply-To: <824113714.67.1554409824649.JavaMail.javamailuser@localhost> References: <824113714.67.1554409824649.JavaMail.javamailuser@localhost> Message-ID: <38b05619-e988-3d87-699a-fefed79c3edd@windriver.com> Operator error. Passed a bad parameter.  The rebuild passed. Scott On 2019-04-04 4:30 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_docker_images > Build #: 70 > Status: Failure > Timestamp: 20190404T200736Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190401T233000Z/logs > -------------------------------------------------------------------------------- > Parameters > > BRANCH: master > MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190401T233000Z > OS: centos > MUNGED_BRANCH: master > MY_REPO: /localdisk/designer/jenkins/master/cgcs-root > PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190401T233000Z/logs > MASTER_BUILD_NUMBER: 45 > PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190401T233000Z/logs > MASTER_JOB_NAME: STX_build_master_master > MY_REPO_ROOT: /localdisk/designer/jenkins/master > PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos > PUBLISH_TIMESTAMP: 20190401T233000Z > DOCKER_BUILD_ID: jenkins-master-20190401T233000Z-builder > TIMESTAMP: 20190401T233000Z > OS_VERSION: 7.6.1810 > BUILD_STREAM: stable > PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190401T233000Z/inputs > PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190401T233000Z/outputs > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From dtroyer at gmail.com Fri Apr 5 16:19:07 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 5 Apr 2019 11:19:07 -0500 Subject: [Starlingx-discuss] Sanity updates In-Reply-To: References: Message-ID: On Fri, Apr 5, 2019 at 10:29 AM Miller, Frank wrote: > An update on the 2nd issue where VMs fail to launch: Gerry has confirmed the issue is due to using a nova docker image from OpenStack master. Don Penney is updating the docker image builds to use the OpenStack Stein branches. After re-testing confirms these docker images are sane, he will work to switch the CENGN builds over to the Stein branches. Timing is everything, they say... I just finished resetting the stx-nova repo [0] to track upstream nova: * the old master branch is now stx/old-master for reference * master branch is a snapshot of upstream master as of about 30 min ago * stable/stein branch is a snapshot of upstream stable/stein as of about 30 min ago * stx/stein is our working copy of stable/stein and where anything we backport should land. Big Note: I am thinking about keeping a policy of periodically rebasing stx/stein on stable/stein to keep a clear history as we move forward, making it easier to see what we have added. That possibly means doing it next week when the final stein tag is added. Thoughts? Force pushes can be inconvenient for developers but I am thinking the price may be worth the return on a wider scale. dt Also, I am going to re-post this as a top-level message to make sure it get seen...let's do follow-up conversations there [0] https://github.com/starlingx-staging/stx-nova -- Dean Troyer dtroyer at gmail.com From dtroyer at gmail.com Fri Apr 5 16:19:43 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 5 Apr 2019 11:19:43 -0500 Subject: [Starlingx-discuss] stx-nova repo changes for upstream tracking Message-ID: I just finished resetting the stx-nova repo [0] to track upstream nova: * the old master branch is now stx/old-master for reference * master branch is a snapshot of upstream master as of about 30 min ago * stable/stein branch is a snapshot of upstream stable/stein as of about 30 min ago * stx/stein is our working copy of stable/stein and where anything we backport should land. Big Note: I am thinking about keeping a policy of periodically rebasing stx/stein on stable/stein to keep a clear history as we move forward, making it easier to see what we have added. That possibly means doing it next week when the final stein tag is added. Thoughts? Force pushes can be inconvenient for developers but I am thinking the price may be worth the return on a wider scale. dt [0] https://github.com/starlingx-staging/stx-nova -- Dean Troyer dtroyer at gmail.com From ada.cabrales at intel.com Fri Apr 5 19:23:34 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Fri, 5 Apr 2019 19:23:34 +0000 Subject: [Starlingx-discuss] StarlingX Test meeting - 04/09 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDB4906@FMSMSX114.amr.corp.intel.com> Testing meeting - 04/09/2019 Intel team has a conflict, we will have our testing meeting one hour later: 10am Pacific time. Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes: https://etherpad.openstack.org/p/stx-test -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2064 bytes Desc: not available URL: From ricardo.o.perez at intel.com Wed Apr 3 04:48:47 2019 From: ricardo.o.perez at intel.com (Perez, Ricardo O) Date: Wed, 3 Apr 2019 04:48:47 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Ghada, Attached you can find the output of my compute node. Regards _Richo From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 9:50 AM To: Xu, Chenjie ; Martinez Monroy, Elio ; Peters, Matt ; Lin, Shuicheng ; Cabrales, Ada ; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io' ; Zhao, Forrest ; Rowsell, Brent ; Gauld, James Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Thanks Chenjie. Can you please run the following on your compute nodes and attach the output? sudo lscpu sudo virsh capabilities Richardo / Juan P, Please provide the cpu models and the above output from your two hardware systems as well. Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Tuesday, April 02, 2019 2:55 AM To: Khalil, Ghada; Martinez Monroy, Elio; Peters, Matt; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Ghada, Compute-0 and compute-1 don’t have skylake processors. The processors both are: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz Compute-0 and compute-1’s server model both are: Manufacturer: Intel Corporation Product Name: S2600JF Sub-NUMA Clustering doesn’t exist in my BIOS. Only NUMA Optimized exists. An image for “Memory RAS and Performance Configuration” has been attached. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 6:52 AM To: Xu, Chenjie >; Martinez Monroy, Elio >; Peters, Matt >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Chenjie, Can you let us know the model of your server? Does it have skylake processors? Can you also double-check that you have Sub-NUMA Clustering disabled in the BIOS? On a wolfpass, the BIOS setting is at: Enter Setup > Advanced > Memory Configuration > Memory RAS and Performance Configuration Sub-NUMA Cluster Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Wednesday, March 27, 2019 9:59 PM To: Martinez Monroy, Elio; Peters, Matt; Khalil, Ghada; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, I run the following commands on compute-0. The outputs of commands have been attached. 
sudo /usr/sbin/dmidecode > dmidecode.txt virsh nodeinfo > nodeinfo.txt /usr/bin/topology > topology.txt grep -i numa /var/log/dmesg > dmesg.txt And the StarlingX is installed with standard 0322 ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190322T174230Z/outputs/iso/ Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: virsh_capabilities_compute.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: lscpu_compute.txt URL: From chenjie.xu at intel.com Wed Apr 3 07:20:28 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Wed, 3 Apr 2019 07:20:28 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Ghada, The output of following commands on compute-0 have been attached: sudo lscpu sudo virsh capabilities My controllers are wolf pass servers and have skylake processors. I'm reinstalling the standard 0322 ISO image and this time I will use wolf pass servers as compute node. Best Regards, Xu, Chenjie From: Xu, Chenjie Sent: Wednesday, April 3, 2019 11:48 AM To: Peters, Matt ; Liu, ZhipengS ; He, Yongli Cc: 'starlingx-discuss at lists.starlingx.io' ; 'Khalil, Ghada' ; Zhao, Forrest ; Rowsell, Brent ; Gauld, James ; Le, Huifeng ; Martinez Monroy, Elio ; Perez, Ricardo O ; Cabrales, Ada ; Lin, Shuicheng Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, Zhipeng is working on "PCI Affinity dependency on Nova NUMA topology". This may relate to BUG "VM can't send packet through vhostuser port due to missing numa settings in domain xml": https://bugs.launchpad.net/starlingx/+bug/1820378 According to him, after cutting over to nova master, the NUMA topology and PCI device info has been removed from nova. Before nova master, StarlingX uses nova stage which has NUMA topology. Some detailed information are listed below: * PCI Affinity dependency on Nova NUMA topology - Zhipeng o This affinity agent could not get both pci_device info and numa info of server from nova. o In nova stage version, we added below two for server. o server["wrs-res:topology"] o server["wrs-res:pci_devices"] o In nova master, no these attributions for server. o For topology, I can see that there is a patch of adding numa topology pending for merge o https://review.openstack.org/#/c/621476 Add server sub-resource topology API o Next step - Zhipeng to investigate alternative implementations that don't have dependencies on Nova. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 11:50 PM To: Xu, Chenjie >; Martinez Monroy, Elio >; Peters, Matt >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent >; Gauld, James > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Thanks Chenjie. 
Can you please run the following on your compute nodes and attach the output? sudo lscpu sudo virsh capabilities Richardo / Juan P, Please provide the cpu models and the above output from your two hardware systems as well. Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Tuesday, April 02, 2019 2:55 AM To: Khalil, Ghada; Martinez Monroy, Elio; Peters, Matt; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Ghada, Compute-0 and compute-1 don't have skylake processors. The processors both are: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz Compute-0 and compute-1's server model both are: Manufacturer: Intel Corporation Product Name: S2600JF Sub-NUMA Clustering doesn't exist in my BIOS. Only NUMA Optimized exists. An image for "Memory RAS and Performance Configuration" has been attached. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 6:52 AM To: Xu, Chenjie >; Martinez Monroy, Elio >; Peters, Matt >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Chenjie, Can you let us know the model of your server? Does it have skylake processors? Can you also double-check that you have Sub-NUMA Clustering disabled in the BIOS? On a wolfpass, the BIOS setting is at: Enter Setup > Advanced > Memory Configuration > Memory RAS and Performance Configuration Sub-NUMA Cluster Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Wednesday, March 27, 2019 9:59 PM To: Martinez Monroy, Elio; Peters, Matt; Khalil, Ghada; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, I run the following commands on compute-0. The outputs of commands have been attached. sudo /usr/sbin/dmidecode > dmidecode.txt virsh nodeinfo > nodeinfo.txt /usr/bin/topology > topology.txt grep -i numa /var/log/dmesg > dmesg.txt And the StarlingX is installed with standard 0322 ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190322T174230Z/outputs/iso/ Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: lscpu.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... 
Name: virsh_capabilities.txt URL: From Matt.Peters at windriver.com Wed Apr 3 19:21:26 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Wed, 3 Apr 2019 19:21:26 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> Message-ID: <69276584-6FF8-458A-AF4F-EC073C1856E9@windriver.com> Hello Folks, Thanks to Chris Friesen for point this out, but we believe the issues you are experiencing is due to the requirement for guests to be backed by huge pages to operate with OVS-DPDK vhost-user based ports/interfaces. The master (default) behavior for the latest nova will default to 4K pages for the guest, but this is not compatible with OVS-DPDK. The guests must be configured to use a flavor that has the property hw:mem_page_size=large set. You can follow this link to read more about the requirements on the guests for OVS-DPDK: https://docs.openstack.org/neutron/rocky/admin/config-ovs-dpdk.html “vhost-user requires file descriptor-backed shared memory. Currently, the only way to request this is by requesting large pages. This is why instances spawned on hosts with OVS-DPDK must request large pages”. Hope this helps. -Matt From: "Xu, Chenjie" Date: Tuesday, April 2, 2019 at 11:48 PM To: "Peters, Matt" , "Liu, ZhipengS" , "He, Yongli" Cc: "'starlingx-discuss at lists.starlingx.io'" , Ghada Khalil , "Zhao, Forrest" , Brent Rowsell , "Gauld, James" , "Le, Huifeng" , "Martinez Monroy, Elio" , "Perez, Ricardo O" , "Cabrales, Ada" , "Lin, Shuicheng" Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, Zhipeng is working on “PCI Affinity dependency on Nova NUMA topology”. This may relate to BUG “VM can't send packet through vhostuser port due to missing numa settings in domain xml”: https://bugs.launchpad.net/starlingx/+bug/1820378 According to him, after cutting over to nova master, the NUMA topology and PCI device info has been removed from nova. Before nova master, StarlingX uses nova stage which has NUMA topology. Some detailed information are listed below: • PCI Affinity dependency on Nova NUMA topology - Zhipeng o This affinity agent could not get both pci_device info and numa info of server from nova. o In nova stage version, we added below two for server. o server["wrs-res:topology"] o server["wrs-res:pci_devices"] o In nova master, no these attributions for server. o For topology, I can see that there is a patch of adding numa topology pending for merge o https://review.openstack.org/#/c/621476 Add server sub-resource topology API o Next step - Zhipeng to investigate alternative implementations that don't have dependencies on Nova. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 11:50 PM To: Xu, Chenjie ; Martinez Monroy, Elio ; Peters, Matt ; Lin, Shuicheng ; Cabrales, Ada ; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io' ; Zhao, Forrest ; Rowsell, Brent ; Gauld, James Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Thanks Chenjie. Can you please run the following on your compute nodes and attach the output? 
sudo lscpu sudo virsh capabilities Richardo / Juan P, Please provide the cpu models and the above output from your two hardware systems as well. Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Tuesday, April 02, 2019 2:55 AM To: Khalil, Ghada; Martinez Monroy, Elio; Peters, Matt; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Ghada, Compute-0 and compute-1 don’t have skylake processors. The processors both are: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz Compute-0 and compute-1’s server model both are: Manufacturer: Intel Corporation Product Name: S2600JF Sub-NUMA Clustering doesn’t exist in my BIOS. Only NUMA Optimized exists. An image for “Memory RAS and Performance Configuration” has been attached. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 6:52 AM To: Xu, Chenjie >; Martinez Monroy, Elio >; Peters, Matt >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Chenjie, Can you let us know the model of your server? Does it have skylake processors? Can you also double-check that you have Sub-NUMA Clustering disabled in the BIOS? On a wolfpass, the BIOS setting is at: Enter Setup > Advanced > Memory Configuration > Memory RAS and Performance Configuration Sub-NUMA Cluster Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Wednesday, March 27, 2019 9:59 PM To: Martinez Monroy, Elio; Peters, Matt; Khalil, Ghada; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, I run the following commands on compute-0. The outputs of commands have been attached. sudo /usr/sbin/dmidecode > dmidecode.txt virsh nodeinfo > nodeinfo.txt /usr/bin/topology > topology.txt grep -i numa /var/log/dmesg > dmesg.txt And the StarlingX is installed with standard 0322 ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190322T174230Z/outputs/iso/ Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From ricardo.o.perez at intel.com Wed Apr 3 19:30:07 2019 From: ricardo.o.perez at intel.com (Perez, Ricardo O) Date: Wed, 3 Apr 2019 19:30:07 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: <822d14b1-65a0-d9f0-93b2-ea05495f6fb7@windriver.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> <822d14b1-65a0-d9f0-93b2-ea05495f6fb7@windriver.com> Message-ID: Hi Chris, Lastnight I have sent out the lspci and virsh capabilities, but seems like the moderator it’s still moderating my last e-mail. So please find attached my output again, including your latest command. Please let me know if anything else is required. 
Regards -Richo From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: Wednesday, April 3, 2019 9:54 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing On 4/2/2019 9:50 AM, Khalil, Ghada wrote: Thanks Chenjie. Can you please run the following on your compute nodes and attach the output? sudo lscpu sudo virsh capabilities Richardo / Juan P, Please provide the cpu models and the above output from your two hardware systems as well. In addition to the above, can you please provide the logs from one of the nova-compute pods on an affected system? You can get the logs by running : POD=`kubectl -n openstack get pod -l application=nova,component=compute \ -o=jsonpath='{.items[0].metadata.name'}` kubectl -n openstack logs -c nova-compute $POD Thanks, Chris -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: POD_output.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: virsh_capabilities_compute.txt URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: lscpu_compute.txt URL: From tingjie.chen at intel.com Thu Apr 4 03:23:31 2019 From: tingjie.chen at intel.com (Chen, Tingjie) Date: Thu, 4 Apr 2019 03:23:31 +0000 Subject: [Starlingx-discuss] Discussion about StarlingX test cases in CEPH In-Reply-To: References: , <9174DAE490321844AE273F6AD001E3EA9D85012C@ALA-MBD.corp.ad.wrs.com> Message-ID: The test case for StarlingX CEPH upgrade we discussed, I have file link for review: https://ethercalc.openstack.org/orb83xruwmo8 + starlingx-discuss for collecting feedback from community... Thanks, Tingjie From: Badea, Daniel [mailto:Daniel.Badea at windriver.com] Sent: Wednesday, March 27, 2019 5:52 PM To: Chen, Tingjie > Cc: Perez, Ricardo O >; Cabrales, Ada >; Xie, Cindy >; Zhu, Vivian >; Miller, Frank >; Poncea, Ovidiu >; Jones, Bruce E >; Lara, Cesar > Subject: RE: Discussion about StarlingX test cases in CEPH Hi Tingjie, Please note that CEPH_STOR_TIER_04 ("associate services with a new storage tier") will fail because support multiple tiers is currently broken. There is a review in progress to fix it: https://review.openstack.org/#/c/632346/3 Best regards, Daniel B. From: Chen, Tingjie Sent: Tuesday, March 26, 2019 4:10 PM To: Perez, Ricardo O >; Cabrales, Ada >; Xie, Cindy >; Zhu, Vivian > Subject: RE: Discussion about StarlingX test cases in CEPH Hi Ricardo, For the test case of IO path, I have setup my environment and dry run, it need network configuration in VM or BM. Steps: 1/ Make sure your VM/BM can access external network, if yes, ignore the following commands in step 1. It is needed also in containerized configuration, My VM setting for example, suppose your IP of controller-0 (active controller) is 10.10.10.3 In Host: echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward sudo iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -j MASQUERADE Also if you have proxy in VM/BM please don't forget to set. 2/ Install FIO and related libraries. [wrsroot at controller-0 ~(keystone_admin)]$ sudo yum update If you have no repo base list, just find one from network and push into /etc/yum.repos.d/ [wrsroot at controller-0 ~(keystone_admin)]$ sudo yum install fio 3/ Create pool and rbd, start to run fio. In fio config file (rbd.fio), assign one pool and rbd and you created manually. 
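(For reference, a minimal sketch of the manual pool and RBD creation referred to just above, to be done before running fio; the pool name, PG count and image size are illustrative and should match the values used in rbd.fio:)
[wrsroot at controller-0 ~(keystone_admin)]$ ceph osd pool create test_pool 64 64
[wrsroot at controller-0 ~(keystone_admin)]$ rbd create --size 1G test_pool/test_rbd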
[wrsroot at controller-0 ~(keystone_admin)]$ cat rbd.fio [global] ioengine=rbd clientname=admin pool=test_pool # create pool named test_pool before run fio rbdname=test_rbd # create rbd named test_rbd (1G in my example) in test_pool before run fio invalidate=0 rw=randwrite bs=4k [rbd_iodepth32] iodepth=32 Then run the fio: [wrsroot at controller-0 ~(keystone_admin)]$ fio rbd.fio rbd_iodepth32: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=32 fio-3.1 Starting 1 process Jobs: 1 (f=1): [w(1)][99.6%][r=0KiB/s,w=4624KiB/s][r=0,w=1156 IOPS][eta 00m:01s] rbd_iodepth32: (groupid=0, jobs=1): err= 0: pid=1694134: Tue Mar 26 00:22:04 2019 write: IOPS=1112, BW=4448KiB/s (4555kB/s)(1024MiB/235722msec) slat (nsec): min=987, max=13491k, avg=4692.70, stdev=63898.52 clat (usec): min=1688, max=242157, avg=28660.64, stdev=14844.91 lat (usec): min=1705, max=242160, avg=28665.34, stdev=14845.14 clat percentiles (msec): | 1.00th=[ 10], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 18], | 30.00th=[ 21], 40.00th=[ 24], 50.00th=[ 27], 60.00th=[ 29], | 70.00th=[ 33], 80.00th=[ 38], 90.00th=[ 45], 95.00th=[ 53], | 99.00th=[ 79], 99.50th=[ 97], 99.90th=[ 169], 99.95th=[ 178], | 99.99th=[ 213] bw ( KiB/s): min= 1416, max= 7016, per=99.96%, avg=4446.03, stdev=765.16, samples=471 iops : min= 354, max= 1754, avg=1111.41, stdev=191.28, samples=471 lat (msec) : 2=0.01%, 4=0.01%, 10=1.56%, 20=26.82%, 50=65.29% lat (msec) : 100=5.89%, 250=0.44% cpu : usr=0.63%, sys=0.29%, ctx=14440, majf=0, minf=8434 IO depths : 1=1.6%, 2=3.9%, 4=9.6%, 8=23.8%, 16=57.1%, 32=4.1%, >=64=0.0% submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% complete : 0=0.0%, 4=96.3%, 8=0.2%, 16=0.3%, 32=3.1%, 64=0.0%, >=64=0.0% issued rwt: total=0,262144,0, short=0,0,0, dropped=0,0,0 latency : target=0, window=0, percentile=100.00%, depth=32 Run status group 0 (all jobs): WRITE: bw=4448KiB/s (4555kB/s), 4448KiB/s-4448KiB/s (4555kB/s-4555kB/s), io=1024MiB (1074MB), run=235722-235722msec Disk stats (read/write): sda: ios=2664/10592, merge=0/2020, ticks=2333/40725, in_queue=42859, util=11.94% And just refer the flow with Read / Write of CEPH internal process. [cid:image002.jpg at 01D4EAD2.324A2A20] So it seems the TCs we aligned mostly, what is the next process to final confirm? :) @Ricardo, will you list the new TCs and dry-run on 2+2+2 configuration firstly? Thanks, Tingjie From: Perez, Ricardo O Sent: Tuesday, March 26, 2019 5:57 AM To: Chen, Tingjie >; Cabrales, Ada >; Xie, Cindy >; Zhu, Vivian > Subject: RE: Discussion about StarlingX test cases in CEPH Hi Tingjie, Thanks lot for you shared details, please see my embedded answers below. Regards -Richo From: Chen, Tingjie Sent: Monday, March 25, 2019 8:32 AM To: Perez, Ricardo O >; Cabrales, Ada >; Xie, Cindy >; Zhu, Vivian > Subject: RE: Discussion about StarlingX test cases in CEPH Hi Ricardo, For the test cases we proposed, it seems there are 2 items need to clarify per your comments. 1/ RESTful plugin: This case is to verify CEPH-MGR via python-client with restful API, we have no automation script and yes the rest API is complex for manual commands. I can provide the lists can be verified, first step we can go through some GET operations, the case will be passed if information shows normally and correctly. [cid:image004.jpg at 01D4EAD2.324A2A20] // get the user and keys. 
[wrsroot at controller-0 ~(keystone_admin)]$ ceph restful list-keys { "admin": "579b6a27-e019-4887-b0ce-ee2fee6c4134" } // get ceph-mgr service endpoint and port [wrsroot at controller-0 ~(keystone_admin)]$ ceph mgr services { "restful": "https://controller-0:5001/" } // get the available service link list [wrsroot at controller-0 ~(keystone_admin)]$ curl -k https://controller-0:5001/doc ... // for example, get monitors detail information [wrsroot at controller-0 ~(keystone_admin)]$ curl -k -u admin:579b6a27-e019-4887-b0ce-ee2fee6c4134 https://controller-0:5001/mon -X GET [ { "addr": "192.168.204.3:6789/0", "in_quorum": true, "leader": true, "name": "controller-0", "public_addr": "192.168.204.3:6789/0", "rank": 0, "server": "controller-0" }, { "addr": "192.168.204.4:6789/0", "in_quorum": true, "leader": false, "name": "controller-1", "public_addr": "192.168.204.4:6789/0", "rank": 1, "server": "controller-1" }, { "addr": "192.168.204.95:6789/0", "in_quorum": true, "leader": false, "name": "storage-0", "public_addr": "192.168.204.95:6789/0", "rank": 2, "server": "storage-0" } ] [Perez, Ricardo O] Thanks for sharing the commands and the expected results. 2/ IO path: The read/write test with fio with rbd engine will go through CEPH full stack, include librados, rbd, osd, mon and messages, this is a comprehensive case and also performance impact if needed in the future. I am preparing the FIO environment in StarlingX, since current setup does not support fio by default, more details will provide if have progress. [Perez, Ricardo O] Then, I believe you are going to share with us when FIO environment is ready as well the steps to be executed right ? BTW: May I ask your plan for StarlingX deployment? [Perez, Ricardo O] Sure, by now, when we have to test something related to CEPH, we normally use a 2+2+2 configuration (2 controllers, 2 computes and 2 storage nodes). By now, we are able to use a mix of network cards, Mellanox / Intel. (in this 2+2+2 config) if more nodes are required we are able just to use the default attached cards in the servers. Thanks, Tingjie From: Perez, Ricardo O Sent: Saturday, March 23, 2019 5:49 AM To: Chen, Tingjie >; Cabrales, Ada >; Xie, Cindy >; Zhu, Vivian > Subject: RE: Discussion about StarlingX test cases in CEPH Hi Tingjie, See my embedded answers below From: Chen, Tingjie Sent: Friday, March 22, 2019 1:58 AM To: Cabrales, Ada >; Perez, Ricardo O >; Xie, Cindy >; Zhu, Vivian > Subject: Discussion about StarlingX test cases in CEPH Hi, Just kick-off a new thread for the discussion about CEPH test case. 1/ Previously Ada has share the link: https://docs.google.com/spreadsheets/d/1O2zWn-R83Wj1SqmeUtxCP59_DsSM0UNZW0gRALnDn6w/edit?usp=sharing We have some detailed comments on some cases, as purple color. also there are questions for the test plan: a/ We execute test suites, do you have test framework/script or use commands directly? [Perez, Ricardo O] Currently we execute commands directly. b/ In case of failure, how to decide whether it is blocking issue, maybe we can define the priority of test cases, how do you think? [Perez, Ricardo O] As part of the validation conventions, normally a blocking issue its one of these things: Block you to enable / disable some feature of your software, it fails in the way that there is no way to recover the original state, it's impossible to perform some step required to enable a further feature. About priority of the tests, sure, we can define it. 
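(Since the checks are currently run as commands directly, a minimal sketch of wrapping the ceph-mgr restful GET operations shown earlier in this thread is given below; endpoint names other than /mon are assumptions, and the admin key is read from "ceph restful list-keys":)
KEY=$(ceph restful list-keys | python -c 'import sys, json; print(json.load(sys.stdin)["admin"])')
for ep in mon osd pool server; do
    curl -k -s -u admin:${KEY} https://controller-0:5001/${ep} -X GET || echo "GET /${ep} failed"
done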
Test ID Test Name Test Objective Expected behavior Comments 1 CEPH_STOR_TIER_01 The objective of this test is to ensure that a new storage tier can be created. Additional storage tier is successfully created. Scenario of storage tier is not used frequently, espacially when use SSD since no need to cache pool (Tingjie). [Perez, Ricardo O] I agree that might not be a widely used scenario, but as the feature is there, we should test it. 2 CEPH_STOR_TIER_02 The objective of this test is to ensure that a new storage tier can be associated with an OSD. Storage tier is successfully associated with OSD 3 CEPH_STOR_TIER_03 The objective of this test is to ensure that a new storage tier can be associated with a backend. Storage tier can be successfully associated with a backend 4 CEPH_STOR_TIER_04 The objective of this test is to ensure you can associate services with a new storage tier. The new storage tier can be used. 5 CEPH_STOR_REP_05 The objective of this test is to ensure you can provision the system to have replication factor 3. After replication factor 3 is enabled, there are 3 copies of the data present on the system. Performance can vary wildly amount different Ceph clusters, it all depends on what the replication factor is set to. With a replication factor of 2 you will see roughly half the write performance compared to a replication factor of 1. The drop in write performance between replication factor 2 and 3 is also pretty dramatic. This is not surprising since replication takes time and you must wait for multiple OSDs to complete a write instead of just one. How to verify the data present, write and wait for sync complete between OSDs? (Tingjie). [Perez, Ricardo O] At this point, I beleive the intention of the test isn't to verify the data. Just to see that the system are still functional no matter which replication factor you use. This for sure will impact in the performance but this is out of the scope of the test. 6 CEPH_STOR_SWI_06 The objective of this test is to ensure you can enable swift on the system. Swift should be successfully enabled at the end of this test. 7 CEPH_STOR_PROC_07 The objective of this test is to repeatedly kill the ceph monitor process and ensure they are restarted by the system. The ceph monitor processes should alarm when expected, and should recover when killed. 8 CEPH_STOR_OSD_08 The objective of this test is to repeatedly kill the ceph osd process and ensure they are restarted by the system. The ceph osd processes should alarm when expected, and should recover when killed. 9 CEPH_STOR_SCALABILITY_09 The objective of this test is to test the basic provisioning procedure for 8 storage node ceph systems. The system is properly configured and functioning as expected at the end of the test. Since currently 2-9x node CEPH storage cluster support, it is fine, but we have not deploy so many yet, maybe dry-run first. (Tingjie). [Perez, Ricardo O] This is i show the original test was defined, for sure we can adjust, normally we use 2 storage nodes for BM. 10 CEPH_STOR_CORE_10 The objective of this test is to ensure that host reinstall of nodes running ceph-mon works properly on all supported configs. Ceph should be healthy at the end of the test. the robustness test should strip the influence of residual data/config, propose one Precondition: clean deployment when each test (Tingjie). [Perez, Ricardo O] This test is just to ensure that a host can be re-installed as many times as required, if you believe such pre-condition is required we can add it. 
However in the real world this should work flawlessly despite of the state of the system. 11 CEPH_STOR_CORE_11 The objective of this test is to ensure that host delete and reprovision of nodes running ceph-mon works properly on all supported configs. Ceph should be healthy at the end of the test. 12 CEPH_STOR_CORE_12 The objective of this test is to ensure that semantic checks with respect to node lock, work properly on nodes running ceph monitors. Semantic checks should work as expected. Not sure the meaning of semantic check about node lock. (Tingjie). [Perez, Ricardo O] Semantic check means basically that if you lock / unlock or perform any action to an specific node, the "system" checks their current state (semantically) and allows / deny such operations depending on the defined state. Please check the detailed steps for the tests here: https://review.openstack.org/#/c/640546/1/manual-tests/storage/storage_regression_test_plan.rst The original test name is STOR_CORE_016 13 CEPH_STOR_CORE_13 The objective of this test is to ensure that the user can provision SSD journals. It should be possible to modify the journal configuration on the SSD disks. Journals is related on Filestore only and removed in Bluestore. CEPH mimic by default support Bluestore but in StarlingX still use Filestore since puppet issues, maybe switch to Bluestore in future, just share the information. (Tingjie) 14 CEPH_STOR_HW_14 The objective of this test is to ensure that the hardware disk replacement procedure for OSDs is accurate. The system should be functional and healthy after hardware disk replacement. 15 CEPH_STOR_HW_15 The objective of this test is to ensure that the hardware disk replacement procedure for journal disks is accurate. The system should be functional and healthy after hardware disk replacement. 16 CEPH_STOR_DOR_16 To verify the system recovers after a DOR test (dead-office-recovery). Storage system recovers after DOR test Not sure the context of DOR test: dead-office-recovery means? (Tingjie). [Perez, Ricardo O] The context is, all nodes shutdown ( by a power outage or any other issue), running VMs on compute nodes, after all came back, we have that VMs are still able to resume ping and their attached storages are still funcional. 17 CEPH_STOR_FAULT_17 To verify the system can recover when there is a cable pull on the cluster network. Storage system recovers after cable pull Any operations defined when you pull out the cable? (Tingjie). [Perez, Ricardo O] I'm not quite sure what do you mean with "operations"?, but here what we want is to see if after a physically disconnection and connection , the system are still working in an stable way, and able to continue providing services. 18 CEPH_STOR_FS_18 To verify that the sizes of the ceph pools can be modified. It should be possible for the user to change the size of the ceph pools Do you mean the max size of ceph pools, can set quota. or PGs ? or warning log threshold? (Tingjie). [Perez, Ricardo O] 'ceph osd pool get-quota ' 19 CEPH_STOR_OPROF_19 This test validates the creation and application of storage profiles on a system. It should be possible to apply an existing storage porfile to a new node What is the form of storage profile, do you have any script or tools to apply the profile for creation and application? (Tingjie). [Perez, Ricardo O] A storage profile is a configuration file that is saved using Horizon. So, no script is required. You can use such profile to setup a new node with the same configuration (if required). 
20 CEPH_STOR_PART_20 This test validates that multiple partitions can be created and the partition modification/deletion behaviour is correct Partition creation, deletion and semantic checks should work as expected. Partition on Disk? You means deploy in bare-metal or Disk in Virtual Machine? (Tingjie). [Perez, Ricardo O] By now, BM. 21 CEPH_STOR_FS_21 To ensure that the size of ceph-mon can be increased. The size should be increased on both controllers Actually we have no interface to resize the ceph-mon, or you means the warning threshold percentage? (Tingjie). [Perez, Ricardo O] 'system ceph-mon-modify ceph_mon_gib=' 2/ Besides we have proposed cases with CEPH functionality interface coverage. TC Module Commands Description ceph_status TOTAL ceph -s Ceph total status and health check [Perez, Ricardo O] I believe this is already included in the upper list. io_path TOTAL fio xxx.conf Read/Write test. [Perez, Ricardo O] This is a good one, however, this Will allow us to see the IO of the Disk, not CEPH by itself, however I would like to hear the details about this test. osd_add/remove/tree OSD ceph osd add/destroy/tree osd common operations and verify after each commands. [Perez, Ricardo O] This are already included in the upper list. pool_create/modify/list Pool ceph osd create test-pool 64 3 ceph osd lspools ceph pool common operations and verify after each commands, pool can be increased PGs.. [Perez, Ricardo O] This look ok. mon_operation/status MON ceph mon operations: increase (add)/decrease (kill) and status check. [Perez, Ricardo O] Already included radosgw_status RADOSGW merge Ada's cases with swift: case 6 [Perez, Ricardo O] No comment here. restful_pugin operations MGR restful interface common operations [Perez, Ricardo O] Do you have scripts or tools to do this ? Rest API tests are quite complex. rbd_create/delete/resize RBD rbd create --size 10G test_pool/test_rbd; rbd ls test_pools create and delete operation, and verify status [Perez, Ricardo O] Looks ok to me. rbd_snapshot rbd snap create test_pool/test_rbd at test_rbd_snap snapshot operations. [Perez, Ricardo O] Looks ok to me. Thanks, Tingjie SSG OTC NST Storage Tel: +86(21)88216699 Mobile: 15901876439 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 46875 bytes Desc: image002.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 23782 bytes Desc: image004.jpg URL: From tingjie.chen at intel.com Thu Apr 4 03:44:20 2019 From: tingjie.chen at intel.com (Chen, Tingjie) Date: Thu, 4 Apr 2019 03:44:20 +0000 Subject: [Starlingx-discuss] Discussion about StarlingX release notes in CEPH upgrade Message-ID: Hi, I have file release notes for Ceph upgrade mimic. https://etherpad.openstack.org/p/stx-ceph-uprev-mimic-release-notes There are 2 parts, First one is Major changes, this is official changes from 10.2.6 (Jewel) -> 13.2.2 (Mimic), there are many changes to the three major version updates. Second one is known issues in StarlingX, this may expand after validation and system test if have non-block issues. Welcome to give your comments and concerns. Thanks, Tingjie SSG OTC NST Storage Tel: +86(21)88216699 Mobile: 15901876439 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chenjie.xu at intel.com Thu Apr 4 14:20:26 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Thu, 4 Apr 2019 14:20:26 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: <69276584-6FF8-458A-AF4F-EC073C1856E9@windriver.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> <69276584-6FF8-458A-AF4F-EC073C1856E9@windriver.com> Message-ID: Hi team, I have reinstalled StarlingX on my 4 bare metals but meet some problems. As soon as I deploy StarlingX successfully, I will try to add property “hw:mem_page_size=large” to flavor and then create VMs. Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Thursday, April 4, 2019 3:21 AM To: Xu, Chenjie ; Liu, ZhipengS ; He, Yongli Cc: 'starlingx-discuss at lists.starlingx.io' ; Khalil, Ghada ; Zhao, Forrest ; Rowsell, Brent ; Gauld, James ; Le, Huifeng ; Martinez Monroy, Elio ; Perez, Ricardo O ; Cabrales, Ada ; Lin, Shuicheng Subject: Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hello Folks, Thanks to Chris Friesen for point this out, but we believe the issues you are experiencing is due to the requirement for guests to be backed by huge pages to operate with OVS-DPDK vhost-user based ports/interfaces. The master (default) behavior for the latest nova will default to 4K pages for the guest, but this is not compatible with OVS-DPDK. The guests must be configured to use a flavor that has the property hw:mem_page_size=large set. You can follow this link to read more about the requirements on the guests for OVS-DPDK: https://docs.openstack.org/neutron/rocky/admin/config-ovs-dpdk.html “vhost-user requires file descriptor-backed shared memory. Currently, the only way to request this is by requesting large pages. This is why instances spawned on hosts with OVS-DPDK must request large pages”. Hope this helps. -Matt From: "Xu, Chenjie" > Date: Tuesday, April 2, 2019 at 11:48 PM To: "Peters, Matt" >, "Liu, ZhipengS" >, "He, Yongli" > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ghada Khalil >, "Zhao, Forrest" >, Brent Rowsell >, "Gauld, James" >, "Le, Huifeng" >, "Martinez Monroy, Elio" >, "Perez, Ricardo O" >, "Cabrales, Ada" >, "Lin, Shuicheng" > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, Zhipeng is working on “PCI Affinity dependency on Nova NUMA topology”. This may relate to BUG “VM can't send packet through vhostuser port due to missing numa settings in domain xml”: https://bugs.launchpad.net/starlingx/+bug/1820378 According to him, after cutting over to nova master, the NUMA topology and PCI device info has been removed from nova. Before nova master, StarlingX uses nova stage which has NUMA topology. Some detailed information are listed below: • PCI Affinity dependency on Nova NUMA topology - Zhipeng o This affinity agent could not get both pci_device info and numa info of server from nova. o In nova stage version, we added below two for server. o server["wrs-res:topology"] o server["wrs-res:pci_devices"] o In nova master, no these attributions for server. 
o For topology, I can see that there is a patch of adding numa topology pending for merge o https://review.openstack.org/#/c/621476 Add server sub-resource topology API o Next step - Zhipeng to investigate alternative implementations that don't have dependencies on Nova. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 11:50 PM To: Xu, Chenjie >; Martinez Monroy, Elio >; Peters, Matt >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent >; Gauld, James > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Thanks Chenjie. Can you please run the following on your compute nodes and attach the output? sudo lscpu sudo virsh capabilities Richardo / Juan P, Please provide the cpu models and the above output from your two hardware systems as well. Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Tuesday, April 02, 2019 2:55 AM To: Khalil, Ghada; Martinez Monroy, Elio; Peters, Matt; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Ghada, Compute-0 and compute-1 don’t have skylake processors. The processors both are: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz Compute-0 and compute-1’s server model both are: Manufacturer: Intel Corporation Product Name: S2600JF Sub-NUMA Clustering doesn’t exist in my BIOS. Only NUMA Optimized exists. An image for “Memory RAS and Performance Configuration” has been attached. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 6:52 AM To: Xu, Chenjie >; Martinez Monroy, Elio >; Peters, Matt >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Chenjie, Can you let us know the model of your server? Does it have skylake processors? Can you also double-check that you have Sub-NUMA Clustering disabled in the BIOS? On a wolfpass, the BIOS setting is at: Enter Setup > Advanced > Memory Configuration > Memory RAS and Performance Configuration Sub-NUMA Cluster Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Wednesday, March 27, 2019 9:59 PM To: Martinez Monroy, Elio; Peters, Matt; Khalil, Ghada; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, I run the following commands on compute-0. The outputs of commands have been attached. sudo /usr/sbin/dmidecode > dmidecode.txt virsh nodeinfo > nodeinfo.txt /usr/bin/topology > topology.txt grep -i numa /var/log/dmesg > dmesg.txt And the StarlingX is installed with standard 0322 ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190322T174230Z/outputs/iso/ Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Frank.Miller at windriver.com Thu Apr 4 20:40:30 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Thu, 4 Apr 2019 20:40:30 +0000 Subject: [Starlingx-discuss] Discussion about StarlingX release notes in CEPH upgrade In-Reply-To: References: Message-ID: Tingjie: Thanks for putting this together as it gives a very good summary of the changes in the CEPH mimic version which is expected to merge in the near future into StarlingX. This list will be a good reference for those who will be running TCs for the new CEPH version. I have a couple of questions - would you be able to help me: 1. One of the notes indicates "There is a simplified OSD replacement process that is more robust." * Can you explain what these changes are? * Will this result in any changes to the steps an operator takes to replace a CEPH disk? 2. Another note indicates "Several sleep settings, include osd_recovery_sleep, osd_snap_trim_sleep, and osd_scrub_sleep have been reimplemented to work efficiently." * Can you share the settings used in StarlingX today with CEPH jewel as well as the planned settings that will be used in StarlingX with CEPH mimic. Will any of these settings change value when CEPH mimic is merged into StarlingX? 3. One more note indicates "CLI changes" * Can you explain which CLIs have changed? Frank From: Chen, Tingjie [mailto:tingjie.chen at intel.com] Sent: Wednesday, April 03, 2019 11:44 PM To: Jones, Bruce E ; Xie, Cindy ; Poncea, Ovidiu ; Badea, Daniel ; Cabrales, Ada ; Perez, Ricardo O ; Hernandez Gonzalez, Fernando ; Miller, Frank ; Zhu, Vivian ; Hu, Yong ; Liu, Changcheng Cc: starlingx-discuss at lists.starlingx.io Subject: Discussion about StarlingX release notes in CEPH upgrade Hi, I have file release notes for Ceph upgrade mimic. https://etherpad.openstack.org/p/stx-ceph-uprev-mimic-release-notes There are 2 parts, First one is Major changes, this is official changes from 10.2.6 (Jewel) -> 13.2.2 (Mimic), there are many changes to the three major version updates. Second one is known issues in StarlingX, this may expand after validation and system test if have non-block issues. Welcome to give your comments and concerns. Thanks, Tingjie SSG OTC NST Storage Tel: +86(21)88216699 Mobile: 15901876439 -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Fri Apr 5 20:34:39 2019 From: chris.friesen at windriver.com (Chris Friesen) Date: Fri, 5 Apr 2019 14:34:39 -0600 Subject: [Starlingx-discuss] stx-nova repo changes for upstream tracking In-Reply-To: References: Message-ID: <7d1d0ad2-2277-e188-7121-3067e94dcb95@windriver.com> On 4/5/2019 10:19 AM, Dean Troyer wrote: > Big Note: I am thinking about keeping a policy of periodically > rebasing stx/stein on stable/stein to keep a clear history as we move > forward, making it easier to see what we have added. That possibly > means doing it next week when the final stein tag is added. Thoughts? > Force pushes can be inconvenient for developers but I am thinking the > price may be worth the return on a wider scale. I like the idea of rebasing periodically to keep our changes "on top". Rather than force-pushing, it might make sense to create a new branch for each of these rebases. That way we don't need to rewrite history. 
Chris From cesar.lara at intel.com Fri Apr 5 21:19:09 2019 From: cesar.lara at intel.com (Lara, Cesar) Date: Fri, 5 Apr 2019 21:19:09 +0000 Subject: [Starlingx-discuss] [mulltios][meetings] Multi-OS team meeting agenda 4/8/2019 Message-ID: <0B566C62EC792145B40E29EFEBF1AB4710FE6563@fmsmsx123.amr.corp.intel.com> Multi-OS team meeting Agenda for 4/8/2019 - Ubuntu build PoC Demo - Kickoff phase 2 PoC Ubuntu based cloud platform Regards Cesar Lara Software Engineering Manager OpenSource Technology Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From kailun.qin at intel.com Fri Apr 5 23:06:31 2019 From: kailun.qin at intel.com (Qin, Kailun) Date: Fri, 5 Apr 2019 23:06:31 +0000 Subject: [Starlingx-discuss] Security Groups (firewall driver) is now enabled by default Message-ID: Hi StarlingX Community, The neutron OVS firewall_driver driver was previously set to noop since there was no suitable firewall driver packaged. The security group support for OVS agent is now enabled by default with the native "openvswitch" firewall driver [0][1], which is stateful and based on openflow + conntrack implementation [2]. Due to this change, it requires that appropriate security group rules be added to allow ingress traffic to the VM. 1. List the available security groups and note the ID of the security group that you want to use for your instance: $ openstack security group list 2. By default (If you have not created any security groups), the "default" security group applies to all instances and includes firewall rules that deny remote access to instances. For Linux images such as CirrOS, we recommend allowing at least ICMP (ping) and secure shell (SSH). Add rules to the "default" security group: a. Permit ICMP (ping): $ openstack security group rule create --proto icmp default b. Permit secure shell (SSH) access: $ openstack security group rule create --proto tcp --dst-port 22 default 3. In addition to the above rules (ICMP, SSH), other security group rules can be created, deleted and listed/showed within a certain security group [3]. And other security groups (besides "default") can also be created/added, removed for a certain VM instance/server [4]. Please kindly update your related StarlingX user docs/test cases w/ steps to apply the appropriate security rules. Let me know if any question. Thanks a lot. BR, Kailun [0] https://review.openstack.org/#/c/645054/ [1] https://storyboard.openstack.org/#!/story/2002944 [2] https://docs.openstack.org/neutron/latest/contributor/internals/openvswitch_firewall.html [3] https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/security-group-rule.html [4] https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Sat Apr 6 03:59:18 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Sat, 6 Apr 2019 03:59:18 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190405 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-05 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44 TCs] [Fail : 13 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44 TCs] [Fail : 13 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44 TCs] [Fail : 13 Tcs] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] ------------------------------------------------------------------ An issue was found across all BareMetal configurations, it looks like nova services takes several minutes to stabilize. https://bugs.launchpad.net/starlingx/+bug/1823275 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Sat Apr 6 03:59:01 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 5 Apr 2019 23:59:01 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_wheels - Build # 72 - Failure! 
Message-ID: <1594923636.76.1554523142373.JavaMail.javamailuser@localhost> Project: STX_build_wheels Build #: 72 Status: Failure Timestamp: 20190406T034421Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190405T233000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190405T233000Z OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190405T233000Z/logs OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190405T233000Z/logs From build.starlingx at gmail.com Sat Apr 6 03:59:04 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 5 Apr 2019 23:59:04 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 73 - Failure! Message-ID: <1059390354.79.1554523146201.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 73 Status: Failure Timestamp: 20190406T034029Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190405T233000Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190405T233000Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190405T233000Z/logs MASTER_BUILD_NUMBER: 49 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190405T233000Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos PUBLISH_TIMESTAMP: 20190405T233000Z DOCKER_BUILD_ID: jenkins-master-20190405T233000Z-builder TIMESTAMP: 20190405T233000Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190405T233000Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190405T233000Z/outputs From build.starlingx at gmail.com Sat Apr 6 03:59:08 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 5 Apr 2019 23:59:08 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 49 - Failure! Message-ID: <273922827.82.1554523149251.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 49 Status: Failure Timestamp: 20190405T233000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190405T233000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From build.starlingx at gmail.com Sat Apr 6 20:26:39 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 6 Apr 2019 16:26:39 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_container_setup - Build # 215 - Failure! 
Message-ID: <1179043435.85.1554582400262.JavaMail.javamailuser@localhost> Project: STX_BUILD_container_setup Build #: 215 Status: Failure Timestamp: 20190406T202445Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190406T193412Z/logs -------------------------------------------------------------------------------- Parameters PROJECT: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190406T193412Z DOCKER_BUILD_ID: jenkins-master-20190406T193412Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190406T193412Z/logs DOCKER_BUILD_TAG: master-20190406T193412Z-builder-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190406T193412Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Sat Apr 6 20:26:42 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 6 Apr 2019 16:26:42 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 50 - Still Failing! In-Reply-To: <650593607.80.1554523147000.JavaMail.javamailuser@localhost> References: <650593607.80.1554523147000.JavaMail.javamailuser@localhost> Message-ID: <418653436.88.1554582403787.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 50 Status: Still Failing Timestamp: 20190406T193412Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190406T193412Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true From build.starlingx at gmail.com Sun Apr 7 01:21:56 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 6 Apr 2019 21:21:56 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 256 - Failure! Message-ID: <1242262947.95.1554600117584.JavaMail.javamailuser@localhost> Project: STX_DL_container_setup Build #: 256 Status: Failure Timestamp: 20190407T012146Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190407T012009Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190407T012009Z DOCKER_DL_ID: jenkins-master-20190407T012009Z-downloader PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190407T012009Z/logs DOCKER_DL_TAG: master-20190407T012009Z-downloader-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190407T012009Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Sun Apr 7 01:21:59 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 6 Apr 2019 21:21:59 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 52 - Failure! Message-ID: <1953619152.98.1554600121289.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 52 Status: Failure Timestamp: 20190407T012009Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190407T012009Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From zhang.kunpeng at 99cloud.net Mon Apr 8 05:09:00 2019 From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=) Date: Mon, 8 Apr 2019 13:09:00 +0800 Subject: [Starlingx-discuss] how to modify nova.conf? 
Message-ID: Hi All, In some case, I need to modify /etc/nova/nova.conf. But it will recover to original after reboot. Are there some way to persist nova.conf? Thanks Kunpeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Mon Apr 8 11:27:11 2019 From: serverascode at gmail.com (Curtis) Date: Mon, 8 Apr 2019 07:27:11 -0400 Subject: [Starlingx-discuss] Security Groups (firewall driver) is now enabled by default In-Reply-To: References: Message-ID: On Fri, Apr 5, 2019 at 7:08 PM Qin, Kailun wrote: > Hi StarlingX Community, > > > > The neutron OVS firewall_driver driver was previously set to noop since > there was no suitable firewall driver packaged. The security group support > for OVS agent is now enabled by default with the native "openvswitch" > firewall driver [0][1], which is stateful and based on openflow + conntrack > implementation [2]. > > > Thanks kindly for the update! This is actually a very interesting change. Thanks, Curtis > Due to this change, it requires that appropriate security group rules be > added to allow ingress traffic to the VM. > > *1. **List the available security groups and note the ID of the > security group that you want to use for your instance:* > > *$ openstack security group list* > > *2. **By default (If you have not created any security groups), the > “default” security group applies to all instances and includes firewall > rules that deny remote access to instances. For Linux images such as > CirrOS, we recommend allowing at least ICMP (ping) and secure shell (SSH).* > > *Add rules to the “default” security group:* > > *a. **Permit ICMP (ping):* > > *$ openstack security group rule create --proto icmp default* > > *b. **Permit secure shell (SSH) access:* > > *$ openstack security group rule create --proto tcp --dst-port 22 default* > > *3. **In addition to the above rules (ICMP, SSH), other security > group rules can be created, deleted and listed/showed within a certain > security group *[3]*. And other security groups (besides “default”) can > also be created/added, removed for a certain VM instance/server *[4]*.* > > > > Please kindly update your related StarlingX user docs/test cases w/ steps > to apply the appropriate security rules. > > > > Let me know if any question. Thanks a lot. > > > > BR, > > Kailun > > > > [0] https://review.openstack.org/#/c/645054/ > > [1] https://storyboard.openstack.org/#!/story/2002944 > > [2] > https://docs.openstack.org/neutron/latest/contributor/internals/openvswitch_firewall.html > > [3] > https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/security-group-rule.html > > [4] > https://docs.openstack.org/python-openstackclient/latest/cli/command-objects/server.html > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Mon Apr 8 13:11:41 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 8 Apr 2019 08:11:41 -0500 Subject: [Starlingx-discuss] stx-nova repo changes for upstream tracking In-Reply-To: <7d1d0ad2-2277-e188-7121-3067e94dcb95@windriver.com> References: <7d1d0ad2-2277-e188-7121-3067e94dcb95@windriver.com> Message-ID: On Fri, Apr 5, 2019 at 3:35 PM Chris Friesen wrote: > I like the idea of rebasing periodically to keep our changes "on top". 
> > Rather than force-pushing, it might make sense to create a new branch > for each of these rebases. That way we don't need to rewrite history. We could do that, it would mean updating manifest files or whatever else points to the right branch each time and be one more thing to track for debugging. I had considered renaming the prior stx/stein branch and creating a new one, the effect is the same as a force push but it preserves that bit of history. I don't have much invested in either option, but I lean toward always building from stx/stein. Opinions from those who this would affect more directly? dt -- Dean Troyer dtroyer at gmail.com From Ghada.Khalil at windriver.com Mon Apr 8 13:44:59 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 8 Apr 2019 13:44:59 +0000 Subject: [Starlingx-discuss] StoryBoard / Launchpad Re-tagging -- COMPLETE Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D5773@ALA-MBD.corp.ad.wrs.com> The re-tagging has been complete. Please update your personal queries accordingly. Wiki references have been updated. Regards, Ghada PS: Apologies for the email spam from Launchpad. From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Thursday, April 04, 2019 6:18 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StoryBoard / Launchpad Re-tagging Hello all, As per the community call on Wednesday, we agreed to change the release tags in StoryBoard and Launchpad as follows: stx.2018.10 >> stx.1.0 stx.2019.05 >> stx.2.0 The tags will be bulk updated tomorrow. I will send another email once the updates are done. You will need to update your personal queries accordingly. I will update references on the wikis. Thanks, Ghada -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Mon Apr 8 14:54:40 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 8 Apr 2019 14:54:40 +0000 Subject: [Starlingx-discuss] Agenda: Weekly Containerization Meeting Monday April 8 Message-ID: Planned agenda for today's call: 1. Sanity status: RED - Simplex BM failures: https://bugs.launchpad.net/starlingx/+bug/1823275 - Standard Dedicated Storage BM and Virtual - Could not create VMs: https://bugs.launchpad.net/starlingx/+bug/1821841 [Gerry, Don, Angie] 2. Feature reviews & technical issue discussion across team: a) mariaDB [Chris]: https://storyboard.openstack.org/#!/story/2004712 b) Ironic [Mingyuan] c) Huge pages [Austin] d) Helm upversion issues [Bob] e) K8s API authentication [Jerry] f) Fault helm chart [Mario] g) Others? 3. Open topics -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Mon Apr 8 15:07:46 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 8 Apr 2019 15:07:46 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190405 In-Reply-To: References: Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D5846@ALA-MBD.corp.ad.wrs.com> Please note that https://bugs.launchpad.net/starlingx/+bug/1823275 impacts baremetal systems with non-network pci devices (qat devices: ex: C62x or gpus). If your system doesn't have non-network pci devices, you will not encounter this issue. This is an upstream nova issue which was first reported by StarlingX a week ago via https://bugs.launchpad.net/starlingx/+bug/1821938 A fix has been merged in the nova master & stein branches. It should be picked up in tonight's stx docker image builds. 
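(A quick way to check whether a given lab is exposed to this is to list the PCI devices inventoried on each host; the lines below are only a sketch assuming the sysinv CLI in these builds, with compute-0 as an example hostname:)

  source /etc/platform/openrc
  system host-device-list compute-0   # look for non-network devices such as QAT (C62x) or GPUs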
Regards, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Friday, April 05, 2019 11:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190405 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-05 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44 TCs] [Fail : 13 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44 TCs] [Fail : 13 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44 TCs] [Fail : 13 Tcs] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 51 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] ------------------------------------------------------------------ An issue was found across all BareMetal configurations, it looks like nova services takes several minutes to stabilize. https://bugs.launchpad.net/starlingx/+bug/1823275 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Mon Apr 8 07:38:30 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Mon, 8 Apr 2019 07:38:30 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: <69276584-6FF8-458A-AF4F-EC073C1856E9@windriver.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> <69276584-6FF8-458A-AF4F-EC073C1856E9@windriver.com> Message-ID: Hi all, After setting property "hw:mem_page_size=large" to flavor, the newly created VM can get IP from DHCP and ping other VM successfully. And NUMA related sections exist in the domain XML file (new domain XML mem_page_size.xml is attached). 
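(For reference, a minimal sketch of the flavor setup assumed above; the flavor, image and network names are only illustrative and are not taken from the actual test steps:)

  openstack flavor create --vcpus 1 --ram 2048 --disk 20 m1.dpdk
  openstack flavor set m1.dpdk --property hw:mem_page_size=large
  openstack server create --flavor m1.dpdk --image cirros --network net0 vm-dpdk

With hw:mem_page_size=large the guest memory is backed by huge pages, which is what the vhost-user ports created by OVS-DPDK require.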
My steps are listed in the bug report: https://bugs.launchpad.net/starlingx/+bug/1820378 I think it’s better to modify the installation guide to include how to create VM on different environment (OVS/OVSDPDK). Please let me know your idea. Best Regards Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Thursday, April 4, 2019 3:21 AM To: Xu, Chenjie ; Liu, ZhipengS ; He, Yongli Cc: 'starlingx-discuss at lists.starlingx.io' ; Khalil, Ghada ; Zhao, Forrest ; Rowsell, Brent ; Gauld, James ; Le, Huifeng ; Martinez Monroy, Elio ; Perez, Ricardo O ; Cabrales, Ada ; Lin, Shuicheng Subject: Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hello Folks, Thanks to Chris Friesen for point this out, but we believe the issues you are experiencing is due to the requirement for guests to be backed by huge pages to operate with OVS-DPDK vhost-user based ports/interfaces. The master (default) behavior for the latest nova will default to 4K pages for the guest, but this is not compatible with OVS-DPDK. The guests must be configured to use a flavor that has the property hw:mem_page_size=large set. You can follow this link to read more about the requirements on the guests for OVS-DPDK: https://docs.openstack.org/neutron/rocky/admin/config-ovs-dpdk.html “vhost-user requires file descriptor-backed shared memory. Currently, the only way to request this is by requesting large pages. This is why instances spawned on hosts with OVS-DPDK must request large pages”. Hope this helps. -Matt From: "Xu, Chenjie" > Date: Tuesday, April 2, 2019 at 11:48 PM To: "Peters, Matt" >, "Liu, ZhipengS" >, "He, Yongli" > Cc: "'starlingx-discuss at lists.starlingx.io'" >, Ghada Khalil >, "Zhao, Forrest" >, Brent Rowsell >, "Gauld, James" >, "Le, Huifeng" >, "Martinez Monroy, Elio" >, "Perez, Ricardo O" >, "Cabrales, Ada" >, "Lin, Shuicheng" > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, Zhipeng is working on “PCI Affinity dependency on Nova NUMA topology”. This may relate to BUG “VM can't send packet through vhostuser port due to missing numa settings in domain xml”: https://bugs.launchpad.net/starlingx/+bug/1820378 According to him, after cutting over to nova master, the NUMA topology and PCI device info has been removed from nova. Before nova master, StarlingX uses nova stage which has NUMA topology. Some detailed information are listed below: • PCI Affinity dependency on Nova NUMA topology - Zhipeng o This affinity agent could not get both pci_device info and numa info of server from nova. o In nova stage version, we added below two for server. o server["wrs-res:topology"] o server["wrs-res:pci_devices"] o In nova master, no these attributions for server. o For topology, I can see that there is a patch of adding numa topology pending for merge o https://review.openstack.org/#/c/621476 Add server sub-resource topology API o Next step - Zhipeng to investigate alternative implementations that don't have dependencies on Nova. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 11:50 PM To: Xu, Chenjie >; Martinez Monroy, Elio >; Peters, Matt >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent >; Gauld, James > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Thanks Chenjie. Can you please run the following on your compute nodes and attach the output? 
sudo lscpu sudo virsh capabilities Richardo / Juan P, Please provide the cpu models and the above output from your two hardware systems as well. Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Tuesday, April 02, 2019 2:55 AM To: Khalil, Ghada; Martinez Monroy, Elio; Peters, Matt; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Ghada, Compute-0 and compute-1 don’t have skylake processors. The processors both are: Intel(R) Xeon(R) CPU E5-2680 0 @ 2.70GHz Compute-0 and compute-1’s server model both are: Manufacturer: Intel Corporation Product Name: S2600JF Sub-NUMA Clustering doesn’t exist in my BIOS. Only NUMA Optimized exists. An image for “Memory RAS and Performance Configuration” has been attached. Best Regards, Xu, Chenjie From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Tuesday, April 2, 2019 6:52 AM To: Xu, Chenjie >; Martinez Monroy, Elio >; Peters, Matt >; Lin, Shuicheng >; Cabrales, Ada >; Perez, Ricardo O > Cc: 'starlingx-discuss at lists.starlingx.io' >; Zhao, Forrest >; Rowsell, Brent > Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Chenjie, Can you let us know the model of your server? Does it have skylake processors? Can you also double-check that you have Sub-NUMA Clustering disabled in the BIOS? On a wolfpass, the BIOS setting is at: Enter Setup > Advanced > Memory Configuration > Memory RAS and Performance Configuration Sub-NUMA Cluster Thanks, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Wednesday, March 27, 2019 9:59 PM To: Martinez Monroy, Elio; Peters, Matt; Khalil, Ghada; Lin, Shuicheng; Cabrales, Ada; Perez, Ricardo O Cc: 'starlingx-discuss at lists.starlingx.io'; Zhao, Forrest; Rowsell, Brent Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi Matt, I run the following commands on compute-0. The outputs of commands have been attached. sudo /usr/sbin/dmidecode > dmidecode.txt virsh nodeinfo > nodeinfo.txt /usr/bin/topology > topology.txt grep -i numa /var/log/dmesg > dmesg.txt And the StarlingX is installed with standard 0322 ISO image: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190322T174230Z/outputs/iso/ Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: mem_page_size.xml Type: text/xml Size: 5239 bytes Desc: mem_page_size.xml URL: From Daniel.Badea at windriver.com Mon Apr 8 12:23:09 2019 From: Daniel.Badea at windriver.com (Badea, Daniel) Date: Mon, 8 Apr 2019 12:23:09 +0000 Subject: [Starlingx-discuss] Discussion about StarlingX release notes in CEPH upgrade In-Reply-To: References: , Message-ID: <9174DAE490321844AE273F6AD001E3EA9D8526FA@ALA-MBD.corp.ad.wrs.com> Hi Frank, I looked at released notes put together by Tingjie and here are my notes: * Each OSD has a device class associated with it. Documentation: https://ceph.com/community/new-luminous-crush-device-classes/ . Notes: * Purpose: added to simplify OSD crush placement based on hardware properties reported by the kernel * We are currently using storage tiers to partition ceph storage pool access to faster or slower disks. When a new storage tier is created the entire crush tree hierarchy is cloned then OSDs can be attached to it. Pools are then configured to use the new crush tree root. 
* With Ceph Luminous there is no need to clone the entire crush tree when we want to create "faster" pools. The command to create a crush rule for a pool now supports a device-class parameter that can be used to filter OSDs based on their type: hdd, ssd or nvme. * If we are using multiple ceph tiers exclusively for partitioning OSDs based on their hardware characteristics then we can take advantage of the device-class feature but we also need to update the logic related to replication and storage node locking. OSDs of all classes will be anchored to one storage node whereas currently they are anchored to different crush trees. However there is no urgent reason to use the new feature now. We are already updating the crush map automatically and we can mix any kind of disks into a ceph tier (which is not possible when using device classes). * Simplified OSD replacement procedure. Documentation http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#replacing-an-osd . Notes: * Replacement procedure is based entirely on "ceph-volume" utility that we are not currently using. * The replacement procedure is not documented in Ceph Jewel so I can't tell what's "simplified". * Currently, when replacing a storage disk: * if puppet finds on disk a Ceph cluster signature that's different from the current one then it fails and storage node fails to unlock * if the signature matches current Ceph cluster then the disk is used as is * otherwise the disk is setup to be an OSD: ceph-disk prepare, ceph-disk activate, etc. * There is no reason to use the new OSD replacement procedure now. * Pools are expected to be associated with the application using them. Notes: * We already hit this issue. Fixed by running pool application enable. * Config options can now be centrally stored and managed by the monitor. Notes: * Not sure how this helps. Configuration is already managed by sysinv and puppet. * RGW now supports data compression for objects. Notes: * We may want to expose this configuration option via system service parameters Best regards, Daniel B. ________________________________ From: Miller, Frank Sent: Thursday, April 04, 2019 23:40 To: Chen, Tingjie; Jones, Bruce E; Xie, Cindy; Poncea, Ovidiu; Badea, Daniel; Cabrales, Ada; Perez, Ricardo O; Hernandez Gonzalez, Fernando; Zhu, Vivian; Hu, Yong; Liu, Changcheng Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Discussion about StarlingX release notes in CEPH upgrade Tingjie: Thanks for putting this together as it gives a very good summary of the changes in the CEPH mimic version which is expected to merge in the near future into StarlingX. This list will be a good reference for those who will be running TCs for the new CEPH version. I have a couple of questions – would you be able to help me: 1. One of the notes indicates “There is a simplified OSD replacement process that is more robust.” · Can you explain what these changes are? · Will this result in any changes to the steps an operator takes to replace a CEPH disk? 2. Another note indicates “Several sleep settings, include osd_recovery_sleep, osd_snap_trim_sleep, and osd_scrub_sleep have been reimplemented to work efficiently.” · Can you share the settings used in StarlingX today with CEPH jewel as well as the planned settings that will be used in StarlingX with CEPH mimic. Will any of these settings change value when CEPH mimic is merged into StarlingX? 3. One more note indicates “CLI changes” · Can you explain which CLIs have changed? 
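(To make the device-class and pool-application notes above concrete, here is a sketch using Luminous/Mimic-era commands; the rule and pool names are only placeholders:)

  ceph osd crush rule create-replicated fast_rule default host ssd   # rule restricted to ssd-class OSDs
  ceph osd pool set my-pool crush_rule fast_rule                     # steer an existing pool to that rule
  ceph osd pool application enable my-pool rbd                       # tag the pool with the application using it

This lets a pool target faster disks without cloning the whole crush hierarchy the way the current storage-tier approach does.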
Frank From: Chen, Tingjie [mailto:tingjie.chen at intel.com] Sent: Wednesday, April 03, 2019 11:44 PM To: Jones, Bruce E ; Xie, Cindy ; Poncea, Ovidiu ; Badea, Daniel ; Cabrales, Ada ; Perez, Ricardo O ; Hernandez Gonzalez, Fernando ; Miller, Frank ; Zhu, Vivian ; Hu, Yong ; Liu, Changcheng Cc: starlingx-discuss at lists.starlingx.io Subject: Discussion about StarlingX release notes in CEPH upgrade Hi, I have file release notes for Ceph upgrade mimic. https://etherpad.openstack.org/p/stx-ceph-uprev-mimic-release-notes There are 2 parts, First one is Major changes, this is official changes from 10.2.6 (Jewel) -> 13.2.2 (Mimic), there are many changes to the three major version updates. Second one is known issues in StarlingX, this may expand after validation and system test if have non-block issues. Welcome to give your comments and concerns. Thanks, Tingjie SSG OTC NST Storage Tel: +86(21)88216699 Mobile: 15901876439 -------------- next part -------------- An HTML attachment was scrubbed... URL: From tingjie.chen at intel.com Mon Apr 8 15:16:15 2019 From: tingjie.chen at intel.com (Chen, Tingjie) Date: Mon, 8 Apr 2019 15:16:15 +0000 Subject: [Starlingx-discuss] Discussion about StarlingX release notes in CEPH upgrade In-Reply-To: <9174DAE490321844AE273F6AD001E3EA9D8526FA@ALA-MBD.corp.ad.wrs.com> References: , <9174DAE490321844AE273F6AD001E3EA9D8526FA@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Frank & Daniel, Following are my comments. For Frank, ------------------------------- 1/ Simplified OSD replacement process that is more robust. Daniel has explained in detail and precisely, for supplement, it is first introduced in Luminous: http://docs.ceph.com/docs/luminous/rados/operations/add-or-rm-osds/#replacing-an-osd but the process has evolution with new command: ceph-volume in Mimic: http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#replacing-an-osd 2/ Several sleep settings. It is a good question, the rework of sleep implementation is introduced in Luminous, and I will check the scenario for sleep setting later since I am deploy new image now. 3/ CLI changes, This also introduced initially in Luminous, and I have add the detail changes in etherpad review. https://etherpad.openstack.org/p/stx-ceph-uprev-mimic-release-notes For Daniel, --------------------------------- Thanks for your notes, it is practical in StarlingX deployment. 1/ Config options can now be centrally stored and managed by the monitor You can refer details: https://ceph.com/community/new-mimic-centralized-configuration-management/ And also it is a different case in containerize Ceph configuration management (no puppet). Thanks, Tingjie From: Badea, Daniel [mailto:Daniel.Badea at windriver.com] Sent: Monday, April 8, 2019 8:23 PM To: Miller, Frank ; Chen, Tingjie ; Jones, Bruce E ; Xie, Cindy ; Poncea, Ovidiu ; Cabrales, Ada ; Perez, Ricardo O ; Hernandez Gonzalez, Fernando ; Zhu, Vivian ; Hu, Yong ; Liu, Changcheng Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Discussion about StarlingX release notes in CEPH upgrade Hi Frank, I looked at released notes put together by Tingjie and here are my notes: * Each OSD has a device class associated with it. Documentation: https://ceph.com/community/new-luminous-crush-device-classes/ . Notes: * Purpose: added to simplify OSD crush placement based on hardware properties reported by the kernel * We are currently using storage tiers to partition ceph storage pool access to faster or slower disks. 
When a new storage tier is created the entire crush tree hierarchy is cloned then OSDs can be attached to it. Pools are then configured to use the new crush tree root. * With Ceph Luminous there is no need to clone the entire crush tree when we want to create "faster" pools. The command to create a crush rule for a pool now supports a device-class parameter that can be used to filter OSDs based on their type: hdd, ssd or nvme. * If we are using multiple ceph tiers exclusively for partitioning OSDs based on their hardware characteristics then we can take advantage of the device-class feature but we also need to update the logic related to replication and storage node locking. OSDs of all classes will be anchored to one storage node whereas currently they are anchored to different crush trees. However there is no urgent reason to use the new feature now. We are already updating the crush map automatically and we can mix any kind of disks into a ceph tier (which is not possible when using device classes). * Simplified OSD replacement procedure. Documentation http://docs.ceph.com/docs/mimic/rados/operations/add-or-rm-osds/#replacing-an-osd . Notes: * Replacement procedure is based entirely on "ceph-volume" utility that we are not currently using. * The replacement procedure is not documented in Ceph Jewel so I can't tell what's "simplified". * Currently, when replacing a storage disk: * if puppet finds on disk a Ceph cluster signature that's different from the current one then it fails and storage node fails to unlock * if the signature matches current Ceph cluster then the disk is used as is * otherwise the disk is setup to be an OSD: ceph-disk prepare, ceph-disk activate, etc. * There is no reason to use the new OSD replacement procedure now. * Pools are expected to be associated with the application using them. Notes: * We already hit this issue. Fixed by running pool application enable. * Config options can now be centrally stored and managed by the monitor. Notes: * Not sure how this helps. Configuration is already managed by sysinv and puppet. * RGW now supports data compression for objects. Notes: * We may want to expose this configuration option via system service parameters Best regards, Daniel B. ________________________________ From: Miller, Frank Sent: Thursday, April 04, 2019 23:40 To: Chen, Tingjie; Jones, Bruce E; Xie, Cindy; Poncea, Ovidiu; Badea, Daniel; Cabrales, Ada; Perez, Ricardo O; Hernandez Gonzalez, Fernando; Zhu, Vivian; Hu, Yong; Liu, Changcheng Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Discussion about StarlingX release notes in CEPH upgrade Tingjie: Thanks for putting this together as it gives a very good summary of the changes in the CEPH mimic version which is expected to merge in the near future into StarlingX. This list will be a good reference for those who will be running TCs for the new CEPH version. I have a couple of questions - would you be able to help me: 1. One of the notes indicates "There is a simplified OSD replacement process that is more robust." * Can you explain what these changes are? * Will this result in any changes to the steps an operator takes to replace a CEPH disk? 2. Another note indicates "Several sleep settings, include osd_recovery_sleep, osd_snap_trim_sleep, and osd_scrub_sleep have been reimplemented to work efficiently." * Can you share the settings used in StarlingX today with CEPH jewel as well as the planned settings that will be used in StarlingX with CEPH mimic. 
Will any of these settings change value when CEPH mimic is merged into StarlingX? 3. One more note indicates "CLI changes" * Can you explain which CLIs have changed? Frank From: Chen, Tingjie [mailto:tingjie.chen at intel.com] Sent: Wednesday, April 03, 2019 11:44 PM To: Jones, Bruce E >; Xie, Cindy >; Poncea, Ovidiu >; Badea, Daniel >; Cabrales, Ada >; Perez, Ricardo O >; Hernandez Gonzalez, Fernando >; Miller, Frank >; Zhu, Vivian >; Hu, Yong >; Liu, Changcheng > Cc: starlingx-discuss at lists.starlingx.io Subject: Discussion about StarlingX release notes in CEPH upgrade Hi, I have file release notes for Ceph upgrade mimic. https://etherpad.openstack.org/p/stx-ceph-uprev-mimic-release-notes There are 2 parts, First one is Major changes, this is official changes from 10.2.6 (Jewel) -> 13.2.2 (Mimic), there are many changes to the three major version updates. Second one is known issues in StarlingX, this may expand after validation and system test if have non-block issues. Welcome to give your comments and concerns. Thanks, Tingjie SSG OTC NST Storage Tel: +86(21)88216699 Mobile: 15901876439 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Mon Apr 8 15:58:04 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 8 Apr 2019 15:58:04 +0000 Subject: [Starlingx-discuss] Sanity updates Message-ID: As also mentioned on today's containerization call, Don Penney has now updated the docker builds to use the OpenStack Stein branch and Angie Wang has a commit to switch the system application-upload stx-openstack command to pull the "stable" aka Stein docker images. Once this commit is merged today, the recent sanity issues should be addressed: https://review.openstack.org/#/c/650436/ Frank From: Miller, Frank Sent: Friday, April 05, 2019 11:29 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Sanity updates An update on the 2nd issue where VMs fail to launch: Gerry has confirmed the issue is due to using a nova docker image from OpenStack master. Don Penney is updating the docker image builds to use the OpenStack Stein branches. After re-testing confirms these docker images are sane, he will work to switch the CENGN builds over to the Stein branches. Frank From: Miller, Frank Sent: Wednesday, April 03, 2019 10:48 AM To: 'starlingx-discuss at lists.starlingx.io' > Subject: Sanity updates Folks: I took an action on the containers community call to send out an update on the current sanity issues. 1. AIO-SX: This configuration should now be ready for use. * Bart Wensley solved LP 1820928 which turned out to be a bug in kubelet where it was hitting a limit of 250 http2 streams in a single connection. 2. Other multi-server configs: An intermittent issue still exists when launching VMs resulting in the VMs failing to be scheduled. Tracked under LPs 1821841 & 1822116 * Gerry Kopec continues to investigate intermittent issues with the nova-placement pod. When issue occurs VMs cannot be launched. * Issue is with nova-compute unable to get requests processed to one of the nova-placement pods running on each controller. * Current theory is this is related to our docker images using OpenStack master and a recent nova commit in the nova placement area is impacting the placement pod. Gerry expects to prove or disprove the theory later today. Frank -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Chris.Winnicki at windriver.com Mon Apr 8 17:14:47 2019 From: Chris.Winnicki at windriver.com (Winnicki, Chris) Date: Mon, 8 Apr 2019 17:14:47 +0000 Subject: [Starlingx-discuss] how to modify nova.conf? In-Reply-To: References: Message-ID: <7E4792BA14B1DE4BAB354DF77FE0233ABC8AD74C@ALA-MBD.corp.ad.wrs.com> StarlingX is a managed system; modifying conf/ini files manually is not supported. All system configuration changes should be done via: nova, openstack, neutron, system, cinder, etc.... Regards -Chris ________________________________ From: 张鲲鹏 [zhang.kunpeng at 99cloud.net] Sent: Monday, April 08, 2019 1:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] how to modify nova.conf? Hi All, In some case, I need to modify /etc/nova/nova.conf. But it will recover to original after reboot. Are there some way to persist nova.conf? Thanks Kunpeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Mon Apr 8 17:18:13 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Mon, 8 Apr 2019 17:18:13 +0000 Subject: [Starlingx-discuss] StarlingX Release Meeting - Test Input for Revised Release Dates Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A2500A@ALA-MBD.corp.ad.wrs.com> In last week's release team meeting, we agreed to hold a meeting, before next week's TSC meeting, to collect input from the Test team on the 'long pole' items in the plan. The meeting will be at 4:30pm EDT (8:30pm UTC), see [0]. We'll use the usual Zoom bridge [1]. Bill... [0] Meeting start time in various time zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190410T2030 [1] Zoom Link: https://zoom.us/j/342730236 [2] Release Team etherpad: agenda/minutes: https://etherpad.openstack.org/p/stx-releases From Gerry.Kopec at windriver.com Mon Apr 8 20:24:09 2019 From: Gerry.Kopec at windriver.com (Kopec, Gerald (Gerry)) Date: Mon, 8 Apr 2019 20:24:09 +0000 Subject: [Starlingx-discuss] how to modify nova.conf? In-Reply-To: <7E4792BA14B1DE4BAB354DF77FE0233ABC8AD74C@ALA-MBD.corp.ad.wrs.com> References: <7E4792BA14B1DE4BAB354DF77FE0233ABC8AD74C@ALA-MBD.corp.ad.wrs.com> Message-ID: <58CF5BABC9A76946A638A0E8AE48D1737182E3D1@ALA-MBD.corp.ad.wrs.com> For quick designer testing, you can modify nova.conf by altering /opt/platform/armada/19.01/stx-openstack-manifest.yaml on the active controller. Find the openstack-nova schema section and then alter values in data.values.conf.nova: schema: armada/Chart/v1 metadata: schema: metadata/Document/v1 name: openstack-nova data: chart_name: nova release: openstack-nova namespace: openstack … values: … conf: ceph: enabled: true nova: DEFAULT: default_mempages_size: 2048 reserved_host_memory_mb: 0 compute_monitors: cpu.virt_driver Then restart the appropriate pods from platform cli via: system application-apply Gerry From: Winnicki, Chris [mailto:Chris.Winnicki at windriver.com] Sent: Monday, April 08, 2019 1:15 PM To: 张鲲鹏; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] how to modify nova.conf? StarlingX is a managed system; modifying conf/ini files manually is not supported. All system configuration changes should be done via: nova, openstack, neutron, system, cinder, etc.... Regards -Chris ________________________________ From: 张鲲鹏 [zhang.kunpeng at 99cloud.net] Sent: Monday, April 08, 2019 1:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] how to modify nova.conf? Hi All, In some case, I need to modify /etc/nova/nova.conf. 
But it will recover to original after reboot. Are there some way to persist nova.conf? Thanks Kunpeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From yang.liu at windriver.com Mon Apr 8 20:41:41 2019 From: yang.liu at windriver.com (Liu, Yang) Date: Mon, 8 Apr 2019 20:41:41 +0000 Subject: [Starlingx-discuss] how to modify nova.conf? In-Reply-To: <58CF5BABC9A76946A638A0E8AE48D1737182E3D1@ALA-MBD.corp.ad.wrs.com> References: <7E4792BA14B1DE4BAB354DF77FE0233ABC8AD74C@ALA-MBD.corp.ad.wrs.com> <58CF5BABC9A76946A638A0E8AE48D1737182E3D1@ALA-MBD.corp.ad.wrs.com> Message-ID: <19C65A6E92EA384D809B1772130CD7F8621A5D29@ALA-MBD.corp.ad.wrs.com> Or update the helm override. Example: system helm-override-update --set conf.nova.DEFAULT.default_mempages_size= nova openstack system application-apply stx-openstack From: Kopec, Gerald (Gerry) [mailto:Gerry.Kopec at windriver.com] Sent: April-08-19 4:24 PM To: 张鲲鹏 Cc: Winnicki, Chris; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] how to modify nova.conf? For quick designer testing, you can modify nova.conf by altering /opt/platform/armada/19.01/stx-openstack-manifest.yaml on the active controller. Find the openstack-nova schema section and then alter values in data.values.conf.nova: schema: armada/Chart/v1 metadata: schema: metadata/Document/v1 name: openstack-nova data: chart_name: nova release: openstack-nova namespace: openstack … values: … conf: ceph: enabled: true nova: DEFAULT: default_mempages_size: 2048 reserved_host_memory_mb: 0 compute_monitors: cpu.virt_driver Then restart the appropriate pods from platform cli via: system application-apply Gerry From: Winnicki, Chris [mailto:Chris.Winnicki at windriver.com] Sent: Monday, April 08, 2019 1:15 PM To: 张鲲鹏; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] how to modify nova.conf? StarlingX is a managed system; modifying conf/ini files manually is not supported. All system configuration changes should be done via: nova, openstack, neutron, system, cinder, etc.... Regards -Chris ________________________________ From: 张鲲鹏 [zhang.kunpeng at 99cloud.net] Sent: Monday, April 08, 2019 1:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] how to modify nova.conf? Hi All, In some case, I need to modify /etc/nova/nova.conf. But it will recover to original after reboot. Are there some way to persist nova.conf? Thanks Kunpeng -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Mon Apr 8 22:42:48 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 8 Apr 2019 22:42:48 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190407 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-07 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44 TCs] [Fail : 13 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44 TCs] [Fail : 13 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 10 TCs FAIL Sanity Platform 07 TCs [PASS] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44 TCs] [Fail : 13 Tcs] ------------------------------------------------------------------ We had internal deployment problems in our virtual environments No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 -------------------------------------------------------------------- Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chris.friesen at windriver.com Mon Apr 8 23:33:17 2019 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 8 Apr 2019 17:33:17 -0600 Subject: [Starlingx-discuss] how to modify nova.conf? In-Reply-To: <19C65A6E92EA384D809B1772130CD7F8621A5D29@ALA-MBD.corp.ad.wrs.com> References: <7E4792BA14B1DE4BAB354DF77FE0233ABC8AD74C@ALA-MBD.corp.ad.wrs.com> <58CF5BABC9A76946A638A0E8AE48D1737182E3D1@ALA-MBD.corp.ad.wrs.com> <19C65A6E92EA384D809B1772130CD7F8621A5D29@ALA-MBD.corp.ad.wrs.com> Message-ID: <49ba3354-0023-9d2d-9bda-454f06c70b40@windriver.com> This is the officially-supported way of changing the configuration. Chris On 4/8/2019 2:41 PM, Liu, Yang wrote: > > Or update the helm override. Example: > > system helm-override-update --set > conf.nova.DEFAULT.default_mempages_size= nova openstack > > system application-apply stx-openstack > > *From:*Kopec, Gerald (Gerry) [mailto:Gerry.Kopec at windriver.com] > *Sent:* April-08-19 4:24 PM > *To:* 张鲲鹏 > *Cc:* Winnicki, Chris; starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] how to modify nova.conf? > > For quick designer testing, you can modify nova.conf by altering > /opt/platform/armada/19.01/stx-openstack-manifest.yaml on the active > controller. 
> > Find the openstack-nova schema section and then alter values in > data.values.conf.nova: > > schema: armada/Chart/v1 > > metadata: > > schema: metadata/Document/v1 > > name: openstack-nova > > data: > > chart_name: nova > > release: openstack-nova > > namespace: openstack > > … > > values: > > … > > conf: > > ceph: > > enabled: true > > nova: > > DEFAULT: > >          default_mempages_size: 2048 > > reserved_host_memory_mb: 0 > > compute_monitors: cpu.virt_driver > > Then restart the appropriate pods from platform cli via: system > application-apply > > Gerry > > *From:*Winnicki, Chris [mailto:Chris.Winnicki at windriver.com] > *Sent:* Monday, April 08, 2019 1:15 PM > *To:* 张鲲鹏; starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] how to modify nova.conf? > > StarlingX is a managed system; modifying conf/ini  files manually is > not supported. > All system configuration changes should be done via: nova, openstack, > neutron, system, cinder, etc.... > > Regards > > -Chris > > ------------------------------------------------------------------------ > > *From:*张鲲鹏[zhang.kunpeng at 99cloud.net] > *Sent:* Monday, April 08, 2019 1:09 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] how to modify nova.conf? > > Hi All, > > In some case, I need to modify /etc/nova/nova.conf. But it > will recover to > > original after reboot. Are there some way to *persist nova.conf?* > > *Thanks* > > *Kunpeng* > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Mon Apr 8 23:40:13 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 8 Apr 2019 23:40:13 +0000 Subject: [Starlingx-discuss] [ Test ] meeting agenda - 4/9/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC0CFB@FMSMSX114.amr.corp.intel.com> Test meeting agenda - 4/9 1. Test plan status: Containers - Jose, Numan OpenStack Patch elimination - JC, Numan Distributed cloud - Numan 2. Automation status - Elio 3. Opens - All -- Regards Ada From Gerry.Kopec at windriver.com Tue Apr 9 00:02:33 2019 From: Gerry.Kopec at windriver.com (Kopec, Gerald (Gerry)) Date: Tue, 9 Apr 2019 00:02:33 +0000 Subject: [Starlingx-discuss] stx-nova repo changes for upstream tracking In-Reply-To: References: <7d1d0ad2-2277-e188-7121-3067e94dcb95@windriver.com> Message-ID: <58CF5BABC9A76946A638A0E8AE48D1737182E495@ALA-MBD.corp.ad.wrs.com> I'm ok with new branches per rebase. We're not rebasing that often so I think it's manageable. Gerry -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Monday, April 08, 2019 9:12 AM To: starlingx Subject: Re: [Starlingx-discuss] stx-nova repo changes for upstream tracking On Fri, Apr 5, 2019 at 3:35 PM Chris Friesen wrote: > I like the idea of rebasing periodically to keep our changes "on top". > > Rather than force-pushing, it might make sense to create a new branch > for each of these rebases. That way we don't need to rewrite history. We could do that, it would mean updating manifest files or whatever else points to the right branch each time and be one more thing to track for debugging. I had considered renaming the prior stx/stein branch and creating a new one, the effect is the same as a force push but it preserves that bit of history. 
I don't have much invested in either option, but I lean toward always building from stx/stein. Opinions from those who this would affect more directly? dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Tue Apr 9 03:55:19 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 8 Apr 2019 23:55:19 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_wheels - Build # 77 - Failure! Message-ID: <1619465611.103.1554782120022.JavaMail.javamailuser@localhost> Project: STX_build_wheels Build #: 77 Status: Failure Timestamp: 20190409T034253Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190408T233001Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190408T233001Z OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190408T233001Z/logs OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190408T233001Z/logs From build.starlingx at gmail.com Tue Apr 9 03:55:22 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 8 Apr 2019 23:55:22 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 78 - Failure! Message-ID: <2061048675.106.1554782123449.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 78 Status: Failure Timestamp: 20190409T033910Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190408T233001Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190408T233001Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190408T233001Z/logs MASTER_BUILD_NUMBER: 55 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190408T233001Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos PUBLISH_TIMESTAMP: 20190408T233001Z DOCKER_BUILD_ID: jenkins-master-20190408T233001Z-builder TIMESTAMP: 20190408T233001Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190408T233001Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190408T233001Z/outputs From build.starlingx at gmail.com Tue Apr 9 03:55:25 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 8 Apr 2019 23:55:25 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 55 - Failure! 
Message-ID: <1266599862.109.1554782127046.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 55 Status: Failure Timestamp: 20190408T233001Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190408T233001Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From Bill.Zvonar at windriver.com Tue Apr 9 10:11:20 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 9 Apr 2019 10:11:20 +0000 Subject: [Starlingx-discuss] StarlingX Release Meeting - Test Input for Revised Release Dates In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC0A2500A@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC0A2500A@ALA-MBD.corp.ad.wrs.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A26715@ALA-MBD.corp.ad.wrs.com> Neglected to mention that this meeting is happening on Wednesday (April 10). -----Original Message----- From: Zvonar, Bill Sent: Monday, April 8, 2019 1:18 PM To: starlingx-discuss at lists.starlingx.io Cc: Khalil, Ghada ; Jones, Bruce E ; Cabrales, Ada ; Waheed, Numan Subject: [Starlingx-discuss] StarlingX Release Meeting - Test Input for Revised Release Dates In last week's release team meeting, we agreed to hold a meeting, before next week's TSC meeting, to collect input from the Test team on the 'long pole' items in the plan. The meeting will be at 4:30pm EDT (8:30pm UTC), see [0]. We'll use the usual Zoom bridge [1]. Bill... [0] Meeting start time in various time zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190410T2030 [1] Zoom Link: https://zoom.us/j/342730236 [2] Release Team etherpad: agenda/minutes: https://etherpad.openstack.org/p/stx-releases _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Tue Apr 9 13:03:10 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 9 Apr 2019 13:03:10 +0000 Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack Distro meeting In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F04EDA@SHSMSX104.ccr.corp.intel.com> Agenda for 4/10 meeting: - Ceph upgrade status: 1. patch review status (Daniel) https://review.openstack.org/#/q/topic:ceph-mimic-upgrade+(status:open+OR+status:merged) 2. Ceph dev build validation status (Fernando) 3. Release notes review/update (Tingjie/Daniel) - QAT driver upgrade (Haitao) - Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Waheed, Numan; Hu, Yong; starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Shang, Dehao; Wold, Saul; Lin, Shuicheng; Zhu, Vivian; Somerville, Jim; Sun, Austin; 'Khalil, Ghada'; Jones, Bruce E; Troyer, Dean; 'Rowsell, Brent' Cc: 'Poncea, Ovidiu'; Gomez, Juan P; Lara, Cesar; Fang, Liang A; Cobbley, David A; 'Chen, Jacky'; Perez Rodriguez, Humberto I; Martinez Monroy, Elio; 'Seiler, Glenn'; Arce Moreno, Abraham; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Shuquan Huang; 'Young, Ken'; Perez Carranza, Jose; Hu, Wei W; Martinez Landa, Hayde; Armstrong, Robert H Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, April 10, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). 
Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From bruce.e.jones at intel.com Tue Apr 9 13:27:36 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 9 Apr 2019 13:27:36 +0000 Subject: [Starlingx-discuss] Distro.openstack meeting Apr 10 2019 Message-ID: <9A85D2917C58154C960D95352B22818BD071A92B@fmsmsx123.amr.corp.intel.com> Meeting notes and agenda for the 4/10 meeting * We took a decision at the release planning meeting to integrate the partially complete NUMA live migration patches from Artom into the f/stein branch to de-risk the feature. Gerry to backport and add his fixes. Bill to get an update on the status of this. * Are there any other changes from the upstream list that are also that important? * Dean's email from 4/5: * I just finished resetting the stx-nova repo [0] to track upstream nova: * * the old master branch is now stx/old-master for reference * * master branch is a snapshot of upstream master as of about 30 min ago * * stable/stein branch is a snapshot of upstream stable/stein as of about 30 min ago * * stx/stein is our working copy of stable/stein and where anything we backport should land. * Big Note: I am thinking about keeping a policy of periodically rebasing stx/stein on stable/stein to keep a clear history as we move forward, making it easier to see what we have added. That possibly means doing it next week when the final stein tag is added. Thoughts? * Force pushes can be inconvenient for developers but I am thinking the price may be worth the return on a wider scale. * Chris replied: * I like the idea of rebasing periodically to keep our changes "on top". * Rather than force-pushing, it might make sense to create a new branch for each of these rebases. That way we don't need to rewrite history. * We agreed that we would create a new branch every time we pick up a new change, picking up the new upstream every time (even with other changes). This requires a build change every time but is consistent with how we are handling other similar packages e.g. Ceph. New branches to be f/stein.1/.2/.3 etc... * Have the NUMA changes from upstream been backported to the branch? Any links to reviews or stories? Bill to provide from Gerry. * 99 Cloud sent an email update: * AR Bruce to ping Eric on getting eyeballs on rdb disk reviews and a couple others that look ready to go * Bruce to update the master spreadsheet from the email update * Shuquan to check if the fixes for "Fix stale RequestSpec instance numa topology for live-migration" are in the Stein branch (or need to be backported) * Dean to create a f/stein.1 branch for the NUMA live migration backport from Gerry. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ada.cabrales at intel.com Tue Apr 9 16:57:47 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 9 Apr 2019 16:57:47 +0000 Subject: [Starlingx-discuss] [ Test ] meeting agenda - 4/9/2019 In-Reply-To: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC0CFB@FMSMSX114.amr.corp.intel.com> References: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC0CFB@FMSMSX114.amr.corp.intel.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC1426@FMSMSX114.amr.corp.intel.com> Testing meeting will begin 5 min late. Sorry for the delay Ada > -----Original Message----- > From: Cabrales, Ada [mailto:ada.cabrales at intel.com] > Sent: Monday, April 8, 2019 6:40 PM > To: 'starlingx-discuss at lists.starlingx.io' > Subject: [Starlingx-discuss] [ Test ] meeting agenda - 4/9/2019 > > Test meeting agenda - 4/9 > > 1. Test plan status: > Containers - Jose, Numan > OpenStack Patch elimination - JC, Numan > Distributed cloud - Numan > > 2. Automation status - Elio > > 3. Opens - All > > -- > Regards > Ada > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Tue Apr 9 20:33:17 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 9 Apr 2019 20:33:17 +0000 Subject: [Starlingx-discuss] Community Call (April 10, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A26E51@ALA-MBD.corp.ad.wrs.com> Reminder of tomorrow's Community call - please feel free to add to the agenda at [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1500_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190410T1400 From scott.little at windriver.com Tue Apr 9 20:39:31 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 9 Apr 2019 16:39:31 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_wheels - Build # 77 - Failure! In-Reply-To: <1619465611.103.1554782120022.JavaMail.javamailuser@localhost> References: <1619465611.103.1554782120022.JavaMail.javamailuser@localhost> Message-ID: <51f8f388-0d8a-afa6-7ca1-45038917691a@windriver.com> Another transient network issue. Successful when the job was re-run Opened https://bugs.launchpad.net/starlingx/+bug/1823986 . On 2019-04-08 11:55 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_wheels > Build #: 77 > Status: Failure > Timestamp: 20190409T034253Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190408T233001Z/logs > -------------------------------------------------------------------------------- > Parameters > > MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190408T233001Z > OS: centos > MY_REPO: /localdisk/designer/jenkins/master/cgcs-root > PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190408T233001Z/logs > OS_VERSION: 7.5.1804 > BUILD_STREAM: stable > PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190408T233001Z/logs > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cboylan at sapwetik.org Tue Apr 9 21:09:05 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 09 Apr 2019 17:09:05 -0400 Subject: [Starlingx-discuss] =?utf-8?q?Upgrading_the_lists=2E=28openstack?= =?utf-8?q?=7Cairshipit=7Cstarlingx=7Czuul-ci=29=2Eorg_server_Friday_April?= =?utf-8?q?_12?= Message-ID: It is that time of the Ubuntu LTS cycle again and we need to upgrade our mailman mailing list server. We'd like to do that this Friday, April 12. We expect the upgrade to begin at about 17:00UTC and result in a 30-45 minute outage. The reason for the extended outage is that we will be upgrading the server in place to preserve its mail reputation. Thankfully email is a persistent system and your clients should queue up email sent until the server is back up again and accepting smtp connections. This means the outage shouldn't be very noticeable. Finally, for list admins, there are new DMARC moderation action settings. We ask that you don't change these settings and instead work with us if you need to address DMARC problems. Our current preference is that we pass email through unmodified so that the signatures still validate. Thank you all for your patience and feel free to ask us questions, Clark From maria.g.perez.ibarra at intel.com Tue Apr 9 23:14:34 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 9 Apr 2019 23:14:34 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190408 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-08 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] -------------------------------------------------------------------------------- No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Wed Apr 10 00:04:48 2019 From: serverascode at gmail.com (Curtis) Date: Tue, 9 Apr 2019 20:04:48 -0400 Subject: [Starlingx-discuss] ipxe boot iso? Message-ID: Hi All, Out of curiosity has anyone ipxe booted stx from the ISO? 
I'm doing a bit of testing in trying to get stx installed on baremetal packet.com nodes and will need to ipxe boot. I thought I'd ask before I went too far into working on it. The easy way of using "kernel https://boot.netboot.xyz/memdisk iso raw" and the ISO via http did not work (it ran out of memory), so it might have to get a little more complicated. :)

Thanks,
Curtis
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From zhipengs.liu at intel.com Wed Apr 10 01:43:11 2019
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Wed, 10 Apr 2019 01:43:11 +0000
Subject: Re: [Starlingx-discuss] pep 8 issue need to be fixed
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA46DA46@ALA-MBD.corp.ad.wrs.com>
References: <93814834B4855241994F290E959305C753057919@SHSMSX104.ccr.corp.intel.com> <20190329124308.aufoetl6ppunxour@yuggoth.org> <6703202FD9FDFF4A8DA9ACF104AE129FBA46DA46@ALA-MBD.corp.ad.wrs.com>
Message-ID: <93814834B4855241994F290E959305C753067D65@SHSMSX104.ccr.corp.intel.com>

Hi Penney,

When I update my patch, it reports new pep8 issues for files that I have not changed. The errors are below.

E302 expected 2 blank lines, found 1
E126 continuation line over-indented for hanging indent
E127 continuation line over-indented for visual indent
E128 continuation line under-indented for visual indent
E305 expected 2 blank lines after class or function definition, found 1

What's your proposal: fix them, or disable these errors?

Thanks!
Zhipeng

-----Original Message-----
From: Penney, Don [mailto:Don.Penney at windriver.com]
Sent: March 29, 2019 22:07
To: Jeremy Stanley ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] pep 8 issue need to be fixed

Al has posted a review for a quick fix to resolve the stx-integ issues:
https://review.openstack.org/#/c/648694/1

-----Original Message-----
From: Jeremy Stanley [mailto:fungi at yuggoth.org]
Sent: Friday, March 29, 2019 8:43 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] pep 8 issue need to be fixed

On 2019-03-29 05:35:06 +0000 (+0000), Liu, ZhipengS wrote:
[...]
> When I update my patch for stx-integ, it reports the errors below, which
> cause "Verified -1 Zuul". Not sure why we have this issue now. It is a
> common issue that I also saw in other patches submitted recently. Do
> we need a new ticket to fix this pep8 issue or just disable B009 and
> B010? Any comment?
[...]

Your tox.ini in that repository seems to include the flake8-bugbear plugin (with no version specifier) in the deps list for testenv:pep8. According to https://pypi.org/project/flake8-bugbear/#history they released a new version yesterday (19.3.0). Their changelog indicates they introduced new checks B009, B010 and B011 in that release.

One tactic we take to avoid having these sorts of issues in OpenStack is for projects to pick what versions of static analysis tools they're going to use at the start of a release cycle and pin them, like flake8-bugbear<19, for the duration of that cycle, then advance the version pin at the start of the next cycle to bring in the latest versions and fix whatever those have started warning about. This minimizes the chances of disruption at later points in the cycle, when they may more significantly detract from things like release-focused work.
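A minimal tox.ini sketch of the pinning tactic described just above. The section contents here are illustrative only and are not the actual stx-integ tox.ini; only the flake8-bugbear<19 pin is taken from the discussion.

    # Pin static-analysis plugins at the start of a cycle; advance the pin
    # deliberately at the next cycle boundary instead of letting a new release
    # (such as flake8-bugbear 19.3.0) start failing the gate mid-cycle.
    [testenv:pep8]
    basepython = python3
    deps =
        flake8
        flake8-bugbear<19
    commands =
        flake8 {posargs}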
-- Jeremy Stanley _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcel at schaible-consulting.de Wed Apr 10 10:08:08 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Wed, 10 Apr 2019 12:08:08 +0200 (CEST) Subject: [Starlingx-discuss] stx-application fails to install Message-ID: <1781629218.298654.1554890888375@communicator.strato.com> Hi, I have tried with the image from 20190325 a simplex instalation. During the install of stx-openstack I'll get the error below. Any idea what the root cause of this problem is and how to solve it? Thanks Marcel 2019-04-09 18:08:39.869 86 INFO armada.handlers.armada [-] Install completed with results from Tiller: {'version': 1, 'namespace': 'openstack', 'release': 'osh-openstack-panko', 'status': 'DEPLOYED', 'description': 'Install complete'} 2019-04-09 18:08:39.870 86 INFO armada.handlers.armada [-] Processing Chart, release=osh-openstack-ceilometer 2019-04-09 18:08:39.870 86 INFO armada.handlers.chartbuilder [-] Building dependency chart helm-toolkit for release openstack-ceilometer. 2019-04-09 18:08:39.891 86 INFO armada.handlers.armada [-] Installing release osh-openstack-ceilometer in namespace openstack 2019-04-09 18:08:39.892 86 INFO armada.handlers.armada [-] Beginning Install, wait=True, timeout=1800s 2019-04-09 18:08:39.962 86 INFO armada.handlers.tiller [-] Helm install release: wait=True, timeout=1800 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller [-] Error while installing release osh-openstack-ceilometer: grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release osh-openstack-ceilometer failed: timed out waiting for the condition)> 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller Traceback (most recent call last): 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 401, in install_release 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller metadata=self.metadata) 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 487, in __call__ 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller return _end_unary_response_blocking(state, call, False, deadline) 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 437, in _end_unary_response_blocking 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller raise _Rendezvous(state, None, None, deadline) 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release osh-openstack-ceilometer failed: timed out waiting for the condition)> 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller 2019-04-09 18:38:40.792 86 DEBUG armada.handlers.tiller [-] Helm getting release status for release=osh-openstack-ceilometer, version=0 get_release_status /usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py:467 2019-04-09 18:38:41.203 86 DEBUG armada.handlers.tiller [-] GetReleaseStatus= name: "osh-openstack-ceilometer" info { status { code: FAILED } first_deployed { seconds: 1554833319 nanos: 978595007 } last_deployed { seconds: 1554833319 nanos: 978595007 } Description: "Release \"osh-openstack-ceilometer\" failed: timed out waiting for the condition" } namespace: "openstack" 
get_release_status /usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py:475 2019-04-09 18:38:41.204 86 ERROR armada.cli [-] Caught internal exception: armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: osh-openstack-ceilometer - Tiller Message: b'Release "osh-openstack-ceilometer" failed: timed out waiting for the condition' 2019-04-09 18:38:41.204 86 ERROR armada.cli Traceback (most recent call last): 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 401, in install_release 2019-04-09 18:38:41.204 86 ERROR armada.cli metadata=self.metadata) 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 487, in __call__ 2019-04-09 18:38:41.204 86 ERROR armada.cli return _end_unary_response_blocking(state, call, False, deadline) 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 437, in _end_unary_response_blocking 2019-04-09 18:38:41.204 86 ERROR armada.cli raise _Rendezvous(state, None, None, deadline) 2019-04-09 18:38:41.204 86 ERROR armada.cli grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release osh-openstack-ceilometer failed: timed out waiting for the condition)> 2019-04-09 18:38:41.204 86 ERROR armada.cli 2019-04-09 18:38:41.204 86 ERROR armada.cli During handling of the above exception, another exception occurred: 2019-04-09 18:38:41.204 86 ERROR armada.cli 2019-04-09 18:38:41.204 86 ERROR armada.cli Traceback (most recent call last): 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/cli/__init__.py", line 39, in safe_invoke 2019-04-09 18:38:41.204 86 ERROR armada.cli self.invoke() 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/cli/apply.py", line 217, in invoke 2019-04-09 18:38:41.204 86 ERROR armada.cli resp = armada.sync() 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/armada.py", line 472, in sync 2019-04-09 18:38:41.204 86 ERROR armada.cli timeout=timer) 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 414, in install_release 2019-04-09 18:38:41.204 86 ERROR armada.cli raise ex.ReleaseException(release, status, 'Install') 2019-04-09 18:38:41.204 86 ERROR armada.cli armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: osh-openstack-ceilometer - Tiller Message: b'Release "osh-openstack-ceilometer" failed: timed out waiting for the condition' 2019-04-09 18:38:41.204 86 ERROR armada.cli [root at controller-0 log(keystone_admin)]# From cindy.xie at intel.com Wed Apr 10 10:38:21 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 10 Apr 2019 10:38:21 +0000 Subject: [Starlingx-discuss] stx-application fails to install In-Reply-To: <1781629218.298654.1554890888375@communicator.strato.com> References: <1781629218.298654.1554890888375@communicator.strato.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F0882A@SHSMSX104.ccr.corp.intel.com> Hi, Marcel, Not sure if the bug we recently fixed are the same one you encountered: https://bugs.launchpad.net/starlingx/+bug/1820928 You can choose a newer ISO after 0403, and see if it addressed your issue. Thx. 
- cindy -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Wednesday, April 10, 2019 6:08 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] stx-application fails to install Hi, I have tried with the image from 20190325 a simplex instalation. During the install of stx-openstack I'll get the error below. Any idea what the root cause of this problem is and how to solve it? Thanks Marcel 2019-04-09 18:08:39.869 86 INFO armada.handlers.armada [-] Install completed with results from Tiller: {'version': 1, 'namespace': 'openstack', 'release': 'osh-openstack-panko', 'status': 'DEPLOYED', 'description': 'Install complete'} 2019-04-09 18:08:39.870 86 INFO armada.handlers.armada [-] Processing Chart, release=osh-openstack-ceilometer 2019-04-09 18:08:39.870 86 INFO armada.handlers.chartbuilder [-] Building dependency chart helm-toolkit for release openstack-ceilometer. 2019-04-09 18:08:39.891 86 INFO armada.handlers.armada [-] Installing release osh-openstack-ceilometer in namespace openstack 2019-04-09 18:08:39.892 86 INFO armada.handlers.armada [-] Beginning Install, wait=True, timeout=1800s 2019-04-09 18:08:39.962 86 INFO armada.handlers.tiller [-] Helm install release: wait=True, timeout=1800 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller [-] Error while installing release osh-openstack-ceilometer: grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release osh-openstack-ceilometer failed: timed out waiting for the condition)> 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller Traceback (most recent call last): 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 401, in install_release 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller metadata=self.metadata) 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 487, in __call__ 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller return _end_unary_response_blocking(state, call, False, deadline) 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 437, in _end_unary_response_blocking 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller raise _Rendezvous(state, None, None, deadline) 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release osh-openstack-ceilometer failed: timed out waiting for the condition)> 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller 2019-04-09 18:38:40.792 86 DEBUG armada.handlers.tiller [-] Helm getting release status for release=osh-openstack-ceilometer, version=0 get_release_status /usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py:467 2019-04-09 18:38:41.203 86 DEBUG armada.handlers.tiller [-] GetReleaseStatus= name: "osh-openstack-ceilometer" info { status { code: FAILED } first_deployed { seconds: 1554833319 nanos: 978595007 } last_deployed { seconds: 1554833319 nanos: 978595007 } Description: "Release \"osh-openstack-ceilometer\" failed: timed out waiting for the condition" } namespace: "openstack" get_release_status /usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py:475 2019-04-09 18:38:41.204 86 ERROR armada.cli [-] Caught internal exception: armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: 
osh-openstack-ceilometer - Tiller Message: b'Release "osh-openstack-ceilometer" failed: timed out waiting for the condition' 2019-04-09 18:38:41.204 86 ERROR armada.cli Traceback (most recent call last): 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 401, in install_release 2019-04-09 18:38:41.204 86 ERROR armada.cli metadata=self.metadata) 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 487, in __call__ 2019-04-09 18:38:41.204 86 ERROR armada.cli return _end_unary_response_blocking(state, call, False, deadline) 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 437, in _end_unary_response_blocking 2019-04-09 18:38:41.204 86 ERROR armada.cli raise _Rendezvous(state, None, None, deadline) 2019-04-09 18:38:41.204 86 ERROR armada.cli grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release osh-openstack-ceilometer failed: timed out waiting for the condition)> 2019-04-09 18:38:41.204 86 ERROR armada.cli 2019-04-09 18:38:41.204 86 ERROR armada.cli During handling of the above exception, another exception occurred: 2019-04-09 18:38:41.204 86 ERROR armada.cli 2019-04-09 18:38:41.204 86 ERROR armada.cli Traceback (most recent call last): 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/cli/__init__.py", line 39, in safe_invoke 2019-04-09 18:38:41.204 86 ERROR armada.cli self.invoke() 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/cli/apply.py", line 217, in invoke 2019-04-09 18:38:41.204 86 ERROR armada.cli resp = armada.sync() 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/armada.py", line 472, in sync 2019-04-09 18:38:41.204 86 ERROR armada.cli timeout=timer) 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 414, in install_release 2019-04-09 18:38:41.204 86 ERROR armada.cli raise ex.ReleaseException(release, status, 'Install') 2019-04-09 18:38:41.204 86 ERROR armada.cli armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: osh-openstack-ceilometer - Tiller Message: b'Release "osh-openstack-ceilometer" failed: timed out waiting for the condition' 2019-04-09 18:38:41.204 86 ERROR armada.cli [root at controller-0 log(keystone_admin)]# _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcel at schaible-consulting.de Wed Apr 10 11:07:55 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Wed, 10 Apr 2019 13:07:55 +0200 (CEST) Subject: [Starlingx-discuss] stx-application fails to install In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F0882A@SHSMSX104.ccr.corp.intel.com> References: <1781629218.298654.1554890888375@communicator.strato.com> <2FD5DDB5A04D264C80D42CA35194914F35F0882A@SHSMSX104.ccr.corp.intel.com> Message-ID: <848062479.291491.1554894475089@communicator.strato.com> Hi Cindy, thanks for the hint. I'll give it a try. Marcel > "Xie, Cindy" hat am 10. 
April 2019 um 12:38 geschrieben: > > > Hi, Marcel, > Not sure if the bug we recently fixed are the same one you encountered: > > https://bugs.launchpad.net/starlingx/+bug/1820928 > > You can choose a newer ISO after 0403, and see if it addressed your issue. > > Thx. - cindy > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, April 10, 2019 6:08 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] stx-application fails to install > > Hi, > > I have tried with the image from 20190325 a simplex instalation. During the install of stx-openstack I'll get the error below. > > Any idea what the root cause of this problem is and how to solve it? > > Thanks > > Marcel > > > 2019-04-09 18:08:39.869 86 INFO armada.handlers.armada [-] Install completed with results from Tiller: {'version': 1, 'namespace': 'openstack', 'release': 'osh-openstack-panko', 'status': 'DEPLOYED', 'description': 'Install complete'} > 2019-04-09 18:08:39.870 86 INFO armada.handlers.armada [-] Processing Chart, release=osh-openstack-ceilometer > 2019-04-09 18:08:39.870 86 INFO armada.handlers.chartbuilder [-] Building dependency chart helm-toolkit for release openstack-ceilometer. > 2019-04-09 18:08:39.891 86 INFO armada.handlers.armada [-] Installing release osh-openstack-ceilometer in namespace openstack > 2019-04-09 18:08:39.892 86 INFO armada.handlers.armada [-] Beginning Install, wait=True, timeout=1800s > 2019-04-09 18:08:39.962 86 INFO armada.handlers.tiller [-] Helm install release: wait=True, timeout=1800 > 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller [-] Error while installing release osh-openstack-ceilometer: grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release osh-openstack-ceilometer failed: timed out waiting for the condition)> > 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller Traceback (most recent call last): > 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 401, in install_release > 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller metadata=self.metadata) > 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 487, in __call__ > 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller return _end_unary_response_blocking(state, call, False, deadline) > 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 437, in _end_unary_response_blocking > 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller raise _Rendezvous(state, None, None, deadline) > 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release osh-openstack-ceilometer failed: timed out waiting for the condition)> > 2019-04-09 18:38:40.774 86 ERROR armada.handlers.tiller > 2019-04-09 18:38:40.792 86 DEBUG armada.handlers.tiller [-] Helm getting release status for release=osh-openstack-ceilometer, version=0 get_release_status /usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py:467 > 2019-04-09 18:38:41.203 86 DEBUG armada.handlers.tiller [-] GetReleaseStatus= name: "osh-openstack-ceilometer" > info { > status { > code: FAILED > } > first_deployed { > seconds: 1554833319 > nanos: 978595007 > } > last_deployed { > seconds: 1554833319 > nanos: 978595007 > } > Description: 
"Release \"osh-openstack-ceilometer\" failed: timed out waiting for the condition" > } > namespace: "openstack" > get_release_status /usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py:475 > 2019-04-09 18:38:41.204 86 ERROR armada.cli [-] Caught internal exception: armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: osh-openstack-ceilometer - Tiller Message: b'Release "osh-openstack-ceilometer" failed: timed out waiting for the condition' > 2019-04-09 18:38:41.204 86 ERROR armada.cli Traceback (most recent call last): > 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 401, in install_release > 2019-04-09 18:38:41.204 86 ERROR armada.cli metadata=self.metadata) > 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 487, in __call__ > 2019-04-09 18:38:41.204 86 ERROR armada.cli return _end_unary_response_blocking(state, call, False, deadline) > 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/grpc/_channel.py", line 437, in _end_unary_response_blocking > 2019-04-09 18:38:41.204 86 ERROR armada.cli raise _Rendezvous(state, None, None, deadline) > 2019-04-09 18:38:41.204 86 ERROR armada.cli grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with (StatusCode.UNKNOWN, release osh-openstack-ceilometer failed: timed out waiting for the condition)> > 2019-04-09 18:38:41.204 86 ERROR armada.cli > 2019-04-09 18:38:41.204 86 ERROR armada.cli During handling of the above exception, another exception occurred: > 2019-04-09 18:38:41.204 86 ERROR armada.cli > 2019-04-09 18:38:41.204 86 ERROR armada.cli Traceback (most recent call last): > 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/cli/__init__.py", line 39, in safe_invoke > 2019-04-09 18:38:41.204 86 ERROR armada.cli self.invoke() > 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/cli/apply.py", line 217, in invoke > 2019-04-09 18:38:41.204 86 ERROR armada.cli resp = armada.sync() > 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/armada.py", line 472, in sync > 2019-04-09 18:38:41.204 86 ERROR armada.cli timeout=timer) > 2019-04-09 18:38:41.204 86 ERROR armada.cli File "/usr/local/lib/python3.5/site-packages/armada/handlers/tiller.py", line 414, in install_release > 2019-04-09 18:38:41.204 86 ERROR armada.cli raise ex.ReleaseException(release, status, 'Install') > 2019-04-09 18:38:41.204 86 ERROR armada.cli armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: osh-openstack-ceilometer - Tiller Message: b'Release "osh-openstack-ceilometer" failed: timed out waiting for the condition' > 2019-04-09 18:38:41.204 86 ERROR armada.cli > [root at controller-0 log(keystone_admin)]# > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Wed Apr 10 12:54:16 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 10 Apr 2019 14:54:16 +0200 Subject: [Starlingx-discuss] [tsc][all] Project mission statement Message-ID: <72372439-A24B-4104-ABF5-2BD83DE029A8@gmail.com> Hi StarlingX Community, I’ve been glancing through the project materials and realized that we don’t have a mission 
statement yet. I think it is important to a project to formalize a short description of its purpose and goals that can help new comers to understand what the project is as well as participants of the project to set directions and priorities as we go. I created an etherpad for this: https://etherpad.openstack.org/p/stx-mission-statement I will also bring up the topic on the community and TSC calls this week to discuss the topic and decide on next steps. Please let me know if you have any questions or comments. Thanks, Ildikó From cindy.xie at intel.com Wed Apr 10 13:40:31 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 10 Apr 2019 13:40:31 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F09066@SHSMSX104.ccr.corp.intel.com> Agenda & Notes for 4/10 meeting: - Ceph upgrade status: 1. patch review status (Daniel) https://review.openstack.org/#/q/topic:ceph-mimic-upgrade+(status:open+OR+status:merged) all patches have been uploaded (total 20) under review. update the patches according to the comments. Workflow - 1 on one manifest patch pending validation results from Fernando. stx-Ceph branch: all PR has been merged. https://github.com/starlingx-staging/stx-ceph/tree/stx/v13.2.0 is up to date. Can close the 2 pending PRs: https://github.com/starlingx-staging/stx-ceph/pulls 2. Ceph dev build validation status (Fernando) functional testing for the dev build before merge; Sanity testing on different configs (Simplex/Duplex/Multi-nodes with dedicated storage) before merge. regression testing after code merge test case format discussion, rst already on test repo, will need to modify those test cases based on the new Ceph. AR: Cindy to check w/ Fernando for how long it takes for pre-merge sanity. We want a detail functional testing after merge. Fernando almost finished the testing for the ISO Yong sent and will do a testing w/ new Helm-chart provided by Yong today. Elio: to check the test plans and seperate the cases into P1 & P2. 3. Release notes review/update (Tingjie/Daniel) Reviewing the new features for release notes in mailing list. Need close the items to be documented after testing. - QAT driver upgrade (Haitao) https://storyboard.openstack.org/#!/story/2004901 · 3/20 ~ 3/30: technical ramping up (Haitao) - DONE · 4/1 ~ 4/12: patch development and post for review (Haitao & new CW) Physical function (PF) on CentOS host, enabling virtual function (VF) driver on Qemu. https://bugs.launchpad.net/starlingx/+bug/1821938 has been fixed after 4/9 build. · 3/20 ~ 4/12: test plan review, dry-run using master build (Ricardo) Start working on prelimited test case. Cindy will send the case again to ensure they are same. · 4/15: test ISO provided to Ricardo (Haitao) trending 1 wk late to provide ISO to Ricardo. · 4/15 ~ 5/3: testing and bug fixing (Ricardo /Haitao) · 5/6: test pass, patch review done & merge to master (Haitao & CW) - Opens (all) Cindy to send email from Zhipeng regaridng PcI affinity dependency on libvirt. -----Original Message----- From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, April 9, 2019 9:03 PM To: starlingx-discuss at lists.starlingx.io Cc: Badea, Daniel ; Hernandez Gonzalez, Fernando ; Chen, Tingjie ; Wang, Hai Tao ; Wold, Saul ; Rowsell, Brent ; Khalil, Ghada ; Jones, Bruce E Subject: Re: [Starlingx-discuss] Weekly StarlingX non-OpenStack Distro meeting Agenda for 4/10 meeting: - Ceph upgrade status: 1. 
patch review status (Daniel) https://review.openstack.org/#/q/topic:ceph-mimic-upgrade+(status:open+OR+status:merged) 2. Ceph dev build validation status (Fernando) 3. Release notes review/update (Tingjie/Daniel) - QAT driver upgrade (Haitao) - Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Waheed, Numan; Hu, Yong; starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Shang, Dehao; Wold, Saul; Lin, Shuicheng; Zhu, Vivian; Somerville, Jim; Sun, Austin; 'Khalil, Ghada'; Jones, Bruce E; Troyer, Dean; 'Rowsell, Brent' Cc: 'Poncea, Ovidiu'; Gomez, Juan P; Lara, Cesar; Fang, Liang A; Cobbley, David A; 'Chen, Jacky'; Perez Rodriguez, Humberto I; Martinez Monroy, Elio; 'Seiler, Glenn'; Arce Moreno, Abraham; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Shuquan Huang; 'Young, Ken'; Perez Carranza, Jose; Hu, Wei W; Martinez Landa, Hayde; Armstrong, Robert H Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, April 10, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Al.Bailey at windriver.com Wed Apr 10 13:40:36 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Wed, 10 Apr 2019 13:40:36 +0000 Subject: [Starlingx-discuss] pep 8 issue need to be fixed In-Reply-To: <93814834B4855241994F290E959305C753067D65@SHSMSX104.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C753057919@SHSMSX104.ccr.corp.intel.com> <20190329124308.aufoetl6ppunxour@yuggoth.org> <6703202FD9FDFF4A8DA9ACF104AE129FBA46DA46@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C753067D65@SHSMSX104.ccr.corp.intel.com> Message-ID: My proposal would be to fix them. These pep8 violations are for the files you are adding, rather than files that are outside your review. Al -----Original Message----- From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: Tuesday, April 09, 2019 9:43 PM To: Penney, Don; Jeremy Stanley; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] pep 8 issue need to be fixed Hi Penney, When I update my patch, it report new pep8 issue for the files without change. Errors are below. E302 expected 2 blank lines, found 1 E126 continuation line over-indented for hanging indent E127 continuation line over-indented for visual indent E128 continuation line under-indented for visual indent E305 expected 2 blank lines after class or function definition, found 1 What's your proposal for it, fix it or disable these errors? Thanks! 
Zhipeng -----Original Message----- From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: 2019年3月29日 22:07 To: Jeremy Stanley ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] pep 8 issue need to be fixed Al has posted a review for a quick fix to resolve the stx-integ issues: https://review.openstack.org/#/c/648694/1 -----Original Message----- From: Jeremy Stanley [mailto:fungi at yuggoth.org] Sent: Friday, March 29, 2019 8:43 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] pep 8 issue need to be fixed On 2019-03-29 05:35:06 +0000 (+0000), Liu, ZhipengS wrote: [...] > When I update my patch for stx-integ, it report below errors, which > causes "Verified -1 Zuul" Not sure why we have this issue now. It is a > common issue that I also saw in other patches submitted recently. Do > we need a new ticket to fix this pep8 issue or just disable B009 and > B010? Any comment? [...] Your tox.ini in that repository seems to include the flake8-bugbear plugin (with no version specifier) in the deps list for testenv:pep8. According to https://pypi.org/project/flake8-bugbear/#history they released a new version yesterday (19.3.0). Their changelog indicates they introduced new checks B009, B010 and B011 in that release. One tactic we take to avoid having these sorts of issues in OpenStack is for projects to pick what versions of static analysis tools they're going to use at the start of a release cycle and pin them like flake8-bugbear<19 for the duration of that cycle, then advance the version pin at the start of the next cycle to bring in the latest versions and fix whatever those have started warning about. This minimizes the chances of disruption at later points in the cycle when they may more significantly detract from things like release-focused work. -- Jeremy Stanley _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From abraham.arce.moreno at intel.com Wed Apr 10 13:46:23 2019 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Wed, 10 Apr 2019 13:46:23 +0000 Subject: [Starlingx-discuss] ipxe boot iso? In-Reply-To: References: Message-ID: Hi Curtis, > Out of curiosity has anyone ipxe booted stx from the ISO? Yes, I have tried a couple of times being able to load both the kernel and initrd from other Linux distros and StarlingX, At the end I was able to boot and install other distros but not StarlingX, here you have all my learning written: https://github.com/xe1gyq/starlingx/blob/master/Packet.md > I'm doing a bit of testing in trying to get stx installed on baremetal packet.com > nodes and will need to ipxe boot. I thought I'd ask before I > went to far into working on it. The easy way of using "kernel > https://boot.netboot.xyz/memdisk iso raw" and the ISO via http did not work, > ran out of memory, so might have to get a little more complicated. :) However, whehn working with StarlingX ISO, I run into the issue of not being able to get into the console, Packet Customer Service greatly supported me (Chat with an agent is a great option) and reply back after investigating: For the ISO issue we would suggest on setting the kernel options. 
Let him know that our x86 servers require console=ttyS1,115200n8, and our aarch64 servers require console=ttyAMA0,115200 We recommend to try adding 'console=ttyS1,115200n8' on your t1.small server eg. kernel https://boot.netboot.xyz/memdisk iso raw console=ttyS1,115200n8 So, I tried that once but got this error: iPXE> kernel https://boot.netboot.xyz/memdisk iso raw console=ttyS1,115200n8 https://boot.netboot.xyz/memdisk... ok Could not select: Exec format error (http://ipxe.org/2e008081) Due to other stuff going around, I did not follow up, I will try it again this week and see how far I get. Let me know if you want me to help with any specific task. Best Regards Abraham From marcel at schaible-consulting.de Wed Apr 10 14:33:25 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Wed, 10 Apr 2019 16:33:25 +0200 (CEST) Subject: [Starlingx-discuss] Instance console not reachable Message-ID: <1771203973.321198.1554906806013@communicator.strato.com> Hi, I just got the Image from 20190403 up and running. But when I try to access in the web-ui for horizion (?) at port 31000 the console of a newly created instance I am getting inside the ui the browser error: Error: Server not found Cannot find Server novncproxy.openstack.svc.cluster.local Any idea? Thanks Marcel From Al.Bailey at windriver.com Wed Apr 10 14:37:14 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Wed, 10 Apr 2019 14:37:14 +0000 Subject: [Starlingx-discuss] Instance console not reachable In-Reply-To: <1771203973.321198.1554906806013@communicator.strato.com> References: <1771203973.321198.1554906806013@communicator.strato.com> Message-ID: Sounds like this one https://bugs.launchpad.net/starlingx/+bug/1822212 it makes mention of the following steps to see console access https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Instance_Console_Access Al -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Wednesday, April 10, 2019 10:33 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Instance console not reachable Hi, I just got the Image from 20190403 up and running. But when I try to access in the web-ui for horizion (?) at port 31000 the console of a newly created instance I am getting inside the ui the browser error: Error: Server not found Cannot find Server novncproxy.openstack.svc.cluster.local Any idea? Thanks Marcel _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcel at schaible-consulting.de Wed Apr 10 14:42:07 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Wed, 10 Apr 2019 16:42:07 +0200 (CEST) Subject: [Starlingx-discuss] Instance console not reachable In-Reply-To: References: <1771203973.321198.1554906806013@communicator.strato.com> Message-ID: <952775343.321866.1554907327303@communicator.strato.com> I have done this. Question what is meant by ""? Thanks Marcel > "Bailey, Henry Albert (Al)" hat am 10. 
April 2019 um 16:37 geschrieben: > > > Sounds like this one > https://bugs.launchpad.net/starlingx/+bug/1822212 > > it makes mention of the following steps to see console access > https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Instance_Console_Access > > Al > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, April 10, 2019 10:33 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Instance console not reachable > > Hi, > > I just got the Image from 20190403 up and running. > > But when I try to access in the web-ui for horizion (?) at port 31000 the console of a newly created instance I am getting inside the ui the browser error: > > Error: Server not found > Cannot find Server novncproxy.openstack.svc.cluster.local > > Any idea? > > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Teresa.Ho at windriver.com Wed Apr 10 14:54:17 2019 From: Teresa.Ho at windriver.com (Ho, Teresa) Date: Wed, 10 Apr 2019 14:54:17 +0000 Subject: [Starlingx-discuss] Instance console not reachable In-Reply-To: <952775343.321866.1554907327303@communicator.strato.com> References: <1771203973.321198.1554906806013@communicator.strato.com> <952775343.321866.1554907327303@communicator.strato.com> Message-ID: <918130236148D14B982C7B8BC1C06EA16715956C@ALA-MBD.corp.ad.wrs.com> That would be the IP address of the machine where the VirtualBox is running. Teresa -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Wednesday, April 10, 2019 10:42 AM To: Bailey, Henry Albert (Al); starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Instance console not reachable I have done this. Question what is meant by ""? Thanks Marcel > "Bailey, Henry Albert (Al)" hat am 10. April 2019 um 16:37 geschrieben: > > > Sounds like this one > https://bugs.launchpad.net/starlingx/+bug/1822212 > > it makes mention of the following steps to see console access > https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Instance_Console_Access > > Al > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, April 10, 2019 10:33 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Instance console not reachable > > Hi, > > I just got the Image from 20190403 up and running. > > But when I try to access in the web-ui for horizion (?) at port 31000 the console of a newly created instance I am getting inside the ui the browser error: > > Error: Server not found > Cannot find Server novncproxy.openstack.svc.cluster.local > > Any idea? > > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Volker.Hoesslin at swsn.de Wed Apr 10 15:10:41 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Wed, 10 Apr 2019 15:10:41 +0000 Subject: [Starlingx-discuss] Error creating volume Message-ID: hi, based on first public release i got this error: Error creating volume. 
Message from driver: Failed to copy image to volume: Insufficient free space on /opt/img-conversions for image download and conversion. this happens on create an new volume based on an qcow2 image with ~10GB. have a look at controller and checkout mounting points, i can see this: Filesystem Size Used Avail Use% Mounted on /dev/sda3 20G 8.8G 9.4G 49% / devtmpfs 7.7G 0 7.7G 0% /dev tmpfs 7.7G 496K 7.7G 1% /dev/shm tmpfs 7.7G 12M 7.7G 1% /run tmpfs 7.7G 0 7.7G 0% /sys/fs/cgroup tmpfs 1.0G 152K 1.0G 1% /tmp /dev/mapper/cgts--vg-img--conversions--lv 20G 45M 19G 1% /opt/img-conversions /dev/mapper/cgts--vg-gnocchi--lv 4.8G 52M 4.5G 2% /opt/gnocchi /dev/mapper/cgts--vg-scratch--lv 7.8G 36M 7.4G 1% /scratch /dev/mapper/cgts--vg-backup--lv 50G 53M 47G 1% /opt/backups /dev/mapper/cgts--vg-ceph--mon--lv 20G 143M 19G 1% /var/lib/ceph/mon /dev/mapper/cgts--vg-log--lv 7.6G 981M 6.3G 14% /var/log /dev/sda2 477M 96M 353M 22% /boot /dev/sda1 300M 8.7M 292M 3% /boot/efi /dev/drbd1 2.0G 19M 1.9G 1% /var/lib/rabbitmq /dev/drbd5 992M 2.6M 923M 1% /opt/extension /dev/drbd3 9.9G 15M 9.4G 1% /opt/cgcs /dev/drbd2 2.0G 7.6M 1.9G 1% /opt/platform /dev/drbd0 40G 223M 38G 1% /var/lib/postgresql so there are only 20GB free of space for conversions... any chance to handle this problem? volker... -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcel at schaible-consulting.de Wed Apr 10 15:17:58 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Wed, 10 Apr 2019 17:17:58 +0200 (CEST) Subject: [Starlingx-discuss] Instance console not reachable In-Reply-To: <918130236148D14B982C7B8BC1C06EA16715956C@ALA-MBD.corp.ad.wrs.com> References: <1771203973.321198.1554906806013@communicator.strato.com> <952775343.321866.1554907327303@communicator.strato.com> <918130236148D14B982C7B8BC1C06EA16715956C@ALA-MBD.corp.ad.wrs.com> Message-ID: <1851367461.324387.1554909478413@communicator.strato.com> Thanks Teresa! Still does not work. Even after rebooting the machine and putting the firewall rules at the top of the INPUT chain. I am trying to access the instance console from another machine within the same network where OAM is reachable. I'll guess that should work? Since the iptables rules and /etc/hosts is recreated after reboot how to make these changes permanent? > "Ho, Teresa" hat am 10. April 2019 um 16:54 geschrieben: > > > That would be the IP address of the machine where the VirtualBox is running. > > Teresa > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, April 10, 2019 10:42 AM > To: Bailey, Henry Albert (Al); starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Instance console not reachable > > I have done this. > > Question what is meant by ""? > > Thanks > > Marcel > > > "Bailey, Henry Albert (Al)" hat am 10. April 2019 um 16:37 geschrieben: > > > > > > Sounds like this one > > https://bugs.launchpad.net/starlingx/+bug/1822212 > > > > it makes mention of the following steps to see console access > > https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Instance_Console_Access > > > > Al > > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Wednesday, April 10, 2019 10:33 AM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Instance console not reachable > > > > Hi, > > > > I just got the Image from 20190403 up and running. > > > > But when I try to access in the web-ui for horizion (?) 
at port 31000 the console of a newly created instance I am getting inside the ui the browser error: > > > > Error: Server not found > > Cannot find Server novncproxy.openstack.svc.cluster.local > > > > Any idea? > > > > Thanks > > > > Marcel > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Wed Apr 10 15:28:49 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 10 Apr 2019 15:28:49 +0000 Subject: [Starlingx-discuss] Community Call (April 10, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A271D4@ALA-MBD.corp.ad.wrs.com> Notes from the April 10 call. Bill... Project mission statement - https://etherpad.openstack.org/p/stx-mission-statement (ildikov) - Ildikó created the etherpad linked just above, looking for Community input, she'll raise it at the TSC tomorrow as well StoryBoard / Launchpad Re-tagging -- COMPLETE - no issues, seems to have gone very smoothly Denver Planning: Open Infrastructure Summit & PTG - links - https://etherpad.openstack.org/p/stx-ptg-preparation-denver-2019 - https://etherpad.openstack.org/p/edge-wg-ptg-preparation-denver-2019 - https://www.openstack.org/summit/denver-2019/summit-schedule/global-search?t=starlingx - Ildikó is thinking of getting some space in the OSF lounge, she'll update on this next week Quick item - there are automated release notes being generated daily. Would the community like them posted to the mailing list? (Bruce) - Dean opined that it shouldn't go on the mailing list, it should go to the same place as other build artifacts - Don said there is similar (but differently formatted) changelog info already on CENGN - he provided this sample link after the meeting: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T013000Z/outputs/CHANGELOG.txt - we agreed that what's on CENGN is sufficient for folks who want to see this per-build update info Sub-Project Updates: - Release (Ghada) - skipped this since we're having a release meeting later today (at https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190410T2030 on https://zoom.us/j/342730236) - Containers (Frank) - Frank's away this week, we skipped this update - Security (Ken) - great progress on automated ISO scanning to get vulnerability data - able to get raw data now - next steps - formatting the output for easier review / manipulation - working with Build team on this - determine path fwd on scanning containers - work to extend Clear (rejected upstream) OR use same tool we're using on the ISO - Ceph upgrade (Vivian) - 20 patches uploaded, most reviewed - pending test results before we can merge - will do sanity + P1 testcases for pre-merge testing, then functional testing - CentOS upgrade (Cindy) - working on QAT upgrade - encountered some issues with QAT driver; trending ~1 week late vs. plan - Networking (Ghada) - had been working through connectivity issues w/ OVS-DPDK - turned out to be a procedural issue - understood & documented now - back in business! 
- OVS in Container is now supported as the default back-end - working with the test team to better understand their plans - Docs (Michael / Bruce) - https://docs.starlingx.io/ is now live with the new structure. Now we have to write the content - working on the overall plan. - Build (Cesar / Scott) - working on transforming of the build system to build in layers- working on this w/out impacting - switched over to dev/stable this week - Dist-Cloud (Dariush) - working on 2 things. - containerizing the keystone proxy - rebasing DB sync of keystone - Test (Ada / Numan) - prepping input for release plan - continued to develop testcases - focused on OpenStack Patch Elim & Containers - working to get automated TCs to the community - working on a plan for a plan (Robot Framework) - Numan's team has a similar initiative underway for Pytest framework/tests - Multi-OS (Cesar) - completed 1st PoC - built Ubuntu from scratch, added some STX pkgs, built an ISO; this is repeatable - next PoC is to build on top of this, try to make an entire STX cluster - Distro.openstack (Bruce) - NUMA live migration - plan is to create stx/stein.1 and backport from Artom's reviews. - Dean to create branch, Gerry to deliver backport. - Future backports (from upstream) will use new branches e..g stx/stein.2, .3, etc... if needed. - Good progress on several items in the nova community. Hoping to have all we need accepted upstream by early June. - Repo Name Freeze (Dean) - this happens on Friday (April 12) - as detailed previously, we'll take the "stx-" prefix off the repo names - Project Graduation for Kata Containers (Dean / Ildikó) - Kata Containers was made an official OpenStack project - Zuul got delayed due to some licensing issues - not their issue, just some stuff the OSF wants to clarify - Dean will talk about this at the TSC tomorrow - we need to start making sure we're getting there w/ no surprises - Ildikó gave a brief overview of what we'll need to do - it's well understood, we just need to do the prep work to make sure it goes smoothly -----Original Message----- From: Zvonar, Bill Sent: Tuesday, April 9, 2019 4:33 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: Community Call (April 10, 2019) Reminder of tomorrow's Community call - please feel free to add to the agenda at [0]. Bill. [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1500_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190410T1400 From andy.ning at windriver.com Wed Apr 10 15:37:00 2019 From: andy.ning at windriver.com (Andy Ning) Date: Wed, 10 Apr 2019 11:37:00 -0400 Subject: [Starlingx-discuss] Instance console not reachable In-Reply-To: <1851367461.324387.1554909478413@communicator.strato.com> References: <1771203973.321198.1554906806013@communicator.strato.com> <952775343.321866.1554907327303@communicator.strato.com> <918130236148D14B982C7B8BC1C06EA16715956C@ALA-MBD.corp.ad.wrs.com> <1851367461.324387.1554909478413@communicator.strato.com> Message-ID: <109aedab-0af3-4bb8-097f-484682903767@windriver.com> Do you see port 80/443 are being listened on by nginx? You can check by ... netstat -antp | grep LIST | grep 80 netstat -antp | grep LIST | grep 443 tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN 242430/nginx: maste The two ports are added recently, not sure how recent though. Andy On 2019-04-10 11:17 AM, Marcel Schaible wrote: > Thanks Teresa! > > Still does not work. 
Even after rebooting the machine and putting the firewall rules at the top of the INPUT chain. > > I am trying to access the instance console from another machine within the same network where OAM is reachable. I'll guess that should work? > > Since the iptables rules and /etc/hosts is recreated after reboot how to make these changes permanent? > >> "Ho, Teresa" hat am 10. April 2019 um 16:54 geschrieben: >> >> >> That would be the IP address of the machine where the VirtualBox is running. >> >> Teresa >> >> -----Original Message----- >> From: Marcel Schaible [mailto:marcel at schaible-consulting.de] >> Sent: Wednesday, April 10, 2019 10:42 AM >> To: Bailey, Henry Albert (Al); starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Instance console not reachable >> >> I have done this. >> >> Question what is meant by ""? >> >> Thanks >> >> Marcel >> >>> "Bailey, Henry Albert (Al)" hat am 10. April 2019 um 16:37 geschrieben: >>> >>> >>> Sounds like this one >>> https://bugs.launchpad.net/starlingx/+bug/1822212 >>> >>> it makes mention of the following steps to see console access >>> https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Instance_Console_Access >>> >>> Al >>> >>> -----Original Message----- >>> From: Marcel Schaible [mailto:marcel at schaible-consulting.de] >>> Sent: Wednesday, April 10, 2019 10:33 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: [Starlingx-discuss] Instance console not reachable >>> >>> Hi, >>> >>> I just got the Image from 20190403 up and running. >>> >>> But when I try to access in the web-ui for horizion (?) at port 31000 the console of a newly created instance I am getting inside the ui the browser error: >>> >>> Error: Server not found >>> Cannot find Server novncproxy.openstack.svc.cluster.local >>> >>> Any idea? >>> >>> Thanks >>> >>> Marcel >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr From build.starlingx at gmail.com Wed Apr 10 19:46:08 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 10 Apr 2019 15:46:08 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 157 - Failure! 
Message-ID: <1507872735.115.1554925569743.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 157 Status: Failure Timestamp: 20190410T194604Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190410T165800Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190410T165800Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From build.starlingx at gmail.com Wed Apr 10 19:46:12 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 10 Apr 2019 15:46:12 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 57 - Failure! Message-ID: <1036221798.118.1554925573585.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 57 Status: Failure Timestamp: 20190410T165800Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: true BUILD_CONTAINERS_STABLE: true From serverascode at gmail.com Wed Apr 10 19:57:42 2019 From: serverascode at gmail.com (Curtis) Date: Wed, 10 Apr 2019 15:57:42 -0400 Subject: [Starlingx-discuss] ipxe boot iso? In-Reply-To: References: Message-ID: On Wed, Apr 10, 2019 at 9:46 AM Arce Moreno, Abraham < abraham.arce.moreno at intel.com> wrote: > Hi Curtis, > > > Out of curiosity has anyone ipxe booted stx from the ISO? > > Yes, I have tried a couple of times being able to load both the kernel and > initrd from other Linux distros and StarlingX, > At the end I was able to boot and install other distros but not StarlingX, > here you have all my learning written: > https://github.com/xe1gyq/starlingx/blob/master/Packet.md > > > I'm doing a bit of testing in trying to get stx installed on baremetal > packet.com > > nodes and will need to ipxe boot. I thought I'd > ask before I > > went to far into working on it. The easy way of using "kernel > > https://boot.netboot.xyz/memdisk iso raw" and the ISO via http did not > work, > > ran out of memory, so might have to get a little more complicated. :) > > However, whehn working with StarlingX ISO, I run into the issue of not > being able to get into the console, Packet Customer Service greatly > supported me (Chat with an agent is a great option) and reply back after > investigating: > > > > For the ISO issue we would suggest on setting the kernel options. Let him > know that our x86 servers require console=ttyS1,115200n8, and our aarch64 > servers require console=ttyAMA0,115200 > We recommend to try adding 'console=ttyS1,115200n8' on your t1.small > server > eg. kernel https://boot.netboot.xyz/memdisk iso raw console=ttyS1,115200n8 > > > > So, I tried that once but got this error: > > iPXE> kernel https://boot.netboot.xyz/memdisk iso raw > console=ttyS1,115200n8 > https://boot.netboot.xyz/memdisk... ok > Could not select: Exec format error (http://ipxe.org/2e008081) > > Interesting, as I didn't get that error, instead I received one about running out of disk space, which I assume is b/c ipxe/memdisk doesn't load up that much room. 
Thanks for the info, I will keep looking into this. :) Thanks, Curtis > Due to other stuff going around, I did not follow up, I will try it again > this week and see how far I get. Let me know if you want me to help with > any specific task. > > Best Regards > Abraham > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Wed Apr 10 20:35:26 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 10 Apr 2019 16:35:26 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 158 - Still Failing! In-Reply-To: <617834557.113.1554925566016.JavaMail.javamailuser@localhost> References: <617834557.113.1554925566016.JavaMail.javamailuser@localhost> Message-ID: <1737680561.121.1554928527361.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 158 Status: Still Failing Timestamp: 20190410T203523Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190410T165800Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190410T165800Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From build.starlingx at gmail.com Wed Apr 10 20:39:39 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 10 Apr 2019 16:39:39 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 159 - Still Failing! In-Reply-To: <1105684608.119.1554928523705.JavaMail.javamailuser@localhost> References: <1105684608.119.1554928523705.JavaMail.javamailuser@localhost> Message-ID: <1984211244.124.1554928779928.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 159 Status: Still Failing Timestamp: 20190410T203936Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190410T165800Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190410T165800Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From build.starlingx at gmail.com Wed Apr 10 20:40:33 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 10 Apr 2019 16:40:33 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 160 - Still Failing! 
In-Reply-To: <584828125.122.1554928777045.JavaMail.javamailuser@localhost> References: <584828125.122.1554928777045.JavaMail.javamailuser@localhost> Message-ID: <1518209289.127.1554928834686.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 160 Status: Still Failing Timestamp: 20190410T204030Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190410T165800Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190410T165800Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From build.starlingx at gmail.com Wed Apr 10 20:41:31 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 10 Apr 2019 16:41:31 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 161 - Still Failing! In-Reply-To: <846598769.125.1554928831470.JavaMail.javamailuser@localhost> References: <846598769.125.1554928831470.JavaMail.javamailuser@localhost> Message-ID: <499422450.130.1554928892439.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 161 Status: Still Failing Timestamp: 20190410T204126Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190410T165800Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190410T165800Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From michael.l.tullis at intel.com Wed Apr 10 20:50:10 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 10 Apr 2019 20:50:10 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 4/10/2019 Message-ID: <3808363B39586544A6839C76CF81445EA1AFC1BB@ORSMSX104.amr.corp.intel.com> For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings. -- Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Wed Apr 10 23:11:20 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 10 Apr 2019 19:11:20 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 163 - Failure! 
Message-ID: <1926368598.134.1554937881282.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 163 Status: Failure Timestamp: 20190410T230701Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190410T165800Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190410T165800Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190410T165800Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From maria.g.perez.ibarra at intel.com Thu Apr 11 00:59:05 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 11 Apr 2019 00:59:05 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-10 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] -------------------------------------------------------------------------------- No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Apr 11 01:25:17 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 11 Apr 2019 01:25:17 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 In-Reply-To: References: Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D7564@ALA-MBD.corp.ad.wrs.com> Hi Maria, Regarding https://bugs.launchpad.net/starlingx/+bug/1821938, Yang successfully verified that the issue is resolved using the same load you list below (see her note in the LP). Can you confirm that you are using the stable docker images built yesterday? 
The fix should be included in the nova image. Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, April 10, 2019 8:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-10 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] -------------------------------------------------------------------------------- No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cristopher.j.lemus.contreras at intel.com Thu Apr 11 02:40:27 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Thu, 11 Apr 2019 02:40:27 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4D7564@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D7564@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Ghada, These are the starlingx nova images used on the servers where the sanity was executed: controller-0:~# docker images |grep starlingx/stx-nova 192.168.100.60/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.204.2:9001/docker.io/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.100.60/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB 192.168.204.2:9001/docker.io/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB controller-0:~# For stx-nova, id: 6c8a3a356110 and for stx-nova-api-proxy, id: c5853883d561 I manually verified the images on our local registry, and tried to download the latest version: [root at registry ~]# docker pull starlingx/stx-nova:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova Digest: sha256:e9fecb6998ad0cf0a6621a8c5c2422378d0972c73ed80b636dcbd4bb5183794e Status: Image is up to date for starlingx/stx-nova:master-centos-stable-latest [root at registry ~]# docker pull starlingx/stx-nova-api-proxy:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova-api-proxy Digest: sha256:06c9bf3546eeec5025e62ccf00c24dbaa20e6d79614d89709c0a71ffa639c5f8 Status: Image is up to date for starlingx/stx-nova-api-proxy:master-centos-stable-latest [root at registry ~]# docker images |grep starlingx/stx-nova registry.zpn.intel.com/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB registry.zpn.intel.com/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB Images are up to date, and the timestamp matches what is logged in dockerhub: https://hub.docker.com/r/starlingx/stx-nova/tags . I assume that those are the images according to the comments on https://bugs.launchpad.net/starlingx/+bug/1821938, are there other nova images that we need to pull? I did a quick check and it looks like all of our local images (the ones deployed to the sanity servers) are updated. Thanks & Regards, Cristopher Lemus From: "Khalil, Ghada" Date: Wednesday, April 10, 2019 at 8:27 PM To: "Perez Ibarra, Maria G" , "starlingx-discuss at lists.starlingx.io" Cc: "Liu, Yang" Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Maria, Regarding https://bugs.launchpad.net/starlingx/+bug/1821938, Yang successfully verified that the issue is resolved using the same load you list below (see her note in the LP). Can you confirm that you are using the stable docker images built yesterday? The fix should be included in the nova image. 
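For cross-checks like this, comparing digests is usually more telling than comparing a moving tag such as master-centos-stable-latest; a quick sketch using the image names from the listing above:

# digests of the locally pulled copies
docker images --digests | grep starlingx/stx-nova
# or the digest of a single image
docker inspect --format '{{index .RepoDigests 0}}' starlingx/stx-nova:master-centos-stable-latest

If the sha256 digest reported locally matches the digest shown on Docker Hub for that tag, the local copy is the same build even if the tag has since been re-pointed.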
Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, April 10, 2019 8:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-10 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers – Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO – Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard – Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] -------------------------------------------------------------------------------- No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Apr 11 02:49:43 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 11 Apr 2019 02:49:43 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4D7564@ALA-MBD.corp.ad.wrs.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D75A6@ALA-MBD.corp.ad.wrs.com> Hi Christopher, These images look correct to me, but I’ll ask Yang to confirm the images she tested with tomorrow. Can you confirm that nova is still returning the same error as was originally reported in https://bugs.launchpad.net/starlingx/+bug/1823275 nova is returning this error: 2019-04-04 16:22:30,902.902 168396 ERROR nova.compute.manager [req-b3bffaba-a62a-4e56-be63-3362c51a36df - - - - -] Error updating resources for node compute-0.: PciDeviceNotFoundById: PCI device 0000:b3:02.3 not found Can you also list the output of the nova hypervisor cmd on the compute node? 
nova hypervisor-list Thanks, Ghada From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Wednesday, April 10, 2019 10:40 PM To: Khalil, Ghada; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Ghada, These are the starlingx nova images used on the servers where the sanity was executed: controller-0:~# docker images |grep starlingx/stx-nova 192.168.100.60/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.204.2:9001/docker.io/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.100.60/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB 192.168.204.2:9001/docker.io/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB controller-0:~# For stx-nova, id: 6c8a3a356110 and for stx-nova-api-proxy, id: c5853883d561 I manually verified the images on our local registry, and tried to download the latest version: [root at registry ~]# docker pull starlingx/stx-nova:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova Digest: sha256:e9fecb6998ad0cf0a6621a8c5c2422378d0972c73ed80b636dcbd4bb5183794e Status: Image is up to date for starlingx/stx-nova:master-centos-stable-latest [root at registry ~]# docker pull starlingx/stx-nova-api-proxy:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova-api-proxy Digest: sha256:06c9bf3546eeec5025e62ccf00c24dbaa20e6d79614d89709c0a71ffa639c5f8 Status: Image is up to date for starlingx/stx-nova-api-proxy:master-centos-stable-latest [root at registry ~]# docker images |grep starlingx/stx-nova registry.zpn.intel.com/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB registry.zpn.intel.com/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB Images are up to date, and the timestamp matches what is logged in dockerhub: https://hub.docker.com/r/starlingx/stx-nova/tags . I assume that those are the images according to the comments on https://bugs.launchpad.net/starlingx/+bug/1821938, are there other nova images that we need to pull? I did a quick check and it looks like all of our local images (the ones deployed to the sanity servers) are updated. Thanks & Regards, Cristopher Lemus From: "Khalil, Ghada" > Date: Wednesday, April 10, 2019 at 8:27 PM To: "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Cc: "Liu, Yang" > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Maria, Regarding https://bugs.launchpad.net/starlingx/+bug/1821938, Yang successfully verified that the issue is resolved using the same load you list below (see her note in the LP). Can you confirm that you are using the stable docker images built yesterday? The fix should be included in the nova image. 
Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, April 10, 2019 8:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-10 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers – Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO – Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard – Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] -------------------------------------------------------------------------------- No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose.perez.carranza at intel.com Thu Apr 11 12:25:44 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Thu, 11 Apr 2019 12:25:44 +0000 Subject: [Starlingx-discuss] [Containers] Kubernetes support question Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A967243@fmsmsx101.amr.corp.intel.com> Hi all I'm reviewing this patch [1] and on the description says "System pods will be active on both controllers" but I'm checking according to below list [3] that 'Tiller-deploy' POD is only running on controller-1 that is on stand-by, is this normal behavior? Also according to this patch [2] says that pod network should be pointing to "172.16.0.0/16", but on the list [3] I see only 'tiller-deploy' and 'calico-kube-controllers' PODs using IPs on that range, is that normal? 1 - https://review.openstack.org/#/c/587458/ 2 - https://review.openstack.org/#/c/587465/ 3- http://paste.openstack.org/raw/749175/ Regards, José From Ian.Jolliffe at windriver.com Thu Apr 11 13:16:31 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Thu, 11 Apr 2019 13:16:31 +0000 Subject: [Starlingx-discuss] [TSC] Minutes - 4/4 Message-ID: <3BDBF7C8-05E4-444F-B0B5-444476056006@windriver.com> Hi all; As a general reminder to the community – we are collecting input for the PTG and reviewing proposals that people are putting forward to evolve StarlingX. 
All ideas are welcome, the etherpad we are working with is here [0]. Minutes from last week: Packet POC: Decide on 1st project and how we get this off the ground Curtis - Probably makes the most sense to start with the performance/footprint/what have you work Integrate somehow with existing (CENGN based jenkins?) infrastructure to kick off performance tests in packet.com We deploy to Packet set servers - with a load from Cengn Performance suite - footprint (CPU, Mem) Follow by OPNFV yardstick; review what work would be required to integrate yardstick in some fashion Curtis and Victor working on this. Could this be used as a sandbox for new users? Could be a quick access for someone to try out STX - first step is to figure out the deployment This is actually a fairly large piece, as stx tends to deploy via ISO, and that is not how deployment to packet.com is really done. Just to write down possible projects (so now 4 or 5 ideas): CI/CD integration Performance/footprint/etc testing Edge CBRS short term project Sandbox - Allow new users to get quick access to an stx instance Workshop usage - Use packet.com for workshops, ala summit workshop Community building - Curtis looking for sponsors for Ottawa and Toronto meet-up. Curtis - Will be running the stx workshop on the 24th of April at OICR at the Open Infrastructure Toronto meetup Release policy draft - comments or feedback? -- brucej https://etherpad.openstack.org/p/stx-release-policy-draft Where should this go long term? Docs? Wiki? Release planning team to review some more. What is the process to approve changes to the release? Next release: What process do we want to follow? take of list and put in etherpad - make it easier to add comments easier to review and maintain context needs to live there for ever - until the content is decided and moves into Storyboard, Spec's Brent volunteered to start the etherpad - will send a note to ML https://etherpad.openstack.org/p/stx-ptg-denver What do we want to try to have ready for the PTG? Need a list Need a prime Need to do research and prep PTG outcome - draft list of content for next release After PTG who will sponsor the item Review details Curtis - Added two things to the ethercalc 1) security groups 2) network slicing Secuity groups are already being done in R2 - needs to be documented Network slicing - is STX interested in network slicing Would have to do a fair amount of research to see where NS is at this time; lots have changed in the last year or so Time sensitive networking might be related... [0] https://etherpad.openstack.org/p/stx-ptg-denver -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel.thebeau at windriver.com Thu Apr 11 13:48:28 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Thu, 11 Apr 2019 09:48:28 -0400 Subject: [Starlingx-discuss] Error creating volume In-Reply-To: References: Message-ID: <1554990508.3600.36.camel@windriver.com> Hi Volker, I think you'll want to do 'controllerfs-modify' system help controllerfs-modify You should be able to do that from the horizon interface under system configuration.  I'm not looking at the first public release though, so please excuse me if the information is dated. M On Wed, 2019-04-10 at 15:10 +0000, von Hoesslin, Volker wrote: > hi, > based on first public release i got this error: > > Error creating volume. Message from driver: Failed to copy image to > volume: Insufficient free space on /opt/img-conversions for image > download and conversion. 
> > this happens on create an new volume based on an qcow2 image with > ~10GB. have a look at controller and checkout mounting points, i can > see this: > > Filesystem                                 Size  Used Avail Use% > Mounted on > /dev/sda3                                   20G  8.8G  9.4G  49% / > devtmpfs                                   7.7G     0  7.7G   0% /dev > tmpfs                                      7.7G  496K  7.7G   1% > /dev/shm > tmpfs                                      7.7G   12M  7.7G   1% /run > tmpfs                                      7.7G     0  7.7G   0% > /sys/fs/cgroup > tmpfs                                      1.0G  152K  1.0G   1% /tmp > /dev/mapper/cgts--vg-img--conversions--lv   20G   45M   19G   1% > /opt/img-conversions > /dev/mapper/cgts--vg-gnocchi--lv           4.8G   52M  4.5G   2% > /opt/gnocchi > /dev/mapper/cgts--vg-scratch--lv           7.8G   36M  7.4G   1% > /scratch > /dev/mapper/cgts--vg-backup--lv             50G   53M   47G   1% > /opt/backups > /dev/mapper/cgts--vg-ceph--mon--lv          20G  143M   19G   1% > /var/lib/ceph/mon > /dev/mapper/cgts--vg-log--lv               7.6G  981M  6.3G  14% > /var/log > /dev/sda2                                  477M   96M  353M  22% > /boot > /dev/sda1                                  300M  8.7M  292M   3% > /boot/efi > /dev/drbd1                                 2.0G   19M  1.9G   1% > /var/lib/rabbitmq > /dev/drbd5                                 992M  2.6M  923M   1% > /opt/extension > /dev/drbd3                                 9.9G   15M  9.4G   1% > /opt/cgcs > /dev/drbd2                                 2.0G  7.6M  1.9G   1% > /opt/platform > /dev/drbd0                                  40G  223M   38G   1% > /var/lib/postgresql > > > so there are only 20GB free of space for conversions... any chance to > handle this problem? > > volker... > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Matt.Peters at windriver.com Thu Apr 11 14:10:48 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Thu, 11 Apr 2019 14:10:48 +0000 Subject: [Starlingx-discuss] [Containers] Kubernetes support question In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A967243@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A967243@fmsmsx101.amr.corp.intel.com> Message-ID: Hello José, [2][3] - Pods that are deployed using host networking will use the node IP address (e.g. DaemonSets). Since the node IP address is within the management network subnet, then the deployment would be setup with multi-netting of the same physical interface and therefore the IP address is selected from that interface. Regards, Matt On 2019-04-11, 8:26 AM, "Perez Carranza, Jose" wrote: Hi all I'm reviewing this patch [1] and on the description says "System pods will be active on both controllers" but I'm checking according to below list [3] that 'Tiller-deploy' POD is only running on controller-1 that is on stand-by, is this normal behavior? Also according to this patch [2] says that pod network should be pointing to "172.16.0.0/16", but on the list [3] I see only 'tiller-deploy' and 'calico-kube-controllers' PODs using IPs on that range, is that normal? 
1 - https://review.openstack.org/#/c/587458/ 2 - https://review.openstack.org/#/c/587465/ 3- http://paste.openstack.org/raw/749175/ Regards, José _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Barton.Wensley at windriver.com Thu Apr 11 14:36:26 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Thu, 11 Apr 2019 14:36:26 +0000 Subject: [Starlingx-discuss] [Containers] Kubernetes support question In-Reply-To: References: <0A5D9A624DF90343892F8F3FE7DE525A2A967243@fmsmsx101.amr.corp.intel.com> Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8AA39@ALA-MBD.corp.ad.wrs.com> José, To answer your question about the tiller-deploy pod - this pod is deployed by the "helm init" command which creates a single pod, which will move between the controllers if a controller fails or is taken out of service. Since the tiller-deploy pod is only used by helm operations, there is no need to have it running on both controllers. Bart -----Original Message----- From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: April 11, 2019 10:11 AM To: Perez Carranza, Jose; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Kubernetes support question Hello José, [2][3] - Pods that are deployed using host networking will use the node IP address (e.g. DaemonSets). Since the node IP address is within the management network subnet, then the deployment would be setup with multi-netting of the same physical interface and therefore the IP address is selected from that interface. Regards, Matt On 2019-04-11, 8:26 AM, "Perez Carranza, Jose" wrote: Hi all I'm reviewing this patch [1] and on the description says "System pods will be active on both controllers" but I'm checking according to below list [3] that 'Tiller-deploy' POD is only running on controller-1 that is on stand-by, is this normal behavior? Also according to this patch [2] says that pod network should be pointing to "172.16.0.0/16", but on the list [3] I see only 'tiller-deploy' and 'calico-kube-controllers' PODs using IPs on that range, is that normal? 
1 - https://review.openstack.org/#/c/587458/ 2 - https://review.openstack.org/#/c/587465/ 3- http://paste.openstack.org/raw/749175/ Regards, José _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Thu Apr 11 17:53:31 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 11 Apr 2019 17:53:31 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 04/11 Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D77D7@ALA-MBD.corp.ad.wrs.com> Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking Team Meeting Agenda/Notes - Apr 11/2019 - Networking Documentation - https://wiki.openstack.org/wiki/StarlingX/Networking#Useful_Networking_Commands - Add a note regarding VMs w/ huge pages when using ovs-dpdk - Add a reference to the packaged helm chart for neutron (as it can be used as a guide - Add an example for updating a single value with helm override - Continue to update as needed - Networking Test Status - 3 activities currently in progress: Regression TC definition, Feature testing for ovs-dpdk upversion, Feature testing for ovs-dpdk firewall - Testing time-lines are captured in the Test Release Plan - https://docs.google.com/spreadsheets/d/1Fyg-z4MirgE7CP-H8EXSeZ5CoN5t62hKk6lmX83rmQk/edit#gid=0 - TC Definition for Regression - Elio working to include feedback from Matt & Chris - Elio to update the networking domain sub-analysis and re-organize based on feedback - Matt plans to provide feedback on the neutron test-cases included in "StarlingX Tests" by end of week - Ghada also requested that Chris provided feedback - Regression Execution - Networking regression is currently planned to start in mid-June. - Need to have a mini-regression suite run as part of sanity/automation to ensure the load is sane in terms of networking. - Action: Elio to follow up with Ada - General: Are the automated test-cases implemented so far run on a regular basis already (weekly or bi-weekly)? - Action: Elio to follow up with Ada - Feature Testing for ovs-dpdk upversion - No need to run the TCs on config 2 and config 3. Config 4 covers data interfaces on all the required NIC families: Niantic, Mellanox, Fortville. - From Matt: When testing, ensure that VM traffic is traversing the different data networks configured on each of the NICs. TCs need to be adjusted accordingly. - No need to run TCs on virtual env as StarlingX only supports ovs-dpdk on baremetal. - No need to run all TCs on baremetal duplex; only a small subset is enough. Focus on running all TCs on multi-node with config 4; exercising the different NICs - TC steps included in https://drive.google.com/file/d/1JAJ_cGTRkFCePIX8UCtV9TLydBiidUsF/view are outdated. They need to be updated. - Failures - NET_PT_VLAN_01 - This is likely due to not having port trunking enabled in the neutron agents. helm-override is required. Action: Chenjie to investigate and provide steps. Add steps to the networking wiki above. - NET_BASIC_NT_06 - Needs to be retested with VM flavor ("hw:mem_page_size=large"). Action: Elio to arrange re-test - NET_DPDK_OVS_05 - Steps need to use helm-override (not service parameters). Needs to be retested with correct VM flavor. 
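To confirm Bart's description on a live system, the tiller deployment itself can be inspected; a sketch assuming "helm init" installed tiller into kube-system (its default namespace):

kubectl get deployment tiller-deploy -n kube-system
# -o wide shows which controller the single replica is currently scheduled on
kubectl get pods -n kube-system -o wide | grep tiller-deploy

A single replica is expected; after a controller failure or lock it should simply be rescheduled on the other controller, as Bart describes.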
Action: Elio to arrange re-test - Feature Testing for ovs-dpdk firewall - Testplan sent; looking for feedback - Testing not started yet From cristopher.j.lemus.contreras at intel.com Thu Apr 11 18:03:01 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Thu, 11 Apr 2019 18:03:01 +0000 Subject: [Starlingx-discuss] [helm-charts] Name of auto-generated helm-charts Message-ID: Hello, Today we found that the usual helm-chat-manifest.tgz file was not generated and, instead, 4 new helm charts are available: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190411T140142Z/outputs/helm-charts/ helm-charts-manifest-centos-dev-latest.tgz helm-charts-manifest-centos-dev-versioned.tgz helm-charts-manifest-centos-stable-latest.tgz helm-charts-manifest-centos-stable-versioned.tgz So, a couple of questions: 1. Which file should be used to do the sanity tests? I assume that we should use: helm-charts-manifest-centos-stable-latest.tgz , kindly confirm. 2. Is it possible to notify these changes beforehand to the whole comunity? This change breaks the automation that we have in place and could also affect anybody that has automated the download. It might be a minor change, but it can be easily avoided with a quick notification. Thanks in advance! Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Thu Apr 11 18:15:27 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 11 Apr 2019 18:15:27 +0000 Subject: [Starlingx-discuss] [helm-charts] Name of auto-generated helm-charts In-Reply-To: References: Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA47654D@ALA-MBD.corp.ad.wrs.com> Hi Cristopher, I’d recommend the test teams use the helm-charts-manifest-centos-stable-versioned.tgz file, which uses specific image versions in the manifest, as opposed to “latest”. This would allow for better accuracy in issue reporting, as well as reproducibility, and should ensure you’re testing with images corresponding to the load you’re testing. We’ll try to do a better job of communicating such changes in the future. Cheers, Don. From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Thursday, April 11, 2019 2:03 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [helm-charts] Name of auto-generated helm-charts Hello, Today we found that the usual helm-chat-manifest.tgz file was not generated and, instead, 4 new helm charts are available: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190411T140142Z/outputs/helm-charts/ helm-charts-manifest-centos-dev-latest.tgz helm-charts-manifest-centos-dev-versioned.tgz helm-charts-manifest-centos-stable-latest.tgz helm-charts-manifest-centos-stable-versioned.tgz So, a couple of questions: 1. Which file should be used to do the sanity tests? I assume that we should use: helm-charts-manifest-centos-stable-latest.tgz , kindly confirm. 2. Is it possible to notify these changes beforehand to the whole comunity? This change breaks the automation that we have in place and could also affect anybody that has automated the download. It might be a minor change, but it can be easily avoided with a quick notification. Thanks in advance! Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From yang.liu at windriver.com Thu Apr 11 18:30:09 2019 From: yang.liu at windriver.com (Liu, Yang) Date: Thu, 11 Apr 2019 18:30:09 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4D7564@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D7564@ALA-MBD.corp.ad.wrs.com> Message-ID: <19C65A6E92EA384D809B1772130CD7F8621A6631@ALA-MBD.corp.ad.wrs.com> Yes it is fixed. I used the helm charts that came with the build just FYI. http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T013000Z/outputs/helm-charts/ BR, Yang From: Khalil, Ghada Sent: April-10-19 9:25 PM To: Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Maria, Regarding https://bugs.launchpad.net/starlingx/+bug/1821938, Yang successfully verified that the issue is resolved using the same load you list below (see her note in the LP). Can you confirm that you are using the stable docker images built yesterday? The fix should be included in the nova image. Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, April 10, 2019 8:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-10 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] -------------------------------------------------------------------------------- No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From elio.martinez.monroy at intel.com Tue Apr 9 18:44:19 2019 From: elio.martinez.monroy at intel.com (Martinez Monroy, Elio) Date: Tue, 9 Apr 2019 18:44:19 +0000 Subject: [Starlingx-discuss] Association of unused OSDS to storage Message-ID: <1466AF2176E6F040BD63860D0A241BBD46CB85ED@FMSMSX109.amr.corp.intel.com> Hi, The scenario requires the association of unused OSDs to a storage tier. For this, the test creates a storage tier in a ceph cluster, which I do and consult this way in CLI: [cid:image002.png at 01D4EEDA.52761AA0] Next, the test requires the list of OSDs already created in association with a disk, of which I then take a disk with enough available space to create an OSD into: [cid:image003.png at 01D4EEDA.52761AA0] However, in Horizon, I am able to see and create partitions and OSDs in the host detail page for the host that I am modifying here: [cid:image004.png at 01D4EEDA.52761AA0] But the only place where I am able to see the storage tiers and clusters is in the Storage Overview tab under Platform, and I am unable to modify anything from it within this page: [cid:image005.png at 01D4EEDA.52761AA0] My question would be regarding where else I could find the option to create the storage tier within the ceph_cluster in Horizon, and if there is no way to do it from Horizon, if I could create the tier through CLI commands and then update the OSDs through Horizon instead. [cid:image001.png at 01CF8BAC.3B4C5DD0] Martinez Monroy, Elio. QA Engineer. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 4914 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 18070 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 34759 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 78901 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.png Type: image/png Size: 38574 bytes Desc: image005.png URL: From Ghada.Khalil at windriver.com Tue Apr 9 21:20:21 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Tue, 9 Apr 2019 21:20:21 +0000 Subject: [Starlingx-discuss] OVS-DPDK Upgrade Testing In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4D1AA8@ALA-MBD.corp.ad.wrs.com> <72A55890-B007-4E93-A74A-291C725B158E@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA563A@FMSMSX109.amr.corp.intel.com> <608E4D93-FDFF-4F00-A166-A89EA6683B90@windriver.com> <1466AF2176E6F040BD63860D0A241BBD46CA56BA@FMSMSX109.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4D3D81@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D4194@ALA-MBD.corp.ad.wrs.com> <69276584-6FF8-458A-AF4F-EC073C1856E9@windriver.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D6EC5@ALA-MBD.corp.ad.wrs.com> Thanks Chenjie for confirming. 
I added the following note in the container installation guide[0]: IMPORTANT: When deploying OVS-DPDK, VMs must be configured to use a flavor with property: hw:mem_page_size=large [0] https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Configure_the_vswitch_type_.28optional.29 Regards, Ghada From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Monday, April 08, 2019 3:39 AM To: Peters, Matt; Liu, ZhipengS; He, Yongli; Friesen, Chris Cc: 'starlingx-discuss at lists.starlingx.io'; Khalil, Ghada; Zhao, Forrest; Rowsell, Brent; Gauld, James; Le, Huifeng; Martinez Monroy, Elio; Perez, Ricardo O; Cabrales, Ada; Lin, Shuicheng Subject: RE: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hi all, After setting property "hw:mem_page_size=large" to flavor, the newly created VM can get IP from DHCP and ping other VM successfully. And NUMA related sections exist in the domain XML file (new domain XML mem_page_size.xml is attached). My steps are listed in the bug report: https://bugs.launchpad.net/starlingx/+bug/1820378 I think it’s better to modify the installation guide to include how to create VM on different environment (OVS/OVSDPDK). Please let me know your idea. Best Regards Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Thursday, April 4, 2019 3:21 AM To: Xu, Chenjie >; Liu, ZhipengS >; He, Yongli > Cc: 'starlingx-discuss at lists.starlingx.io' >; Khalil, Ghada >; Zhao, Forrest >; Rowsell, Brent >; Gauld, James >; Le, Huifeng >; Martinez Monroy, Elio >; Perez, Ricardo O >; Cabrales, Ada >; Lin, Shuicheng > Subject: Re: [Starlingx-discuss] OVS-DPDK Upgrade Testing Hello Folks, Thanks to Chris Friesen for point this out, but we believe the issues you are experiencing is due to the requirement for guests to be backed by huge pages to operate with OVS-DPDK vhost-user based ports/interfaces. The master (default) behavior for the latest nova will default to 4K pages for the guest, but this is not compatible with OVS-DPDK. The guests must be configured to use a flavor that has the property hw:mem_page_size=large set. You can follow this link to read more about the requirements on the guests for OVS-DPDK: https://docs.openstack.org/neutron/rocky/admin/config-ovs-dpdk.html “vhost-user requires file descriptor-backed shared memory. Currently, the only way to request this is by requesting large pages. This is why instances spawned on hosts with OVS-DPDK must request large pages”. Hope this helps. -Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Thu Apr 11 03:12:07 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Thu, 11 Apr 2019 03:12:07 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4D75A6@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D7564@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D75A6@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Ghada, Logs on nova pod are not showing any ERROR, just this warning (log shows that is fixed on the next line): 2019-04-10 12:52:47,173.173 229781 WARNING nova.conductor.api [req-c557fabd-b379-4dda-b64e-981e71b530bf - - - - -] Timed out waiting for nova-conductor. Is it running? Or did this service start before nova-conductor? 
Reattempting establishment of nova-conductor connection...: MessagingTimeout: Timed out waiting for a reply to message ID af6fd23a7b6a476985949f42947f0d4e 2019-04-10 12:52:47,192.192 229781 INFO nova.conductor.api [req-c557fabd-b379-4dda-b64e-981e71b530bf - - - - -] nova-conductor connection established successfully I’m not sure about the output of nova hypervisor-list DEBUG (shell:812) internalURL endpoint for compute service in RegionOne region not found Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 810, in main OpenStackComputeShell().main(argv) File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 678, in main api_version = api_versions.discover_version(self.cs, api_version) File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 261, in discover_version client) File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 242, in _get_server_version_range version = client.versions.get_current() File "/usr/lib/python2.7/site-packages/novaclient/v2/versions.py", line 70, in get_current return self._get_current() File "/usr/lib/python2.7/site-packages/novaclient/v2/versions.py", line 52, in _get_current url = "%s" % self.api.client.get_endpoint() File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 271, in get_endpoint return self.session.get_endpoint(auth or self.auth, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 1139, in get_endpoint return auth.get_endpoint(self, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 380, in get_endpoint allow_version_hack=allow_version_hack, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 279, in get_endpoint_data service_name=service_name) File "/usr/lib/python2.7/site-packages/keystoneauth1/access/service_catalog.py", line 462, in endpoint_data_for raise exceptions.EndpointNotFound(msg) EndpointNotFound: internalURL endpoint for compute service in RegionOne region not found Regards, Cristopher Lemus From: "Khalil, Ghada" Date: Wednesday, April 10, 2019 at 9:50 PM To: "Lemus Contreras, Cristopher J" , "Perez Ibarra, Maria G" , "starlingx-discuss at lists.starlingx.io" Cc: "Liu, Yang" Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Christopher, These images look correct to me, but I’ll ask Yang to confirm the images she tested with tomorrow. Can you confirm that nova is still returning the same error as was originally reported in https://bugs.launchpad.net/starlingx/+bug/1823275 nova is returning this error: 2019-04-04 16:22:30,902.902 168396 ERROR nova.compute.manager [req-b3bffaba-a62a-4e56-be63-3362c51a36df - - - - -] Error updating resources for node compute-0.: PciDeviceNotFoundById: PCI device 0000:b3:02.3 not found Can you also list the output of the nova hypervisor cmd on the compute node? 
nova hypervisor-list Thanks, Ghada From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Wednesday, April 10, 2019 10:40 PM To: Khalil, Ghada; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Ghada, These are the starlingx nova images used on the servers where the sanity was executed: controller-0:~# docker images |grep starlingx/stx-nova 192.168.100.60/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.204.2:9001/docker.io/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.100.60/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB 192.168.204.2:9001/docker.io/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB controller-0:~# For stx-nova, id: 6c8a3a356110 and for stx-nova-api-proxy, id: c5853883d561 I manually verified the images on our local registry, and tried to download the latest version: [root at registry ~]# docker pull starlingx/stx-nova:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova Digest: sha256:e9fecb6998ad0cf0a6621a8c5c2422378d0972c73ed80b636dcbd4bb5183794e Status: Image is up to date for starlingx/stx-nova:master-centos-stable-latest [root at registry ~]# docker pull starlingx/stx-nova-api-proxy:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova-api-proxy Digest: sha256:06c9bf3546eeec5025e62ccf00c24dbaa20e6d79614d89709c0a71ffa639c5f8 Status: Image is up to date for starlingx/stx-nova-api-proxy:master-centos-stable-latest [root at registry ~]# docker images |grep starlingx/stx-nova registry.zpn.intel.com/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB registry.zpn.intel.com/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB Images are up to date, and the timestamp matches what is logged in dockerhub: https://hub.docker.com/r/starlingx/stx-nova/tags . I assume that those are the images according to the comments on https://bugs.launchpad.net/starlingx/+bug/1821938, are there other nova images that we need to pull? I did a quick check and it looks like all of our local images (the ones deployed to the sanity servers) are updated. Thanks & Regards, Cristopher Lemus From: "Khalil, Ghada" > Date: Wednesday, April 10, 2019 at 8:27 PM To: "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Cc: "Liu, Yang" > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Maria, Regarding https://bugs.launchpad.net/starlingx/+bug/1821938, Yang successfully verified that the issue is resolved using the same load you list below (see her note in the LP). Can you confirm that you are using the stable docker images built yesterday? The fix should be included in the nova image. 
Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, April 10, 2019 8:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-10 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers – Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO – Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard – Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] -------------------------------------------------------------------------------- No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Apr 11 18:10:11 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 11 Apr 2019 18:10:11 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4D7564@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D75A6@ALA-MBD.corp.ad.wrs.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4D77F2@ALA-MBD.corp.ad.wrs.com> This looks like a different issue unrelated to the original issue reported in https://bugs.launchpad.net/starlingx/+bug/1823275 https://bugs.launchpad.net/starlingx/+bug/1821938 From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Wednesday, April 10, 2019 11:12 PM To: Khalil, Ghada; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Ghada, Logs on nova pod are not showing any ERROR, just this warning (log shows that is fixed on the next line): 2019-04-10 12:52:47,173.173 229781 WARNING nova.conductor.api [req-c557fabd-b379-4dda-b64e-981e71b530bf - - - - -] Timed out waiting for nova-conductor. Is it running? Or did this service start before nova-conductor? 
Reattempting establishment of nova-conductor connection...: MessagingTimeout: Timed out waiting for a reply to message ID af6fd23a7b6a476985949f42947f0d4e 2019-04-10 12:52:47,192.192 229781 INFO nova.conductor.api [req-c557fabd-b379-4dda-b64e-981e71b530bf - - - - -] nova-conductor connection established successfully I’m not sure about the output of nova hypervisor-list DEBUG (shell:812) internalURL endpoint for compute service in RegionOne region not found Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 810, in main OpenStackComputeShell().main(argv) File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 678, in main api_version = api_versions.discover_version(self.cs, api_version) File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 261, in discover_version client) File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 242, in _get_server_version_range version = client.versions.get_current() File "/usr/lib/python2.7/site-packages/novaclient/v2/versions.py", line 70, in get_current return self._get_current() File "/usr/lib/python2.7/site-packages/novaclient/v2/versions.py", line 52, in _get_current url = "%s" % self.api.client.get_endpoint() File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 271, in get_endpoint return self.session.get_endpoint(auth or self.auth, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 1139, in get_endpoint return auth.get_endpoint(self, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 380, in get_endpoint allow_version_hack=allow_version_hack, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 279, in get_endpoint_data service_name=service_name) File "/usr/lib/python2.7/site-packages/keystoneauth1/access/service_catalog.py", line 462, in endpoint_data_for raise exceptions.EndpointNotFound(msg) EndpointNotFound: internalURL endpoint for compute service in RegionOne region not found Regards, Cristopher Lemus From: "Khalil, Ghada" > Date: Wednesday, April 10, 2019 at 9:50 PM To: "Lemus Contreras, Cristopher J" >, "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Cc: "Liu, Yang" > Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Christopher, These images look correct to me, but I’ll ask Yang to confirm the images she tested with tomorrow. Can you confirm that nova is still returning the same error as was originally reported in https://bugs.launchpad.net/starlingx/+bug/1823275 nova is returning this error: 2019-04-04 16:22:30,902.902 168396 ERROR nova.compute.manager [req-b3bffaba-a62a-4e56-be63-3362c51a36df - - - - -] Error updating resources for node compute-0.: PciDeviceNotFoundById: PCI device 0000:b3:02.3 not found Can you also list the output of the nova hypervisor cmd on the compute node? 
nova hypervisor-list Thanks, Ghada From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Wednesday, April 10, 2019 10:40 PM To: Khalil, Ghada; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Ghada, These are the starlingx nova images used on the servers where the sanity was executed: controller-0:~# docker images |grep starlingx/stx-nova 192.168.100.60/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.204.2:9001/docker.io/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.100.60/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB 192.168.204.2:9001/docker.io/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB controller-0:~# For stx-nova, id: 6c8a3a356110 and for stx-nova-api-proxy, id: c5853883d561 I manually verified the images on our local registry, and tried to download the latest version: [root at registry ~]# docker pull starlingx/stx-nova:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova Digest: sha256:e9fecb6998ad0cf0a6621a8c5c2422378d0972c73ed80b636dcbd4bb5183794e Status: Image is up to date for starlingx/stx-nova:master-centos-stable-latest [root at registry ~]# docker pull starlingx/stx-nova-api-proxy:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova-api-proxy Digest: sha256:06c9bf3546eeec5025e62ccf00c24dbaa20e6d79614d89709c0a71ffa639c5f8 Status: Image is up to date for starlingx/stx-nova-api-proxy:master-centos-stable-latest [root at registry ~]# docker images |grep starlingx/stx-nova registry.zpn.intel.com/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB registry.zpn.intel.com/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB Images are up to date, and the timestamp matches what is logged in dockerhub: https://hub.docker.com/r/starlingx/stx-nova/tags . I assume that those are the images according to the comments on https://bugs.launchpad.net/starlingx/+bug/1821938, are there other nova images that we need to pull? I did a quick check and it looks like all of our local images (the ones deployed to the sanity servers) are updated. Thanks & Regards, Cristopher Lemus From: "Khalil, Ghada" > Date: Wednesday, April 10, 2019 at 8:27 PM To: "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Cc: "Liu, Yang" > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Maria, Regarding https://bugs.launchpad.net/starlingx/+bug/1821938, Yang successfully verified that the issue is resolved using the same load you list below (see her note in the LP). Can you confirm that you are using the stable docker images built yesterday? The fix should be included in the nova image. 
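Another cross-check against Docker Hub is to list the digests of the locally pulled images; the sha256 values should line up with what the docker pull output and the tag page report. A minimal sketch, run on whichever host holds the images:

    docker images --digests | grep starlingx/stx-nova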
Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, April 10, 2019 8:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-10 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers – Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO – Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard – Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] -------------------------------------------------------------------------------- No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Thu Apr 11 19:21:22 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 11 Apr 2019 21:21:22 +0200 Subject: [Starlingx-discuss] PTG lunch slot presentation Message-ID: Hi, I’m reaching out to you about an opportunity to give a short 5-minute overview of StarlingX at the PTG during one of the lunch slots. It is a great opportunity to further socialize the project amongst the developer community and update them with the latest activities and roadmap items to encourage collaboration. Is there anyone in the community who will attend the PTG and would like to present? Bruce offered slides for the slot which should make the preparation work easier. :) Please let me know if you have any questions. Thanks, Ildikó From build.starlingx at gmail.com Thu Apr 11 20:21:53 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Apr 2019 16:21:53 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 168 - Failure! 
Message-ID: <1901327192.141.1555014114029.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 168 Status: Failure Timestamp: 20190411T202150Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190411T140142Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190411T140142Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190411T140142Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190411T140142Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190411T140142Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From build.starlingx at gmail.com Thu Apr 11 20:21:56 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 11 Apr 2019 16:21:56 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 81 - Failure! Message-ID: <690869392.144.1555014117711.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 81 Status: Failure Timestamp: 20190411T184323Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190411T140142Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190411T140142Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190411T140142Z/logs MASTER_BUILD_NUMBER: 59 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190411T140142Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos PUBLISH_TIMESTAMP: 20190411T140142Z DOCKER_BUILD_ID: jenkins-master-20190411T140142Z-builder TIMESTAMP: 20190411T140142Z OS_VERSION: 7.6.1810 BUILD_STREAM: dev PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190411T140142Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190411T140142Z/outputs From yang.liu at windriver.com Thu Apr 11 18:39:48 2019 From: yang.liu at windriver.com (Liu, Yang) Date: Thu, 11 Apr 2019 18:39:48 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4D77F2@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D7564@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D75A6@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D77F2@ALA-MBD.corp.ad.wrs.com> Message-ID: <19C65A6E92EA384D809B1772130CD7F8621A666B@ALA-MBD.corp.ad.wrs.com> Are you able to run any nova cmd Christopher? 
$ source /etc/platform/openrc ; system host-device-list compute-0; export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3; nova hypervisor-list +------------------+--------------+----------+-----------+-----------+---------------------------+---------------------------------+----------------------------------------+-----------+---------+ | name | address | class id | vendor id | device id | class name | vendor name | device name | numa_node | enabled | +------------------+--------------+----------+-----------+-----------+---------------------------+---------------------------------+----------------------------------------+-----------+---------+ | pci_0000_08_00_0 | 0000:08:00.0 | 0b4000 | 8086 | 0435 | Co-processor | Intel Corporation | DH895XCC Series QAT | 0 | True | | pci_0000_0c_00_0 | 0000:0c:00.0 | 030000 | 102b | 0522 | VGA compatible controller | Matrox Electronics Systems Ltd. | MGA G200e [Pilot] ServerEngines (SEP1) | 0 | True | +------------------+--------------+----------+-----------+-----------+---------------------------+---------------------------------+----------------------------------------+-----------+---------+ +--------------------------------------+---------------------+-------+---------+ | ID | Hypervisor hostname | State | Status | +--------------------------------------+---------------------+-------+---------+ | 6136b80d-e2a9-4d34-97ce-d3818b305a7f | compute-0 | up | enabled | | d87821b4-4575-4238-b97c-b07589465127 | compute-2 | up | enabled | | f620e8ae-7353-4fbb-ad0d-f427b6c8bd00 | compute-3 | up | enabled | | a966bf58-6ce8-42d4-be8b-ea4453305644 | compute-1 | up | enabled | +--------------------------------------+---------------------+-------+---------+ Images are the same btw. 192.168.204.2:9001/docker.io/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 2 days ago 1.18GB starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 2 days ago 1.18GB 192.168.204.2:9001/docker.io/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 2 days ago 589MB starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 2 days ago 589MB BR, Yang From: Khalil, Ghada Sent: April-11-19 2:10 PM To: Lemus Contreras, Cristopher J; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 This looks like a different issue unrelated to the original issue reported in https://bugs.launchpad.net/starlingx/+bug/1823275 https://bugs.launchpad.net/starlingx/+bug/1821938 From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Wednesday, April 10, 2019 11:12 PM To: Khalil, Ghada; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Ghada, Logs on nova pod are not showing any ERROR, just this warning (log shows that is fixed on the next line): 2019-04-10 12:52:47,173.173 229781 WARNING nova.conductor.api [req-c557fabd-b379-4dda-b64e-981e71b530bf - - - - -] Timed out waiting for nova-conductor. Is it running? Or did this service start before nova-conductor? 
Reattempting establishment of nova-conductor connection...: MessagingTimeout: Timed out waiting for a reply to message ID af6fd23a7b6a476985949f42947f0d4e 2019-04-10 12:52:47,192.192 229781 INFO nova.conductor.api [req-c557fabd-b379-4dda-b64e-981e71b530bf - - - - -] nova-conductor connection established successfully I’m not sure about the output of nova hypervisor-list DEBUG (shell:812) internalURL endpoint for compute service in RegionOne region not found Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 810, in main OpenStackComputeShell().main(argv) File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 678, in main api_version = api_versions.discover_version(self.cs, api_version) File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 261, in discover_version client) File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 242, in _get_server_version_range version = client.versions.get_current() File "/usr/lib/python2.7/site-packages/novaclient/v2/versions.py", line 70, in get_current return self._get_current() File "/usr/lib/python2.7/site-packages/novaclient/v2/versions.py", line 52, in _get_current url = "%s" % self.api.client.get_endpoint() File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 271, in get_endpoint return self.session.get_endpoint(auth or self.auth, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 1139, in get_endpoint return auth.get_endpoint(self, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 380, in get_endpoint allow_version_hack=allow_version_hack, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 279, in get_endpoint_data service_name=service_name) File "/usr/lib/python2.7/site-packages/keystoneauth1/access/service_catalog.py", line 462, in endpoint_data_for raise exceptions.EndpointNotFound(msg) EndpointNotFound: internalURL endpoint for compute service in RegionOne region not found Regards, Cristopher Lemus From: "Khalil, Ghada" > Date: Wednesday, April 10, 2019 at 9:50 PM To: "Lemus Contreras, Cristopher J" >, "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Cc: "Liu, Yang" > Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Christopher, These images look correct to me, but I’ll ask Yang to confirm the images she tested with tomorrow. Can you confirm that nova is still returning the same error as was originally reported in https://bugs.launchpad.net/starlingx/+bug/1823275 nova is returning this error: 2019-04-04 16:22:30,902.902 168396 ERROR nova.compute.manager [req-b3bffaba-a62a-4e56-be63-3362c51a36df - - - - -] Error updating resources for node compute-0.: PciDeviceNotFoundById: PCI device 0000:b3:02.3 not found Can you also list the output of the nova hypervisor cmd on the compute node? 
nova hypervisor-list Thanks, Ghada From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Wednesday, April 10, 2019 10:40 PM To: Khalil, Ghada; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Ghada, These are the starlingx nova images used on the servers where the sanity was executed: controller-0:~# docker images |grep starlingx/stx-nova 192.168.100.60/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.204.2:9001/docker.io/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.100.60/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB 192.168.204.2:9001/docker.io/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB controller-0:~# For stx-nova, id: 6c8a3a356110 and for stx-nova-api-proxy, id: c5853883d561 I manually verified the images on our local registry, and tried to download the latest version: [root at registry ~]# docker pull starlingx/stx-nova:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova Digest: sha256:e9fecb6998ad0cf0a6621a8c5c2422378d0972c73ed80b636dcbd4bb5183794e Status: Image is up to date for starlingx/stx-nova:master-centos-stable-latest [root at registry ~]# docker pull starlingx/stx-nova-api-proxy:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova-api-proxy Digest: sha256:06c9bf3546eeec5025e62ccf00c24dbaa20e6d79614d89709c0a71ffa639c5f8 Status: Image is up to date for starlingx/stx-nova-api-proxy:master-centos-stable-latest [root at registry ~]# docker images |grep starlingx/stx-nova registry.zpn.intel.com/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB registry.zpn.intel.com/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB Images are up to date, and the timestamp matches what is logged in dockerhub: https://hub.docker.com/r/starlingx/stx-nova/tags . I assume that those are the images according to the comments on https://bugs.launchpad.net/starlingx/+bug/1821938, are there other nova images that we need to pull? I did a quick check and it looks like all of our local images (the ones deployed to the sanity servers) are updated. Thanks & Regards, Cristopher Lemus From: "Khalil, Ghada" > Date: Wednesday, April 10, 2019 at 8:27 PM To: "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Cc: "Liu, Yang" > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Maria, Regarding https://bugs.launchpad.net/starlingx/+bug/1821938, Yang successfully verified that the issue is resolved using the same load you list below (see her note in the LP). Can you confirm that you are using the stable docker images built yesterday? The fix should be included in the nova image. 
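A note on the EndpointNotFound traceback quoted earlier in this thread: with the containerized stx-openstack deployment, the nova endpoints are registered in the Keystone instance running inside the cluster, not in the platform Keystone that the default credentials point at, so a plain nova hypervisor-list reports that no compute endpoint exists in RegionOne. The command Yang runs at the top of this message works because it overrides OS_AUTH_URL first; the minimal form of that workaround is roughly:

    source /etc/platform/openrc
    export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3
    nova hypervisor-list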
Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, April 10, 2019 8:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-10 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers – Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO – Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard – Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] -------------------------------------------------------------------------------- No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Thu Apr 11 22:19:00 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 11 Apr 2019 18:19:00 -0400 Subject: [Starlingx-discuss] Migrating git repos to OpenDev In-Reply-To: <871s3itl0p.fsf@meyer.lemoncheese.net> References: <871s3itl0p.fsf@meyer.lemoncheese.net> Message-ID: On Thu, Mar 7, 2019, at 1:47 PM, James E. Blair wrote: > Hi, > > As discussed in November[1], the OpenStack project infrastructure is > being rebranded as "OpenDev" to better support a wider community of > projects. > > We are nearly ready to perform the part of this transition with the > largest impact: moving the authoritative git repositories for existing > projects. > > In this email, I'd like to introduce the new hosting system we are > preparing, discuss the transition, and invite projects to work with us > on the logistics of the change. > > Gerrit > ====== > > Gerrit is the core of our system and it will remain so in OpenDev. As > part of this move, we will rename the gerrit server from > review.openstack.org to review.opendev.org. As part of the transition, > we will automatically merge appropriate changes to all branches of all > repositories updating .gitreview and Zuul configuration files. Any > further changes (README files, etc.) we expect to be made by individual > project contributors. 
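For illustration, the automatically updated .gitreview in each repository would end up looking roughly like the snippet below; the project path here is a placeholder, not one named in this thread:

    [gerrit]
    host=review.opendev.org
    port=29418
    project=starlingx/example-repo.git

This is the file git-review reads to locate the Gerrit server when pushing changes, so once the update is merged the usual review workflow keeps working against the new hostname.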
> > Repository Browsing > =================== > > Currently our canonical *public* repository system is the cgit server at > https://git.openstack.org/ (and git.airshipit.org, git.starlingx.io, and > git.zuul-ci.org). This is a load balanced cluster of several servers > which is designed to handle all the public git repository traffic, as it > scales much better than Gerrit (and has a more friendly domain name). > From a technical standpoint, it's excellent, but its usability could be > improved. > > Therefore, as part of this transition, we will replace the cgit servers > with a new system based on Gitea. Gitea is a complete development > collaboration system, but it's very flexible and will allow us to > disable components which we aren't using. We will operate it in a > read-only configuration where it will act as the public mirror for > Gerrit. The advantages it has over the current system are: > > * Shorter domain name in project URLs: > https://git.openstack.org/openstack/nova vs > https://opendev.org/openstack/nova > * Clone and browsing URLs are the same (with cgit, the browsing URL has > an extra path component) > * More visually pleasing code browsing > * Integrated code searching > * Ability to highlight multiple lines in links > > When we perform the transition we will install redirects from > git.openstack.org (and the other git sites) to opendev.org, and will > maintain those redirects for the foreseeable future. We will construct > them so that even existing deep links to individual files in individual > commits to cgit will redirect to the correct location on opendev.org. > > This system is up and running now with a live mirror of data from > Gerrit, and you can start testing it out today at https://opendev.org/ > > Please let us know if you encounter any problems. > > If you would like to read more about the design of this system and the > transition, see the infra-spec[2]. > > GitHub > ====== > > Currently all OpenStack projects are replicated to GitHub. We do not > plan on changing that during the transition, however, any projects > outside of the openstack*/ namespaces will not automatically be > replicated to GitHub, and we do not plan on adding that in the future. > We do, however, support projects using Zuul to run post-merge jobs to > push updates to GitHub or any other third-party mirrors with their own > credentials. We will be happy to work with anyone interested in that to > help set up jobs to do so. > > We are adopting this approach so that individual projects can have more > control over how they are represented in social media, and to give us > more flexibility in supporting our own organizational namespaces on > OpenDev without assuming they map directly to GitHub. > > Eventually we plan on moving the OpenStack project to that system as > well and retiring direct replication from Gerrit to GitHub completely. > But we will defer that work until after this transition. > > Logistics > ========= > > We can prepare much of the system in advance (as we have for the hosting > system on opendev.org), but the actual transition and renaming of the > Gerrit server will need to happen at once during an outage window. We > need to schedule that outage and begin preparing for it. > > Since all of the project git URLs are going to change (to replace > git.openstack.org with opendev.org and review.openstack.org with > review.opendev.org), we can additionally take the opportunity to > reorganize projects into different organizations. 
> > For example, during the transition we will rename Zuul, and it's > associated projects, from the "openstack-infra" org to "zuul". So their > new names will be "zuul/zuul", "zuul/nodepool", etc. > > This is an excellent time for the rest of the OpenStack Foundation > pilot projects to do the same. > > If the OpenStack project desires this, it would also be a good time to > move unofficial projects out of the openstack/ namespace. > > Therefore, we need your help: > > Action Items > ============ > > We need each of the following projects: > > * OpenStack > * Airship > * StarlingX > * Zuul > > To nominate a single point of contact to work with us on the transition. > It would be helpful for that person to attend the next (and possibly > next several) openstack infra team meetings in IRC [3]. We will work > with those people on scheduling the transition, as well as finalizing > the list of projects which should be renamed as part of the transition. > > If you manage an unofficial project and would like to take the > opportunity to move or rename your project, please add it to this > ethercalc[4]. > > [1] > http://lists.openstack.org/pipermail/openstack-dev/2018-November/136403.html > [2] > http://specs.openstack.org/openstack-infra/infra-specs/specs/opendev-gerrit.html > [3] http://eavesdrop.openstack.org/#Project_Infrastructure_Team_Meeting > [4] https://ethercalc.openstack.org/opendev-transition We've made good progress in preparing this change and are still on track to do this Friday April 19, 2019. Our project liasons have been drawing up lists of projects to rename during the outage. One big thing to keep in mind is unofficial OpenStack projects will no longer be in the "openstack" namespace. They will be placed in the 'x/' namespace instead which intends to indicate no endorsement or special ownership. One side effect of this change (as noted in Jim's earlier email) is that we will stop replicating projects that move out of the openstack namespace. Projects that wish to be replicated to Github or anywhere else they like can do so following the steps that David Moreau-Simard put together for us here [5]. This transition is likely to be a bit bumpy particularly at the start. We'll be around after the transition to help fix unexpected errors and are likely to spend a fair bit of time at the PTG improving things as well. Finally, to be extra clear, we intend to put http redirects in place so that all your old http(s) urls continue to work. Fungi has set this up for testing with details here [6] if you would like to ensure your urls redirect properly. [5] http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005007.html [6] http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004921.html Thank you for your patience and feel free to reach out with any questions you might have. Clark From dmsimard at redhat.com Thu Apr 11 21:48:39 2019 From: dmsimard at redhat.com (David Moreau Simard) Date: Thu, 11 Apr 2019 17:48:39 -0400 Subject: [Starlingx-discuss] New Zuul job to replicate a project's git repository to a remote git server Message-ID: Hi, It is now possible for projects to replicate their git repository to a custom location by inheriting from the 'upload-git-mirror' job provided by Zuul. This job wraps around the 'upload-git-mirror' Ansible role that is part of the zuul-jobs library [1]. In order to use this job, you must supply a secret in the following format: === - secret: name: data: user: host:
host_key: ssh_key: === The 'host_key' parameter can be retrieved from your known_hosts file or with a command like 'ssh-keyscan -H ' or 'ssh-keyscan -t rsa '. For example, the 'host_key' when pushing to GitHub would be, on a single line: github.com ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEAq2A7hRGmdnm9tUDbO9IDSwBK6TbQa+PXYPCPy6rbTrTtw7PHkccKrpp0yVhp5HdEIcKr6pLlVDBfOLX9QUsyCOV0wzfjIJNlGEYsdlLJizHhbn2mUjvSAHQqZETYP81eFzLQNnPHt4EVVUh7VfDESU84KezmD5QlWpXLmvU31/yMf+Se8xhHTvKSCZIFImWwoG6mbUoWf9nzpIoaSjB+weqqUUmpaaasXVal72J+UX2B+2RPW3RcT0eOzQgqlJL3RKrTJvdsjE3JEAvGq3lGHSZXy28G3skua2SmVi/w4yCE6gbODqnTWlg7+wC604ydGXA8VJiS5ap43JXiUFFAaQ== The 'ssh_key' parameter should be encrypted before being committed to the git repository. Zuul provides a tool for easily encrypting files such as SSH private keys and you can find more information about it in the documentation [2]. For example, encrypting a key for the "openstack/ara" project would look like this: === zuul/tools/encrypt_secret.py --infile /home/dmsimard/.ssh/ara_git_key --tenant openstack https://zuul.openstack.org openstack/ara === You would then use the secret in a job inheriting from 'upload-git-mirror' as such: === - job: name: -upload-git-mirror parent: upload-git-mirror description: Mirrors openstack/ to neworg/ vars: git_mirror_repository: neworg/ secrets: - name: git_mirror_credentials secret: pass-to-parent: true === Finally, the job must be set to run in your project's 'post' pipeline which is triggered every time a new commit is merged to the repository: === - project: check: jobs: # [...] gate: jobs: # [...] post: jobs: - -upload-git-mirror === Note that the replication would only begin *after* the change has merged, meaning that merging the addition of the post job would not trigger the post job itself immediately. The post job will only trigger the next time that a commit is merged. [1]: https://zuul-ci.org/docs/zuul-jobs/general-roles.html#role-upload-git-mirror [2]: https://zuul-ci.org/docs/zuul/user/encryption.html David Moreau Simard dmsimard = [irc, github, twitter] From maria.g.perez.ibarra at intel.com Thu Apr 11 23:44:06 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 11 Apr 2019 23:44:06 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190411 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-11 (link) Status: Green =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] - The failing tests are due to a synchronization issue, we are enabling a delay after application-apply is run within the test. - About virtual results we don't have results yet due to problems with helm-chart file. 
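On the synchronization note above: instead of a fixed delay, the test can poll the application status until it settles before launching instances. A rough sketch, assuming the usual application name stx-openstack; a real script would also add a timeout and a check for apply-failed:

    system application-apply stx-openstack
    # block until the status column reports "applied"
    until system application-list | grep stx-openstack | grep -q applied; do sleep 30; done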
For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From felipe_57 at live.com.mx Fri Apr 12 00:19:08 2019 From: felipe_57 at live.com.mx (Felipe de Jesus Ruiz Garcia) Date: Fri, 12 Apr 2019 00:19:08 +0000 Subject: [Starlingx-discuss] review Fix mtce-guest not rebuilt after mtce-common Message-ID: Hi community I would like receive feedback for the patch. https://review.openstack.org/#/c/634513/ Regards Pipo / Felipe Ruiz -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Apr 12 00:22:57 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 12 Apr 2019 00:22:57 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190411 In-Reply-To: References: Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F0CEEC@SHSMSX104.ccr.corp.intel.com> Nice to see that our Sanity is now turn to green again! From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Friday, April 12, 2019 7:44 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190411 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-11 (link) Status: Green =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] - The failing tests are due to a synchronization issue, we are enabling a delay after application-apply is run within the test. - About virtual results we don't have results yet due to problems with helm-chart file. For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jacky at linux.com Fri Apr 12 01:57:36 2019 From: jacky at linux.com (Jacky Chen) Date: Fri, 12 Apr 2019 09:57:36 +0800 Subject: [Starlingx-discuss] Starlingx HW compatibility list Message-ID: Hi All As new starter, I would like to know if any HW model have been tested by community user. If community have a form can be provided to everyone to give feedback which HW model and components(cpu, memory, raid, chipset, nic, etc...) are supported by pre-build iso. This could be very helpful for newbie, and we can rich the HW compatibility list from everyone contribution. Thanks https://docs.starlingx.io/installation_guide/latest/index.html -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From huang.shuquan at 99cloud.net Fri Apr 12 06:22:50 2019 From: huang.shuquan at 99cloud.net (Shuquan Huang) Date: Fri, 12 Apr 2019 14:22:50 +0800 Subject: [Starlingx-discuss] Distro.openstack meeting Apr 10 2019 In-Reply-To: <9A85D2917C58154C960D95352B22818BD071A92B@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD071A92B@fmsmsx123.amr.corp.intel.com> Message-ID: <42EE4264-26AC-43D1-8A69-C1D776CA86A5@99cloud.net> Hi Bruce, These 2 fixes have been backported to stein branch at 4/11. Let’s make sure stx stein sync up to the latest stable stein. https://review.openstack.org/#/c/649320/ https://review.openstack.org/#/c/649319/ Other fixes depends on “NUMA aware live migration”. After Gerry backport the patches from Artom, it should be fixed. We’ll validate it afterwards. On Apr 9, 2019, at 9:27 PM, Jones, Bruce E wrote: Meeting notes and agenda for the 4/10 meeting · We took a decision at the release planning meeting to integrate the partially complete NUMA live migration patches from Artom into the f/stein branch to de-risk the feature. Gerry to backport and add his fixes. Bill to get an update on the status of this. · Are there any other changes from the upstream list that are also that important? · Dean's email from 4/5: I just finished resetting the stx-nova repo [0] to track upstream nova: * the old master branch is now stx/old-master for reference * master branch is a snapshot of upstream master as of about 30 min ago * stable/stein branch is a snapshot of upstream stable/stein as of about 30 min ago * stx/stein is our working copy of stable/stein and where anything we backport should land. Big Note: I am thinking about keeping a policy of periodically rebasing stx/stein on stable/stein to keep a clear history as we move forward, making it easier to see what we have added. That possibly means doing it next week when the final stein tag is added. Thoughts? Force pushes can be inconvenient for developers but I am thinking the price may be worth the return on a wider scale. · Chris replied: I like the idea of rebasing periodically to keep our changes "on top". Rather than force-pushing, it might make sense to create a new branch for each of these rebases. That way we don't need to rewrite history. · We agreed that we would create a new branch every time we pick up a new change, picking up the new upstream every time (even with other changes). This requires a build change every time but is consistent with how we are handling other similar packages e.g. Ceph. New branches to be f/stein.1/.2/.3 etc... · Have the NUMA changes from upstream been backported to the branch? Any links to reviews or stories? Bill to provide from Gerry. · 99 Cloud sent an email update: AR Bruce to ping Eric on getting eyeballs on rdb disk reviews and a couple others that look ready to go Bruce to update the master spreadsheet from the email update Shuquan to check if the fixes for "Fix stale RequestSpec instance numa topology for live-migration" are in the Stein branch (or need to be backported) Dean to create a f/stein.1 branch for the NUMA live migration backport from Gerry. _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ildiko.vancsa at gmail.com Fri Apr 12 13:24:23 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Fri, 12 Apr 2019 15:24:23 +0200 Subject: [Starlingx-discuss] OSF Lounge - project demo space Message-ID: <205821B0-025F-410F-AF25-5F6CFC1382BD@gmail.com> Hi StarlingX Community, I’m reaching out to let you know that we will have an OpenStack Foundation lounge at the Open Infrastructure Summit in Denver where there will be a demo spot for OSF projects. We will have a sign-up sheet to share the space among the projects to show demos and hold office hours. Stay tuned for further information and let me know if you have further questions at the meantime. Thanks, Ildikó From Volker.Hoesslin at swsn.de Fri Apr 12 13:59:28 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Fri, 12 Apr 2019 13:59:28 +0000 Subject: [Starlingx-discuss] Error creating volume In-Reply-To: <1554990508.3600.36.camel@windriver.com> References: , <1554990508.3600.36.camel@windriver.com> Message-ID: strike! works like charm! thx! ________________________________________ Von: Michel Thebeau [michel.thebeau at windriver.com] Gesendet: Donnerstag, 11. April 2019 15:48 An: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] Error creating volume Hi Volker, I think you'll want to do 'controllerfs-modify' system help controllerfs-modify You should be able to do that from the horizon interface under system configuration. I'm not looking at the first public release though, so please excuse me if the information is dated. M On Wed, 2019-04-10 at 15:10 +0000, von Hoesslin, Volker wrote: > hi, > based on first public release i got this error: > > Error creating volume. Message from driver: Failed to copy image to > volume: Insufficient free space on /opt/img-conversions for image > download and conversion. > > this happens on create an new volume based on an qcow2 image with > ~10GB. have a look at controller and checkout mounting points, i can > see this: > > Filesystem Size Used Avail Use% > Mounted on > /dev/sda3 20G 8.8G 9.4G 49% / > devtmpfs 7.7G 0 7.7G 0% /dev > tmpfs 7.7G 496K 7.7G 1% > /dev/shm > tmpfs 7.7G 12M 7.7G 1% /run > tmpfs 7.7G 0 7.7G 0% > /sys/fs/cgroup > tmpfs 1.0G 152K 1.0G 1% /tmp > /dev/mapper/cgts--vg-img--conversions--lv 20G 45M 19G 1% > /opt/img-conversions > /dev/mapper/cgts--vg-gnocchi--lv 4.8G 52M 4.5G 2% > /opt/gnocchi > /dev/mapper/cgts--vg-scratch--lv 7.8G 36M 7.4G 1% > /scratch > /dev/mapper/cgts--vg-backup--lv 50G 53M 47G 1% > /opt/backups > /dev/mapper/cgts--vg-ceph--mon--lv 20G 143M 19G 1% > /var/lib/ceph/mon > /dev/mapper/cgts--vg-log--lv 7.6G 981M 6.3G 14% > /var/log > /dev/sda2 477M 96M 353M 22% > /boot > /dev/sda1 300M 8.7M 292M 3% > /boot/efi > /dev/drbd1 2.0G 19M 1.9G 1% > /var/lib/rabbitmq > /dev/drbd5 992M 2.6M 923M 1% > /opt/extension > /dev/drbd3 9.9G 15M 9.4G 1% > /opt/cgcs > /dev/drbd2 2.0G 7.6M 1.9G 1% > /opt/platform > /dev/drbd0 40G 223M 38G 1% > /var/lib/postgresql > > > so there are only 20GB free of space for conversions... any chance to > handle this problem? > > volker... 
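For reference, the controllerfs route Michel points at looks something like the following from a keystone_admin session on the active controller; this is only a sketch and the exact argument form can differ between releases, so check system help controllerfs-modify first:

    system controllerfs-list                        # current sizes, including img-conversions
    system controllerfs-modify img-conversions=40   # grow the conversions filesystem to 40 GiB

The resize only goes through if the cgts-vg volume group has enough free space, and the conversions filesystem has to hold both the downloaded qcow2 and the converted raw image, which is why the 20 GiB default was not enough for a ~10 GB qcow2.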
> _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Fri Apr 12 14:36:49 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 12 Apr 2019 09:36:49 -0500 Subject: [Starlingx-discuss] Distro.openstack meeting Apr 10 2019 In-Reply-To: <42EE4264-26AC-43D1-8A69-C1D776CA86A5@99cloud.net> References: <9A85D2917C58154C960D95352B22818BD071A92B@fmsmsx123.amr.corp.intel.com> <42EE4264-26AC-43D1-8A69-C1D776CA86A5@99cloud.net> Message-ID: On Fri, Apr 12, 2019 at 1:24 AM Shuquan Huang wrote: > These 2 fixes have been backported to stein branch at 4/11. Let’s make sure stx stein sync up to the latest stable stein. > https://review.openstack.org/#/c/649320/ > https://review.openstack.org/#/c/649319/ > > Other fixes depends on “NUMA aware live migration”. After Gerry backport the patches from Artom, it should be fixed. We’ll validate it afterwards. Thank you Shuquan. We have not determined _when_ to do these stable branch syncs yet. I do not see a PR yet for the NUMA backports so I am thinking about doing that now, but it may impact Gerry's work (I don't know his timetable). I am open to suggestions if we should set a schedule, or just trigger the sync based on either when we have something to add or upstream adds something. The latter may be more often than we want to do, so once a week? Or we just play it by ear is fine with me too... dt -- Dean Troyer dtroyer at gmail.com From maria.g.perez.ibarra at intel.com Thu Apr 11 23:28:07 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 11 Apr 2019 23:28:07 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4D77F2@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0BA4D7564@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D75A6@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D77F2@ALA-MBD.corp.ad.wrs.com> Message-ID: Hello Ghada, we double checked the error and it does not appear anymore. We’ve found that after running Application-apply, the system takes about 8 to 10 minutes to stabilize after that, instances can be created correctly. Regards Maria G. From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Thursday, April 11, 2019 1:10 PM To: Lemus Contreras, Cristopher J ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 This looks like a different issue unrelated to the original issue reported in https://bugs.launchpad.net/starlingx/+bug/1823275 https://bugs.launchpad.net/starlingx/+bug/1821938 From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Wednesday, April 10, 2019 11:12 PM To: Khalil, Ghada; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Ghada, Logs on nova pod are not showing any ERROR, just this warning (log shows that is fixed on the next line): 2019-04-10 12:52:47,173.173 229781 WARNING nova.conductor.api [req-c557fabd-b379-4dda-b64e-981e71b530bf - - - - -] Timed out waiting for nova-conductor. Is it running? Or did this service start before nova-conductor? 
Reattempting establishment of nova-conductor connection...: MessagingTimeout: Timed out waiting for a reply to message ID af6fd23a7b6a476985949f42947f0d4e 2019-04-10 12:52:47,192.192 229781 INFO nova.conductor.api [req-c557fabd-b379-4dda-b64e-981e71b530bf - - - - -] nova-conductor connection established successfully I’m not sure about the output of nova hypervisor-list DEBUG (shell:812) internalURL endpoint for compute service in RegionOne region not found Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 810, in main OpenStackComputeShell().main(argv) File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 678, in main api_version = api_versions.discover_version(self.cs, api_version) File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 261, in discover_version client) File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 242, in _get_server_version_range version = client.versions.get_current() File "/usr/lib/python2.7/site-packages/novaclient/v2/versions.py", line 70, in get_current return self._get_current() File "/usr/lib/python2.7/site-packages/novaclient/v2/versions.py", line 52, in _get_current url = "%s" % self.api.client.get_endpoint() File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 271, in get_endpoint return self.session.get_endpoint(auth or self.auth, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 1139, in get_endpoint return auth.get_endpoint(self, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 380, in get_endpoint allow_version_hack=allow_version_hack, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 279, in get_endpoint_data service_name=service_name) File "/usr/lib/python2.7/site-packages/keystoneauth1/access/service_catalog.py", line 462, in endpoint_data_for raise exceptions.EndpointNotFound(msg) EndpointNotFound: internalURL endpoint for compute service in RegionOne region not found Regards, Cristopher Lemus From: "Khalil, Ghada" > Date: Wednesday, April 10, 2019 at 9:50 PM To: "Lemus Contreras, Cristopher J" >, "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Cc: "Liu, Yang" > Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Christopher, These images look correct to me, but I’ll ask Yang to confirm the images she tested with tomorrow. Can you confirm that nova is still returning the same error as was originally reported in https://bugs.launchpad.net/starlingx/+bug/1823275 nova is returning this error: 2019-04-04 16:22:30,902.902 168396 ERROR nova.compute.manager [req-b3bffaba-a62a-4e56-be63-3362c51a36df - - - - -] Error updating resources for node compute-0.: PciDeviceNotFoundById: PCI device 0000:b3:02.3 not found Can you also list the output of the nova hypervisor cmd on the compute node? 
nova hypervisor-list Thanks, Ghada From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Wednesday, April 10, 2019 10:40 PM To: Khalil, Ghada; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Ghada, These are the starlingx nova images used on the servers where the sanity was executed: controller-0:~# docker images |grep starlingx/stx-nova 192.168.100.60/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.204.2:9001/docker.io/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.100.60/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB 192.168.204.2:9001/docker.io/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB controller-0:~# For stx-nova, id: 6c8a3a356110 and for stx-nova-api-proxy, id: c5853883d561 I manually verified the images on our local registry, and tried to download the latest version: [root at registry ~]# docker pull starlingx/stx-nova:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova Digest: sha256:e9fecb6998ad0cf0a6621a8c5c2422378d0972c73ed80b636dcbd4bb5183794e Status: Image is up to date for starlingx/stx-nova:master-centos-stable-latest [root at registry ~]# docker pull starlingx/stx-nova-api-proxy:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova-api-proxy Digest: sha256:06c9bf3546eeec5025e62ccf00c24dbaa20e6d79614d89709c0a71ffa639c5f8 Status: Image is up to date for starlingx/stx-nova-api-proxy:master-centos-stable-latest [root at registry ~]# docker images |grep starlingx/stx-nova registry.zpn.intel.com/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB registry.zpn.intel.com/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB Images are up to date, and the timestamp matches what is logged in dockerhub: https://hub.docker.com/r/starlingx/stx-nova/tags . I assume that those are the images according to the comments on https://bugs.launchpad.net/starlingx/+bug/1821938, are there other nova images that we need to pull? I did a quick check and it looks like all of our local images (the ones deployed to the sanity servers) are updated. Thanks & Regards, Cristopher Lemus From: "Khalil, Ghada" > Date: Wednesday, April 10, 2019 at 8:27 PM To: "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Cc: "Liu, Yang" > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Maria, Regarding https://bugs.launchpad.net/starlingx/+bug/1821938, Yang successfully verified that the issue is resolved using the same load you list below (see her note in the LP). Can you confirm that you are using the stable docker images built yesterday? The fix should be included in the nova image. 
Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, April 10, 2019 8:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-10 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers – Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO – Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard – Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] -------------------------------------------------------------------------------- No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Fri Apr 12 16:35:36 2019 From: serverascode at gmail.com (Curtis) Date: Fri, 12 Apr 2019 12:35:36 -0400 Subject: [Starlingx-discuss] ipxe boot iso? In-Reply-To: References: Message-ID: On Wed, Apr 10, 2019 at 3:57 PM Curtis wrote: > > > On Wed, Apr 10, 2019 at 9:46 AM Arce Moreno, Abraham < > abraham.arce.moreno at intel.com> wrote: > >> Hi Curtis, >> >> > Out of curiosity has anyone ipxe booted stx from the ISO? >> >> Yes, I have tried a couple of times being able to load both the kernel >> and initrd from other Linux distros and StarlingX, >> At the end I was able to boot and install other distros but not >> StarlingX, here you have all my learning written: >> https://github.com/xe1gyq/starlingx/blob/master/Packet.md >> >> > I'm doing a bit of testing in trying to get stx installed on baremetal >> packet.com >> > nodes and will need to ipxe boot. I thought I'd >> ask before I >> > went to far into working on it. The easy way of using "kernel >> > https://boot.netboot.xyz/memdisk iso raw" and the ISO via http did not >> work, >> > ran out of memory, so might have to get a little more complicated. 
:) >> >> However, whehn working with StarlingX ISO, I run into the issue of not >> being able to get into the console, Packet Customer Service greatly >> supported me (Chat with an agent is a great option) and reply back after >> investigating: >> >> >> >> For the ISO issue we would suggest on setting the kernel options. Let him >> know that our x86 servers require console=ttyS1,115200n8, and our aarch64 >> servers require console=ttyAMA0,115200 >> We recommend to try adding 'console=ttyS1,115200n8' on your t1.small >> server >> eg. kernel https://boot.netboot.xyz/memdisk iso raw >> console=ttyS1,115200n8 >> >> >> >> So, I tried that once but got this error: >> >> iPXE> kernel https://boot.netboot.xyz/memdisk iso raw >> console=ttyS1,115200n8 >> https://boot.netboot.xyz/memdisk... ok >> Could not select: Exec format error (http://ipxe.org/2e008081) >> >> > Interesting, as I didn't get that error, instead I received one about > running out of disk space, which I assume is b/c ipxe/memdisk doesn't load > up that much room. > > Thanks for the info, I will keep looking into this. :) > > FYI I've got an initial installation working and have documented it on the wiki: https://wiki.openstack.org/wiki/StarlingX/StarlingX_Packet.com_iPXE_Installation Not sure if that is the right place for it, and I'm still tweaking it, but that will at least install STX from an ISO on a packet.com node. Thanks, Curtis > Thanks, > Curtis > > > >> Due to other stuff going around, I did not follow up, I will try it again >> this week and see how far I get. Let me know if you want me to help with >> any specific task. >> >> Best Regards >> Abraham >> > > > -- > Blog: serverascode.com > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Fri Apr 12 16:40:12 2019 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Fri, 12 Apr 2019 16:40:12 +0000 Subject: [Starlingx-discuss] ipxe boot iso? In-Reply-To: References: Message-ID: > FYI I've got an initial installation working and have documented it on the wiki: > https://wiki.openstack.org/wiki/StarlingX/StarlingX_Packet.com_iPXE_Installation Great Curtis! Thanks! > Not sure if that is the right place for it, and I'm still tweaking it, but that will at > least install STX from an ISO on a packet.com node. I think this is the right place for now! 
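Side note (not from the original thread): when memdisk runs out of room, the usual iPXE fallback is to load the installer kernel and initrd directly over HTTP instead of the whole ISO. A minimal sketch, assuming the ISO contents have been unpacked onto a web server you control; the server name, file paths and inst.repo argument below are placeholders to be adjusted against the actual StarlingX boot entries, with the Packet serial console setting kept as advised above:

#!ipxe
dhcp
# <http-server> and the vmlinuz/initrd paths are assumptions, not real URLs
kernel http://<http-server>/stx/vmlinuz console=ttyS1,115200n8 inst.repo=http://<http-server>/stx/
initrd http://<http-server>/stx/initrd.img
boot

inst.repo is the standard CentOS/Anaconda way of pointing the installer at an unpacked ISO tree; whether StarlingX needs extra arguments on top of that would have to be confirmed from the ISO's own boot menu.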
From maria.g.perez.ibarra at intel.com Fri Apr 12 17:45:39 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 12 Apr 2019 17:45:39 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190412 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-12 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 56] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 56 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 56 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 56 TCs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] -------------------------------------------------------------------------------- libvirt pod on CrashLoopBackOff state due to unreachable hugetlb sysfs. https://bugs.launchpad.net/starlingx/+bug/1824567 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Fri Apr 12 18:31:42 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Fri, 12 Apr 2019 14:31:42 -0400 Subject: [Starlingx-discuss] =?utf-8?q?Upgrading_the_lists=2E=28openstack?= =?utf-8?q?=7Cairshipit=7Cstarlingx=7Czuul-ci=29=2Eorg_server_Friday_April?= =?utf-8?q?_12?= In-Reply-To: References: Message-ID: On Tue, Apr 9, 2019, at 2:09 PM, Clark Boylan wrote: > It is that time of the Ubuntu LTS cycle again and we need to upgrade > our mailman mailing list server. We'd like to do that this Friday, > April 12. We expect the upgrade to begin at about 17:00UTC and result > in a 30-45 minute outage. The reason for the extended outage is that we > will be upgrading the server in place to preserve its mail reputation. > > Thankfully email is a persistent system and your clients should queue > up email sent until the server is back up again and accepting smtp > connections. This means the outage shouldn't be very noticeable. > > Finally, for list admins, there are new DMARC moderation action > settings. We ask that you don't change these settings and instead work > with us if you need to address DMARC problems. Our current preference > is that we pass email through unmodified so that the signatures still > validate. 
> > Thank you all for your patience and feel free to ask us questions, > Clark > The upgrade is complete and we believe the lists to be operational again. As always feel free to ask us questions or point out any odd behaviors if you notice them. Thanks again for your patience, Clark From jose.perez.carranza at intel.com Fri Apr 12 18:03:55 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Fri, 12 Apr 2019 18:03:55 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 In-Reply-To: References: <151EE31B9FCCA54397A757BC674650F0BA4D7564@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D75A6@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4D77F2@ALA-MBD.corp.ad.wrs.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A9685E4@fmsmsx101.amr.corp.intel.com> Hi All “……… after running Application-apply, the system takes about 8 to 10 minutes to stabilize” This is something expected on StarlingX deployment? If this is the case should be great to inform the user that system is not ready, maybe setting the node “Degraded” or something like this. Regards, José From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, April 11, 2019 6:28 PM To: Khalil, Ghada ; Lemus Contreras, Cristopher J ; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hello Ghada, we double checked the error and it does not appear anymore. We’ve found that after running Application-apply, the system takes about 8 to 10 minutes to stabilize after that, instances can be created correctly. Regards Maria G. From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Thursday, April 11, 2019 1:10 PM To: Lemus Contreras, Cristopher J >; Perez Ibarra, Maria G >; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang > Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 This looks like a different issue unrelated to the original issue reported in https://bugs.launchpad.net/starlingx/+bug/1823275 https://bugs.launchpad.net/starlingx/+bug/1821938 From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Wednesday, April 10, 2019 11:12 PM To: Khalil, Ghada; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Ghada, Logs on nova pod are not showing any ERROR, just this warning (log shows that is fixed on the next line): 2019-04-10 12:52:47,173.173 229781 WARNING nova.conductor.api [req-c557fabd-b379-4dda-b64e-981e71b530bf - - - - -] Timed out waiting for nova-conductor. Is it running? Or did this service start before nova-conductor? 
Reattempting establishment of nova-conductor connection...: MessagingTimeout: Timed out waiting for a reply to message ID af6fd23a7b6a476985949f42947f0d4e 2019-04-10 12:52:47,192.192 229781 INFO nova.conductor.api [req-c557fabd-b379-4dda-b64e-981e71b530bf - - - - -] nova-conductor connection established successfully I’m not sure about the output of nova hypervisor-list DEBUG (shell:812) internalURL endpoint for compute service in RegionOne region not found Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 810, in main OpenStackComputeShell().main(argv) File "/usr/lib/python2.7/site-packages/novaclient/shell.py", line 678, in main api_version = api_versions.discover_version(self.cs, api_version) File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 261, in discover_version client) File "/usr/lib/python2.7/site-packages/novaclient/api_versions.py", line 242, in _get_server_version_range version = client.versions.get_current() File "/usr/lib/python2.7/site-packages/novaclient/v2/versions.py", line 70, in get_current return self._get_current() File "/usr/lib/python2.7/site-packages/novaclient/v2/versions.py", line 52, in _get_current url = "%s" % self.api.client.get_endpoint() File "/usr/lib/python2.7/site-packages/keystoneauth1/adapter.py", line 271, in get_endpoint return self.session.get_endpoint(auth or self.auth, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/session.py", line 1139, in get_endpoint return auth.get_endpoint(self, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 380, in get_endpoint allow_version_hack=allow_version_hack, **kwargs) File "/usr/lib/python2.7/site-packages/keystoneauth1/identity/base.py", line 279, in get_endpoint_data service_name=service_name) File "/usr/lib/python2.7/site-packages/keystoneauth1/access/service_catalog.py", line 462, in endpoint_data_for raise exceptions.EndpointNotFound(msg) EndpointNotFound: internalURL endpoint for compute service in RegionOne region not found Regards, Cristopher Lemus From: "Khalil, Ghada" > Date: Wednesday, April 10, 2019 at 9:50 PM To: "Lemus Contreras, Cristopher J" >, "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Cc: "Liu, Yang" > Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Christopher, These images look correct to me, but I’ll ask Yang to confirm the images she tested with tomorrow. Can you confirm that nova is still returning the same error as was originally reported in https://bugs.launchpad.net/starlingx/+bug/1823275 nova is returning this error: 2019-04-04 16:22:30,902.902 168396 ERROR nova.compute.manager [req-b3bffaba-a62a-4e56-be63-3362c51a36df - - - - -] Error updating resources for node compute-0.: PciDeviceNotFoundById: PCI device 0000:b3:02.3 not found Can you also list the output of the nova hypervisor cmd on the compute node? 
nova hypervisor-list Thanks, Ghada From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Wednesday, April 10, 2019 10:40 PM To: Khalil, Ghada; Perez Ibarra, Maria G; starlingx-discuss at lists.starlingx.io Cc: Liu, Yang Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Ghada, These are the starlingx nova images used on the servers where the sanity was executed: controller-0:~# docker images |grep starlingx/stx-nova 192.168.100.60/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.204.2:9001/docker.io/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 30 hours ago 1.18GB 192.168.100.60/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB 192.168.204.2:9001/docker.io/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 31 hours ago 589MB controller-0:~# For stx-nova, id: 6c8a3a356110 and for stx-nova-api-proxy, id: c5853883d561 I manually verified the images on our local registry, and tried to download the latest version: [root at registry ~]# docker pull starlingx/stx-nova:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova Digest: sha256:e9fecb6998ad0cf0a6621a8c5c2422378d0972c73ed80b636dcbd4bb5183794e Status: Image is up to date for starlingx/stx-nova:master-centos-stable-latest [root at registry ~]# docker pull starlingx/stx-nova-api-proxy:master-centos-stable-latest master-centos-stable-latest: Pulling from starlingx/stx-nova-api-proxy Digest: sha256:06c9bf3546eeec5025e62ccf00c24dbaa20e6d79614d89709c0a71ffa639c5f8 Status: Image is up to date for starlingx/stx-nova-api-proxy:master-centos-stable-latest [root at registry ~]# docker images |grep starlingx/stx-nova registry.zpn.intel.com/starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova master-centos-stable-latest 6c8a3a356110 28 hours ago 1.18GB starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB registry.zpn.intel.com/starlingx/stx-nova-api-proxy master-centos-stable-latest c5853883d561 29 hours ago 589MB Images are up to date, and the timestamp matches what is logged in dockerhub: https://hub.docker.com/r/starlingx/stx-nova/tags . I assume that those are the images according to the comments on https://bugs.launchpad.net/starlingx/+bug/1821938, are there other nova images that we need to pull? I did a quick check and it looks like all of our local images (the ones deployed to the sanity servers) are updated. Thanks & Regards, Cristopher Lemus From: "Khalil, Ghada" > Date: Wednesday, April 10, 2019 at 8:27 PM To: "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Cc: "Liu, Yang" > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Hi Maria, Regarding https://bugs.launchpad.net/starlingx/+bug/1821938, Yang successfully verified that the issue is resolved using the same load you list below (see her note in the LP). Can you confirm that you are using the stable docker images built yesterday? The fix should be included in the nova image. 
Thanks, Ghada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Wednesday, April 10, 2019 8:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190410 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-10 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers – Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 10 TCs FAIL Sanity Platform 07 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [PASS : 44] [Fail : 13] AIO – Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 10 TCs FAIL Sanity Platform 05 TCs [PASS] | 01 TCs FAIL TOTAL: 57 TCS [PASS : 46 TCs] [Fail : 11 Tcs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard – Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: [ 61 TCs PASS ] -------------------------------------------------------------------------------- No nova hypervisor can be enabled on workers with QAT devices https://bugs.launchpad.net/starlingx/+bug/1821938 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose.perez.carranza at intel.com Fri Apr 12 19:42:55 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Fri, 12 Apr 2019 19:42:55 +0000 Subject: [Starlingx-discuss] [Containers] Test Scenarios based on feature plan Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A96863F@fmsmsx101.amr.corp.intel.com> Hi All We are working on the analysis of the storyboards already merged on feature plan [1] to create test cases, we kindly ask for your feedback on this development so we can create more accurate tests for the different functionalities, you can find our continuous contribution on a google sheet [2]. Some scenarios have only general description whilst other have been completed with Test Steps. 1 - https://docs.google.com/spreadsheets/d/1lMMclUmLMPTuk_a5URMMoWrJR4MbeA_UINnBliumg2Y/edit#gid=991138079 2- https://docs.google.com/spreadsheets/d/1dwcBwY4Yq1Lo9Der4RylzQ6KYp0BsMHohhEmhwpauDo/edit#gid=637180508 Regards, José From build.starlingx at gmail.com Sat Apr 13 02:03:56 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 12 Apr 2019 22:03:56 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 170 - Failure! 
Message-ID: <1928201988.150.1555121037447.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 170 Status: Failure Timestamp: 20190413T020352Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190412T233000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190412T233000Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190412T233000Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190412T233000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190412T233000Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From build.starlingx at gmail.com Sat Apr 13 02:03:59 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 12 Apr 2019 22:03:59 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 61 - Failure! Message-ID: <437730115.153.1555121041006.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 61 Status: Failure Timestamp: 20190412T233000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190412T233000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From cindy.xie at intel.com Sat Apr 13 03:29:42 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Sat, 13 Apr 2019 03:29:42 +0000 Subject: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up References: <6594B51DBE477C48AAE23675314E6C466459EDD6@fmsmsx107.amr.corp.intel.com>, <2FD5DDB5A04D264C80D42CA35194914F35ED90C1@SHSMSX104.ccr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C466459F616@fmsmsx107.amr.corp.intel.com>, <2FD5DDB5A04D264C80D42CA35194914F35EF86FB@SHSMSX104.ccr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C46645A3E23@fmsmsx107.amr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F1395E@SHSMSX104.ccr.corp.intel.com> Mario, Starting from next week, Ran An from my team will have some bandwidth working together with you for SB#2004008. Just want to double check with you if you see a needs here, I see we still have 3 tasks in "todo" status but not sure if they are independent enough to allow parallel work btw you and Ran. How about the test results from your side about integration of FM chart w/ Armada? I see some good progress made on those patches already uploaded. Thx. - cindy -----Original Message----- From: Xie, Cindy Sent: Friday, April 5, 2019 10:21 AM To: Arevalo, Mario Alfredo C ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Thanks Mario for the update. Please continue the integration & testing for the FM chart w/ Armada system for those pending patches. You can share the test cases to the community as well so we can have a review. For the tasks still "todo", when you think we can upload initial patches? Or you are not working on those for now? Just need to know the ETA for those. Mingyuan is interested but he is still working on Ironic so we may still need to rely on you for FM at this moment. Thanks. 
- cindy -----Original Message----- From: Arevalo, Mario Alfredo C Sent: Thursday, April 4, 2019 11:00 AM To: Xie, Cindy ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi Cindy, Actually, Luis and me have had some issues related to the integration of the FM chart with armada system in some local tests, I have been working on some patches updates to solve this. Right now I am creating an ISO image from scratch with these patches in order to test them in a clean environment. At this moment I would like to focus on this issue during the rest of the week and I will continue with the other patches related to horizon and another one about the implementation of the PUT method for the FM restful API.. At this moment my progress in the pending patches is research, however if there are someone interested about these pending patches, let me know. Thank you for your attention. Best regards. Mario. ________________________________________ From: Xie, Cindy Sent: Wednesday, April 03, 2019 4:12 PM To: Arevalo, Mario Alfredo C; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Cc: Tao Liu; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi, Mario, I see that you made very good progress in uploading several patches against SB#2004008 - anything needs help for the remaining 3 tasks so far? Thx. - cindy -----Original Message----- From: Arevalo, Mario Alfredo C Sent: Wednesday, March 27, 2019 3:38 AM To: Xie, Cindy ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi Cindy, The first version of the patches will take around 2 weeks, after that, a validation step will start. In this step I am going to update the patches according to the feedback received from the community and Luis Botello will help to validate the functionality of the patches. As final step, I would like to execute the sanity when all patches are reviewed by the community an they are ready to be merged. This final step could vary around 2-3 weeks, it will depend on the response time from the community and the complexity of the required updates, in addition to the validation tasks. Best regards. Mario. ________________________________________ From: Xie, Cindy Sent: Monday, March 25, 2019 5:44 PM To: Arevalo, Mario Alfredo C; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Cc: Tao Liu; Frank.Miller at windriver.com Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Mario, Nice to know that you're getting all information and having better understanding for the tasks. We probably needs to get a little bit more detail granularity of your plan, for each task in the storyboard: - when the patches will be uploaded for review; - what tests you're planning to do? Any support required from Ada's team? and when... - when you expect the patch review comments can be addressed and patch merged to master. Thanks. 
- cindy -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Tuesday, March 26, 2019 8:38 AM To: Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com Subject: Re: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi team, Thank you for your feedback from our last meeting, and this is my update. I am checking all points described in this thread. Actually I have got progress in the topic related to snmp and the relation with oamcontroller, sysinv and cgts-client. I plan to send a PR with information about findings/architecture to the stx-fault/doc in a future. I think, it is not necessary another meeting as it was mentioned, I think I have enough information to continue and I am going to update the current reviews and send news according to the points discussed until today, and contact Tao for specific questions. Thanks Tao, Abraham and Frank. Best regards. Mario. ________________________________________ From: Arce Moreno, Abraham Sent: Friday, March 22, 2019 10:37 AM To: starlingx-discuss at lists.starlingx.io Cc: Arevalo, Mario Alfredo C; Tao Liu Subject: Fault Management Containerization (SB 2004008) Follow Up Thanks Frank for setting this up. Thanks everyone for your attendance to this meeting, here you have high level notes and ToDos based in the topics covered. In Summary - The presentation Stx-Fault/Containers is located at [0]. - Tao will kindly update the Fault Management architecture diagram, slide 8. - Mario will send an email no later than Monday afternoon with the latest findings / questions based in his 5 ToDos. - We will meet again on Tuesday to finalize on tasks and implementation details. If we are forgetting about any key point in this email, please do not hesitate to reply. StarlingX Architecture - 2 instances for each of the following projects: - Keystone - Horizon - Barbican - Fault Management will have 2 instances as well. Fault Management Architecture - [ToDo] [Tao] to modify the Fault Management architecture (Slide 8) Thanks Tao! - fm-api runs in compute node, snmp provide interfaces - [ToDo] [Mario] to check these statements Fault Management REST API - [ToDo] [Mario] to write the next level of details for REST API mapping / implementation, consider to include PUT to Event Log. Fault Management Architecture - python-fmclient is a wrapper to fm_cli / fm_api - [ToDo] [Mario] to understand more about fm_cli as a wrapper and how does it interact and affects fault management containerized strategy. FM Proposal - Remove mysql, fm-api, fm-common - [ToDo] [Mario] to understand about the removal of fm-api and fm-common from the containerized instance. - Dependency to cgts-client - [ToDo] [Mario] to understand what is cgtc-client and how does it interacts with fault management and the new containerized instance. 
OpenStack Applications The following 2 projects will make use of the Fault Management containerized: - starlingx-dashboard - stx-nfv [0] https://docs.google.com/presentation/d/1_vG83aHTToXlIdJxaJpVL-MHWfRGnxLuyEdFDt-nfwo/edit?usp=sharing _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Sat Apr 13 23:58:28 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 13 Apr 2019 19:58:28 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 266 - Failure! Message-ID: <1595486124.160.1555199909406.JavaMail.javamailuser@localhost> Project: STX_DL_container_setup Build #: 266 Status: Failure Timestamp: 20190413T233055Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190413T233001Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190413T233001Z DOCKER_DL_ID: jenkins-master-20190413T233001Z-downloader PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190413T233001Z/logs DOCKER_DL_TAG: master-20190413T233001Z-downloader-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190413T233001Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Sat Apr 13 23:58:31 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 13 Apr 2019 19:58:31 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 62 - Still Failing! In-Reply-To: <721625648.151.1555121038339.JavaMail.javamailuser@localhost> References: <721625648.151.1555121038339.JavaMail.javamailuser@localhost> Message-ID: <1542125764.163.1555199913428.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 62 Status: Still Failing Timestamp: 20190413T233001Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190413T233001Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From build.starlingx at gmail.com Mon Apr 15 01:50:31 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 14 Apr 2019 21:50:31 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 171 - Still Failing! 
In-Reply-To: <1719170157.148.1555121033929.JavaMail.javamailuser@localhost> References: <1719170157.148.1555121033929.JavaMail.javamailuser@localhost> Message-ID: <547252685.168.1555293032608.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 171 Status: Still Failing Timestamp: 20190415T015027Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190414T233001Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190414T233001Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190414T233001Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190414T233001Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190414T233001Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From build.starlingx at gmail.com Mon Apr 15 01:50:35 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 14 Apr 2019 21:50:35 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 63 - Still Failing! In-Reply-To: <1342505705.161.1555199910259.JavaMail.javamailuser@localhost> References: <1342505705.161.1555199910259.JavaMail.javamailuser@localhost> Message-ID: <2132862362.171.1555293036139.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 63 Status: Still Failing Timestamp: 20190414T233001Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190414T233001Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From Frank.Miller at windriver.com Mon Apr 15 03:05:56 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 15 Apr 2019 03:05:56 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Weekly Containerization Meeting Message-ID: Re-sending due to moderator approval required. -----Original Appointment----- From: Miller, Frank Sent: Sunday, April 14, 2019 11:02 PM To: 'starlingx-discuss at lists.starlingx.io' Cc: 'Carlos Cebrian'; 'Xie, Cindy'; Rowsell, Brent; 'Jones, Bruce E'; 'Sun, Austin'; Dinescu, Stefan; Smith, Tyler; Qian, Bin; 'Armstrong, Robert H'; Friesen, Chris; Seiler, Glenn; 'Chen, Tingjie'; Waines, Greg; 'Zhi Zhi2 Chang'; 'Gomez, Juan P'; Chen, Jacky; 'Hu, Wei W'; Eslimi, Dariush Subject: Canceled: StarlingX Weekly Containerization Meeting When: Monday, April 15, 2019 11:00 AM-11:30 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236 Importance: High Please note that for Monday April 15th I need to cancel our weekly containerization meeting. For any questions until next week please use the community email. Frank ============== For those contributing to or interested in the Containerization subproject, the plan is to meet weekly until the containerization StoryBoards are completed. 
Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Mon Apr 15 13:55:53 2019 From: serverascode at gmail.com (Curtis) Date: Mon, 15 Apr 2019 09:55:53 -0400 Subject: [Starlingx-discuss] Public sandbox? Message-ID: Hi All, One of the ideas that came up in community discussion (not the only idea, just one and we can do multiple) for using the packet.com infrastructure was a public sandbox. I think this is a good idea. However, what does that actually mean? It could be a lot of things. My first thought is that it is a publicly accessible AIO Simplex node that people can easily access and get a feel for what STX actually is, and that is completely reset every X hours. But that's just me so I wanted to check with the community and see what other thoughts are out there. There are a few ways we could do this as well: 1. A single virtualized instance that is reset every X hours 2. A single baremetal instance that is reset every X hours 3. Multiple virtualized instances with some kind of reservation system 4. Other??? #1 would be easiest as that looks a lot like what we are doing for the workshop. But again, that's just my initial thoughts. Open to ideas/comments/criticisms. :) Thanks, Curtis -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Mon Apr 15 14:28:33 2019 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Mon, 15 Apr 2019 14:28:33 +0000 Subject: [Starlingx-discuss] Public sandbox? In-Reply-To: References: Message-ID: Thanks Curtis! > One of the ideas that came up in community discussion (not the only idea, just > one and we can do multiple) for using the packet.com > infrastructure was a public sandbox. I think this is a good idea. However, what > does that actually mean? It could be a lot of things. Some people would love to see also a virtual showroom of StarlingX features, more below... > My first thought is that it is a publicly accessible AIO Simplex node that people > can easily access and get a feel for what STX actually is, and that is completely > reset every X hours. But that's just me so I wanted to check with the community > and see what other thoughts are out there. Agree, this is a must have so new players can play around. > But again, that's just my initial thoughts. Open to ideas/comments/criticisms. :) Last week, we discussed, among other things, the usage for Packet and we determined it would be good to have a StarlingX virtual showroom, which is a space where any type of edge computing workloads are landed, which aligns perfectly to what Packet is looking for. We can start easily right away, deploying the demos that you guys have presented in the past conferences, and for every demo / workload / whatever you want to call it, we must have generate a: - Reference Architecture - Solution Brief - Application Note This looks to contribute, among other things, into the definition of Edge Computing use cases. 
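Side note (not from either mail above): for the "reset every X hours" flavour of the sandbox, the simplest sketch is a cron entry on a jump host that re-runs whatever redeploy automation the community settles on; the script named below is purely hypothetical:

# /etc/cron.d/stx-sandbox-reset (illustrative only)
# Rebuild the public AIO Simplex sandbox every 6 hours.
# redeploy_aio_simplex.sh is a hypothetical wrapper around the
# packet.com provisioning plus the StarlingX install steps.
0 */6 * * * sandbox /opt/stx-sandbox/redeploy_aio_simplex.sh >> /var/log/stx-sandbox-reset.log 2>&1

A timed rebuild like this covers options 1 and 2 from the list; option 3 (reservations) would need something stateful on top.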
Best Regards Some people from StarlingX community. From jose.perez.carranza at intel.com Mon Apr 15 14:48:17 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Mon, 15 Apr 2019 14:48:17 +0000 Subject: [Starlingx-discuss] [Containers]How to put a node in FAIL Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A9689C6@fmsmsx101.amr.corp.intel.com> Hi All I'm validating this feature [1] and one of the points to validate is that pods get EVICTED on failed nodes in less than 50 seconds, Did anyone knows a way to put the node (controller or compute) on FAIL? I tried turning off the node but seems like this is not detected as failed node because pods are not evicted and actually the status of the node is show as OFFLINE. Regards, José From serverascode at gmail.com Mon Apr 15 14:50:19 2019 From: serverascode at gmail.com (Curtis) Date: Mon, 15 Apr 2019 10:50:19 -0400 Subject: [Starlingx-discuss] Public sandbox? In-Reply-To: References: Message-ID: On Mon, Apr 15, 2019 at 10:28 AM Arce Moreno, Abraham < abraham.arce.moreno at intel.com> wrote: > Thanks Curtis! > > > One of the ideas that came up in community discussion (not the only > idea, just > > one and we can do multiple) for using the packet.com > > infrastructure was a public sandbox. I think this is a good idea. > However, what > > does that actually mean? It could be a lot of things. > > Some people would love to see also a virtual showroom of StarlingX > features, more below... > > > My first thought is that it is a publicly accessible AIO Simplex node > that people > > can easily access and get a feel for what STX actually is, and that is > completely > > reset every X hours. But that's just me so I wanted to check with the > community > > and see what other thoughts are out there. > > Agree, this is a must have so new players can play around. > > > But again, that's just my initial thoughts. Open to > ideas/comments/criticisms. :) > > Last week, we discussed, among other things, the usage for Packet and we > determined it would be good to have a StarlingX virtual showroom, which is > a space where any type of edge computing workloads are landed, which aligns > perfectly to what Packet is looking for. > > Sounds great, where did this discussion occur? I'd like to take part if I can, though I am somewhat limited in N/A daytime meeting times. > We can start easily right away, deploying the demos that you guys have > presented in the past conferences, and for every demo / workload / whatever > you want to call it, we must have generate a: > > - Reference Architecture > - Solution Brief > - Application Note > > Where does these requirements come from? Thanks, Curtis > This looks to contribute, among other things, into the definition of Edge > Computing use cases. > > Best Regards > Some people from StarlingX community. > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Mon Apr 15 14:51:33 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 15 Apr 2019 10:51:33 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 172 - Still Failing! 
In-Reply-To: <714009796.166.1555293028484.JavaMail.javamailuser@localhost> References: <714009796.166.1555293028484.JavaMail.javamailuser@localhost> Message-ID: <1383521608.175.1555339894150.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 172 Status: Still Failing Timestamp: 20190415T145128Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190414T233001Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190414T233001Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190414T233001Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190414T233001Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190414T233001Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From build.starlingx at gmail.com Mon Apr 15 14:53:44 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 15 Apr 2019 10:53:44 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 173 - Still Failing! In-Reply-To: <1783047066.173.1555339890361.JavaMail.javamailuser@localhost> References: <1783047066.173.1555339890361.JavaMail.javamailuser@localhost> Message-ID: <939230163.178.1555340025196.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 173 Status: Still Failing Timestamp: 20190415T145341Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190414T233001Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190414T233001Z OS: centos DOCKER_BUILD_ID: jenkins-master-20190414T233001Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190414T233001Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190414T233001Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From cindy.xie at intel.com Mon Apr 15 15:08:09 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Mon, 15 Apr 2019 15:08:09 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190412 In-Reply-To: References: Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F17785@SHSMSX104.ccr.corp.intel.com> Ada/Maria, Do you have a way to automatically trigger the sanity as soon as a successful Cengen build is available? I am not exactly sure if we already have full automation so that it could be launched in night time, or how much manual setup is still required? Thx. 
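Side note (not from Cindy's mail): the polling side of this can be made fully automatic. A rough sketch, assuming the CENGN mirror keeps one timestamped directory per build; the ISO path and the final trigger command are placeholders for the real mirror layout and for whatever starts the Jenkins deployment/sanity job:

#!/bin/bash
# Check the CENGN mirror for a new build and trigger sanity once per new build.
MIRROR=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos
STATE=/var/lib/stx-sanity/last_build   # assumes this directory exists

latest=$(curl -s "$MIRROR/" | grep -oE '[0-9]{8}T[0-9]{6}Z' | sort -u | tail -n 1)
[ -n "$latest" ] || exit 1
[ "$latest" = "$(cat "$STATE" 2>/dev/null)" ] && exit 0
iso="$MIRROR/$latest/outputs/iso/bootimage.iso"   # assumed layout
curl -sfI "$iso" >/dev/null || exit 0             # no ISO yet, treat as not a successful build
echo "$latest" > "$STATE"
# Hypothetical entry point into the existing deployment + sanity automation
/opt/stx-sanity/run_sanity.sh "$iso"

Run from cron every half hour or so, this removes the fixed early-morning window and starts the run as soon as a usable build shows up.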
- cindy From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Saturday, April 13, 2019 1:46 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190412 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-12 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 56] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 56 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 56 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 56 TCs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] -------------------------------------------------------------------------------- libvirt pod on CrashLoopBackOff state due to unreachable hugetlb sysfs. https://bugs.launchpad.net/starlingx/+bug/1824567 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Mon Apr 15 15:46:46 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Mon, 15 Apr 2019 15:46:46 +0000 Subject: [Starlingx-discuss] [Containers]How to put a node in FAIL In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A9689C6@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A9689C6@fmsmsx101.amr.corp.intel.com> Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8C4ED@ALA-MBD.corp.ad.wrs.com> José, You forgot the reference. A couple notes for you: - Powering off the node would be a good way to test this. - Using "kubectl get nodes", you should see the status change to NotReady in roughly 20 seconds or so. - Kubernetes will not evict most of the kube-system and openstack pods, since these pods are either tied to a particular node (e.g. in a DaemonSet) or prevented from being co-located with anti-affinity policies. - You probably want to test this by launching your own pod and then verifying that it gets evicted when the node it is running on is powered down. 
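(A rough illustration of that last point, added here rather than taken from the reply; it assumes kubectl access from the active controller.)

# A throwaway single-replica deployment, so the reschedule is visible as well
cat <<EOF | kubectl apply -f -
apiVersion: apps/v1
kind: Deployment
metadata:
  name: evict-test
spec:
  replicas: 1
  selector:
    matchLabels:
      app: evict-test
  template:
    metadata:
      labels:
        app: evict-test
    spec:
      containers:
      - name: sleeper
        image: busybox
        command: ["sleep", "3600"]
EOF

# Note which node the pod landed on, then power that node off
kubectl get pods -l app=evict-test -o wide

# Watch the node go NotReady (roughly 20s, as noted above) and time how long
# it takes the pod to be evicted from the failed node and recreated elsewhere
kubectl get nodes -w
kubectl get pods -l app=evict-test -o wide -w

If the behaviour under test is working, power-off to eviction should land inside the 50 seconds mentioned in the original question.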
Bart -----Original Message----- From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] Sent: April 15, 2019 10:48 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers]How to put a node in FAIL Hi All I'm validating this feature [1] and one of the points to validate is that pods get EVICTED on failed nodes in less than 50 seconds, Did anyone knows a way to put the node (controller or compute) on FAIL? I tried turning off the node but seems like this is not detected as failed node because pods are not evicted and actually the status of the node is show as OFFLINE. Regards, José _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From jose.perez.carranza at intel.com Mon Apr 15 15:52:12 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Mon, 15 Apr 2019 15:52:12 +0000 Subject: [Starlingx-discuss] [Containers]How to put a node in FAIL In-Reply-To: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8C4ED@ALA-MBD.corp.ad.wrs.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A9689C6@fmsmsx101.amr.corp.intel.com> <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8C4ED@ALA-MBD.corp.ad.wrs.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A968A17@fmsmsx101.amr.corp.intel.com> Thanks Bart, Sorry for the reference, here is [1], and thanks for your notes, they will helpe a lot !! 1. https://review.openstack.org/#/c/597123/ Regards, José > -----Original Message----- > From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] > Sent: Monday, April 15, 2019 10:47 AM > To: Perez Carranza, Jose ; starlingx- > discuss at lists.starlingx.io > Subject: RE: [Containers]How to put a node in FAIL > > José, > > You forgot the reference. A couple notes for you: > - Powering off the node would be a good way to test this. > - Using "kubectl get nodes", you should see the status change to NotReady in > roughly 20 seconds or so. > - Kubernetes will not evict most of the kube-system and openstack pods, since > these pods are either tied to a particular node (e.g. in a DaemonSet) or > prevented from being co-located with anti-affinity policies. > - You probably want to test this by launching your own pod and then verifying > that it gets evicted when the node it is running on is powered down. > > Bart > > -----Original Message----- > From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] > Sent: April 15, 2019 10:48 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [Containers]How to put a node in FAIL > > Hi All > > I'm validating this feature [1] and one of the points to validate is that pods get > EVICTED on failed nodes in less than 50 seconds, Did anyone knows a way to > put the node (controller or compute) on FAIL? > > I tried turning off the node but seems like this is not detected as failed node > because pods are not evicted and actually the status of the node is show as > OFFLINE. 
> > Regards, > José > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From juan.carlos.alonso at intel.com Mon Apr 15 16:03:45 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Mon, 15 Apr 2019 16:03:45 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190412 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F17785@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35F17785@SHSMSX104.ccr.corp.intel.com> Message-ID: <8557B550001AFB46A43A0CCC314BF85153CBE1E0@FMSMSX108.amr.corp.intel.com> Hi, Yes, Jenkins infrastructure checks for a new CENG ISO in the early morning, if there is a new ISO the deployment and sanity start. All deployment process is automated. There are some manual test cases only. Regards. Juan Carlos Alonso From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Monday, April 15, 2019 10:08 AM To: Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io; Cabrales, Ada Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190412 Ada/Maria, Do you have a way to automatically trigger the sanity as soon as a successful Cengen build is available? I am not exactly sure if we already have full automation so that it could be launched in night time, or how much manual setup is still required? Thx. - cindy From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Saturday, April 13, 2019 1:46 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190412 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-12 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 56] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 56 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 56 TCs] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 56 TCs] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] -------------------------------------------------------------------------------- libvirt pod on CrashLoopBackOff state due to unreachable hugetlb sysfs. 
https://bugs.launchpad.net/starlingx/+bug/1824567 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Mon Apr 15 16:14:18 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Mon, 15 Apr 2019 11:14:18 -0500 Subject: [Starlingx-discuss] Edge Computing Use Case, Deployment Advice Needed In-Reply-To: References: Message-ID: sorry for the late reply On Wed, Apr 3, 2019 at 7:21 AM Curtis wrote: > > On Mon, Apr 1, 2019 at 7:43 PM Arce Moreno, Abraham wrote: >> >> > I added some points/questions inline. >> >> Thanks Curtis for your time! >> >> > > We are integrating this demo in our spare time to ramp up in cloud >> > > technologies and one of its imperatives is a working solution. It started as a use >> > > case proposal around unmanned aerial systems [0], then decided to avoid some >> > > of the complexity involved in flying the drones, and finally landed it as a use case >> > > around home automation / smart cities at the network edge. >> >> > First off, I'd like to let people know that we are planning on doing some kind of >> > "edge" proof-of-concept with Packet.com resources, so perhaps the project you >> > discuss could fit in with that. I'm sure we'll chat about it at some point here. >> > >> > At the next TSC meeting we'll discuss how to get the packet projects off the >> > ground, so feel free to attend. :) >> >> Awesome! We will be paying attention to community communications about this topic. >> >> > > This demo has currently integrated the following acceleration resources: >> > > - GPU >> > > - VPU (Movidius NCS) >> >> > I would not expect a USB device like the Movidius NCS to be available in most >> > STX deployments, but maybe? >> >> Maybe, Movidius NCS seems to be one of one those exploration paths to offload some workloads, and where budget could make a difference in comparison with FPGAs. > > > Oh for sure, cost effective. I see what you mean. > >> >> > > [ StarlingX Deployment ] [ Offload ] >> > > What would be the preferred way to deploy this use case proposal in >> > > StarlingX? We understand the following options are available including its >> > > preference: >> > > >> > > 1. Via Kubernetes (Not Preferred) >> > > 2. Via Virtual Machine (Preferred) >> > > 3. Via Bare Metal (Preferred) We should support either of them in my opinion, your project is a great example of cases where potential user in the future might not want to run their container apps in a vm I can take the AR of test the boundaries of what are the needs in kernel space for the latest kernel that other distributions provide. Maybe having an alternative latest LTS kernel for centos might not be that crazy after this kind of approach . Regards >> > > >> > > Are the above options and their preference, correct? If not, can you >> > > please give us some hints behind your answer. >> >> > From my standpoint, I think #3 would be the least common option. #2 would be >> > a good place to start, but I don't think #1 is "not preferred", I guess it depends >> > on where these preferences are coming from. >> >> Understood, we think it is worth to try option 2 initially at least for the core applications of the use case. >> >> > > [ StarlingX Deployment ] [ Provisioning ] >> > > >> > > As mentioned at the beginning, another of our imperatives, is to >> > > exercise zero touch provisioning. 
>> > > >> > > Does it makes sense to split the provisioning in 2 parts based in the >> > > required time for the demo components to live? >> > > >> > > - The core applications 100% uptime >> > > - Services on demand / 100 uptime in some cases >> >> > By zero touch provisioning do you just mean automation using IaaS APIs? eg. >> > the docker compose file you link to? Or something else? >> >> We understand the term from its definition but that "something else" is not in our knowledge yet. From our current understanding, that zero touch provisioning will allow us to deploy with one single instruction: >> >> - The core applications part of the use case (e.g. access to the different dashboards) >> - The services part of the use case: the start and stop of X service (e.g. face recognition, object recognition, etc.) for each of the wanted video streams. >> >> We will appreciate if you can share any online resource where we can learn more about this zero touch concept in a practical way (e.g. whitepaper, use case) so we can land into our use case. > > > I'm interested in "zero touch" and I'll be doing some research over the next while. This is also potentially something that can benefit stx. This is just me talking, but I think there is a difference between zero touch and automation. To me the canonical example of ZT would be turning on a device, typically physical, and that device starts up, registers, and then is scheduled and takes on some kind of personality for whatever workload is scheduled to it, all without any human intervention. > > Manually initiating an automation workflow, like say a docker compose run, doesn't feel like ZT to me, but again I'm still working to define it for myself. :) > >> >> >> Again, thank you Curtis for your time and help to answer our questions. > > > No thank you, I think this is great. :) > > Are you going to be doing your work in the public, like in a public git repo? > > Thanks, > Curtis > > > -- > Blog: serverascode.com > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ada.cabrales at intel.com Mon Apr 15 17:28:49 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 15 Apr 2019 17:28:49 +0000 Subject: [Starlingx-discuss] [ Test ] meeting - not taking place on 04/15 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC4D2F@FMSMSX114.amr.corp.intel.com> Hello, Several members of the testing team are out this week. We won't have a meeting tomorrow, but you can send your questions or comments to the mailing list, Numan and me. 
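To make the zero-touch idea in the Edge Computing thread above a bit more concrete: the distinction being drawn is between a device that, on power-up, registers itself and receives a personality with no human in the loop, versus a human manually kicking off an automation workflow such as a docker compose run. A minimal sketch of the first case is below; it is purely illustrative, and the registration endpoint, payload and personality values are assumptions, not part of any StarlingX API.

    # Illustrative zero-touch registration agent (hypothetical API).
    # On first boot the device announces itself, then polls until an
    # orchestrator assigns it a personality; no operator action is involved.
    import time
    import requests

    REGISTRY_URL = "http://orchestrator.example.test/register"        # hypothetical
    STATUS_URL = "http://orchestrator.example.test/devices/{serial}"  # hypothetical

    def zero_touch_register(serial):
        requests.post(REGISTRY_URL, json={"serial": serial}, timeout=10)
        while True:
            info = requests.get(STATUS_URL.format(serial=serial), timeout=10).json()
            if info.get("personality"):
                return info["personality"]  # e.g. "worker" or "face-recognition-node"
            time.sleep(30)

    if __name__ == "__main__":
        print("assigned personality:", zero_touch_register("ABC123"))

By contrast, a manually triggered docker compose deployment automates the steps but still starts from a human action, which is the difference Curtis draws above.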
Thanks Ada From mario.alfredo.c.arevalo at intel.com Mon Apr 15 18:11:46 2019 From: mario.alfredo.c.arevalo at intel.com (Arevalo, Mario Alfredo C) Date: Mon, 15 Apr 2019 18:11:46 +0000 Subject: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F1395E@SHSMSX104.ccr.corp.intel.com> References: <6594B51DBE477C48AAE23675314E6C466459EDD6@fmsmsx107.amr.corp.intel.com>, <2FD5DDB5A04D264C80D42CA35194914F35ED90C1@SHSMSX104.ccr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C466459F616@fmsmsx107.amr.corp.intel.com>, <2FD5DDB5A04D264C80D42CA35194914F35EF86FB@SHSMSX104.ccr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C46645A3E23@fmsmsx107.amr.corp.intel.com> ,<2FD5DDB5A04D264C80D42CA35194914F35F1395E@SHSMSX104.ccr.corp.intel.com> Message-ID: <6594B51DBE477C48AAE23675314E6C46645B7F35@fmsmsx107.amr.corp.intel.com> Hi Cindy, The last week I have received great feedback from the community by gerrit and by IRC (thanks for that), and actually I am working on the new patch versions and making some tests with them in order to send the update this week. As you mention there are 3 pending patches, 2 related to horizon and 1 of them about VIM. The patches which I am working right now are dependencies, however the current available WIP version can be a refernce point and it will possible to send WIP versions about this pending patches. Thank you. Best regards. Mario. ________________________________________ From: Xie, Cindy Sent: Friday, April 12, 2019 8:29 PM To: Arevalo, Mario Alfredo C; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Cc: Tao Liu; Frank.Miller at windriver.com; Botello Ortega, Luis; An, Ran1; Sun, Austin Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Mario, Starting from next week, Ran An from my team will have some bandwidth working together with you for SB#2004008. Just want to double check with you if you see a needs here, I see we still have 3 tasks in "todo" status but not sure if they are independent enough to allow parallel work btw you and Ran. How about the test results from your side about integration of FM chart w/ Armada? I see some good progress made on those patches already uploaded. Thx. - cindy -----Original Message----- From: Xie, Cindy Sent: Friday, April 5, 2019 10:21 AM To: Arevalo, Mario Alfredo C ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Thanks Mario for the update. Please continue the integration & testing for the FM chart w/ Armada system for those pending patches. You can share the test cases to the community as well so we can have a review. For the tasks still "todo", when you think we can upload initial patches? Or you are not working on those for now? Just need to know the ETA for those. Mingyuan is interested but he is still working on Ironic so we may still need to rely on you for FM at this moment. Thanks. 
- cindy -----Original Message----- From: Arevalo, Mario Alfredo C Sent: Thursday, April 4, 2019 11:00 AM To: Xie, Cindy ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi Cindy, Actually, Luis and me have had some issues related to the integration of the FM chart with armada system in some local tests, I have been working on some patches updates to solve this. Right now I am creating an ISO image from scratch with these patches in order to test them in a clean environment. At this moment I would like to focus on this issue during the rest of the week and I will continue with the other patches related to horizon and another one about the implementation of the PUT method for the FM restful API.. At this moment my progress in the pending patches is research, however if there are someone interested about these pending patches, let me know. Thank you for your attention. Best regards. Mario. ________________________________________ From: Xie, Cindy Sent: Wednesday, April 03, 2019 4:12 PM To: Arevalo, Mario Alfredo C; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Cc: Tao Liu; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi, Mario, I see that you made very good progress in uploading several patches against SB#2004008 - anything needs help for the remaining 3 tasks so far? Thx. - cindy -----Original Message----- From: Arevalo, Mario Alfredo C Sent: Wednesday, March 27, 2019 3:38 AM To: Xie, Cindy ; Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com; Botello Ortega, Luis Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi Cindy, The first version of the patches will take around 2 weeks, after that, a validation step will start. In this step I am going to update the patches according to the feedback received from the community and Luis Botello will help to validate the functionality of the patches. As final step, I would like to execute the sanity when all patches are reviewed by the community an they are ready to be merged. This final step could vary around 2-3 weeks, it will depend on the response time from the community and the complexity of the required updates, in addition to the validation tasks. Best regards. Mario. ________________________________________ From: Xie, Cindy Sent: Monday, March 25, 2019 5:44 PM To: Arevalo, Mario Alfredo C; Arce Moreno, Abraham; starlingx-discuss at lists.starlingx.io Cc: Tao Liu; Frank.Miller at windriver.com Subject: RE: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Mario, Nice to know that you're getting all information and having better understanding for the tasks. We probably needs to get a little bit more detail granularity of your plan, for each task in the storyboard: - when the patches will be uploaded for review; - what tests you're planning to do? Any support required from Ada's team? and when... - when you expect the patch review comments can be addressed and patch merged to master. Thanks. 
- cindy -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Tuesday, March 26, 2019 8:38 AM To: Arce Moreno, Abraham ; starlingx-discuss at lists.starlingx.io Cc: Tao Liu ; Frank.Miller at windriver.com Subject: Re: [Starlingx-discuss] Fault Management Containerization (SB 2004008) Follow Up Hi team, Thank you for your feedback from our last meeting, and this is my update. I am checking all points described in this thread. Actually I have got progress in the topic related to snmp and the relation with oamcontroller, sysinv and cgts-client. I plan to send a PR with information about findings/architecture to the stx-fault/doc in a future. I think, it is not necessary another meeting as it was mentioned, I think I have enough information to continue and I am going to update the current reviews and send news according to the points discussed until today, and contact Tao for specific questions. Thanks Tao, Abraham and Frank. Best regards. Mario. ________________________________________ From: Arce Moreno, Abraham Sent: Friday, March 22, 2019 10:37 AM To: starlingx-discuss at lists.starlingx.io Cc: Arevalo, Mario Alfredo C; Tao Liu Subject: Fault Management Containerization (SB 2004008) Follow Up Thanks Frank for setting this up. Thanks everyone for your attendance to this meeting, here you have high level notes and ToDos based in the topics covered. In Summary - The presentation Stx-Fault/Containers is located at [0]. - Tao will kindly update the Fault Management architecture diagram, slide 8. - Mario will send an email no later than Monday afternoon with the latest findings / questions based in his 5 ToDos. - We will meet again on Tuesday to finalize on tasks and implementation details. If we are forgetting about any key point in this email, please do not hesitate to reply. StarlingX Architecture - 2 instances for each of the following projects: - Keystone - Horizon - Barbican - Fault Management will have 2 instances as well. Fault Management Architecture - [ToDo] [Tao] to modify the Fault Management architecture (Slide 8) Thanks Tao! - fm-api runs in compute node, snmp provide interfaces - [ToDo] [Mario] to check these statements Fault Management REST API - [ToDo] [Mario] to write the next level of details for REST API mapping / implementation, consider to include PUT to Event Log. Fault Management Architecture - python-fmclient is a wrapper to fm_cli / fm_api - [ToDo] [Mario] to understand more about fm_cli as a wrapper and how does it interact and affects fault management containerized strategy. FM Proposal - Remove mysql, fm-api, fm-common - [ToDo] [Mario] to understand about the removal of fm-api and fm-common from the containerized instance. - Dependency to cgts-client - [ToDo] [Mario] to understand what is cgtc-client and how does it interacts with fault management and the new containerized instance. 
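As a rough illustration of the REST API mapping ToDo above (including the proposed PUT to the Event Log), a containerized FM service would be exercised by clients along these lines. The base URL, port, resource paths and fields below are assumptions made for the sketch; the authoritative mapping is exactly what that ToDo is meant to produce.

    # Hedged sketch of a client talking to a containerized FM REST API.
    # Paths (/v1/alarms, /v1/event_log) and the PUT semantics are assumptions.
    import requests

    FM_URL = "http://fm-api.example.test:18002"      # hypothetical service address
    HEADERS = {"X-Auth-Token": "<keystone-token>",   # placeholder token
               "Content-Type": "application/json"}

    def list_alarms():
        resp = requests.get(FM_URL + "/v1/alarms", headers=HEADERS, timeout=10)
        resp.raise_for_status()
        return resp.json()

    def put_event_log(log_id, payload):
        # The PUT method discussed above for the Event Log resource.
        resp = requests.put(FM_URL + "/v1/event_log/" + log_id,
                            headers=HEADERS, json=payload, timeout=10)
        resp.raise_for_status()
        return resp.json()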
OpenStack Applications The following 2 projects will make use of the Fault Management containerized: - starlingx-dashboard - stx-nfv [0] https://docs.google.com/presentation/d/1_vG83aHTToXlIdJxaJpVL-MHWfRGnxLuyEdFDt-nfwo/edit?usp=sharing _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Mon Apr 15 18:12:11 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 15 Apr 2019 18:12:11 +0000 Subject: [Starlingx-discuss] Starlingx HW compatibility list In-Reply-To: References: Message-ID: <9A85D2917C58154C960D95352B22818BD071E90E@fmsmsx123.amr.corp.intel.com> Greetings and welcome to the community! You can find a high level description of the hardware needed in the deployment guide documentation, for example here [0]. We are in the process of updating our documents, but the basic hardware requirements are not expected to change. The guide just talks about minimum hardware. Any reasonable server will do for basic operation. If you are looking to use advanced processor features to optimize a particular workload, you’ll need to use hardware that supports those features. brucej [0] https://docs.starlingx.io/deployment_guides/current/simplex.html From: Jacky Chen [mailto:jacky at linux.com] Sent: Thursday, April 11, 2019 6:58 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Starlingx HW compatibility list Hi All As new starter, I would like to know if any HW model have been tested by community user. If community have a form can be provided to everyone to give feedback which HW model and components(cpu, memory, raid, chipset, nic, etc...) are supported by pre-build iso. This could be very helpful for newbie, and we can rich the HW compatibility list from everyone contribution. Thanks https://docs.starlingx.io/installation_guide/latest/index.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Mon Apr 15 18:21:04 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Mon, 15 Apr 2019 18:21:04 +0000 Subject: [Starlingx-discuss] [Containers] Test Scenarios based on feature plan In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A96863F@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A96863F@fmsmsx101.amr.corp.intel.com> Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8C640@ALA-MBD.corp.ad.wrs.com> José, One general comment I have is that your testcases seem to be based on specific commits (at least that is what appears in the references column). I'd prefer to see your testcases based on what is captured in the specs and the storyboards. The problem with defining testcases at the commit level is that there are often several intermediate commits required to complete a piece of functionality. If you are creating a testcase based on a particular commit message, it is very possible that the behavior was changed in subsequent commits and your testcase won't be right. Instead, basing your testcases on the behavior described in the spec or the storyboard should give a better view of what the final system behavior is supposed to be. 
Bart -----Original Message----- From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] Sent: April 12, 2019 3:43 PM To: starlingx-discuss at lists.starlingx.io Cc: Miller, Frank; Wensley, Barton Subject: [Containers] Test Scenarios based on feature plan Hi All We are working on the analysis of the storyboards already merged on the feature plan [1] to create test cases. We kindly ask for your feedback on this work so we can create more accurate tests for the different functionalities; you can find our ongoing contribution in a Google sheet [2]. Some scenarios have only a general description, whilst others have been completed with Test Steps. 
> > 1 - > https://docs.google.com/spreadsheets/d/1lMMclUmLMPTuk_a5URMMoWrJR4 > MbeA_UINnBliumg2Y/edit#gid=991138079 > 2- > https://docs.google.com/spreadsheets/d/1dwcBwY4Yq1Lo9Der4RylzQ6KYp0Bs > MHohhEmhwpauDo/edit#gid=637180508 > > > Regards, > José > From cesar.lara at intel.com Mon Apr 15 19:49:13 2019 From: cesar.lara at intel.com (Lara, Cesar) Date: Mon, 15 Apr 2019 19:49:13 +0000 Subject: [Starlingx-discuss] [multios][meetings] Multi-OS team meeting minutes 4/15/2019 Message-ID: <0B566C62EC792145B40E29EFEBF1AB4710FF3617@fmsmsx123.amr.corp.intel.com> Mutli-OS team meeting Agenda for 4/15/2019 - General Discussion Notes - Next phase for PoC We agreed that the next step for the project will have to include the following vectors of work - Bring up a Controller 0 - Build the platform to consume OpenStack containers For any of these vectors the required amount of engineering is considerable, so we might take a look on how many resources are available and what is the first step among those vectors. AR - Cesar to follow up with local Multi-OS team Meeting for the next 2 week is cancelled due to Mexico holidays and Open Infrastructure summit Update - Cesar did follow up with local team, outside of the meeting scope and we agreed upon taking first on the platform (K8 consuming STX OpenStack over our modified Ubuntu image) and start the discovery on the requirements for the controller 0, due that controller 0 will require a lot more effort and dependency analysis before we can start to replicate this. Regards Cesar Lara Software Engineering Manager OpenSource Technology Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose.perez.carranza at intel.com Mon Apr 15 19:56:02 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Mon, 15 Apr 2019 19:56:02 +0000 Subject: [Starlingx-discuss] [Containers] Test Scenarios based on feature plan References: <0A5D9A624DF90343892F8F3FE7DE525A2A96863F@fmsmsx101.amr.corp.intel.com> <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8C640@ALA-MBD.corp.ad.wrs.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A968BD0@fmsmsx101.amr.corp.intel.com> Hi Bart Also regarding for your feedback, for some storyboard the description is quite generic and with this sometimes is hard to visualize the test scenarios, that is why we are checking directly the patches to try to get more specific information, did you know if there is another place where this is documented? A wiki or a spec document where we can take as a base for our scenarios proposal? Regards, José > -----Original Message----- > From: Perez Carranza, Jose > Sent: Monday, April 15, 2019 1:49 PM > To: 'Wensley, Barton' ; starlingx- > discuss at lists.starlingx.io > Cc: Miller, Frank > Subject: RE: [Containers] Test Scenarios based on feature plan > > Hi Bart > > Thanks for your feedback, sure I'll base the scenarios more on the on the > storyboard specification rather than the feature commits. > > Regards, > José > > > > -----Original Message----- > > From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] > > Sent: Monday, April 15, 2019 1:21 PM > > To: Perez Carranza, Jose ; starlingx- > > discuss at lists.starlingx.io > > Cc: Miller, Frank > > Subject: RE: [Containers] Test Scenarios based on feature plan > > > > José, > > > > One general comment I have is that your testcases seem to be based on > > specific commits (at least that is what appears in the references > > column). 
I'd prefer to see your testcases based on what is captured in > > the specs and the storyboards. The problem with defining testcases at > > the commit level is that there are often several intermediate commits > > required to complete a piece of functionality. If you are creating a > > testcase based on a particular commit message, it is very possible > > that the behavior was changed in subsequent commits and your testcase > > won't be right. Instead, basing your testcases on the behavior > > described in the spec or the storyboard should give a better view of what the > final system behavior is supposed to be. > > > > Bart > > > > -----Original Message----- > > From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] > > Sent: April 12, 2019 3:43 PM > > To: starlingx-discuss at lists.starlingx.io > > Cc: Miller, Frank; Wensley, Barton > > Subject: [Containers] Test Scenarios based on feature plan > > > > Hi All > > > > We are working on the analysis of the storyboards already merged on > > feature plan [1] to create test cases, we kindly ask for your feedback > > on this development so we can create more accurate tests for the > > different functionalities, you can find our continuous contribution on a > google sheet [2]. > > Some scenarios have only general description whilst other have been > > completed with Test Steps. > > > > 1 - > > > https://docs.google.com/spreadsheets/d/1lMMclUmLMPTuk_a5URMMoWrJR4 > > MbeA_UINnBliumg2Y/edit#gid=991138079 > > 2- > > > https://docs.google.com/spreadsheets/d/1dwcBwY4Yq1Lo9Der4RylzQ6KYp0Bs > > MHohhEmhwpauDo/edit#gid=637180508 > > > > > > Regards, > > José > > From Numan.Waheed at windriver.com Mon Apr 15 20:49:46 2019 From: Numan.Waheed at windriver.com (Waheed, Numan) Date: Mon, 15 Apr 2019 20:49:46 +0000 Subject: [Starlingx-discuss] Upstreaming Automation Framework Message-ID: <3CAA827B7A79BA46B15B280EC82088FE4829B261@ALA-MBD.corp.ad.wrs.com> StarlingX community has felt the lack of an automation framework since the beginning of this project. I am excited to share that we are working on upstreaming the automation framework that Wind River has been using for over three years now. This automation framework is based on PyTest but has been customized by adding Keywords that help test case creation simple and quick for this project. PyTest was chosen as automation framework because of its maintainability, debugability, flexibility and scalability. It has simple syntax and parametrization capability that allows to scale quickly. It possesses strong support for test fixtures and state management via setup/teardown hooks. Test case selection and deselection is fairly easy with the use of Markers. As mentioned earlier, this framework has been in use for over three years. The framework and a set of test cases will become available to community in phases. In the first phase, we will be upstreaming the framework and related keywords. Next phase will include upstreaming the test case. We also plan to create a wiki for helping community members in using this framework and executing automated test cases or writing their own test cases. Stay tuned. Numan. -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Mon Apr 15 21:01:00 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 15 Apr 2019 17:01:00 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 80 - Failure! 
Message-ID: <1803123296.184.1555362061355.JavaMail.javamailuser@localhost> Project: STX_build_docker_flock_images Build #: 80 Status: Failure Timestamp: 20190415T191410Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190415T145427Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190415T145427Z OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root BASE_VERSION: master-dev-20190415T145427Z PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190415T145427Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca LATEST_PREFIX: master PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190415T145427Z/logs PUBLISH_TIMESTAMP: 20190415T145427Z FLOCK_VERSION: master-centos-dev-20190415T145427Z PREFIX: master TIMESTAMP: 20190415T145427Z BUILD_STREAM: dev REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190415T145427Z/outputs REGISTRY: docker.io From build.starlingx at gmail.com Mon Apr 15 21:01:04 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 15 Apr 2019 17:01:04 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 83 - Failure! Message-ID: <967743901.187.1555362065169.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 83 Status: Failure Timestamp: 20190415T190654Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190415T145427Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190415T145427Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190415T145427Z/logs MASTER_BUILD_NUMBER: 64 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190415T145427Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos PUBLISH_TIMESTAMP: 20190415T145427Z DOCKER_BUILD_ID: jenkins-master-20190415T145427Z-builder TIMESTAMP: 20190415T145427Z OS_VERSION: 7.6.1810 BUILD_STREAM: dev PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190415T145427Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190415T145427Z/outputs From abraham.arce.moreno at intel.com Tue Apr 16 00:27:17 2019 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Tue, 16 Apr 2019 00:27:17 +0000 Subject: [Starlingx-discuss] Public sandbox? In-Reply-To: References: Message-ID: > > Some people would love to see also a virtual showroom of StarlingX > > features, more below... > Sounds great, where did this discussion occur? I'd like to take part if I can, > though I am somewhat limited in N/A daytime meeting times. It was an informal team talk while we were planning some self-learning activities for the next months. We can arrange a sync up meeting If you want to get more into the specific details. 
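Going back to Numan's message above on upstreaming the PyTest-based automation framework: the fixture and marker mechanics he highlights (setup/teardown state management, easy selection and deselection of tests) look roughly like the sketch below. The session class and test bodies are invented for illustration and are not the actual framework code.

    # Minimal PyTest sketch: a module-scoped fixture for setup/teardown and
    # markers for test selection. Registering the markers in pytest.ini
    # (e.g. "markers = sanity: ..." under [pytest]) avoids warnings.
    import pytest

    class ControllerSession:
        """Stand-in for an SSH/keyword layer; illustrative only."""
        def __init__(self, host):
            self.host = host
        def run(self, cmd):
            return 0   # pretend every command succeeds
        def close(self):
            pass

    @pytest.fixture(scope="module")
    def controller():
        session = ControllerSession("controller-0")   # setup
        yield session                                  # tests run here
        session.close()                                # teardown, runs even on failure

    @pytest.mark.sanity
    def test_openstack_pods_running(controller):
        assert controller.run("kubectl get pods -n openstack") == 0

    @pytest.mark.robustness
    def test_lock_unlock_host(controller):
        assert controller.run("system host-lock compute-0") == 0
        assert controller.run("system host-unlock compute-0") == 0

Selection then happens on the command line, for example "pytest -m sanity" runs only the sanity-marked cases and deselects the rest.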
> > We can start easily right away, deploying the demos that you guys have > > presented in the past conferences, and for every demo / workload / whatever > > you want to call it, we must have generate a: > > > > - Reference Architecture > > - Solution Brief > > - Application Note > Where does these requirements come from? At Intel, we use the above artifacts when we do customer enablement and they are the core of our solutions libraries. Not a specific requirement from anyone but our proposal for ways to teach, give a working solution base or simply create awareness on StarlingX. Let us know how do we proceed with this proposal. From maria.g.perez.ibarra at intel.com Tue Apr 16 01:53:11 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 16 Apr 2019 01:53:11 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190415 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-15 (link) Status: Yellow Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] 3 TCS Fail Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] [Fail : 3 TCs] Due to an issue in our automation environment, only the execution for simplex virtual mode was completed. We are working on fixing the issue to deliver results as soon possible. For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Tue Apr 16 11:47:42 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Tue, 16 Apr 2019 11:47:42 +0000 Subject: [Starlingx-discuss] [Containers] Test Scenarios based on feature plan In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A968BD0@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A96863F@fmsmsx101.amr.corp.intel.com> <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8C640@ALA-MBD.corp.ad.wrs.com> <0A5D9A624DF90343892F8F3FE7DE525A2A968BD0@fmsmsx101.amr.corp.intel.com> Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8C992@ALA-MBD.corp.ad.wrs.com> José, In that case, please try to look at the full set of commits for a storyboard before writing the testcases - that way you will have the big picture and can avoid writing testcases that have been invalidated by a later commit. Bart -----Original Message----- From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] Sent: April 15, 2019 3:56 PM To: Wensley, Barton; starlingx-discuss at lists.starlingx.io Cc: Miller, Frank Subject: RE: [Containers] Test Scenarios based on feature plan Hi Bart Also regarding for your feedback, for some storyboard the description is quite generic and with this sometimes is hard to visualize the test scenarios, that is why we are checking directly the patches to try to get more specific information, did you know if there is another place where this is documented? A wiki or a spec document where we can take as a base for our scenarios proposal? Regards, José > -----Original Message----- > From: Perez Carranza, Jose > Sent: Monday, April 15, 2019 1:49 PM > To: 'Wensley, Barton' ; starlingx- > discuss at lists.starlingx.io > Cc: Miller, Frank > Subject: RE: [Containers] Test Scenarios based on feature plan > > Hi Bart > > Thanks for your feedback, sure I'll base the scenarios more on the on the > storyboard specification rather than the feature commits. 
> > Regards, > José > > > > -----Original Message----- > > From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] > > Sent: Monday, April 15, 2019 1:21 PM > > To: Perez Carranza, Jose ; starlingx- > > discuss at lists.starlingx.io > > Cc: Miller, Frank > > Subject: RE: [Containers] Test Scenarios based on feature plan > > > > José, > > > > One general comment I have is that your testcases seem to be based on > > specific commits (at least that is what appears in the references > > column). I'd prefer to see your testcases based on what is captured in > > the specs and the storyboards. The problem with defining testcases at > > the commit level is that there are often several intermediate commits > > required to complete a piece of functionality. If you are creating a > > testcase based on a particular commit message, it is very possible > > that the behavior was changed in subsequent commits and your testcase > > won't be right. Instead, basing your testcases on the behavior > > described in the spec or the storyboard should give a better view of what the > final system behavior is supposed to be. > > > > Bart > > > > -----Original Message----- > > From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] > > Sent: April 12, 2019 3:43 PM > > To: starlingx-discuss at lists.starlingx.io > > Cc: Miller, Frank; Wensley, Barton > > Subject: [Containers] Test Scenarios based on feature plan > > > > Hi All > > > > We are working on the analysis of the storyboards already merged on > > feature plan [1] to create test cases, we kindly ask for your feedback > > on this development so we can create more accurate tests for the > > different functionalities, you can find our continuous contribution on a > google sheet [2]. > > Some scenarios have only general description whilst other have been > > completed with Test Steps. > > > > 1 - > > > https://docs.google.com/spreadsheets/d/1lMMclUmLMPTuk_a5URMMoWrJR4 > > MbeA_UINnBliumg2Y/edit#gid=991138079 > > 2- > > > https://docs.google.com/spreadsheets/d/1dwcBwY4Yq1Lo9Der4RylzQ6KYp0Bs > > MHohhEmhwpauDo/edit#gid=637180508 > > > > > > Regards, > > José > > From serverascode at gmail.com Tue Apr 16 12:07:42 2019 From: serverascode at gmail.com (Curtis) Date: Tue, 16 Apr 2019 08:07:42 -0400 Subject: [Starlingx-discuss] Public sandbox? In-Reply-To: References: Message-ID: On Mon, Apr 15, 2019 at 8:27 PM Arce Moreno, Abraham < abraham.arce.moreno at intel.com> wrote: > > > Some people would love to see also a virtual showroom of StarlingX > > > features, more below... > > > Sounds great, where did this discussion occur? I'd like to take part if > I can, > > though I am somewhat limited in N/A daytime meeting times. > > It was an informal team talk while we were planning some self-learning > activities for the next months. > We can arrange a sync up meeting If you want to get more into the specific > details. > Ok cool. My suggestion is going to be that we setup a Special Interest Group (SIG) around the packet.com work. This SIG would exist for a short time and we would funnel all the related packet.com work through it for the time being, at least in term of what resources are being used and how. Then, once we felt everything was well understood in terms of how we organize and use the packet.com infrastructure we would disband it and work would continue as normal in the teams that are using the infrastructure. I'll mention this at the community and TSC meetings this week. 
> > > > We can start easily right away, deploying the demos that you guys > have > > > presented in the past conferences, and for every demo / workload / > whatever > > > you want to call it, we must have generate a: > > > > > > - Reference Architecture > > > - Solution Brief > > > - Application Note > > > Where does these requirements come from? > > At Intel, we use the above artifacts when we do customer enablement and > they are the core of our solutions libraries. Not a specific requirement > from anyone but our proposal for ways to teach, give a working solution > base or simply create awareness on StarlingX. Let us know how do we proceed > with this proposal. > I think we would just want to make sure that whatever artifacts we generate are valuable to the community and we shouldn't just bring in an existing process if it doesn't match up with community requirements, though, of course, it might end up being exactly what we want. I mean I think I can understand what a reference architecture is, but I have no context for what an application note is. :) Thanks, Curtis -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Apr 16 13:08:13 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 16 Apr 2019 13:08:13 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack Distro meeting, 4/17 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F196BB@SHSMSX104.ccr.corp.intel.com> Agena for 4/17 meeting: - Ceph upgrade status 1. patch review status (Daniel) 2. Ceph dev build validation status (Fernando) - QAT driver upgrade 1. QAT driver status (Haitao) 2. test plan prepration (Ricardo) - Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Waheed, Numan; Hu, Yong; starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Shang, Dehao; Wold, Saul; Lin, Shuicheng; Zhu, Vivian; Somerville, Jim; Sun, Austin; 'Khalil, Ghada'; Jones, Bruce E; Troyer, Dean; 'Rowsell, Brent' Cc: 'Poncea, Ovidiu'; Gomez, Juan P; Lara, Cesar; Fang, Liang A; Cobbley, David A; 'Chen, Jacky'; Perez Rodriguez, Humberto I; Martinez Monroy, Elio; 'Seiler, Glenn'; Arce Moreno, Abraham; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Shuquan Huang; 'Young, Ken'; Perez Carranza, Jose; Hu, Wei W; Martinez Landa, Hayde; Armstrong, Robert H Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, April 17, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From bruce.e.jones at intel.com Tue Apr 16 13:36:34 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 16 Apr 2019 13:36:34 +0000 Subject: [Starlingx-discuss] Distro.openstack meeting Apr 16 2019 Message-ID: <9A85D2917C58154C960D95352B22818BD071F4AE@fmsmsx123.amr.corp.intel.com> Meeting notes and agenda for the 4/16 meeting * Shuquan's email: * > These 2 fixes have been backported to stein branch at 4/11. Let's make sure stx stein sync up to the latest stable stein. ? > https://review.openstack.org/#/c/649320/ ? 
> https://review.openstack.org/#/c/649319/ * Dean is checking our stx-stein.1 branch. Bruce to file a LP and assign to Dean. ? StarlingX bug is https://bugs.launchpad.net/starlingx/+bug/1824989 ? These reviews merged after 19.0.0.rc2 was tagged and are not in stable/stein. The stx/stein.1 branch as of 15Apr2019 is stable/stein, these are new backports since then, it is prudent to include them, Gerry has already begun testing his stack and may not want to start over at this point (he is working on the release branch). TBD... * Update on stx-stein.1 branch - ready to go. Needs the .gitreview file removed. * Update on backport of Artom's patches - Gerry is working on this, has the code ported to a branch and is testing it. Should merge this week. * Update on other open issues - no updates, Nova community is working on spec reviews * Hack-a-thon in China this week - Ildiko and Alex and 99Cloud team will attend. Should help move things forward. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Apr 16 15:00:26 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 16 Apr 2019 15:00:26 +0000 Subject: [Starlingx-discuss] Community Call (April 17, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A28BE1@ALA-MBD.corp.ad.wrs.com> Reminder of tomorrow's Community call - please feel free to add to the agenda at [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1500_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190417T1400 From scott.little at windriver.com Tue Apr 16 15:16:57 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 16 Apr 2019 11:16:57 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 170 - Failure! In-Reply-To: <1928201988.150.1555121037447.JavaMail.javamailuser@localhost> References: <1928201988.150.1555121037447.JavaMail.javamailuser@localhost> Message-ID: It took a few tries to get helm charts building correctly after Angie's changes. Good charts start Apr 15. Changes ... - helm charts must now build after docker images. - helm chart creation requires that you supply a list of images to add or override. - we create 4 helm chart combinations... dev/stable ... versioned/latest - if docker images were not built by the current build, use an image list from the most recent build that did. 
Scott On 2019-04-12 10:03 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_helm_charts > Build #: 170 > Status: Failure > Timestamp: 20190413T020352Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190412T233000Z/logs > -------------------------------------------------------------------------------- > Parameters > > MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190412T233000Z > OS: centos > DOCKER_BUILD_ID: jenkins-master-20190412T233000Z-builder > MY_REPO: /localdisk/designer/jenkins/master/cgcs-root > PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190412T233000Z/logs > PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190412T233000Z/logs > PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Tue Apr 16 15:30:42 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 16 Apr 2019 11:30:42 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 63 - Still Failing! In-Reply-To: <2132862362.171.1555293036139.JavaMail.javamailuser@localhost> References: <1342505705.161.1555199910259.JavaMail.javamailuser@localhost> <2132862362.171.1555293036139.JavaMail.javamailuser@localhost> Message-ID: Several build failures are due to helm chart creation issues. Corrected on Apr 15 with the 20190415T145427Z build. Scott On 2019-04-14 9:50 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_master_master > Build #: 63 > Status: Still Failing > Timestamp: 20190414T233001Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190414T233001Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Tue Apr 16 15:55:30 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 16 Apr 2019 11:55:30 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] email-test - Build # 39 - Still Failing! - foo - bar In-Reply-To: <928286800.215.1547758578469.JavaMail.javamailuser@localhost> References: <928286800.215.1547758578469.JavaMail.javamailuser@localhost> Message-ID: <358813012.193.1555430132093.JavaMail.javamailuser@localhost> Project: email-test Build #: 39 Status: Still Failing Timestamp: 20190416T155529Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/ -------------------------------------------------------------------------------- Parameters P1: foo P2: bar PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/ PUBLISH_LOGS_BASE: /tmp/logs From build.starlingx at gmail.com Tue Apr 16 16:03:20 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 16 Apr 2019 12:03:20 -0400 (EDT) Subject: [Starlingx-discuss] [foo] [build-report] email-test - Build # 40 - Still Failing! 
In-Reply-To: <358813012.193.1555430132093.JavaMail.javamailuser@localhost> References: <358813012.193.1555430132093.JavaMail.javamailuser@localhost> Message-ID: <392438256.195.1555430602115.JavaMail.javamailuser@localhost> Project: email-test Build #: 40 Status: Still Failing Timestamp: 20190416T160320Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/ -------------------------------------------------------------------------------- Parameters P1: foo P2: bar PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/ PUBLISH_LOGS_BASE: /tmp/logs From jose.perez.carranza at intel.com Tue Apr 16 17:27:37 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Tue, 16 Apr 2019 17:27:37 +0000 Subject: [Starlingx-discuss] [Containers] Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A968DCC@fmsmsx101.amr.corp.intel.com> Hi Bart/Frank I was checking this feature [1] and it explains about "openstack based worker nodes" ( openstack-compute-node=enabled), on a normal deployment all the worker nodes that I installed are labeled as 'openstack-compute-node=enabled', so my question is what is process to install a "NON-openstack based worker node" ? 1- https://storyboard.openstack.org/#!/story/2004762 Regards, José From Barton.Wensley at windriver.com Tue Apr 16 18:17:51 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Tue, 16 Apr 2019 18:17:51 +0000 Subject: [Starlingx-discuss] [Containers] In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A968DCC@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A968DCC@fmsmsx101.amr.corp.intel.com> Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8CCCC@ALA-MBD.corp.ad.wrs.com> José, Once the stx-openstack application has been installed, all worker nodes are labelled as compute nodes (as you mention below). There is a launchpad opened to address this: https://bugs.launchpad.net/starlingx/+bug/1823705 Until that bug has been fixed, you can only test non-openstack based worker nodes on a system where the stx-openstack application has not been installed. Bart -----Original Message----- From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] Sent: April 16, 2019 1:28 PM To: starlingx-discuss at lists.starlingx.io; Wensley, Barton; Miller, Frank Subject: [Containers] Hi Bart/Frank I was checking this feature [1] and it explains about "openstack based worker nodes" ( openstack-compute-node=enabled), on a normal deployment all the worker nodes that I installed are labeled as 'openstack-compute-node=enabled', so my question is what is process to install a "NON-openstack based worker node" ? 1- https://storyboard.openstack.org/#!/story/2004762 Regards, José From Ovidiu.Poncea at windriver.com Tue Apr 16 19:36:12 2019 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Tue, 16 Apr 2019 19:36:12 +0000 Subject: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon Message-ID: <4C60D9C5C8176C47874FFF36647AA19E9D647241@ALA-MBD.corp.ad.wrs.com> Hi Folks, Be aware that we will merge a change soon that may impact automated deployments and testing. The gerrit is this: https://review.openstack.org/#/c/644256/ and it will make Ceph the default storage backend. 
Therefore, once merged, users will no longer have to run this (as currently stated by the wikis): echo ">>> Enable primary Ceph backend" system storage-backend-add ceph --confirmed echo ">>> Wait for primary ceph backend to be configured" echo ">>> This step really takes a long time" while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph..'; sleep 5; done echo ">>> Ceph health" ceph -s Regards, Ovidiu -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Tue Apr 16 21:45:14 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 16 Apr 2019 21:45:14 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190415 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-16 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 57] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 57 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 57] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] 1 TCS FAIL Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] AIO - Duplex Setup 04 TCs [PASS] 2 TCS FAIL Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] Standard - Local Storage Setup 04 TCs [PASS] 2 TCS FAIL Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] Standard - Dedicated Storage Setup 04 TCs [PASS] 2 TCS FAIL Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] -------------------------------------------------------------------------------- system host-lock failed after swact during lab setup https://bugs.launchpad.net/starlingx/+bug/1824994 application-apply stx-openstack failed due to neutron pods failure https://bugs.launchpad.net/starlingx/+bug/1825045 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cboylan at sapwetik.org Wed Apr 17 00:11:41 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Tue, 16 Apr 2019 20:11:41 -0400 Subject: [Starlingx-discuss] Migrating git repos to OpenDev In-Reply-To: References: <871s3itl0p.fsf@meyer.lemoncheese.net> Message-ID: On Thu, Apr 11, 2019, at 3:19 PM, Clark Boylan wrote: > On Thu, Mar 7, 2019, at 1:47 PM, James E. Blair wrote: > > Hi, > > > > As discussed in November[1], the OpenStack project infrastructure is > > being rebranded as "OpenDev" to better support a wider community of > > projects. 
> > > > We are nearly ready to perform the part of this transition with the > > largest impact: moving the authoritative git repositories for existing > > projects. > > > > In this email, I'd like to introduce the new hosting system we are > > preparing, discuss the transition, and invite projects to work with us > > on the logistics of the change. > > > > Gerrit > > ====== > > > > Gerrit is the core of our system and it will remain so in OpenDev. As > > part of this move, we will rename the gerrit server from > > review.openstack.org to review.opendev.org. As part of the transition, > > we will automatically merge appropriate changes to all branches of all > > repositories updating .gitreview and Zuul configuration files. Any > > further changes (README files, etc.) we expect to be made by individual > > project contributors. > > > > Repository Browsing > > =================== > > > > Currently our canonical *public* repository system is the cgit server at > > https://git.openstack.org/ (and git.airshipit.org, git.starlingx.io, and > > git.zuul-ci.org). This is a load balanced cluster of several servers > > which is designed to handle all the public git repository traffic, as it > > scales much better than Gerrit (and has a more friendly domain name). > > From a technical standpoint, it's excellent, but its usability could be > > improved. > > > > Therefore, as part of this transition, we will replace the cgit servers > > with a new system based on Gitea. Gitea is a complete development > > collaboration system, but it's very flexible and will allow us to > > disable components which we aren't using. We will operate it in a > > read-only configuration where it will act as the public mirror for > > Gerrit. The advantages it has over the current system are: > > > > * Shorter domain name in project URLs: > > https://git.openstack.org/openstack/nova vs > > https://opendev.org/openstack/nova > > * Clone and browsing URLs are the same (with cgit, the browsing URL has > > an extra path component) > > * More visually pleasing code browsing > > * Integrated code searching > > * Ability to highlight multiple lines in links > > > > When we perform the transition we will install redirects from > > git.openstack.org (and the other git sites) to opendev.org, and will > > maintain those redirects for the foreseeable future. We will construct > > them so that even existing deep links to individual files in individual > > commits to cgit will redirect to the correct location on opendev.org. > > > > This system is up and running now with a live mirror of data from > > Gerrit, and you can start testing it out today at https://opendev.org/ > > > > Please let us know if you encounter any problems. > > > > If you would like to read more about the design of this system and the > > transition, see the infra-spec[2]. > > > > GitHub > > ====== > > > > Currently all OpenStack projects are replicated to GitHub. We do not > > plan on changing that during the transition, however, any projects > > outside of the openstack*/ namespaces will not automatically be > > replicated to GitHub, and we do not plan on adding that in the future. > > We do, however, support projects using Zuul to run post-merge jobs to > > push updates to GitHub or any other third-party mirrors with their own > > credentials. We will be happy to work with anyone interested in that to > > help set up jobs to do so. 
> > > > We are adopting this approach so that individual projects can have more > > control over how they are represented in social media, and to give us > > more flexibility in supporting our own organizational namespaces on > > OpenDev without assuming they map directly to GitHub. > > > > Eventually we plan on moving the OpenStack project to that system as > > well and retiring direct replication from Gerrit to GitHub completely. > > But we will defer that work until after this transition. > > > > Logistics > > ========= > > > > We can prepare much of the system in advance (as we have for the hosting > > system on opendev.org), but the actual transition and renaming of the > > Gerrit server will need to happen at once during an outage window. We > > need to schedule that outage and begin preparing for it. > > > > Since all of the project git URLs are going to change (to replace > > git.openstack.org with opendev.org and review.openstack.org with > > review.opendev.org), we can additionally take the opportunity to > > reorganize projects into different organizations. > > > > For example, during the transition we will rename Zuul, and it's > > associated projects, from the "openstack-infra" org to "zuul". So their > > new names will be "zuul/zuul", "zuul/nodepool", etc. > > > > This is an excellent time for the rest of the OpenStack Foundation > > pilot projects to do the same. > > > > If the OpenStack project desires this, it would also be a good time to > > move unofficial projects out of the openstack/ namespace. > > > > Therefore, we need your help: > > > > Action Items > > ============ > > > > We need each of the following projects: > > > > * OpenStack > > * Airship > > * StarlingX > > * Zuul > > > > To nominate a single point of contact to work with us on the transition. > > It would be helpful for that person to attend the next (and possibly > > next several) openstack infra team meetings in IRC [3]. We will work > > with those people on scheduling the transition, as well as finalizing > > the list of projects which should be renamed as part of the transition. > > > > If you manage an unofficial project and would like to take the > > opportunity to move or rename your project, please add it to this > > ethercalc[4]. > > > > [1] > > http://lists.openstack.org/pipermail/openstack-dev/2018-November/136403.html > > [2] > > http://specs.openstack.org/openstack-infra/infra-specs/specs/opendev-gerrit.html > > [3] http://eavesdrop.openstack.org/#Project_Infrastructure_Team_Meeting > > [4] https://ethercalc.openstack.org/opendev-transition > > We've made good progress in preparing this change and are still on > track to do this Friday April 19, 2019. Our project liasons have been > drawing up lists of projects to rename during the outage. One big thing > to keep in mind is unofficial OpenStack projects will no longer be in > the "openstack" namespace. They will be placed in the 'x/' namespace > instead which intends to indicate no endorsement or special ownership. > > One side effect of this change (as noted in Jim's earlier email) is > that we will stop replicating projects that move out of the openstack > namespace. Projects that wish to be replicated to Github or anywhere > else they like can do so following the steps that David Moreau-Simard > put together for us here [5]. > > This transition is likely to be a bit bumpy particularly at the start. 
> We'll be around after the transition to help fix unexpected errors and > are likely to spend a fair bit of time at the PTG improving things as > well. > > Finally, to be extra clear, we intend to put http redirects in place so > that all your old http(s) urls continue to work. Fungi has set this up > for testing with details here [6] if you would like to ensure your urls > redirect properly. > > [5] > http://lists.openstack.org/pipermail/openstack-discuss/2019-April/005007.html > [6] > http://lists.openstack.org/pipermail/openstack-discuss/2019-April/004921.html > The infra/OpenDev team continues to make good progress towards this transition and plans to perform the transition on April 19, 2019 as previously scheduled. We will begin the transition at 15:00UTC and users should plan for intermittent Gerrit and git repo outages through the day. We expect most of those will be closer to 15:00UTC than 23:00UTC. Fungi has generated a master list of project renames for the openstack namespaces: http://paste.openstack.org/show/749402/. If you have a moment please quickly review these planned renames for any obvious errors or issues. For the airship, starlingx, and zuul repo renames the repositories listed at git.airshipit.org, git.starlingx.io, and git.zuul-ci.org were used placing repos in airship/, starlingx/ and zuul/ namespaces. Any repo name prefix (like stx- and airship-) is dropped. As always we are happy to answer any questions you might have or address any concerns. Feel free to reach out. Thank you for your patience, Clark From build.starlingx at gmail.com Wed Apr 17 01:54:41 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 16 Apr 2019 21:54:41 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 270 - Failure! Message-ID: <876083256.198.1555466082906.JavaMail.javamailuser@localhost> Project: STX_DL_container_setup Build #: 270 Status: Failure Timestamp: 20190417T013114Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190417T013001Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190417T013001Z DOCKER_DL_ID: jenkins-master-20190417T013001Z-downloader PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190417T013001Z/logs DOCKER_DL_TAG: master-20190417T013001Z-downloader-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190417T013001Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Wed Apr 17 01:54:45 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 16 Apr 2019 21:54:45 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 66 - Failure! 
Message-ID: <2066219445.201.1555466086911.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 66 Status: Failure Timestamp: 20190417T013001Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190417T013001Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From ildiko.vancsa at gmail.com Wed Apr 17 03:36:44 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 17 Apr 2019 11:36:44 +0800 Subject: [Starlingx-discuss] Community marketing planning call reminder Message-ID: Hi, It is a friendly reminder that we are having the next community marketing planning call today at 8am PST. Agenda is on the etherpad: https://etherpad.openstack.org/p/2019_StarlingX_Marketing_Plans Thanks, Ildikó Sent from my iPhone From zhipengs.liu at intel.com Wed Apr 17 10:46:04 2019 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 17 Apr 2019 10:46:04 +0000 Subject: [Starlingx-discuss] Does rabbitmq listener work in NFV module Message-ID: <93814834B4855241994F290E959305C753069B2B@SHSMSX104.ccr.corp.intel.com> Hi all, >From nfv-vim.log I see below 2019-04-12T08:38:11.409 controller-0 VIM_Thread[31835] INFO rpc_listener.py.127 RPC-Listener not connected to exchange nova, queue=notifications.nfvi_nova_listener_queue. It seems NFV could not get notifications from nova now. Have we enabled rabbitmq listener in NFV after containerized version? Thanks! zhipeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Wed Apr 17 12:23:39 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Wed, 17 Apr 2019 12:23:39 +0000 Subject: [Starlingx-discuss] Does rabbitmq listener work in NFV module In-Reply-To: <93814834B4855241994F290E959305C753069B2B@SHSMSX104.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C753069B2B@SHSMSX104.ccr.corp.intel.com> Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8CF98@ALA-MBD.corp.ad.wrs.com> Zhipeng, These logs are normal in cases where the containerized rabbitmq servers are temporarily unavailable (i.e. the osh-openstack-rabbitmq-rabbitmq-0/1 pods). This can happen when a controller is locked or rebooted. The VIM should automatically re-connect to the rabbitmq servers - this could take anywhere from a few seconds to as long as a minute or two depending on the reason the rabbitmq server(s) were not available. Bart From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: April 17, 2019 6:46 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Does rabbitmq listener work in NFV module Hi all, >From nfv-vim.log I see below 2019-04-12T08:38:11.409 controller-0 VIM_Thread[31835] INFO rpc_listener.py.127 RPC-Listener not connected to exchange nova, queue=notifications.nfvi_nova_listener_queue. It seems NFV could not get notifications from nova now. Have we enabled rabbitmq listener in NFV after containerized version? Thanks! zhipeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Wed Apr 17 13:55:51 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 17 Apr 2019 13:55:51 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 4/17 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F1BB2C@SHSMSX104.ccr.corp.intel.com> Agenda & Notes for 4/17 meeting: - Ceph upgrade status 1. 
patch review status (Daniel)
https://review.openstack.org/#/q/topic:ceph-mimic-upgrade+(status:open+OR+status:merged)
Patch merge is waiting for testing confirmation before it can be given Workflow +1.
Dean's question: do we need to generate the code again or not? Daniel: is there any chance we will do a Ceph upgrade again? The answer is no, so the current solution is OK because we are not going to regenerate the code. Daniel will maintain the code in case any change is required. Since we are going to containerize Ceph, the code will eventually not be required.
2. Ceph dev build validation status (Fernando)
In the middle of the 2+2+2 install: all the nodes have a personality and are online, but storage-1 never came online.
Sanity + P1 test cases for pre-merge identified, under review; expected to take 3 days. Trending to have test results for the 1st ISO by Friday. Tingjie to work w/ Fernando on the Ceph-specific test cases in parallel.
- QAT driver upgrade
1. QAT driver status (Haitao)
Enable the QAT 4.5 driver on CentOS 7.6; next step is to integrate the RPM into StarlingX. CWs are back at ODC and working on environment setup. VF on CentOS 7.6 is loading and passing QATZip testing.
2. test plan preparation (Ricardo)
Ricardo is reviewing the cases provided by Numan and working on prioritizing them.
M3 (4/15): the test ISO to be provided to Ricardo (Haitao) has been delayed. Haitao to provide a new estimate date - trending end of April.
- Opens (all)
- Libvirt/qemu patch removal: SB#2005212 (Jim Somerville)
- Pre-submission testing by Numan's team has been successful
- Jim is getting ready to send out the Pull Requests to Dean/Saul and proceed with submission
- Code is expected to merge by April 26, so we are two weeks ahead of schedule
Concern raised about open development: doing the work offline and delivering the "done work" afterwards.
-----Original Message-----
From: Xie, Cindy
Sent: Tuesday, April 16, 2019 9:08 PM
To: starlingx-discuss at lists.starlingx.io
Cc: Wold, Saul ; Rowsell, Brent ; 'Badea, Daniel' ; Hernandez Gonzalez, Fernando ; Wang, Hai Tao ; Perez, Ricardo O ; Chen, Tingjie ; Hu, Yong
Subject: Agenda: Weekly StarlingX non-OpenStack Distro meeting, 4/17
Agenda for 4/17 meeting:
- Ceph upgrade status
1. patch review status (Daniel)
2. Ceph dev build validation status (Fernando)
- QAT driver upgrade
1. QAT driver status (Haitao)
2. test plan preparation (Ricardo)
- Opens (all)
-----Original Appointment-----
From: Xie, Cindy
Sent: Monday, November 5, 2018 2:27 PM
To: Xie, Cindy; Waheed, Numan; Hu, Yong; starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Shang, Dehao; Wold, Saul; Lin, Shuicheng; Zhu, Vivian; Somerville, Jim; Sun, Austin; 'Khalil, Ghada'; Jones, Bruce E; Troyer, Dean; 'Rowsell, Brent'
Cc: 'Poncea, Ovidiu'; Gomez, Juan P; Lara, Cesar; Fang, Liang A; Cobbley, David A; 'Chen, Jacky'; Perez Rodriguez, Humberto I; Martinez Monroy, Elio; 'Seiler, Glenn'; Arce Moreno, Abraham; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Shuquan Huang; 'Young, Ken'; Perez Carranza, Jose; Hu, Wei W; Martinez Landa, Hayde; Armstrong, Robert H
Subject: Weekly StarlingX non-OpenStack Distro meeting
When: Wednesday, April 17, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada).
Where: https://zoom.us/j/342730236
. Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) .
Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From cindy.xie at intel.com Wed Apr 17 14:36:34 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 17 Apr 2019 14:36:34 +0000 Subject: [Starlingx-discuss] Redfish support in StarlingX Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F1BB84@SHSMSX104.ccr.corp.intel.com> Hi, One of our customer raise the request if StarlingX can support Redfish. Their edge server is based on OpenBMC/Redfish, not IMPI due to the security concern. According to our StarlingX roadmap, we are not supporting Redfish now and not in our incoming release. Any suggestions from community about how to accelerate this? Thanks. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Wed Apr 17 14:40:24 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 17 Apr 2019 14:40:24 +0000 Subject: [Starlingx-discuss] Redfish support in StarlingX In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F1BB84@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35F1BB84@SHSMSX104.ccr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB46C52E@ALA-MBD.corp.ad.wrs.com> Cindy, This item had been added to the candidate list for R3 for discussion at the ptg. https://etherpad.openstack.org/p/stx-ptg-denver Brent From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, April 17, 2019 10:37 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Redfish support in StarlingX Hi, One of our customer raise the request if StarlingX can support Redfish. Their edge server is based on OpenBMC/Redfish, not IMPI due to the security concern. According to our StarlingX roadmap, we are not supporting Redfish now and not in our incoming release. Any suggestions from community about how to accelerate this? Thanks. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Apr 17 14:44:13 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 17 Apr 2019 14:44:13 +0000 Subject: [Starlingx-discuss] Community Call (April 17, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A2928F@ALA-MBD.corp.ad.wrs.com> Notes from today's meeting... - Kata Containers Supports (Bruce) - Kata containers has a nice "Supporters" page: https://katacontainers.io/supporters/ - can / should we have something similar? - Ildiko said it is in progress to highlight infrastructure donors and contributing companies as a starting point - for first run we'll have "contributing" companies, supporting will come later - it'll be soon, possibly not before the Summit, since that's coming up very soon - Python 2 to Python 3 (Bruce) - it is highly likely that we will only do one release in 2019 - is Py2->Py3 now more important and/or release gating? - how much of our code is still on Py2? - Cindy: all Flock services (but one) are already on Py3 - larger concern is on 3rd Party packages - Dean: have we tested on Py3 yet? 
not yet - we agreed to discuss at the PTG re: Rel 3 - Infra needs for testing (Ildiko) - to sync up with the infra team on the PTG we should bring up the topic on openstack-infra at lists.openstack.org - any volunteers to drive this? - how to move testing into the Public? - Ildiko wants to facilitate setting up a meeting with the Infrastructure guys about this - their PTG meetings are on Thu/Fri (Infra / QA) https://www.openstack.org/ptg/#tab_schedule - Saul volunteered to take this - he'll work with Dean, Ada & Numan - then will reach out to the Infrastructure team to set up a 1 hour cross-project meeting - OpenDev (Dean) - it'll be this Friday April 19 @1500 UTC (8am Pacific, 11am Eastern) - gerrit will go down, anywhere from 2h to 8h - see details in email from Clark Boyan http://lists.starlingx.io/pipermail/starlingx-discuss/2019-April/004062.html - Core Reviewers Review (Cindy) - Cindy requested core reviewer attention to some reviews (Huge Page, Calico, PCI Affinity) - follow ups will be done - Security Patches (Victor) - Victor asked for some reviews (perl, systemd) - Ghada sent this link to show who the core reviewers are for a repo: - https://review.openstack.org/#/admin/groups/?filter=starlingx - general consensus that Community members should feel free to reach out to Core Reviewers as needed Bill... -----Original Message----- From: Zvonar, Bill Sent: Tuesday, April 16, 2019 11:00 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Community Call (April 17, 2019) Reminder of tomorrow's Community call - please feel free to add to the agenda at [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1500_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190417T1400 From Ian.Jolliffe at windriver.com Wed Apr 17 15:38:38 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Wed, 17 Apr 2019 15:38:38 +0000 Subject: [Starlingx-discuss] [TSC] Minutes - 4/11 meeting Message-ID: <3F9424F8-B622-4DCC-A275-3EC355A8DA40@windriver.com> Release Plan Update (release team) options at https://docs.google.com/spreadsheets/d/1HUwbsaSerzFRuvXVB_qvoGdI0Chx1YiiA2WYHwvIoYI/edit#gid=0 We discussed the 2 options in detail for 30 min. In the end we didn’t reach consensus on which option to select and a third compromise option came up. This will be on the agenda for the TSC call on 4/18 – tomorrow @TSC members – please make every effort to be on the TSC call. We will be making a decision on the release plan tomorrow. Project mission statement - https://etherpad.openstack.org/p/stx-mission-statement (ildikov) TSC please review proposal needs to be done soon Review next week Raised on community call as well How do we revalidate? Writing visions is a more involved process. There is a framework that can be used for this - use time as a way to frame - Vision, mission, goals. Add to PTG agenda Small team to work on a proposal for how to move this forward. Dean, Ian, PTG lunch slot presentation? 
(ildikov) the idea is a 5 minutes long presentation about the project to give an overview and what's new sign up for the team photo in PTG(shuquan): https://ethercalc.openstack.org/3qd1fj5f3tt3 Need a volunteer for this - presentation - feel free to sign up - Bruce has slides - OSF Board meeting update (ildikov) F2F Board meeting is in Denver, April 28 - This is the Sunday 10-15 minute overview presentation Need a TSC member - likely afternoon need to discuss what messages we want to send to the board Packet projects Curtis - MOU has been signed by both packet.com and the openstack foundation I sent an email to all the TSC members with the signed MOU attached Do we store these kinds of agreements anywhere? Eg. CENGN hosting agreement? - Is the latter signed by the Foundation as well? OSF has likely stored the MOU in some fashion May want to look into ensuring the CENGN hosting MOU/whatever is also stored in the same fashion Where to store these kinds of community agreements? Next step is to try to get STX booting on bare metal They use iPXE causing some issues Work being done here: https://etherpad.openstack.org/p/stx-packet-baremetal-boot -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Wed Apr 17 17:19:33 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Wed, 17 Apr 2019 17:19:33 +0000 Subject: [Starlingx-discuss] Weekly StarlingX Test meeting - 9:00 PDT Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC60E5@FMSMSX114.amr.corp.intel.com> Weekly meetings on Tuesdays at 9am PDT / 1600 UTC * Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes: https://etherpad.openstack.org/p/stx-test -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2074 bytes Desc: not available URL: From hayde.martinez.landa at intel.com Wed Apr 17 18:38:39 2019 From: hayde.martinez.landa at intel.com (Martinez Landa, Hayde) Date: Wed, 17 Apr 2019 18:38:39 +0000 Subject: [Starlingx-discuss] [StarlingX in a box] Canceling tomorrow's meeting Message-ID: <0FFF4A77-A77A-4399-88ED-295AF71B362C@intel.com> Hi All, Due to Mexican Holiday (Maundy Thursday) tomorrow, the StarlingX in a box bi-weekly meeting will be canceled. Best, Hayde -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Wed Apr 17 18:58:39 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Wed, 17 Apr 2019 13:58:39 -0500 Subject: [Starlingx-discuss] [StarlingX in a box] Canceling tomorrow's meeting In-Reply-To: <0FFF4A77-A77A-4399-88ED-295AF71B362C@intel.com> References: <0FFF4A77-A77A-4399-88ED-295AF71B362C@intel.com> Message-ID: Can you please send an update? Just wondering what is the current state and problems that we, as a community, are facing Regards Victor Rodriguez On Wed, Apr 17, 2019 at 1:39 PM Martinez Landa, Hayde wrote: > > Hi All, > > > > Due to Mexican Holiday (Maundy Thursday) tomorrow, the StarlingX in a box bi-weekly meeting will be canceled. 
> > Best, > > Hayde > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From hayde.martinez.landa at intel.com Wed Apr 17 19:14:26 2019 From: hayde.martinez.landa at intel.com (Martinez Landa, Hayde) Date: Wed, 17 Apr 2019 19:14:26 +0000 Subject: [Starlingx-discuss] [StarlingX in a box] Canceling tomorrow's meeting In-Reply-To: References: <0FFF4A77-A77A-4399-88ED-295AF71B362C@intel.com> Message-ID: Sure, Due to release gating priorities we are designating more time to other activities, Nevertheless, the script that Memo has been working on, that creates the virtual machine was added to the gitlab repo where our draft scripts are located, will continue working on that. All the important links and latest updates are located on our etherpad. [0] [0] https://etherpad.openstack.org/p/stx-inabox Best, Hayde On 4/17/19, 11:59 AM, "Victor Rodriguez" wrote: Can you please send an update? Just wondering what is the current state and problems that we, as a community, are facing Regards Victor Rodriguez On Wed, Apr 17, 2019 at 1:39 PM Martinez Landa, Hayde wrote: > > Hi All, > > > > Due to Mexican Holiday (Maundy Thursday) tomorrow, the StarlingX in a box bi-weekly meeting will be canceled. > > Best, > > Hayde > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Daniel.Badea at windriver.com Wed Apr 17 20:11:29 2019 From: Daniel.Badea at windriver.com (Badea, Daniel) Date: Wed, 17 Apr 2019 20:11:29 +0000 Subject: [Starlingx-discuss] Contrib or Experimental tools location ?? In-Reply-To: References: <48a4e21c-25ad-04ad-dba6-1abeba14cd07@linux.intel.com> , Message-ID: <9174DAE490321844AE273F6AD001E3EA9D853523@ALA-MBD.corp.ad.wrs.com> I have a script that can be used to automatically add code reviewers for a commit (instead of opening the list of core reviewers in one browser tab and manually add them one by one in the review page). Should this be a GitHub gist, a small repo under my GitHub account or a subfolder in starlingx-staging/unofficial-tools-where-code-goes-to-die? Thanks, Daniel ________________________________________ From: Dean Troyer [dtroyer at gmail.com] Sent: Friday, March 15, 2019 15:55 To: Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Contrib or Experimental tools location ?? On Fri, Mar 15, 2019 at 8:39 AM Curtis wrote: > Ultimately I believe we are arguing different goals with the same points. > > I'm ok with bit rot, it's inevitable, and can actually be a good thing. I'm ok with code with lower standards being contributed to a place where it can be legitimized. > > These things are pros to me. :) I am not against having a place for unofficial code to go and rot, I am against it being associated with the StarlingX name in a way that drags down the perception of the code we produce. And that is all we produce in the end, code in repositories. > There would have to be some standards, eg. no pyc files, no -2s to new contributors, etc. Arbitrary no, curated yes. To me 'curated' includes vetting suitability for purpose. Untested code is broken code. I would support a repo in github.com/starlingx-staging or an index anywhere but not a repo in Gerrit without meeting a certain minimum of quality and accountability. 
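Daniel's reviewer-adding helper, described at the top of this thread, does not need to live anywhere exotic to be useful; the core of such a tool can be a few lines against the Gerrit REST API. A purely illustrative sketch (not the actual script), assuming Gerrit HTTP credentials are already stored in ~/.netrc:

  #!/bin/bash
  # add-reviewers.sh <change-number> <reviewer-email> [<reviewer-email> ...]
  # Posts each reviewer to the change via Gerrit's REST API.
  CHANGE=$1; shift
  for REVIEWER in "$@"; do
      curl --anyauth --netrc --silent --request POST \
           --header "Content-Type: application/json" \
           --data "{\"reviewer\": \"${REVIEWER}\"}" \
           "https://review.openstack.org/a/changes/${CHANGE}/reviewers"
  done

Feeding it the core-reviewer list for a sub-project covers the manual browser step Daniel describes.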
dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Wed Apr 17 20:14:39 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 17 Apr 2019 20:14:39 +0000 Subject: [Starlingx-discuss] Contrib or Experimental tools location ?? In-Reply-To: <9174DAE490321844AE273F6AD001E3EA9D853523@ALA-MBD.corp.ad.wrs.com> References: <48a4e21c-25ad-04ad-dba6-1abeba14cd07@linux.intel.com> , <9174DAE490321844AE273F6AD001E3EA9D853523@ALA-MBD.corp.ad.wrs.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA47AF52@ALA-MBD.corp.ad.wrs.com> Core reviewers should be watching the repos on which they're a core. If there's a specific person required for an update as an SME, add them. But otherwise, I wouldn't think it should be necessary to explicitly add the cores to a review. -----Original Message----- From: Badea, Daniel [mailto:Daniel.Badea at windriver.com] Sent: Wednesday, April 17, 2019 4:11 PM To: Dean Troyer; Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Contrib or Experimental tools location ?? I have a script that can be used to automatically add code reviewers for a commit (instead of opening the list of core reviewers in one browser tab and manually add them one by one in the review page). Should this be a GitHub gist, a small repo under my GitHub account or a subfolder in starlingx-staging/unofficial-tools-where-code-goes-to-die? Thanks, Daniel ________________________________________ From: Dean Troyer [dtroyer at gmail.com] Sent: Friday, March 15, 2019 15:55 To: Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Contrib or Experimental tools location ?? On Fri, Mar 15, 2019 at 8:39 AM Curtis wrote: > Ultimately I believe we are arguing different goals with the same points. > > I'm ok with bit rot, it's inevitable, and can actually be a good thing. I'm ok with code with lower standards being contributed to a place where it can be legitimized. > > These things are pros to me. :) I am not against having a place for unofficial code to go and rot, I am against it being associated with the StarlingX name in a way that drags down the perception of the code we produce. And that is all we produce in the end, code in repositories. > There would have to be some standards, eg. no pyc files, no -2s to new contributors, etc. Arbitrary no, curated yes. To me 'curated' includes vetting suitability for purpose. Untested code is broken code. I would support a repo in github.com/starlingx-staging or an index anywhere but not a repo in Gerrit without meeting a certain minimum of quality and accountability. dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Daniel.Badea at windriver.com Wed Apr 17 20:31:02 2019 From: Daniel.Badea at windriver.com (Badea, Daniel) Date: Wed, 17 Apr 2019 20:31:02 +0000 Subject: [Starlingx-discuss] Contrib or Experimental tools location ?? 
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA47AF52@ALA-MBD.corp.ad.wrs.com> References: <48a4e21c-25ad-04ad-dba6-1abeba14cd07@linux.intel.com> , <9174DAE490321844AE273F6AD001E3EA9D853523@ALA-MBD.corp.ad.wrs.com>, <6703202FD9FDFF4A8DA9ACF104AE129FBA47AF52@ALA-MBD.corp.ad.wrs.com> Message-ID: <9174DAE490321844AE273F6AD001E3EA9D853539@ALA-MBD.corp.ad.wrs.com> That's not what https://wiki.openstack.org/wiki/StarlingX/CodeSubmissionGuidelines says: ... Add the core reviewers for the affected sub-project to the review as well as any other interested reviewers The core reviewers are listed on each sub-project wiki pages. The list of sub-projects is available here ... ________________________________________ From: Penney, Don Sent: Wednesday, April 17, 2019 23:14 To: Badea, Daniel; Dean Troyer; Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Contrib or Experimental tools location ?? Core reviewers should be watching the repos on which they're a core. If there's a specific person required for an update as an SME, add them. But otherwise, I wouldn't think it should be necessary to explicitly add the cores to a review. -----Original Message----- From: Badea, Daniel [mailto:Daniel.Badea at windriver.com] Sent: Wednesday, April 17, 2019 4:11 PM To: Dean Troyer; Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Contrib or Experimental tools location ?? I have a script that can be used to automatically add code reviewers for a commit (instead of opening the list of core reviewers in one browser tab and manually add them one by one in the review page). Should this be a GitHub gist, a small repo under my GitHub account or a subfolder in starlingx-staging/unofficial-tools-where-code-goes-to-die? Thanks, Daniel ________________________________________ From: Dean Troyer [dtroyer at gmail.com] Sent: Friday, March 15, 2019 15:55 To: Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Contrib or Experimental tools location ?? On Fri, Mar 15, 2019 at 8:39 AM Curtis wrote: > Ultimately I believe we are arguing different goals with the same points. > > I'm ok with bit rot, it's inevitable, and can actually be a good thing. I'm ok with code with lower standards being contributed to a place where it can be legitimized. > > These things are pros to me. :) I am not against having a place for unofficial code to go and rot, I am against it being associated with the StarlingX name in a way that drags down the perception of the code we produce. And that is all we produce in the end, code in repositories. > There would have to be some standards, eg. no pyc files, no -2s to new contributors, etc. Arbitrary no, curated yes. To me 'curated' includes vetting suitability for purpose. Untested code is broken code. I would support a repo in github.com/starlingx-staging or an index anywhere but not a repo in Gerrit without meeting a certain minimum of quality and accountability. 
dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Gerry.Kopec at windriver.com Wed Apr 17 20:42:24 2019 From: Gerry.Kopec at windriver.com (Kopec, Gerald (Gerry)) Date: Wed, 17 Apr 2019 20:42:24 +0000 Subject: [Starlingx-discuss] stx-nova numa aware live migration Message-ID: <58CF5BABC9A76946A638A0E8AE48D17371830A74@ALA-MBD.corp.ad.wrs.com> I've created a stx-nova PR for the backport of the upstream numa aware live migration feature: https://github.com/starlingx-staging/stx-nova/pull/23 It's the 7 open nova reviews from https://review.openstack.org/#/q/topic:bp/numa-aware-live-migration plus a temporary change to address some of the review comments related to live migration and resource tracking. Please review and comment, Thanks, Gerry -------------- next part -------------- An HTML attachment was scrubbed... URL: From Daniel.Badea at windriver.com Wed Apr 17 21:07:57 2019 From: Daniel.Badea at windriver.com (Badea, Daniel) Date: Wed, 17 Apr 2019 21:07:57 +0000 Subject: [Starlingx-discuss] Contrib or Experimental tools location ?? In-Reply-To: <9174DAE490321844AE273F6AD001E3EA9D853539@ALA-MBD.corp.ad.wrs.com> References: <48a4e21c-25ad-04ad-dba6-1abeba14cd07@linux.intel.com> , <9174DAE490321844AE273F6AD001E3EA9D853523@ALA-MBD.corp.ad.wrs.com>, <6703202FD9FDFF4A8DA9ACF104AE129FBA47AF52@ALA-MBD.corp.ad.wrs.com>, <9174DAE490321844AE273F6AD001E3EA9D853539@ALA-MBD.corp.ad.wrs.com> Message-ID: <9174DAE490321844AE273F6AD001E3EA9D853553@ALA-MBD.corp.ad.wrs.com> Ok, let's say explicitly setting core reviewers is not required. I have another example: if a pod or job fails while applying stx-openstack it is possible its logs are lost before I get a chance to view them. So I wrote another tool to preserve all kubernetes logs without touching current configuration (the proper way to save logs is to use a logging service). Where should I share this tool/script? (others might find it useful for now) -Daniel ________________________________________ From: Badea, Daniel [Daniel.Badea at windriver.com] Sent: Wednesday, April 17, 2019 23:31 To: Penney, Don; Dean Troyer; Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Contrib or Experimental tools location ?? That's not what https://wiki.openstack.org/wiki/StarlingX/CodeSubmissionGuidelines says: ... Add the core reviewers for the affected sub-project to the review as well as any other interested reviewers The core reviewers are listed on each sub-project wiki pages. The list of sub-projects is available here ... ________________________________________ From: Penney, Don Sent: Wednesday, April 17, 2019 23:14 To: Badea, Daniel; Dean Troyer; Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Contrib or Experimental tools location ?? Core reviewers should be watching the repos on which they're a core. If there's a specific person required for an update as an SME, add them. But otherwise, I wouldn't think it should be necessary to explicitly add the cores to a review. 
-----Original Message----- From: Badea, Daniel [mailto:Daniel.Badea at windriver.com] Sent: Wednesday, April 17, 2019 4:11 PM To: Dean Troyer; Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Contrib or Experimental tools location ?? I have a script that can be used to automatically add code reviewers for a commit (instead of opening the list of core reviewers in one browser tab and manually add them one by one in the review page). Should this be a GitHub gist, a small repo under my GitHub account or a subfolder in starlingx-staging/unofficial-tools-where-code-goes-to-die? Thanks, Daniel ________________________________________ From: Dean Troyer [dtroyer at gmail.com] Sent: Friday, March 15, 2019 15:55 To: Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Contrib or Experimental tools location ?? On Fri, Mar 15, 2019 at 8:39 AM Curtis wrote: > Ultimately I believe we are arguing different goals with the same points. > > I'm ok with bit rot, it's inevitable, and can actually be a good thing. I'm ok with code with lower standards being contributed to a place where it can be legitimized. > > These things are pros to me. :) I am not against having a place for unofficial code to go and rot, I am against it being associated with the StarlingX name in a way that drags down the perception of the code we produce. And that is all we produce in the end, code in repositories. > There would have to be some standards, eg. no pyc files, no -2s to new contributors, etc. Arbitrary no, curated yes. To me 'curated' includes vetting suitability for purpose. Untested code is broken code. I would support a repo in github.com/starlingx-staging or an index anywhere but not a repo in Gerrit without meeting a certain minimum of quality and accountability. dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From michael.l.tullis at intel.com Wed Apr 17 21:15:16 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 17 Apr 2019 21:15:16 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 4/17/2019 Message-ID: <3808363B39586544A6839C76CF81445EA1B0265B@ORSMSX104.amr.corp.intel.com> For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings. -- Mike -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Ovidiu.Poncea at windriver.com Wed Apr 17 21:50:05 2019 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Wed, 17 Apr 2019 21:50:05 +0000 Subject: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon In-Reply-To: <4C60D9C5C8176C47874FFF36647AA19E9D647241@ALA-MBD.corp.ad.wrs.com> References: <4C60D9C5C8176C47874FFF36647AA19E9D647241@ALA-MBD.corp.ad.wrs.com> Message-ID: <4C60D9C5C8176C47874FFF36647AA19E9D64742F@ALA-MBD.corp.ad.wrs.com> Change was merged, wiki was updated... ________________________________ From: Poncea, Ovidiu [Ovidiu.Poncea at windriver.com] Sent: Tuesday, April 16, 2019 10:36 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon Hi Folks, Be aware that we will merge a change soon that may impact automated deployments and testing. The gerrit is this: https://review.openstack.org/#/c/644256/ and it will make Ceph the default storage backend. Therefore, once merged, users will no longer have to run this (as currently stated by the wikis): echo ">>> Enable primary Ceph backend" system storage-backend-add ceph --confirmed echo ">>> Wait for primary ceph backend to be configured" echo ">>> This step really takes a long time" while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph..'; sleep 5; done echo ">>> Ceph health" ceph -s Regards, Ovidiu -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Wed Apr 17 22:46:33 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Wed, 17 Apr 2019 17:46:33 -0500 Subject: [Starlingx-discuss] [Multi-OS] POC project update Message-ID: Hello everyone in STX community We are very happy to share with you the current state of the Multi-OS POC project we started a few weeks ago: https://github.com/starlingx-staging/stx-packaging Since our last update: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-March/003618.html There have been new features that we have added: 1) Able to build Ubuntu Live image tooling that resolves runtime dependencies: https://github.com/starlingx-staging/stx-packaging#building-an-image-wip-as-poc-state-now $ make iso ISO_TEMPLATE=ubuntu-16.04.6-server-amd64.iso 2) Able to build the DEB files inside Ubuntu containers ( so you don't need to have an Ubuntu system as workstation ). Even If you are not in a Linux machine but it has docker and Makefile tools, you still can build a Starling X package, for example: $ make package PKG=x.stx-fault/fm-mgr DISTRO=ubuntu BUILD_W_CONT=y The flag BUILD_W_CONT=y will create a docker image with all the environment necessary to build the package and leave the results in stx-packaging/configs/docker-ubuntu-img/results/ You can also set there a specific Packages.gz that you prefer with fixed packages versions for your build. 3) Able to build Centos/Fedora/EPEL RPMs using the same tooling. Here is how it works so far: Inside the directory stx-packaging/configs/docker-centos-img/ the developers can run: $ make build PKG=systemd-219-62.el7.src.rpm MOCK_CONFIG=centos-7-i386 The SRPM should be in the container volume ( in this case /tmp/rpmbuild ) , it is also possible to pass the URL variable as part of the make build sentence $ make build PKG= MOCK_CONFIG= URL= It has been tested with multiple MOCK_CONFIG files, the supported ones include Centos / Fedora and EPEL ( check /etc/mock for the specific versions ). 
Is also possible to add your personal MOCK_CONFIG that can point to your local repository or the mirror you prefer. It also has the capability to pass the .spec file and tarball if developers prefer. A more detailed README will be updated soon with examples on how to use it. Summary: * The first goal of the POC which was to build packages of the project in multiple operating systems ( including Ubuntu/Centos ) works! * This POC does not need to modify the current directory structure * Is scalable to other operating systems Next steps and what current state of the POC doesn't do (yet) : * Feedback from community, especially from STX (current and future) developers * Research on a better approach to solving build dependencies * Bring up a Controller 0 * Build the platform to consume OpenStack containers Thanks a lot to contributors to this project: https://github.com/starlingx-staging/stx-packaging/graphs/contributors Regards Victor Rodriguez From maria.g.perez.ibarra at intel.com Wed Apr 17 23:56:33 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 17 Apr 2019 23:56:33 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190417 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-17 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 57] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 57 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 57] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] | 3 TCs FAIL Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS [Fail : 3] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] 1 TCS FAIL Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] AIO - Duplex Setup 04 TCs [PASS] 2 TCS FAIL Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] Standard - Local Storage Setup 04 TCs [PASS] 2 TCS FAIL Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] Standard - Dedicated Storage Setup 04 TCs [PASS] 2 TCS FAIL Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] -------------------------------------------------------------------------------- system host-lock failed after swact during lab setup https://bugs.launchpad.net/starlingx/+bug/1824994 application-apply stx-openstack failed due to neutron pods failure https://bugs.launchpad.net/starlingx/+bug/1825045 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cindy.xie at intel.com Thu Apr 18 00:11:35 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 18 Apr 2019 00:11:35 +0000 Subject: [Starlingx-discuss] DevStack next Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F1C48D@SHSMSX104.ccr.corp.intel.com> Dean, As we've enabled majority of flocks services with DevStack, I am trying to understand the plan of next step to use those services in Zuul. My understanding is that more tests needs to be added in Zuul to gate the pre-merge. Do you have a plan to write StoryBoards so that we can have community engineers work on those individual tests? Thanks. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Thu Apr 18 01:05:04 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 17 Apr 2019 20:05:04 -0500 Subject: [Starlingx-discuss] DevStack next In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F1C48D@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35F1C48D@SHSMSX104.ccr.corp.intel.com> Message-ID: On Wed, Apr 17, 2019 at 7:12 PM Xie, Cindy wrote: > As we’ve enabled majority of flocks services with DevStack, I am trying to understand the plan of next step to use those services in Zuul. My understanding is that more tests needs to be added in Zuul to gate the pre-merge. Do you have a plan to write StoryBoards so that we can have community engineers work on those individual tests? I have not started writing any stories. I _have_ started working on setting up a job that actually does some testing (quick API tests using Gabbi) to work out some of the things that are different from the usual OpenStack testing, mostly around having multiple packages (here defined by the tox.ini file) in a single repo, OpenStack does not do that so there are some differences. Actually deciding on the priorities for writing tests and jobs is outside my sphere, my suggestion would be to see how much, if any, of the system test cases could be covered via things like tempest plugins. Of course those need to be created also. dt -- Dean Troyer dtroyer at gmail.com From yong.hu at intel.com Thu Apr 18 01:10:40 2019 From: yong.hu at intel.com (Hu, Yong) Date: Thu, 18 Apr 2019 01:10:40 +0000 Subject: [Starlingx-discuss] Contrib or Experimental tools location ?? In-Reply-To: <9174DAE490321844AE273F6AD001E3EA9D853553@ALA-MBD.corp.ad.wrs.com> References: <48a4e21c-25ad-04ad-dba6-1abeba14cd07@linux.intel.com> <9174DAE490321844AE273F6AD001E3EA9D853523@ALA-MBD.corp.ad.wrs.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA47AF52@ALA-MBD.corp.ad.wrs.com> <9174DAE490321844AE273F6AD001E3EA9D853539@ALA-MBD.corp.ad.wrs.com> <9174DAE490321844AE273F6AD001E3EA9D853553@ALA-MBD.corp.ad.wrs.com> Message-ID: <5F369F37-FCAD-400A-82A6-D53A4B5B0775@intel.com> I think this tool is useful, and there has been a similar tool "collect", under " ./cgcs-root/stx/stx-integ/tools/collector/scripts/collect" Maybe "./cgcs-root/stx/stx-integ/tools/" is a place to go. On 18/04/2019, 5:10 AM, "Badea, Daniel" wrote: Ok, let's say explicitly setting core reviewers is not required. I have another example: if a pod or job fails while applying stx-openstack it is possible its logs are lost before I get a chance to view them. So I wrote another tool to preserve all kubernetes logs without touching current configuration (the proper way to save logs is to use a logging service). Where should I share this tool/script? 
(others might find it useful for now) -Daniel ________________________________________ From: Badea, Daniel [Daniel.Badea at windriver.com] Sent: Wednesday, April 17, 2019 23:31 To: Penney, Don; Dean Troyer; Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Contrib or Experimental tools location ?? That's not what https://wiki.openstack.org/wiki/StarlingX/CodeSubmissionGuidelines says: ... Add the core reviewers for the affected sub-project to the review as well as any other interested reviewers The core reviewers are listed on each sub-project wiki pages. The list of sub-projects is available here ... ________________________________________ From: Penney, Don Sent: Wednesday, April 17, 2019 23:14 To: Badea, Daniel; Dean Troyer; Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Contrib or Experimental tools location ?? Core reviewers should be watching the repos on which they're a core. If there's a specific person required for an update as an SME, add them. But otherwise, I wouldn't think it should be necessary to explicitly add the cores to a review. -----Original Message----- From: Badea, Daniel [mailto:Daniel.Badea at windriver.com] Sent: Wednesday, April 17, 2019 4:11 PM To: Dean Troyer; Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Contrib or Experimental tools location ?? I have a script that can be used to automatically add code reviewers for a commit (instead of opening the list of core reviewers in one browser tab and manually add them one by one in the review page). Should this be a GitHub gist, a small repo under my GitHub account or a subfolder in starlingx-staging/unofficial-tools-where-code-goes-to-die? Thanks, Daniel ________________________________________ From: Dean Troyer [dtroyer at gmail.com] Sent: Friday, March 15, 2019 15:55 To: Curtis Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Contrib or Experimental tools location ?? On Fri, Mar 15, 2019 at 8:39 AM Curtis wrote: > Ultimately I believe we are arguing different goals with the same points. > > I'm ok with bit rot, it's inevitable, and can actually be a good thing. I'm ok with code with lower standards being contributed to a place where it can be legitimized. > > These things are pros to me. :) I am not against having a place for unofficial code to go and rot, I am against it being associated with the StarlingX name in a way that drags down the perception of the code we produce. And that is all we produce in the end, code in repositories. > There would have to be some standards, eg. no pyc files, no -2s to new contributors, etc. Arbitrary no, curated yes. To me 'curated' includes vetting suitability for purpose. Untested code is broken code. I would support a repo in github.com/starlingx-staging or an index anywhere but not a repo in Gerrit without meeting a certain minimum of quality and accountability. 
dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yong.hu at intel.com Thu Apr 18 01:16:31 2019 From: yong.hu at intel.com (Hu, Yong) Date: Thu, 18 Apr 2019 01:16:31 +0000 Subject: [Starlingx-discuss] Upstreaming Automation Framework Message-ID: <7FE2008F-C7FC-4F02-9346-D28F608520AF@intel.com> This is indeed a very meaningful contribution to the community. Thanks! Regarding “Next phase will include upstreaming the test case.”, what kind of test cases would we expect to see? API test cases, functional test cases, and/or performance test cases? Regards, -Yong On 16/04/2019, 4:51 AM, "Waheed, Numan" > wrote: StarlingX community has felt the lack of an automation framework since the beginning of this project. I am excited to share that we are working on upstreaming the automation framework that Wind River has been using for over three years now. This automation framework is based on PyTest but has been customized by adding Keywords that help test case creation simple and quick for this project. PyTest was chosen as automation framework because of its maintainability, debugability, flexibility and scalability. It has simple syntax and parametrization capability that allows to scale quickly. It possesses strong support for test fixtures and state management via setup/teardown hooks. Test case selection and deselection is fairly easy with the use of Markers. As mentioned earlier, this framework has been in use for over three years. The framework and a set of test cases will become available to community in phases. In the first phase, we will be upstreaming the framework and related keywords. Next phase will include upstreaming the test case. We also plan to create a wiki for helping community members in using this framework and executing automated test cases or writing their own test cases. Stay tuned. Numan. -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Thu Apr 18 09:17:21 2019 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Thu, 18 Apr 2019 09:17:21 +0000 Subject: [Starlingx-discuss] Does rabbitmq listener work in NFV module In-Reply-To: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8CF98@ALA-MBD.corp.ad.wrs.com> References: <93814834B4855241994F290E959305C753069B2B@SHSMSX104.ccr.corp.intel.com> <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8CF98@ALA-MBD.corp.ad.wrs.com> Message-ID: <93814834B4855241994F290E959305C753069E17@SHSMSX104.ccr.corp.intel.com> Hi Bart, Thanks! It exactly works in nfv. I still have a question about Rabbitmq access. Currently, I’m working on pci-interrupt-affinity service which will run on every worker node. It need to listen to rabbitmq to get notifications from nova Before containerized version, I can use configurations in /etc/sysinv/sysinv.conf to get connect to rabbitmq. 
Now, if I use below configuration, it can work. /opt/platform/puppet/19.01/hieradata/system.yaml nfv::nfvi::platform_username: admin nfv::nfvi::rabbit_host: rabbitmq.openstack.svc.cluster.local nfv::nfvi::rabbit_password: 28a5834cf803Ti0* nfv::nfvi::rabbit_port: 5672 nfv::nfvi::rabbit_userid: nova-rabbitmq-user nfv::nfvi::rabbit_virtual_host: nova Then, I tried to add below code to get above configuration. But it didn’t work. “Utils.is_openstack_installed” this check failed! And also cannot get “helm_data” Who can help? Any comment or proposal on it? BTW, can I get these configuration from /opt/platform/puppet/19.01/hieradata/system.yaml directly? Is it reasonable? =================================================================== from sysinv.helm import helm from sysinv.common import utils from sysinv.db import api as db_api ... dbapi = db_api.get_instance() if dbapi and utils.is_openstack_installed(dbapi): helm_data = helm.HelmOperatorData(dbapi) nova_oslo_messaging_data = helm_data.get_nova_oslo_messaging_data() rabbit_cfg['rabbit_host'] = nova_oslo_messaging_data['host'] rabbit_cfg['rabbit_userid'] = nova_oslo_messaging_data['username'] rabbit_cfg['rabbit_password'] = nova_oslo_messaging_data['password'] rabbit_cfg['rabbit_virtual_host'] = nova_oslo_messaging_data['virt_host'] Thanks! Zhipeng From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: 2019年4月17日 20:24 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Does rabbitmq listener work in NFV module Zhipeng, These logs are normal in cases where the containerized rabbitmq servers are temporarily unavailable (i.e. the osh-openstack-rabbitmq-rabbitmq-0/1 pods). This can happen when a controller is locked or rebooted. The VIM should automatically re-connect to the rabbitmq servers - this could take anywhere from a few seconds to as long as a minute or two depending on the reason the rabbitmq server(s) were not available. Bart From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: April 17, 2019 6:46 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Does rabbitmq listener work in NFV module Hi all, >From nfv-vim.log I see below 2019-04-12T08:38:11.409 controller-0 VIM_Thread[31835] INFO rpc_listener.py.127 RPC-Listener not connected to exchange nova, queue=notifications.nfvi_nova_listener_queue. It seems NFV could not get notifications from nova now. Have we enabled rabbitmq listener in NFV after containerized version? Thanks! zhipeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Thu Apr 18 12:19:34 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Thu, 18 Apr 2019 12:19:34 +0000 Subject: [Starlingx-discuss] Does rabbitmq listener work in NFV module In-Reply-To: <93814834B4855241994F290E959305C753069E17@SHSMSX104.ccr.corp.intel.com> References: <93814834B4855241994F290E959305C753069B2B@SHSMSX104.ccr.corp.intel.com> <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8CF98@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C753069E17@SHSMSX104.ccr.corp.intel.com> Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8D56D@ALA-MBD.corp.ad.wrs.com> Zhipeng, First, I don’t think you should be basing your new pci-interrupt-affinity service on any of the nfv-vim code. I have added a comment to https://review.openstack.org/#/c/640264 to explain. We can discuss more in the context of that review. 
To answer your questions below, you cannot access hieradata directly and you should not be using any of the nfv-vim configuration. I think the right way for you to get the rabbitmq configuration to your new service would be by creating a new puppet module which would create a new configuration file for your service (e.g. /etc/pci-interrupt-affinity/pci-interrupt-affinity.conf). You can use the puppet-nfv puppet module as an example and you can see in sysinv/puppet/nfv.py how the rabbit configuration is being retrieved from the helm data for nova - this would be similar to the code you have below, but would be running in the sysinv-conductor process as it prepares the hieradata for your new service. Bart From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: April 18, 2019 5:17 AM To: Wensley, Barton; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Does rabbitmq listener work in NFV module Hi Bart, Thanks! It exactly works in nfv. I still have a question about Rabbitmq access. Currently, I’m working on pci-interrupt-affinity service which will run on every worker node. It need to listen to rabbitmq to get notifications from nova Before containerized version, I can use configurations in /etc/sysinv/sysinv.conf to get connect to rabbitmq. Now, if I use below configuration, it can work. /opt/platform/puppet/19.01/hieradata/system.yaml nfv::nfvi::platform_username: admin nfv::nfvi::rabbit_host: rabbitmq.openstack.svc.cluster.local nfv::nfvi::rabbit_password: 28a5834cf803Ti0* nfv::nfvi::rabbit_port: 5672 nfv::nfvi::rabbit_userid: nova-rabbitmq-user nfv::nfvi::rabbit_virtual_host: nova Then, I tried to add below code to get above configuration. But it didn’t work. “Utils.is_openstack_installed” this check failed! And also cannot get “helm_data” Who can help? Any comment or proposal on it? BTW, can I get these configuration from /opt/platform/puppet/19.01/hieradata/system.yaml directly? Is it reasonable? =================================================================== from sysinv.helm import helm from sysinv.common import utils from sysinv.db import api as db_api ... dbapi = db_api.get_instance() if dbapi and utils.is_openstack_installed(dbapi): helm_data = helm.HelmOperatorData(dbapi) nova_oslo_messaging_data = helm_data.get_nova_oslo_messaging_data() rabbit_cfg['rabbit_host'] = nova_oslo_messaging_data['host'] rabbit_cfg['rabbit_userid'] = nova_oslo_messaging_data['username'] rabbit_cfg['rabbit_password'] = nova_oslo_messaging_data['password'] rabbit_cfg['rabbit_virtual_host'] = nova_oslo_messaging_data['virt_host'] Thanks! Zhipeng From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: 2019年4月17日 20:24 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Does rabbitmq listener work in NFV module Zhipeng, These logs are normal in cases where the containerized rabbitmq servers are temporarily unavailable (i.e. the osh-openstack-rabbitmq-rabbitmq-0/1 pods). This can happen when a controller is locked or rebooted. The VIM should automatically re-connect to the rabbitmq servers - this could take anywhere from a few seconds to as long as a minute or two depending on the reason the rabbitmq server(s) were not available. 
Bart From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: April 17, 2019 6:46 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Does rabbitmq listener work in NFV module Hi all, >From nfv-vim.log I see below 2019-04-12T08:38:11.409 controller-0 VIM_Thread[31835] INFO rpc_listener.py.127 RPC-Listener not connected to exchange nova, queue=notifications.nfvi_nova_listener_queue. It seems NFV could not get notifications from nova now. Have we enabled rabbitmq listener in NFV after containerized version? Thanks! zhipeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Mon Apr 15 03:01:43 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 15 Apr 2019 03:01:43 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Weekly Containerization Meeting Message-ID: Please note that for Monday April 15th I need to cancel our weekly containerization meeting. For any questions until next week please use the community email. Frank ============== For those contributing to or interested in the Containerization subproject, the plan is to meet weekly until the containerization StoryBoards are completed. Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 4574 bytes Desc: not available URL: From sgw at linux.intel.com Thu Apr 18 02:49:19 2019 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 17 Apr 2019 19:49:19 -0700 Subject: [Starlingx-discuss] Trying to debug a stx-openstack-apply failure Message-ID: <443db87b-a364-3187-2189-e07251c168e2@linux.intel.com> Hi Folks, I have been trying to get a deployment up in a libvirt/qemu environment (non-proxy). I am seeing the following issue. I am using the image that passed (mostly) Sanity Test on Monday 4/15 [0]. I am setting this up in AIO-Simplex mode, I have not setup any kind of registry. It seems to start up all the contains and kubectl get pods shows all the pods Running or Completed. I retrieved the stx-openstack-apply.log from armada as recommended by the Container Debug FAQ [1]. I see multiple Errors that the Application apply aborted due to what seems like download failures. As I said, I am not behind any proxy or firewall. It seems to fail during processing chart: osh-openstack-neutron at 65% Not sure what the next steps are to debug this issue. Thanks Sau! [0] http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190415T233001Z/ [1] https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ -------------- next part -------------- A non-text attachment was scrubbed... Name: stx-openstack-apply.log Type: text/x-log Size: 177190 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: sysinv.log Type: text/x-log Size: 538652 bytes Desc: not available URL: From Ovidiu.Poncea at windriver.com Thu Apr 18 11:20:59 2019 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Thu, 18 Apr 2019 11:20:59 +0000 Subject: [Starlingx-discuss] FW: Association of unused OSDS to storage In-Reply-To: <4C60D9C5C8176C47874FFF36647AA19E9D647539@ALA-MBD.corp.ad.wrs.com> References: <1466AF2176E6F040BD63860D0A241BBD46CB85ED@FMSMSX109.amr.corp.intel.com>, <4C60D9C5C8176C47874FFF36647AA19E9D647539@ALA-MBD.corp.ad.wrs.com> Message-ID: <4C60D9C5C8176C47874FFF36647AA19E9D647588@ALA-MBD.corp.ad.wrs.com> Hi Martinez, We have partial storage tiers support in the gui. Thus, storage tiers can only be created through cli but, once created, users can add OSDs to the new tiers through gui (see the interface for adding OSDs). [X] Ovidiu ________________________________ From: Martinez Monroy, Elio [elio.martinez.monroy at intel.com] Sent: Tuesday, April 09, 2019 9:44 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Association of unused OSDS to storage Hi, The scenario requires the association of unused OSDs to a storage tier. For this, the test creates a storage tier in a ceph cluster, which I do and consult this way in CLI: [cid:image002.png at 01D4EEDA.52761AA0] Next, the test requires the list of OSDs already created in association with a disk, of which I then take a disk with enough available space to create an OSD into: [cid:image003.png at 01D4EEDA.52761AA0] However, in Horizon, I am able to see and create partitions and OSDs in the host detail page for the host that I am modifying here: [cid:image004.png at 01D4EEDA.52761AA0] But the only place where I am able to see the storage tiers and clusters is in the Storage Overview tab under Platform, and I am unable to modify anything from it within this page: [cid:image005.png at 01D4EEDA.52761AA0] My question would be regarding where else I could find the option to create the storage tier within the ceph_cluster in Horizon, and if there is no way to do it from Horizon, if I could create the tier through CLI commands and then update the OSDs through Horizon instead. [cid:image001.png at 01CF8BAC.3B4C5DD0] Martinez Monroy, Elio. QA Engineer. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 4914 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 18070 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.png Type: image/png Size: 34759 bytes Desc: image003.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.png Type: image/png Size: 78901 bytes Desc: image004.png URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image005.png Type: image/png Size: 38574 bytes Desc: image005.png URL: From Frank.Miller at windriver.com Thu Apr 18 14:50:44 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Thu, 18 Apr 2019 14:50:44 +0000 Subject: [Starlingx-discuss] Trying to debug a stx-openstack-apply failure In-Reply-To: <443db87b-a364-3187-2189-e07251c168e2@linux.intel.com> References: <443db87b-a364-3187-2189-e07251c168e2@linux.intel.com> Message-ID: Saul: I'll let the community members more familiar with how to debug to answer specific debug questions, but it looks like you are hitting this LP reported in sanity: https://bugs.launchpad.net/starlingx/+bug/1825045 That one does not yet have a solution. Frank -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Wednesday, April 17, 2019 10:49 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Trying to debug a stx-openstack-apply failure Hi Folks, I have been trying to get a deployment up in a libvirt/qemu environment (non-proxy). I am seeing the following issue. I am using the image that passed (mostly) Sanity Test on Monday 4/15 [0]. I am setting this up in AIO-Simplex mode, I have not setup any kind of registry. It seems to start up all the contains and kubectl get pods shows all the pods Running or Completed. I retrieved the stx-openstack-apply.log from armada as recommended by the Container Debug FAQ [1]. I see multiple Errors that the Application apply aborted due to what seems like download failures. As I said, I am not behind any proxy or firewall. It seems to fail during processing chart: osh-openstack-neutron at 65% Not sure what the next steps are to debug this issue. Thanks Sau! [0] http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190415T233001Z/ [1] https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ From erich.cm.lists at yandex.com Thu Apr 18 14:58:36 2019 From: erich.cm.lists at yandex.com (Erich Cordoba) Date: Thu, 18 Apr 2019 07:58:36 -0700 Subject: [Starlingx-discuss] Trying to debug a stx-openstack-apply failure In-Reply-To: References: <443db87b-a364-3187-2189-e07251c168e2@linux.intel.com> Message-ID: <20998161555599516@sas1-0a6c2e2b59d7.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Thu Apr 18 15:06:05 2019 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 18 Apr 2019 08:06:05 -0700 Subject: [Starlingx-discuss] Trying to debug a stx-openstack-apply failure In-Reply-To: References: <443db87b-a364-3187-2189-e07251c168e2@linux.intel.com> Message-ID: <694491ed-fe89-7d27-6f02-615b729395f3@linux.intel.com> On 4/18/19 7:50 AM, Miller, Frank wrote: > Saul: > > I'll let the community members more familiar with how to debug to answer specific debug questions, but it looks like you are hitting this LP reported in sanity: > https://bugs.launchpad.net/starlingx/+bug/1825045 > I looked at that one yesterday, since this is a simplex setup, I don't have the neutron-ovs-agent-compute node and I could not find any CrashLoop related messages. 
The logs from what might be close neutron-opvs-agent-controller does show this: > kubectl logs neutron-ovs-agent-controller-0-9626473e-jrlzv -n openstack -c neutron-ovs-agent-init > + chown neutron: /run/openvswitch/db.sock > + neutron-sanity-check --version > + timeout 3m neutron-sanity-check --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --ovsdb_native --nokeepalived_ipv6_support > 2019-04-18 02:37:29.837 41 INFO neutron.common.config [-] Logging enabled! > 2019-04-18 02:37:29.837 41 INFO neutron.common.config [-] /var/lib/openstack/bin/neutron-sanity-check version 14.0.0.0b4.dev16 > 2019-04-18 02:37:30.922 41 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:127.0.0.1:6640 to retrieve schema: Connection refused Maybe this is the problem, not sure if it's the same as the LP you mentioned. > 2019-04-18 02:37:32.748 41 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/plugins/ml2/openvswitch_agent.ini', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpxbjXcQ/privsep.sock'] > 2019-04-18 02:37:36.263 41 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap > ++ sed 's/[{}"]//g' /tmp/auto_bridge_add > ++ tr , '\n' > + for bmap in '`sed '\''s/[{}"]//g'\'' /tmp/auto_bridge_add | tr "," "\n"`' > + bridge=br-phy0 > + iface=eth1000 > + ovs-vsctl --no-wait --may-exist add-br br-phy0 > + '[' -n eth1000 ']' > + '[' eth1000 '!=' null ']' > + ovs-vsctl --no-wait --may-exist add-port br-phy0 eth1000 > + ip link set dev eth1000 up > + for bmap in '`sed '\''s/[{}"]//g'\'' /tmp/auto_bridge_add | tr "," "\n"`' > + bridge=br-phy1 > + iface=eth1001 > + ovs-vsctl --no-wait --may-exist add-br br-phy1 > + '[' -n eth1001 ']' > + '[' eth1001 '!=' null ']' > + ovs-vsctl --no-wait --may-exist add-port br-phy1 eth1001 > + ip link set dev eth1001 up > + tunnel_interface=docker0 > + '[' -z docker0 ']' > ++ ip a s docker0 > ++ grep 'inet ' > ++ awk '{print $2}' > ++ awk -F / '{print $1}' > + LOCAL_IP=172.17.0.1 > + '[' -z 172.17.0.1 ']' > + tee > That one does not yet have a solution. > > Frank > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Wednesday, April 17, 2019 10:49 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Trying to debug a stx-openstack-apply failure > > > Hi Folks, > > I have been trying to get a deployment up in a libvirt/qemu environment (non-proxy). I am seeing the following issue. I am using the image that passed (mostly) Sanity Test on Monday 4/15 [0]. > > I am setting this up in AIO-Simplex mode, I have not setup any kind of registry. It seems to start up all the contains and kubectl get pods shows all the pods Running or Completed. I retrieved the stx-openstack-apply.log from armada as recommended by the Container Debug FAQ [1]. > I see multiple Errors that the Application apply aborted due to what seems like download failures. As I said, I am not behind any proxy or firewall. > > It seems to fail during processing chart: osh-openstack-neutron at 65% > > Not sure what the next steps are to debug this issue. > > Thanks > Sau! 
> > > [0] > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190415T233001Z/ > [1] https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Al.Bailey at windriver.com Thu Apr 18 15:16:14 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Thu, 18 Apr 2019 15:16:14 +0000 Subject: [Starlingx-discuss] Trying to debug a stx-openstack-apply failure In-Reply-To: <694491ed-fe89-7d27-6f02-615b729395f3@linux.intel.com> References: <443db87b-a364-3187-2189-e07251c168e2@linux.intel.com> <694491ed-fe89-7d27-6f02-615b729395f3@linux.intel.com> Message-ID: From what I have seen so far, there are not crashed pods, but when armada gets to the compute-kit chartgroup (nova, libvirt, openvswitch, nova-api-proxy, neutron) that entire section takes more than 30 minutes. Currently it will timeout on the openvswitch portion because that timer is the default (15 minutes), but even if you increase that to 30 minutes, it will still timeout. On my vbox env, the load average during that chart section is over 50. All the processes are only running on only 1 of the 4 virtual cpus, while the other 3 cpus are idle. I have not tried experimenting with the newly changed /etc/systemd/system.conf.d/platform-cpuaffinity.conf to see if that makes a difference. Al -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Thursday, April 18, 2019 11:06 AM To: Miller, Frank; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Trying to debug a stx-openstack-apply failure On 4/18/19 7:50 AM, Miller, Frank wrote: > Saul: > > I'll let the community members more familiar with how to debug to answer specific debug questions, but it looks like you are hitting this LP reported in sanity: > https://bugs.launchpad.net/starlingx/+bug/1825045 > I looked at that one yesterday, since this is a simplex setup, I don't have the neutron-ovs-agent-compute node and I could not find any CrashLoop related messages. The logs from what might be close neutron-opvs-agent-controller does show this: > kubectl logs neutron-ovs-agent-controller-0-9626473e-jrlzv -n openstack -c neutron-ovs-agent-init > + chown neutron: /run/openvswitch/db.sock > + neutron-sanity-check --version > + timeout 3m neutron-sanity-check --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --ovsdb_native --nokeepalived_ipv6_support > 2019-04-18 02:37:29.837 41 INFO neutron.common.config [-] Logging enabled! > 2019-04-18 02:37:29.837 41 INFO neutron.common.config [-] /var/lib/openstack/bin/neutron-sanity-check version 14.0.0.0b4.dev16 > 2019-04-18 02:37:30.922 41 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:127.0.0.1:6640 to retrieve schema: Connection refused Maybe this is the problem, not sure if it's the same as the LP you mentioned. 
> 2019-04-18 02:37:32.748 41 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/plugins/ml2/openvswitch_agent.ini', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpxbjXcQ/privsep.sock'] > 2019-04-18 02:37:36.263 41 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap > ++ sed 's/[{}"]//g' /tmp/auto_bridge_add > ++ tr , '\n' > + for bmap in '`sed '\''s/[{}"]//g'\'' /tmp/auto_bridge_add | tr "," "\n"`' > + bridge=br-phy0 > + iface=eth1000 > + ovs-vsctl --no-wait --may-exist add-br br-phy0 > + '[' -n eth1000 ']' > + '[' eth1000 '!=' null ']' > + ovs-vsctl --no-wait --may-exist add-port br-phy0 eth1000 > + ip link set dev eth1000 up > + for bmap in '`sed '\''s/[{}"]//g'\'' /tmp/auto_bridge_add | tr "," "\n"`' > + bridge=br-phy1 > + iface=eth1001 > + ovs-vsctl --no-wait --may-exist add-br br-phy1 > + '[' -n eth1001 ']' > + '[' eth1001 '!=' null ']' > + ovs-vsctl --no-wait --may-exist add-port br-phy1 eth1001 > + ip link set dev eth1001 up > + tunnel_interface=docker0 > + '[' -z docker0 ']' > ++ ip a s docker0 > ++ grep 'inet ' > ++ awk '{print $2}' > ++ awk -F / '{print $1}' > + LOCAL_IP=172.17.0.1 > + '[' -z 172.17.0.1 ']' > + tee > That one does not yet have a solution. > > Frank > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Wednesday, April 17, 2019 10:49 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Trying to debug a stx-openstack-apply failure > > > Hi Folks, > > I have been trying to get a deployment up in a libvirt/qemu environment (non-proxy). I am seeing the following issue. I am using the image that passed (mostly) Sanity Test on Monday 4/15 [0]. > > I am setting this up in AIO-Simplex mode, I have not setup any kind of registry. It seems to start up all the contains and kubectl get pods shows all the pods Running or Completed. I retrieved the stx-openstack-apply.log from armada as recommended by the Container Debug FAQ [1]. > I see multiple Errors that the Application apply aborted due to what seems like download failures. As I said, I am not behind any proxy or firewall. > > It seems to fail during processing chart: osh-openstack-neutron at 65% > > Not sure what the next steps are to debug this issue. > > Thanks > Sau! > > > [0] > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190415T233001Z/ > [1] https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From jim.somerville at windriver.com Thu Apr 18 15:21:02 2019 From: jim.somerville at windriver.com (Jim Somerville) Date: Thu, 18 Apr 2019 11:21:02 -0400 Subject: [Starlingx-discuss] V1 Review Request: Story 29990: libvirt and qemu patch reduction Message-ID: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> Hi Dean and other interested parties, I've finished reducing the patches on libvirt and qemu. I was able to get rid of virtually all of the RHEL patches, replacing them with just a minor "support for running on CentOS" patch or two. This will make our lives a lot easier moving to newer versions. 
qemu went from 97 patches down to 14, and libvirt from 23 to 13. The STX patches themselves required very little rework, this was mostly a testing exercise in the container realm with things changing frequently, making it quite challenging. This passed our regular sanity test run, and we subsequently did a full regression test run. All of the interesting failures in the regression run were explainable via existing bug reports. I feel reasonably confident that this isn't going to break anything, but, hey, famous last words and all that. Once you're satisfied with the review, I'll issue pull requests. Once you've pulled and created new branches, I'll follow up with the two commits, one referring to the new branches in the manifest, and the other with minor changes to the qemu spec file in the stx-integ repo. Linked so they both go in together. One issue concerns me a bit, and that is the tis patch number. It starts counting from the last upstream commit, and with me removing patches, it is now lower than it used to be. If this is a real concern I could just add a fixed 100 to the gitrevcount in both qemu and libvirt build_data files, guaranteeing package versions will not collide with ones in the past. Your thoughts? https://github.com/jsomervi/stx-qemu/commits/v3.0.0-patch-reduction-1 https://github.com/jsomervi/stx-libvirt-1/commits/v4.7.0-patch-reduction-1 Thanks, -Jim From jim.somerville at windriver.com Thu Apr 18 15:59:10 2019 From: jim.somerville at windriver.com (Jim Somerville) Date: Thu, 18 Apr 2019 11:59:10 -0400 Subject: [Starlingx-discuss] V1 Review Request: Story 29990: libvirt and qemu patch reduction In-Reply-To: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> References: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> Message-ID: <6f7e10fc-b5c5-1429-8885-140db06f81e9@windriver.com> On 2019-04-18 11:21 a.m., Jim Somerville wrote: > Hi Dean and other interested parties, > > I've finished reducing the patches on libvirt and qemu.  I was able to > get rid of virtually all of the RHEL patches, replacing them with just a > minor "support for running on CentOS" patch or two.  This will make our > lives a lot easier moving to newer versions.  qemu went from 97 patches > down to 14, and libvirt from 23 to 13.  The STX patches themselves > required very little rework, this was mostly a testing exercise in the > container realm with things changing frequently, making it quite > challenging. > > This passed our regular sanity test run, and we subsequently did a full > regression test run.  All of the interesting failures in the regression > run were explainable via existing bug reports.  I feel reasonably > confident that this isn't going to break anything, but, hey, famous last > words and all that. > > Once you're satisfied with the review, I'll issue pull requests.  Once > you've pulled and created new branches, I'll follow up with the two > commits, one referring to the new branches in the manifest, and the > other with minor changes to the qemu spec file in the stx-integ repo. > Linked so they both go in together. > > One issue concerns me a bit, and that is the tis patch number.  It > starts counting from the last upstream commit, and with me removing > patches, it is now lower than it used to be.  If this is a real concern > I could just add a fixed 100 to the gitrevcount in both qemu and libvirt > build_data files, guaranteeing package versions will not collide with > ones in the past.  Your thoughts? 
> > https://github.com/jsomervi/stx-qemu/commits/v3.0.0-patch-reduction-1 > https://github.com/jsomervi/stx-libvirt-1/commits/v4.7.0-patch-reduction-1 Link to the story: https://storyboard.openstack.org/#!/story/2005212 -Jim > > Thanks, > > -Jim > > > > > From michael.l.tullis at intel.com Thu Apr 18 16:34:24 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Thu, 18 Apr 2019 16:34:24 +0000 Subject: [Starlingx-discuss] [DOCS] Providing input to the docs.starlingx.io documentation Message-ID: <3808363B39586544A6839C76CF81445EA1B02E50@ORSMSX104.amr.corp.intel.com> If you are a StarlingX community member that is making changes to StarlingX code that impacts the end user documentation at https://docs.starlingx.io, then please, after completing your gerrit code review, choose one of the options below to help us update the docs: OPTION #1 * Send an email to starlingx-discuss at lists.starlingx.io adding the string “[DOCS] ” as a prefix on the message subject line. * The email should contain: * Short summary of the impact to documentation. * Links to any related Storyboard user stories. * Links to related gerrit code review(s). * Description of new or changed functionality, such as: * behavioral changes * procedural changes * CLI syntax changes * Etc. The StarlingX DOCS team will then take this info and integrate it into https://docs.starlingx.io, update the source .rst file(s) for the doc web pages, submit a PR, and add you as a code reviewer. OPTION #2 If you’re passionate about documentation and would like to update the documentation yourself, even better. See the StarlingX Documentation Contributor Guide at https://docs.starlingx.io/contributor/doc_contribute_guide.html. Thanks, The STX DOCS Team. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Thu Apr 18 16:48:24 2019 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 18 Apr 2019 09:48:24 -0700 Subject: [Starlingx-discuss] V1 Review Request: Story 29990: libvirt and qemu patch reduction In-Reply-To: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> References: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> Message-ID: <4656fc0e-263f-1165-71bb-941faf5f4f03@linux.intel.com> Hi Jim, This looks like great work and a strong effort to reduce patches, thanks! On 4/18/19 8:21 AM, Jim Somerville wrote: > Hi Dean and other interested parties, > > I've finished reducing the patches on libvirt and qemu.  I was able to > get rid of virtually all of the RHEL patches, replacing them with just a > minor "support for running on CentOS" patch or two.  This will make our > lives a lot easier moving to newer versions.  qemu went from 97 patches > down to 14, and libvirt from 23 to 13.  The STX patches themselves > required very little rework, this was mostly a testing exercise in the > container realm with things changing frequently, making it quite > challenging. > I have not yet reviewed your repos, but want to know if you have given thoughts to upstreaming any of the remaining patches to qemu or libvirt as appropriate? > This passed our regular sanity test run, and we subsequently did a full > regression test run.  All of the interesting failures in the regression > run were explainable via existing bug reports.  I feel reasonably > confident that this isn't going to break anything, but, hey, famous last > words and all that. > > Once you're satisfied with the review, I'll issue pull requests.  
Once > you've pulled and created new branches, I'll follow up with the two > commits, one referring to the new branches in the manifest, and the > other with minor changes to the qemu spec file in the stx-integ repo. > Linked so they both go in together. > Is there a reason to not issue the pull requests directly to the stx-staging repos now if your ready? > One issue concerns me a bit, and that is the tis patch number.  It > starts counting from the last upstream commit, and with me removing > patches, it is now lower than it used to be.  If this is a real concern > I could just add a fixed 100 to the gitrevcount in both qemu and libvirt > build_data files, guaranteeing package versions will not collide with > ones in the past.  Your thoughts? > At the last F2F in Chandler the discussion about TIS_PATCH_VER determined that it was a sequential version number, and not a count of patches. If this was a rebase with a version change, then you would start at 1 again, but since this is a rebase without, you should bump TIS_PATCH_VER by 1. > https://github.com/jsomervi/stx-qemu/commits/v3.0.0-patch-reduction-1 > https://github.com/jsomervi/stx-libvirt-1/commits/v4.7.0-patch-reduction-1 Thanks Sau! > Thanks, > > -Jim > > > > > From jim.somerville at windriver.com Thu Apr 18 17:07:51 2019 From: jim.somerville at windriver.com (Jim Somerville) Date: Thu, 18 Apr 2019 13:07:51 -0400 Subject: [Starlingx-discuss] V1 Review Request: Story 29990: libvirt and qemu patch reduction In-Reply-To: <4656fc0e-263f-1165-71bb-941faf5f4f03@linux.intel.com> References: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> <4656fc0e-263f-1165-71bb-941faf5f4f03@linux.intel.com> Message-ID: <58045fcf-7d1a-0b11-7223-97f938e02b66@windriver.com> On 2019-04-18 12:48 p.m., Saul Wold wrote: > Hi Jim, > > This looks like great work and a strong effort to reduce patches, thanks! Thanks Saul, appreciated. > > On 4/18/19 8:21 AM, Jim Somerville wrote: >> Hi Dean and other interested parties, >> >> I've finished reducing the patches on libvirt and qemu.  I was able to >> get rid of virtually all of the RHEL patches, replacing them with just >> a minor "support for running on CentOS" patch or two.  This will make >> our lives a lot easier moving to newer versions.  qemu went from 97 >> patches down to 14, and libvirt from 23 to 13.  The STX patches >> themselves required very little rework, this was mostly a testing >> exercise in the container realm with things changing frequently, >> making it quite challenging. >> > I have not yet reviewed your repos, but want to know if you have given > thoughts to upstreaming any of the remaining patches to qemu or libvirt > as appropriate? I haven't given it much thought. Not being the actual author of most of them, I don't feel all that qualified to embark on the sales job of getting them in upstream. > >> This passed our regular sanity test run, and we subsequently did a >> full regression test run.  All of the interesting failures in the >> regression run were explainable via existing bug reports.  I feel >> reasonably confident that this isn't going to break anything, but, >> hey, famous last words and all that. >> >> Once you're satisfied with the review, I'll issue pull requests.  Once >> you've pulled and created new branches, I'll follow up with the two >> commits, one referring to the new branches in the manifest, and the >> other with minor changes to the qemu spec file in the stx-integ repo. >> Linked so they both go in together. 
>> > Is there a reason to not issue the pull requests directly to the > stx-staging repos now if your ready? No reason other than I just wanted folks to have a chance to look/review before I pestered the stx-staging repo controllers with pull requests. > >> One issue concerns me a bit, and that is the tis patch number.  It >> starts counting from the last upstream commit, and with me removing >> patches, it is now lower than it used to be.  If this is a real >> concern I could just add a fixed 100 to the gitrevcount in both qemu >> and libvirt build_data files, guaranteeing package versions will not >> collide with ones in the past.  Your thoughts? >> > At the last F2F in Chandler the discussion about TIS_PATCH_VER > determined that it was a sequential version number, and not a count of > patches. If this was a rebase with a version change, then you would > start at 1 again, but since this is a rebase without, you should bump > TIS_PATCH_VER by 1. The way it is currently done in libvirt/qemu is via the GITREVCOUNT mechanism. This change I'm making is essentially just rewriting a repo branch, and doesn't include an underlying version change to the code such as 3.0.0 to 3.0.1. I could abandon GITREVCOUNT and just set TIS_PATCH_VER to a version manually, 98 for qemu and 24 for libvirt. -Jim > >> https://github.com/jsomervi/stx-qemu/commits/v3.0.0-patch-reduction-1 >> https://github.com/jsomervi/stx-libvirt-1/commits/v4.7.0-patch-reduction-1 >> > > > Thanks >   Sau! > > >> Thanks, >> >> -Jim >> >> >> >> >> From dtroyer at gmail.com Thu Apr 18 20:04:56 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 18 Apr 2019 15:04:56 -0500 Subject: [Starlingx-discuss] V1 Review Request: Story 29990: libvirt and qemu patch reduction In-Reply-To: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> References: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> Message-ID: On Thu, Apr 18, 2019 at 10:22 AM Jim Somerville wrote: > I've finished reducing the patches on libvirt and qemu. I was able to > get rid of virtually all of the RHEL patches, replacing them with just a > minor "support for running on CentOS" patch or two. This will make our > lives a lot easier moving to newer versions. qemu went from 97 patches > down to 14, and libvirt from 23 to 13. The STX patches themselves > required very little rework, this was mostly a testing exercise in the > container realm with things changing frequently, making it quite > challenging. Awesome! > Once you're satisfied with the review, I'll issue pull requests. Once > you've pulled and created new branches, I'll follow up with the two > commits, one referring to the new branches in the manifest, and the > other with minor changes to the qemu spec file in the stx-integ repo. > Linked so they both go in together. It looks like these are on the same upstream base version, correct? We'll have to add a suffix but that isn't a problem. I'll use '-N' for that so it doesn't look like part of the upstream version (we used '.N' for the Nova stable branch in stx-nova, /me kicks self). I have created stx-qemu/stx/v3.0.0-1 and stx-libvirt/stx/v4.7.0-1. Fire away with the PRs. > One issue concerns me a bit, and that is the tis patch number. It > starts counting from the last upstream commit, and with me removing > patches, it is now lower than it used to be. If this is a real concern > I could just add a fixed 100 to the gitrevcount in both qemu and libvirt > build_data files, guaranteeing package versions will not collide with > ones in the past. Your thoughts? 
Is this that number that is supposed to be based on the patch count? I think we should get rid of that idea and just increment it every time it need to be incremented. Overloading things like that just makes everything more brittle. Also... I still want to encourage folks to do dev work in the primary places (Gerrit and starlngx-staging on GitHub), this is a very important part of The Four Opens[0] that is fundamental to being part of the OpenStack Foundation. In this case it isn't so much development as cleanup but it still counts as working in the open. Updating a WIP PR is just as doable as a WIP Gerrit review as things progress. And that lets people find the work without having to know beforehand where it is, even as in this case it was on GitHub anyway. [I am trying to not pick on Jim specifically here but I did recently say something in a meeting about this particular work and I thought this was a good place to expand on why I feel so strongly on this topic. These principles are fundamental to StarlingX being accepted as an OpenStack Foundation project and we _will_ be judged on things like this. We already are (informally) in fact...] dt [0] The Four Opens: https://governance.openstack.org/tc/reference/opens.html -- Dean Troyer dtroyer at gmail.com From Marvin.Huang at windriver.com Thu Apr 18 20:38:56 2019 From: Marvin.Huang at windriver.com (Huang, Marvin) Date: Thu, 18 Apr 2019 20:38:56 +0000 Subject: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon In-Reply-To: <4C60D9C5C8176C47874FFF36647AA19E9D64742F@ALA-MBD.corp.ad.wrs.com> References: <4C60D9C5C8176C47874FFF36647AA19E9D647241@ALA-MBD.corp.ad.wrs.com> <4C60D9C5C8176C47874FFF36647AA19E9D64742F@ALA-MBD.corp.ad.wrs.com> Message-ID: <74D9C1EDDC44EF468303629CF9A2832C9CE47BC7@ALA-MBD.corp.ad.wrs.com> Hi Ovidiu, How long (in worst case) should users wait for Ceph-backend be configured? Or what is the timeout value we should use for automated waiting/polling, after which we can considerate that there is likely an issue? Thanks, Marvin From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Wednesday, April 17, 2019 5:50 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon Change was merged, wiki was updated... ________________________________ From: Poncea, Ovidiu [Ovidiu.Poncea at windriver.com] Sent: Tuesday, April 16, 2019 10:36 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon Hi Folks, Be aware that we will merge a change soon that may impact automated deployments and testing. The gerrit is this: https://review.openstack.org/#/c/644256/ and it will make Ceph the default storage backend. Therefore, once merged, users will no longer have to run this (as currently stated by the wikis): echo ">>> Enable primary Ceph backend" system storage-backend-add ceph --confirmed echo ">>> Wait for primary ceph backend to be configured" echo ">>> This step really takes a long time" while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph..'; sleep 5; done echo ">>> Ceph health" ceph -s Regards, Ovidiu -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Frank.Miller at windriver.com Thu Apr 18 21:11:14 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Thu, 18 Apr 2019 21:11:14 +0000 Subject: [Starlingx-discuss] Update on sanity status Message-ID: This week has seen a number of issues that impacted sanity. Two of the issues were addressed by commits earlier in the week as well as this morning. It looks like at least one issue remains that is preventing the stx-openstack application from successfully coming up on some platforms. This issue is being tracked by https://bugs.launchpad.net/starlingx/+bug/1825423 Until a solution is merged and sanity results are good, I suggest you revert to the most recent sane loads which were from the April 10 and April 11 builds: Apr 11: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190411T013000Z/outputs/iso/ Apr 10: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T013000Z/ An update will be sent on Monday. Frank P.S. The most recent sanity report is indicating LP https://bugs.launchpad.net/starlingx/+bug/1825045 is causing failures but while the symptoms look the same we do not feel this is actually the LP that is causing sanity to fail. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Thu Apr 18 22:12:37 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 18 Apr 2019 22:12:37 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 04/18 Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4DA3CA@ALA-MBD.corp.ad.wrs.com> Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking Networking Test Status - TC Definition for Regression (update by Elio, Matt and Chris) - Still working on this. - Elio is working on comments provided by Matt by email. - Lots of changes are required. Elio coordinating with the rest of the test team. - Expect to have a new version for review in the next two weeks - 1st wk of May - Link to work in progress documents: - https://drive.google.com/drive/folders/1LS-gSj0Obfa9Qle2Ver2j5dZjmQJ9etB - Regression Execution (update by Elio) - Agreed to include some networking TCs in the automated sanity - Still not able to ping testing between 2 VMs on virtual env -- this was a limitation with ovs-dpdk. With the use of ovs in virtual env, this should no longer be an issue. - For automated sanity in virtual env, test with OVS. For automated sanity on baremetal, use OVS-DPDK - Action: Elio to share the TCs planned for sanity with the team. Send to starlingx-discuss mailing list. - Elio to provide time-line for adding TCs to sanity by next week. - Feature Testing for ovs-dpdk upversion (update by Ricardo and Elio) Testing on two systems: - Duplex (config 4) - No issues found with testing. Trunk re-test successful. Still need to re-test dns. - 2+2 (config 4) - Facing issues potentially with connectivity - https://bugs.launchpad.net/starlingx/+bug/1824923 - Given testing is successful on Duplex & 2+2 with another set of NICs, this points more to a networking configuration issue. - Christopher will look into the switch configuration to see if there is an issue there - Feature Testing for ovs-dpdk firewall (update by Elio) - TC creation is about 50%. 
Will work to incorporate Matt and Kailun's feedback - Expect to start execution next week - OVS process monitoring and alarming (update by Chenjie) - Code has been merged: - https://review.openstack.org/#/c/648330/ - https://review.openstack.org/#/c/648367/ - Ready for test verification - Containerized OVS Integration - Remaining item: Follow-up on version of ovs used in the container image. - Right now, the default docker image (with ovs 2.8.0) is used. - Agreed to align with the openvswitch version of the host. Target is 2.11.0 once the ovs-dpdk upversion feature is merged. - This means that we need to build/update the ovs docker image in starlingx - Need to get more information from Don on how to do this -- is a build required? or can we specify the ovs version via manifest? - Agreed to wait until the openvswitch 2.11 is merged in stx master and gets some soak first before proceeding with changing the version in the container - Bugs - All bugs are now assigned and being worked by Forrest's team - Expect to be able to address the current backlog in 2-3wks From Ghada.Khalil at windriver.com Thu Apr 18 22:18:09 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 18 Apr 2019 22:18:09 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - April 18/2019 Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4DA3DE@ALA-MBD.corp.ad.wrs.com> Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases Release meeting agenda / notes Apr 18 2019 - TSC approved (by consensus) the release plan option without Dist Cloud - Chosen option updated on release plan in google docs - Action: Ghada to update release dates on Release Planning Wiki: https://wiki.openstack.org/wiki/StarlingX/Release_Plan - Execution / Delivery time! - Patch Elimination Update: - NUMA live migration patch backport should complete within 1-2 weeks. We have a f/stein.X branch that will be re-branched when 1) new content is merged to upstream stable Stein or 2) we backport additional code. - Status of other long poll items? - Went through all the features and updated the status. - See: https://docs.google.com/spreadsheets/d/1HUwbsaSerzFRuvXVB_qvoGdI0Chx1YiiA2WYHwvIoYI/edit#gid=405844719 - Test team not covered (Ada/Numan are out). Will get an update in the next community meeting. - Release plan proposal for PTG - Brucej: Want to move toward continuous integration - want to catch more bugs and do more testing at code check in time. - would like to move toward more frequent releases with less excitement :) - Significant challenges in doing builds and running system level tests - need some creative thinking - Need to make a distinction between the type of release we are delivering - Release with foundational changes - Like stx.2.0 which introduced support for containers and containerization openstack - Lessons from stx.2.0: Under-estimated the complexity and the scale of the changes and the time/effort it would take to test it - Release with a number of smaller indepdent features - This would be more deterministic to align on time-based releases moving forward - For stx.3.0, we feel this will be the case. - Need to plan in more automation and CI/CD pipe-line work - Issue with having systems out in the public - Leverage virtual env testing - Bugs - We want to do a scrub of the stx.2.0 bugs. Schedule in the release planning meeting for May 9 or later. 
- Ask the domain owners to do a pre-review and make recommendations From glenn.seiler at windriver.com Thu Apr 18 22:45:13 2019 From: glenn.seiler at windriver.com (Seiler, Glenn) Date: Thu, 18 Apr 2019 22:45:13 +0000 Subject: [Starlingx-discuss] [TSC] Minutes - 4/11 meeting In-Reply-To: <3F9424F8-B622-4DCC-A275-3EC355A8DA40@windriver.com> References: <3F9424F8-B622-4DCC-A275-3EC355A8DA40@windriver.com> Message-ID: I am happy to help with mission statement effort if that is still an open item. Reviewing, helping with draft, crazy ideas. Whatever help you need. I’m not sure if I misread or if it was just a typo or cut/paste error, but it seems the sign-up URL for the team photo actually goes to the signup for the 5 minute PTG presentation slot. Which appears Ian has volunteered for. -glenn From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Wednesday, April 17, 2019 8:39 AM To: StarlingX ML Subject: [Starlingx-discuss] [TSC] Minutes - 4/11 meeting Release Plan Update (release team) options at https://docs.google.com/spreadsheets/d/1HUwbsaSerzFRuvXVB_qvoGdI0Chx1YiiA2WYHwvIoYI/edit#gid=0 We discussed the 2 options in detail for 30 min. In the end we didn’t reach consensus on which option to select and a third compromise option came up. This will be on the agenda for the TSC call on 4/18 – tomorrow @TSC members – please make every effort to be on the TSC call. We will be making a decision on the release plan tomorrow. Project mission statement - https://etherpad.openstack.org/p/stx-mission-statement (ildikov) TSC please review proposal needs to be done soon Review next week Raised on community call as well How do we revalidate? Writing visions is a more involved process. There is a framework that can be used for this - use time as a way to frame - Vision, mission, goals. Add to PTG agenda Small team to work on a proposal for how to move this forward. Dean, Ian, PTG lunch slot presentation? (ildikov) the idea is a 5 minutes long presentation about the project to give an overview and what's new sign up for the team photo in PTG(shuquan): https://ethercalc.openstack.org/3qd1fj5f3tt3 Need a volunteer for this - presentation - feel free to sign up - Bruce has slides - OSF Board meeting update (ildikov) F2F Board meeting is in Denver, April 28 - This is the Sunday 10-15 minute overview presentation Need a TSC member - likely afternoon need to discuss what messages we want to send to the board Packet projects Curtis - MOU has been signed by both packet.com and the openstack foundation I sent an email to all the TSC members with the signed MOU attached Do we store these kinds of agreements anywhere? Eg. CENGN hosting agreement? - Is the latter signed by the Foundation as well? OSF has likely stored the MOU in some fashion May want to look into ensuring the CENGN hosting MOU/whatever is also stored in the same fashion Where to store these kinds of community agreements? Next step is to try to get STX booting on bare metal They use iPXE causing some issues Work being done here: https://etherpad.openstack.org/p/stx-packet-baremetal-boot -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruijing.guo at intel.com Fri Apr 19 00:22:57 2019 From: ruijing.guo at intel.com (Guo, Ruijing) Date: Fri, 19 Apr 2019 00:22:57 +0000 Subject: [Starlingx-discuss] StarlingX PTG Agenda Message-ID: <2EE296D083DF2940BF4EBB91D39BB89F40C5AE84@SHSMSX104.ccr.corp.intel.com> Hi, All, I am looking for starlingX PTG agenda. 
In https://etherpad.openstack.org/p/stx-ptg-denver, I can see items but I don't see timeslot for the items. Thanks, -Ruijing -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ovidiu.Poncea at windriver.com Fri Apr 19 09:38:47 2019 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Fri, 19 Apr 2019 09:38:47 +0000 Subject: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon In-Reply-To: <74D9C1EDDC44EF468303629CF9A2832C9CE47BC7@ALA-MBD.corp.ad.wrs.com> References: <4C60D9C5C8176C47874FFF36647AA19E9D647241@ALA-MBD.corp.ad.wrs.com> <4C60D9C5C8176C47874FFF36647AA19E9D64742F@ALA-MBD.corp.ad.wrs.com>, <74D9C1EDDC44EF468303629CF9A2832C9CE47BC7@ALA-MBD.corp.ad.wrs.com> Message-ID: <4C60D9C5C8176C47874FFF36647AA19E9D6476A8@ALA-MBD.corp.ad.wrs.com> Hi Marvin, There is no need to wait for the primary tier storage backend to be configured, it's enabled by default at config_controller. Thus, this backend is already in 'configured' state. Ovidiu ________________________________ From: Huang, Marvin Sent: Thursday, April 18, 2019 11:38 PM To: Poncea, Ovidiu; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon Hi Ovidiu, How long (in worst case) should users wait for Ceph-backend be configured? Or what is the timeout value we should use for automated waiting/polling, after which we can considerate that there is likely an issue? Thanks, Marvin From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Wednesday, April 17, 2019 5:50 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon Change was merged, wiki was updated... ________________________________ From: Poncea, Ovidiu [Ovidiu.Poncea at windriver.com] Sent: Tuesday, April 16, 2019 10:36 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon Hi Folks, Be aware that we will merge a change soon that may impact automated deployments and testing. The gerrit is this: https://review.openstack.org/#/c/644256/ and it will make Ceph the default storage backend. Therefore, once merged, users will no longer have to run this (as currently stated by the wikis): echo ">>> Enable primary Ceph backend" system storage-backend-add ceph --confirmed echo ">>> Wait for primary ceph backend to be configured" echo ">>> This step really takes a long time" while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph..'; sleep 5; done echo ">>> Ceph health" ceph -s Regards, Ovidiu -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Fri Apr 19 14:49:57 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Fri, 19 Apr 2019 14:49:57 +0000 Subject: [Starlingx-discuss] StarlingX PTG Agenda In-Reply-To: <2EE296D083DF2940BF4EBB91D39BB89F40C5AE84@SHSMSX104.ccr.corp.intel.com> References: <2EE296D083DF2940BF4EBB91D39BB89F40C5AE84@SHSMSX104.ccr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB47184F@ALA-MBD.corp.ad.wrs.com> Hi Ruijing, The agenda for the PTG is still being worked on. 
Brent From: Guo, Ruijing [mailto:ruijing.guo at intel.com] Sent: Thursday, April 18, 2019 8:23 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX PTG Agenda Hi, All, I am looking for starlingX PTG agenda. In https://etherpad.openstack.org/p/stx-ptg-denver, I can see items but I don't see timeslot for the items. Thanks, -Ruijing -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Fri Apr 19 15:54:02 2019 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Fri, 19 Apr 2019 15:54:02 +0000 Subject: [Starlingx-discuss] Does rabbitmq listener work in NFV module In-Reply-To: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8D56D@ALA-MBD.corp.ad.wrs.com> References: <93814834B4855241994F290E959305C753069B2B@SHSMSX104.ccr.corp.intel.com> <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8CF98@ALA-MBD.corp.ad.wrs.com> <93814834B4855241994F290E959305C753069E17@SHSMSX104.ccr.corp.intel.com> <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8D56D@ALA-MBD.corp.ad.wrs.com> Message-ID: <93814834B4855241994F290E959305C75306A0FA@SHSMSX104.ccr.corp.intel.com> Hi Barton, Thanks for your great comment! I have updated my patch to use oslo api instead of nfv-common related code. As for fetching rabbitmq configuration, I will try to do it according to your proposal. Thanks! Zhipeng From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: 2019年4月18日 20:20 To: Liu, ZhipengS ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Does rabbitmq listener work in NFV module Zhipeng, First, I don’t think you should be basing your new pci-interrupt-affinity service on any of the nfv-vim code. I have added a comment to https://review.openstack.org/#/c/640264 to explain. We can discuss more in the context of that review. To answer your questions below, you cannot access hieradata directly and you should not be using any of the nfv-vim configuration. I think the right way for you to get the rabbitmq configuration to your new service would be by creating a new puppet module which would create a new configuration file for your service (e.g. /etc/pci-interrupt-affinity/pci-interrupt-affinity.conf). You can use the puppet-nfv puppet module as an example and you can see in sysinv/puppet/nfv.py how the rabbit configuration is being retrieved from the helm data for nova - this would be similar to the code you have below, but would be running in the sysinv-conductor process as it prepares the hieradata for your new service. Bart From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: April 18, 2019 5:17 AM To: Wensley, Barton; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Does rabbitmq listener work in NFV module Hi Bart, Thanks! It exactly works in nfv. I still have a question about Rabbitmq access. Currently, I’m working on pci-interrupt-affinity service which will run on every worker node. It need to listen to rabbitmq to get notifications from nova Before containerized version, I can use configurations in /etc/sysinv/sysinv.conf to get connect to rabbitmq. Now, if I use below configuration, it can work. /opt/platform/puppet/19.01/hieradata/system.yaml nfv::nfvi::platform_username: admin nfv::nfvi::rabbit_host: rabbitmq.openstack.svc.cluster.local nfv::nfvi::rabbit_password: 28a5834cf803Ti0* nfv::nfvi::rabbit_port: 5672 nfv::nfvi::rabbit_userid: nova-rabbitmq-user nfv::nfvi::rabbit_virtual_host: nova Then, I tried to add below code to get above configuration. But it didn’t work. 
“Utils.is_openstack_installed” this check failed! And also cannot get “helm_data” Who can help? Any comment or proposal on it? BTW, can I get these configuration from /opt/platform/puppet/19.01/hieradata/system.yaml directly? Is it reasonable? =================================================================== from sysinv.helm import helm from sysinv.common import utils from sysinv.db import api as db_api ... dbapi = db_api.get_instance() if dbapi and utils.is_openstack_installed(dbapi): helm_data = helm.HelmOperatorData(dbapi) nova_oslo_messaging_data = helm_data.get_nova_oslo_messaging_data() rabbit_cfg['rabbit_host'] = nova_oslo_messaging_data['host'] rabbit_cfg['rabbit_userid'] = nova_oslo_messaging_data['username'] rabbit_cfg['rabbit_password'] = nova_oslo_messaging_data['password'] rabbit_cfg['rabbit_virtual_host'] = nova_oslo_messaging_data['virt_host'] Thanks! Zhipeng From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: 2019年4月17日 20:24 To: Liu, ZhipengS >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Does rabbitmq listener work in NFV module Zhipeng, These logs are normal in cases where the containerized rabbitmq servers are temporarily unavailable (i.e. the osh-openstack-rabbitmq-rabbitmq-0/1 pods). This can happen when a controller is locked or rebooted. The VIM should automatically re-connect to the rabbitmq servers - this could take anywhere from a few seconds to as long as a minute or two depending on the reason the rabbitmq server(s) were not available. Bart From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: April 17, 2019 6:46 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Does rabbitmq listener work in NFV module Hi all, >From nfv-vim.log I see below 2019-04-12T08:38:11.409 controller-0 VIM_Thread[31835] INFO rpc_listener.py.127 RPC-Listener not connected to exchange nova, queue=notifications.nfvi_nova_listener_queue. It seems NFV could not get notifications from nova now. Have we enabled rabbitmq listener in NFV after containerized version? Thanks! zhipeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Fri Apr 19 16:20:23 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 19 Apr 2019 11:20:23 -0500 Subject: [Starlingx-discuss] stx-nova numa aware live migration In-Reply-To: <58CF5BABC9A76946A638A0E8AE48D17371830A74@ALA-MBD.corp.ad.wrs.com> References: <58CF5BABC9A76946A638A0E8AE48D17371830A74@ALA-MBD.corp.ad.wrs.com> Message-ID: On Wed, Apr 17, 2019 at 3:43 PM Kopec, Gerald (Gerry) wrote: > It’s the 7 open nova reviews from https://review.openstack.org/#/q/topic:bp/numa-aware-live-migration plus a temporary change to address some of the review comments related to live migration and resource tracking. Thank you Gerry, I finally got through some Nova tests locally on that PR, the reviews did not age completely unscathed since late February. Detailed notes are in the PR, most of it is probably in the tests themselves and will need to be fixed upstream. What is the plan for getting the fixes upstream? Also, do you know if your fixes address the issues we found and reported just before feature freeze (sorry, I don't recall exactly what they were)? 
dt -- Dean Troyer dtroyer at gmail.com From build.starlingx at gmail.com Sat Apr 20 01:30:16 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 19 Apr 2019 21:30:16 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 71 - Failure! Message-ID: <1900690513.209.1555723817382.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 71 Status: Failure Timestamp: 20190420T013001Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190420T013001Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From cboylan at sapwetik.org Sat Apr 20 04:07:15 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Sat, 20 Apr 2019 00:07:15 -0400 Subject: [Starlingx-discuss] Migrating git repos to OpenDev In-Reply-To: References: <871s3itl0p.fsf@meyer.lemoncheese.net> Message-ID: On Tue, Apr 16, 2019, at 5:12 PM, Clark Boylan wrote: > > The infra/OpenDev team continues to make good progress towards this > transition and plans to perform the transition on April 19, 2019 as > previously scheduled. We will begin the transition at 15:00UTC and > users should plan for intermittent Gerrit and git repo outages through > the day. We expect most of those will be closer to 15:00UTC than > 23:00UTC. > > Fungi has generated a master list of project renames for the openstack > namespaces: http://paste.openstack.org/show/749402/. If you have a > moment please quickly review these planned renames for any obvious > errors or issues. > > For the airship, starlingx, and zuul repo renames the repositories > listed at git.airshipit.org, git.starlingx.io, and git.zuul-ci.org were > used placing repos in airship/, starlingx/ and zuul/ namespaces. Any > repo name prefix (like stx- and airship-) is dropped. Hello everyone! The Infra team, no, OpenDev Infra team, has completed the initial work to migrate to a more flexible git hosting platform. Gerrit is now hosted at review.opendev.org (with redirects from review.openstack.org in place). Our git mirroring has transitioned from git.openstack.org, git.airshipit.org, git.starlingx.io, and git.zuul-ci.org to http(s)://opendev.org. We have put in place redirects from these old domains to the new domain. If you see James Blair at the summit and appreciate these redirects: buy him a beverage. As part of this transition we've also moved git repos around. Airship, Zuul, OpenDev, StarlingX and others now have their own top level "orgs" which we hope cuts down on confusion over what is or isn't part of a certain project. You will likely want to update your git remotes to the new canonical locations (where ever you get redirected to using the old urls) in the near future. For unofficial projects that were hosted under openstack/ the OpenStack TC decided to move them out into a different org (namespace). The chosen namespace was x/ because it is short and doesn't convey any particular meaning. In time we can continue to use more meaningful namespaces, but for now I think we want to stabilize after the recent moves. The Zuul tenant configs report no errors (Jeremy Stanley gets the beverages for this); however, that does not mean all jobs will be happy. We will be around to help work people through job failures related to this migration. Please reach out and we'll go from there. 
On the infrastructure side of things we will also need to work to get our continuous deployment running again. This was a bit of an inception move for us as we do all our work through Gerrit and we had to take Gerrit down and "move" it. Finally thank you all for your patience today and thank you to everyone that helped make this possible. Clark From cboylan at sapwetik.org Sat Apr 20 18:10:59 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Sat, 20 Apr 2019 14:10:59 -0400 Subject: [Starlingx-discuss] Post OpenDev Repo Migration Issues Message-ID: <286711f4-fe69-4d49-bec0-236e072e0b68@www.fastmail.com> Hello, There are two quick things we want to point out that we've noticed after the OpenDev Gerrit manuevers. First, any commit Depends-On footers for unmerged changes will need to use https://review.opendev.org urls instead of https://review.openstack.org urls. This is due to a limitation in Zuul and how it sees the Gerrit server. Any Depends-On footers that use https://review.openstack.org are currently non-functional and they will not enforce the dependency relationship. Second, while we have fairly robust http(s) redirects in place for old urls (including those from cgit to gitea) any ssh repo urls will need to be updated to use the new canonical paths if a repo has moved. We are unable to redirect incoming ssh requests against Gerrit. If you need to know what the canonical path is go to the corresponding https repo url and the resulting destination is the canonical location. Again thank you for your patience, Clark From dtroyer at gmail.com Sat Apr 20 19:52:34 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Sat, 20 Apr 2019 14:52:34 -0500 Subject: [Starlingx-discuss] More OpenDev updates Message-ID: For those of you not following along in IRC here are my notes about things specific to StarlingX and the OpenDev transition: * All git remotes will need to eventually be updated. The gerrit remote WILL need to be updated to work as it uses ssh rather than https and can not be redirected. I have a script[-1] in progress to do this, it works for me but please run it with -n to see what it will do before you trust it. It is only changing git remotes so the risk is low but still... . .gitreview has been updated for us during the transition. You will not see this in the Gerrit queue as it was done direclty in the back-end repos while Gerrit was offline. You will see it in the git log and will need to do a git pull to refresh your working copies. * The repo name changes required some work in devstack itself plus our devstack jobs to address places where the repo name is used in the filesystem.[0] This means everywhere we assume we know the repo name in tox.ini and .zuul.yaml and other places we need to update it. The primary non-DevStack place I have seen this so far is pylint tox jobs that require other repos present to work. * At this point I have reviews up for the devstack jobs in update, integ, fault and config. The remaining ones yet to come...they will look a lot like the four already done. Also, fault and config depend on each other, wheee!!! I think I'll have to break up the plugins to get around that... * This Gerrit query will show the opendev stuff I have been doing under topic opendev-update: https://review.opendev.org/#/q/topic:opendev-update+(status:open+OR+status:merged) * Depends-On: footers must now use review.opendev.org in the URL. This is a Gerrit limitation that redirects can not be used. 
We had only two open reviews with Depends-On[1], I updated the commit message on both. Please verify that they are working as expected. (The original depends-on footer was not in the last block of text in the commit message, it may not have been previously working.) * I made a review in manifest[2] to use the new remotes but not change the destination paths. I'll leave it up to the build maintainers to decide if the destination directory names should also be changed. I've left a -1 on the review so it doesn't get blindly merged, if someone wants to pick it up and made further changes, feel free to do so. I am taking some weekend time, will check back in after a bit. If you have problems or questions please find me here, in #starlingx or anyone in #openstack-infra. Happy weekend everyone! dt [-1] https://github.com/starlingx-staging/tools-contrib/pull/8 for the PR or https://raw.githubusercontent.com/starlingx-staging/tools-contrib/8bea0e6d0650ec0e12648e3c5b749093e60203d7/misc/stx-remote-fix.sh for the raw file [0] Free Internet points to the first one to make Apache redirects work there too... :) [1] https://review.opendev.org/653910 https://review.opendev.org/653911 [2] https://review.opendev.org/653960 -- Dean Troyer dtroyer at gmail.com From build.starlingx at gmail.com Sun Apr 21 01:59:49 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 20 Apr 2019 21:59:49 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 275 - Failure! Message-ID: <852483577.214.1555811990948.JavaMail.javamailuser@localhost> Project: STX_DL_container_setup Build #: 275 Status: Failure Timestamp: 20190421T013129Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190421T013001Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190421T013001Z DOCKER_DL_ID: jenkins-master-20190421T013001Z-downloader PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190421T013001Z/logs DOCKER_DL_TAG: master-20190421T013001Z-downloader-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190421T013001Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Sun Apr 21 01:59:53 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 20 Apr 2019 21:59:53 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 72 - Still Failing! In-Reply-To: <2071989816.207.1555723814415.JavaMail.javamailuser@localhost> References: <2071989816.207.1555723814415.JavaMail.javamailuser@localhost> Message-ID: <1798235480.217.1555811994516.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 72 Status: Still Failing Timestamp: 20190421T013001Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190421T013001Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From dtroyer at gmail.com Sun Apr 21 03:02:14 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Sat, 20 Apr 2019 22:02:14 -0500 Subject: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 275 - Failure! 
In-Reply-To: <852483577.214.1555811990948.JavaMail.javamailuser@localhost> References: <852483577.214.1555811990948.JavaMail.javamailuser@localhost> Message-ID: On Sat, Apr 20, 2019 at 9:05 PM wrote: > Project: STX_DL_container_setup > Build #: 275 > Status: Failure > Timestamp: 20190421T013129Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190421T013001Z/logs As best I can tell this is due to a third party repo issue. All of the redirects in the OpenDev move seem to have worked, so \o/ we have gotten that far at least. dt -- Dean Troyer dtroyer at gmail.com From serverascode at gmail.com Sun Apr 21 13:03:45 2019 From: serverascode at gmail.com (Curtis) Date: Sun, 21 Apr 2019 09:03:45 -0400 Subject: [Starlingx-discuss] Announcing Packet.com Special Interest Group (SIG) Message-ID: Hi All, At the last TSC meeting we agreed to setup a SIG around our use of packet.com resources. The SIG will be temporary, say 4-6 moths, as we determine how we will manage and use packet.com. For the time the SIG exists all use of packet.com will funnel though it. I've put up an etherpad here: https://etherpad.openstack.org/p/stx-packet-sig We need to come up with a weekly meeting time first. Please put some options in that etherpad that would work for you and we will try to figure out a good time (or times). Also feel free to add to the agenda for the first, as yet unscheduled, meeting. :) Thanks, Curtis -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Sun Apr 21 16:04:24 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Sun, 21 Apr 2019 11:04:24 -0500 Subject: [Starlingx-discuss] More OpenDev updates In-Reply-To: References: Message-ID: On Sat, Apr 20, 2019 at 2:52 PM Dean Troyer wrote: > [-1] https://github.com/starlingx-staging/tools-contrib/pull/8 for the > PR or https://raw.githubusercontent.com/starlingx-staging/tools-contrib/8bea0e6d0650ec0e12648e3c5b749093e60203d7/misc/stx-remote-fix.sh > for the raw file I updated the script, now with more sed(1). Here is the new direct link: https://raw.githubusercontent.com/starlingx-staging/tools-contrib/3e311b467e35689d6727cf4c0c1624624e5f1743/misc/stx-remote-fix.sh dt -- Dean Troyer dtroyer at gmail.com From cheng1.li at intel.com Mon Apr 22 02:34:43 2019 From: cheng1.li at intel.com (Li, Cheng1) Date: Mon, 22 Apr 2019 02:34:43 +0000 Subject: [Starlingx-discuss] More OpenDev updates In-Reply-To: References: Message-ID: Hi Troyer, I can see 'review.opendev.org' on opendev.org [1], but 'review.openstack.org' on github[2]. Will github.com/openstack/stx-xxx still be used and synced with opendev.org/starlingx/xxx? Or we won't use github.com/openstack/stx-xx anymore? [1] https://opendev.org/starlingx/config/src/branch/master/.gitreview [2] https://github.com/openstack/stx-config/blob/master/.gitreview Thanks, Cheng -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Sunday, April 21, 2019 3:53 AM To: starlingx Subject: [Starlingx-discuss] More OpenDev updates For those of you not following along in IRC here are my notes about things specific to StarlingX and the OpenDev transition: * All git remotes will need to eventually be updated. The gerrit remote WILL need to be updated to work as it uses ssh rather than https and can not be redirected. I have a script[-1] in progress to do this, it works for me but please run it with -n to see what it will do before you trust it. It is only changing git remotes so the risk is low but still... . 
.gitreview has been updated for us during the transition. You will not see this in the Gerrit queue as it was done direclty in the back-end repos while Gerrit was offline. You will see it in the git log and will need to do a git pull to refresh your working copies. * The repo name changes required some work in devstack itself plus our devstack jobs to address places where the repo name is used in the filesystem.[0] This means everywhere we assume we know the repo name in tox.ini and .zuul.yaml and other places we need to update it. The primary non-DevStack place I have seen this so far is pylint tox jobs that require other repos present to work. * At this point I have reviews up for the devstack jobs in update, integ, fault and config. The remaining ones yet to come...they will look a lot like the four already done. Also, fault and config depend on each other, wheee!!! I think I'll have to break up the plugins to get around that... * This Gerrit query will show the opendev stuff I have been doing under topic opendev-update: https://review.opendev.org/#/q/topic:opendev-update+(status:open+OR+status:merged) * Depends-On: footers must now use review.opendev.org in the URL. This is a Gerrit limitation that redirects can not be used. We had only two open reviews with Depends-On[1], I updated the commit message on both. Please verify that they are working as expected. (The original depends-on footer was not in the last block of text in the commit message, it may not have been previously working.) * I made a review in manifest[2] to use the new remotes but not change the destination paths. I'll leave it up to the build maintainers to decide if the destination directory names should also be changed. I've left a -1 on the review so it doesn't get blindly merged, if someone wants to pick it up and made further changes, feel free to do so. I am taking some weekend time, will check back in after a bit. If you have problems or questions please find me here, in #starlingx or anyone in #openstack-infra. Happy weekend everyone! dt [-1] https://github.com/starlingx-staging/tools-contrib/pull/8 for the PR or https://raw.githubusercontent.com/starlingx-staging/tools-contrib/8bea0e6d0650ec0e12648e3c5b749093e60203d7/misc/stx-remote-fix.sh for the raw file [0] Free Internet points to the first one to make Apache redirects work there too... :) [1] https://review.opendev.org/653910 https://review.opendev.org/653911 [2] https://review.opendev.org/653960 -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Mon Apr 22 11:26:59 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 22 Apr 2019 06:26:59 -0500 Subject: [Starlingx-discuss] More OpenDev updates In-Reply-To: References: Message-ID: On Sun, Apr 21, 2019 at 9:35 PM Li, Cheng1 wrote: > I can see 'review.opendev.org' on opendev.org [1], but 'review.openstack.org' on github[2]. > Will github.com/openstack/stx-xxx still be used and synced with opendev.org/starlingx/xxx? > Or we won't use github.com/openstack/stx-xx anymore? None of the repos that changed to different namespaces are mirrored to github now, including starlingx/*. We are not planning to continue to mirror StarlingX to github. 
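For a single clone, the remote changes described in this thread can also be made by hand; a minimal sketch, using starlingx/config as the example repository (the local directory, the remote names 'origin' and 'gerrit', and the username are placeholders; the ssh port is the standard Gerrit one):

  cd stx-config
  git remote set-url origin https://opendev.org/starlingx/config.git
  # the Gerrit remote uses ssh and is not covered by the redirects, so update it as well
  git remote set-url gerrit ssh://<user>@review.opendev.org:29418/starlingx/config.git
  git remote -v    # verify both remotes now point at the new namespace
  git review -s    # optional: re-run git-review setup against the updated .gitreview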
dt -- Dean Troyer dtroyer at gmail.com From Marvin.Huang at windriver.com Mon Apr 22 12:44:39 2019 From: Marvin.Huang at windriver.com (Huang, Marvin) Date: Mon, 22 Apr 2019 12:44:39 +0000 Subject: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon In-Reply-To: <4C60D9C5C8176C47874FFF36647AA19E9D6476A8@ALA-MBD.corp.ad.wrs.com> References: <4C60D9C5C8176C47874FFF36647AA19E9D647241@ALA-MBD.corp.ad.wrs.com> <4C60D9C5C8176C47874FFF36647AA19E9D64742F@ALA-MBD.corp.ad.wrs.com>, <74D9C1EDDC44EF468303629CF9A2832C9CE47BC7@ALA-MBD.corp.ad.wrs.com> <4C60D9C5C8176C47874FFF36647AA19E9D6476A8@ALA-MBD.corp.ad.wrs.com> Message-ID: <74D9C1EDDC44EF468303629CF9A2832C9CE47DC2@ALA-MBD.corp.ad.wrs.com> Thanks Ovidiu! From: Poncea, Ovidiu Sent: Friday, April 19, 2019 5:39 AM To: Huang, Marvin; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon Hi Marvin, There is no need to wait for the primary tier storage backend to be configured, it's enabled by default at config_controller. Thus, this backend is already in 'configured' state. Ovidiu ________________________________ From: Huang, Marvin Sent: Thursday, April 18, 2019 11:38 PM To: Poncea, Ovidiu; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon Hi Ovidiu, How long (in worst case) should users wait for Ceph-backend be configured? Or what is the timeout value we should use for automated waiting/polling, after which we can considerate that there is likely an issue? Thanks, Marvin From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Wednesday, April 17, 2019 5:50 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon Change was merged, wiki was updated... ________________________________ From: Poncea, Ovidiu [Ovidiu.Poncea at windriver.com] Sent: Tuesday, April 16, 2019 10:36 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] For automation: Instalation workflow update will be merged soon Hi Folks, Be aware that we will merge a change soon that may impact automated deployments and testing. The gerrit is this: https://review.openstack.org/#/c/644256/ and it will make Ceph the default storage backend. Therefore, once merged, users will no longer have to run this (as currently stated by the wikis): echo ">>> Enable primary Ceph backend" system storage-backend-add ceph --confirmed echo ">>> Wait for primary ceph backend to be configured" echo ">>> This step really takes a long time" while [ $(system storage-backend-list | awk '/ceph-store/{print $8}') != 'configured' ]; do echo 'Waiting for ceph..'; sleep 5; done echo ">>> Ceph health" ceph -s Regards, Ovidiu -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Mon Apr 22 13:11:29 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 22 Apr 2019 13:11:29 +0000 Subject: [Starlingx-discuss] Agenda: Weekly Containerization Meeting Monday April 22 Message-ID: We will hold a meeting today. Planned agenda: 1. 
Sanity status/issues: First time application-apply stx-openstack failed on vbox due to timeout from unbalanced CPU load https://bugs.launchpad.net/starlingx/+bug/1825423 Multi-node system host-lock failed after swact during lab setup https://bugs.launchpad.net/starlingx/+bug/1824994 application-apply stx-openstack failed due to neutron pods failure https://bugs.launchpad.net/starlingx/+bug/1825045 2. Feature plan updates: https://docs.google.com/spreadsheets/d/1lMMclUmLMPTuk_a5URMMoWrJR4MbeA_UINnBliumg2Y/edit#gid=991138079 3. Test Team plans & status 4. Open topics -------------- next part -------------- An HTML attachment was scrubbed... URL: From Eric.MacDonald at windriver.com Mon Apr 22 14:08:58 2019 From: Eric.MacDonald at windriver.com (MacDonald, Eric) Date: Mon, 22 Apr 2019 14:08:58 +0000 Subject: [Starlingx-discuss] More OpenDev updates In-Reply-To: References: Message-ID: <210898B96CA058408C55992CCAD98676B9FC4BE0@ALA-MBD.corp.ad.wrs.com> Hi Dean, I ran your script but it didn't seem to do anything to my repo. [emacdona at yow-cgts4-lx:/localdisk/designer/emacdona/starlingx-0/cgcs-root/stx ] $ ./stx-remote-fix.sh -n stx.io-: https://opendev.org/stx-root.git git remote set-url starlingx https://opendev.org/stx-root.git [emacdona at yow-cgts4-lx:/localdisk/designer/emacdona/starlingx-0/cgcs-root/stx ] $ ./stx-remote-fix.sh stx.io-: https://opendev.org/stx-root.git [emacdona at yow-cgts4-lx:/localdisk/designer/emacdona/starlingx-0/cgcs-root/stx ] $ ls downloads git stx-clients stx-config stx-fault stx-gui stx-ha stx-integ stx-metal stx-nfv stx-remote-fix.sh stx-update stx-upstream It would be helpful if you circulated an email that specifically listed the operations that users need to execute that will allow us to bridge the namespace change in our local repos or to specifically recommend that we create new local repos with the namespace change. There are a lot of people affected by this update so a clear summary of what to do and maybe some checkpoints along the way would help the development community bridge this change. Thanks, Eric. > -----Original Message----- > From: Dean Troyer [mailto:dtroyer at gmail.com] > Sent: Saturday, April 20, 2019 3:53 PM > To: starlingx > Subject: [Starlingx-discuss] More OpenDev updates > > For those of you not following along in IRC here are my notes about > things specific to StarlingX and the OpenDev transition: > > * All git remotes will need to eventually be updated. The gerrit > remote WILL need to be updated to work as it uses ssh rather than > https and can not be redirected. I have a script[-1] in progress to > do this, it works for me but please run it with -n to see what it will > do before you trust it. It is only changing git remotes so the risk > is low but still... > > . .gitreview has been updated for us during the transition. You will > not see this in the Gerrit queue as it was done direclty in the > back-end repos while Gerrit was offline. You will see it in the git > log and will need to do a git pull to refresh your working copies. > > * The repo name changes required some work in devstack itself plus our > devstack jobs to address places where the repo name is used in the > filesystem.[0] This means everywhere we assume we know the repo name > in tox.ini and .zuul.yaml and other places we need to update it. The > primary non-DevStack place I have seen this so far is pylint tox jobs > that require other repos present to work. > > * At this point I have reviews up for the devstack jobs in update, > integ, fault and config. 
The remaining ones yet to come...they will > look a lot like the four already done. Also, fault and config depend > on each other, wheee!!! I think I'll have to break up the plugins to > get around that... > > * This Gerrit query will show the opendev stuff I have been doing > under topic opendev-update: > https://review.opendev.org/#/q/topic:opendev-update+(status:open+OR+status:merged) > > * Depends-On: footers must now use review.opendev.org in the URL. > This is a Gerrit limitation that redirects can not be used. We had > only two open reviews with Depends-On[1], I updated the commit message > on both. Please verify that they are working as expected. (The > original depends-on footer was not in the last block of text in the > commit message, it may not have been previously working.) > > * I made a review in manifest[2] to use the new remotes but not change > the destination paths. I'll leave it up to the build maintainers to > decide if the destination directory names should also be changed. > I've left a -1 on the review so it doesn't get blindly merged, if > someone wants to pick it up and made further changes, feel free to do > so. > > I am taking some weekend time, will check back in after a bit. If you > have problems or questions please find me here, in #starlingx or > anyone in #openstack-infra. > > Happy weekend everyone! > dt > > [-1] https://github.com/starlingx-staging/tools-contrib/pull/8 for the > PR or https://raw.githubusercontent.com/starlingx-staging/tools- > contrib/8bea0e6d0650ec0e12648e3c5b749093e60203d7/misc/stx-remote-fix.sh > for the raw file > [0] Free Internet points to the first one to make Apache redirects > work there too... :) > [1] https://review.opendev.org/653910 https://review.opendev.org/653911 > [2] https://review.opendev.org/653960 > > -- > Dean Troyer > dtroyer at gmail.com > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Mon Apr 22 14:11:02 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Mon, 22 Apr 2019 09:11:02 -0500 Subject: [Starlingx-discuss] How to update an installed image Message-ID: Hi team I would like to know more about the image update mechanism we have in starting X. I have a simplex system installed and I want to keep my system updated with the latest version released in http://mirror.starlingx.cengn.ca/mirror/ but I don't want to reinstall the full ISO again every week. Is there any way to do a sw update in the starling x system so I keep my infrastructure updated w/o having to reinstall the ISO? Thanks a lot Regards Victor Rodriguez From fungi at yuggoth.org Mon Apr 22 14:18:54 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 22 Apr 2019 14:18:54 +0000 Subject: [Starlingx-discuss] More OpenDev updates In-Reply-To: <210898B96CA058408C55992CCAD98676B9FC4BE0@ALA-MBD.corp.ad.wrs.com> References: <210898B96CA058408C55992CCAD98676B9FC4BE0@ALA-MBD.corp.ad.wrs.com> Message-ID: <20190422141854.knxjbwu7m7ywhamt@yuggoth.org> On 2019-04-22 14:08:58 +0000 (+0000), MacDonald, Eric wrote: [...] > I ran your script but it didn't seem to do anything to my repo. 
> > [emacdona at yow-cgts4-lx:/localdisk/designer/emacdona/starlingx-0/cgcs-root/stx ] $ ./stx-remote-fix.sh -n > > stx.io-: https://opendev.org/stx-root.git > > git remote set-url starlingx https://opendev.org/stx-root.git [...] It looks like it must have missed dropping the "stx-" predfix and adding the "starlingx/" namespace in its place. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From jim.somerville at windriver.com Mon Apr 22 15:00:00 2019 From: jim.somerville at windriver.com (Jim Somerville) Date: Mon, 22 Apr 2019 11:00:00 -0400 Subject: [Starlingx-discuss] V1 Review Request: Story 29990: libvirt and qemu patch reduction In-Reply-To: References: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> Message-ID: <4201487e-eab2-d314-952e-f5daf578b7a3@windriver.com> On 2019-04-18 4:04 p.m., Dean Troyer wrote: > On Thu, Apr 18, 2019 at 10:22 AM Jim Somerville > wrote: >> I've finished reducing the patches on libvirt and qemu. I was able to >> get rid of virtually all of the RHEL patches, replacing them with just a >> minor "support for running on CentOS" patch or two. This will make our >> lives a lot easier moving to newer versions. qemu went from 97 patches >> down to 14, and libvirt from 23 to 13. The STX patches themselves >> required very little rework, this was mostly a testing exercise in the >> container realm with things changing frequently, making it quite >> challenging. > > Awesome! Thanks. > >> Once you're satisfied with the review, I'll issue pull requests. Once >> you've pulled and created new branches, I'll follow up with the two >> commits, one referring to the new branches in the manifest, and the >> other with minor changes to the qemu spec file in the stx-integ repo. >> Linked so they both go in together. > > It looks like these are on the same upstream base version, correct? Yes, same upstream base. > We'll have to add a suffix but that isn't a problem. I'll use '-N' > for that so it doesn't look like part of the upstream version (we used > '.N' for the Nova stable branch in stx-nova, /me kicks self). I have > created stx-qemu/stx/v3.0.0-1 and stx-libvirt/stx/v4.7.0-1. Fire away > with the PRs. Will do. > >> One issue concerns me a bit, and that is the tis patch number. It >> starts counting from the last upstream commit, and with me removing >> patches, it is now lower than it used to be. If this is a real concern >> I could just add a fixed 100 to the gitrevcount in both qemu and libvirt >> build_data files, guaranteeing package versions will not collide with >> ones in the past. Your thoughts? > > Is this that number that is supposed to be based on the patch count? Yes. > I think we should get rid of that idea and just increment it every > time it need to be incremented. Overloading things like that just > makes everything more brittle. Agreed. I will start the number for both qemu and libvirt at 100 so there is no chance of a collision with an earlier released version of either package. -Jim > > Also... > > I still want to encourage folks to do dev work in the primary places > (Gerrit and starlngx-staging on GitHub), this is a very important part > of The Four Opens[0] that is fundamental to being part of the > OpenStack Foundation. In this case it isn't so much development as > cleanup but it still counts as working in the open. Updating a WIP PR > is just as doable as a WIP Gerrit review as things progress. 
And that > lets people find the work without having to know beforehand where it > is, even as in this case it was on GitHub anyway. > > [I am trying to not pick on Jim specifically here but I did recently > say something in a meeting about this particular work and I thought > this was a good place to expand on why I feel so strongly on this > topic. These principles are fundamental to StarlingX being accepted > as an OpenStack Foundation project and we _will_ be judged on things > like this. We already are (informally) in fact...] > > dt > > [0] The Four Opens: https://governance.openstack.org/tc/reference/opens.html > From bruce.e.jones at intel.com Mon Apr 22 16:09:41 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 22 Apr 2019 16:09:41 +0000 Subject: [Starlingx-discuss] Hackathon update and Fiberhome questions Message-ID: <9A85D2917C58154C960D95352B22818BD0722F1F@fmsmsx123.amr.corp.intel.com> There was a very productive hack-a-thon in China last week, led by Wei and Shuquan. Can you please share an update on the event with the community? There were attendees from Fiberhome (on cc:) that have follow-on questions about the project. I've posted them to an etherpad [0]. Most of the questions are related to the Containerization changes - if folks from that team can help answer them, that would be amazing. Thank you! Brucej [0] https://etherpad.openstack.org/p/stx-fiberhome-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Mon Apr 22 16:52:37 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 22 Apr 2019 11:52:37 -0500 Subject: [Starlingx-discuss] More OpenDev updates In-Reply-To: <210898B96CA058408C55992CCAD98676B9FC4BE0@ALA-MBD.corp.ad.wrs.com> References: <210898B96CA058408C55992CCAD98676B9FC4BE0@ALA-MBD.corp.ad.wrs.com> Message-ID: On Mon, Apr 22, 2019 at 9:08 AM MacDonald, Eric wrote: > I ran your script but it didn't seem to do anything to my repo. I am not sure what you started with in your remote, it got the name change but not the path, the only form that I thought did not require a namespace was git.starlingx.io/stx-XXX, did you have something else to start with? It is not going to work well at that level in the build tree. IIRC that only updates the stx-root remotes. The script is meant to be run in _each_ repo, it makes no assumptions about anything around it, only changing the remotes in the 'current' git repo, ie what you will see by typing 'git remote -v'. > It would be helpful if you circulated an email that specifically listed the operations that users need to execute that will allow us to bridge the namespace change in our local repos or to specifically recommend that we create new local repos with the namespace change. I am not making any specific recommendations on how to deal with the build environment workspace as I am not intimately familiar with it nor the assumptions built in to those scripts. That is why it does not (nor did the manifest change) attempt to rename any directories. 
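A sketch of the per-repo rewrite the script is intended to perform, run from inside a single repository; the sed expressions only cover the URL forms mentioned in this thread (git.starlingx.io/stx-*, and openstack/stx-* on the review host) and are illustrative rather than a replacement for the script:

  for remote in $(git remote); do
      old=$(git remote get-url "$remote")
      new=$(echo "$old" | sed \
          -e 's|git.starlingx.io/stx-|opendev.org/starlingx/|' \
          -e 's|review.openstack.org|review.opendev.org|' \
          -e 's|/openstack/stx-|/starlingx/|')
      if [ "$old" != "$new" ]; then
          echo "$remote: $old -> $new"
          git remote set-url "$remote" "$new"
      fi
  done
  git remote -v

The key point from the failure above is the last two expressions: the old name stx-root has to become starlingx/root, not just move to the new host.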
dt -- Dean Troyer dtroyer at gmail.com From Brent.Rowsell at windriver.com Mon Apr 22 16:56:42 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Mon, 22 Apr 2019 16:56:42 +0000 Subject: [Starlingx-discuss] Hackathon update and Fiberhome questions In-Reply-To: <9A85D2917C58154C960D95352B22818BD0722F1F@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD0722F1F@fmsmsx123.amr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB475EF3@ALA-MBD.corp.ad.wrs.com> Bruce, I have updated the etherpad below Brent From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Monday, April 22, 2019 12:10 PM To: starlingx-discuss at lists.starlingx.io; Hu, Wei W ; Shuquan Huang Cc: zhaowei7146 at fiberhome.com; hwang7073 at fiberhome.com Subject: [Starlingx-discuss] Hackathon update and Fiberhome questions There was a very productive hack-a-thon in China last week, led by Wei and Shuquan. Can you please share an update on the event with the community? There were attendees from Fiberhome (on cc:) that have follow-on questions about the project. I've posted them to an etherpad [0]. Most of the questions are related to the Containerization changes - if folks from that team can help answer them, that would be amazing. Thank you! Brucej [0] https://etherpad.openstack.org/p/stx-fiberhome-questions -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.smith at windriver.com Mon Apr 22 16:59:51 2019 From: tyler.smith at windriver.com (Smith, Tyler) Date: Mon, 22 Apr 2019 16:59:51 +0000 Subject: [Starlingx-discuss] Build issue Message-ID: Hello, Due to the opendev transition a commit [1] has gone in without its dependency [2] being met, which will cause build issues until [2] is merged. I suggest holding off on pulling until that happens. Thanks, Tyler [1] https://review.opendev.org/#/c/653821/2 [2] https://review.opendev.org/#/c/653086/5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From fungi at yuggoth.org Mon Apr 22 17:06:30 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Mon, 22 Apr 2019 17:06:30 +0000 Subject: [Starlingx-discuss] Build issue In-Reply-To: References: Message-ID: <20190422170629.oftvcsuzrxybymdh@yuggoth.org> On 2019-04-22 16:59:51 +0000 (+0000), Smith, Tyler wrote: > Due to the opendev transition a commit [1] has gone in without its > dependency [2] being met, which will cause build issues until [2] > is merged. I suggest holding off on pulling until that happens. [...] Per Clark's warning in http://lists.starlingx.io/pipermail/starlingx-discuss/2019-April/004178.html I recommend just updating the domain in Depends-On footers of commit messages for all open changes in your projects which match a Gerrit query like: message:"depends-on: https://review.openstack" -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... 
Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From tyler.smith at windriver.com Mon Apr 22 18:22:17 2019 From: tyler.smith at windriver.com (Smith, Tyler) Date: Mon, 22 Apr 2019 18:22:17 +0000 Subject: [Starlingx-discuss] Build issue In-Reply-To: References: Message-ID: The changes have been merged now, except for two non-critical ones [0], the build should be functional again Thanks, Tyler [0] https://review.opendev.org/#/c/653079/3 https://review.opendev.org/#/c/653085/2 From: Smith, Tyler [mailto:tyler.smith at windriver.com] Sent: Monday, April 22, 2019 1:00 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build issue Hello, Due to the opendev transition a commit [1] has gone in without its dependency [2] being met, which will cause build issues until [2] is merged. I suggest holding off on pulling until that happens. Thanks, Tyler [1] https://review.opendev.org/#/c/653821/2 [2] https://review.opendev.org/#/c/653086/5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From erich.cm.lists at yandex.com Mon Apr 22 20:48:44 2019 From: erich.cm.lists at yandex.com (Erich Cordoba) Date: Mon, 22 Apr 2019 13:48:44 -0700 Subject: [Starlingx-discuss] API requests: stx-metal In-Reply-To: References: Message-ID: <43962841555966124@iva4-6593cae50902.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Mon Apr 22 21:31:46 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 22 Apr 2019 21:31:46 +0000 Subject: [Starlingx-discuss] [ Test ] meeting agenda - 04/23/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC7D11@FMSMSX114.amr.corp.intel.com> Agenda for 04/23 1. Release decision, timelines planned - 10 min, Ada 2. Containers test plan sync - 20 min, Numan, Jose 3. Automated tests sharing status (upload to the repo) - 10 min, Numan, Ada 4. Opens - 20 min, All Regards Ada From maria.g.perez.ibarra at intel.com Mon Apr 22 22:20:27 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 22 Apr 2019 22:20:27 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190421 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-21 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 57] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 58 TCS [Fail : 58 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS - application-apply stx-openstack failed due to neutron pods failure - high ovs-dpdk cpu usage : https://bugs.launchpad.net/starlingx/+bug/1825045 - About virtual results we don't have results yet due to we working on integrate ceph suite For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From build.starlingx at gmail.com Mon Apr 22 23:31:02 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 22 Apr 2019 19:31:02 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_repo_sync - Build # 277 - Failure! Message-ID: <1608274631.222.1555975863829.JavaMail.javamailuser@localhost> Project: STX_repo_sync Build #: 277 Status: Failure Timestamp: 20190422T233018Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190422T233000Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190422T233000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190422T233000Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Mon Apr 22 23:31:06 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 22 Apr 2019 19:31:06 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! Message-ID: <1290207736.225.1555975867328.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 74 Status: Failure Timestamp: 20190422T233000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190422T233000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From cindy.xie at intel.com Tue Apr 23 01:34:33 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 23 Apr 2019 01:34:33 +0000 Subject: [Starlingx-discuss] FM containerization collaboration In-Reply-To: <7242A3DC72E453498E3D783BBB134C3E9DDD6CEE@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F35F15FA9@SHSMSX104.ccr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C46645B7F51@fmsmsx107.amr.corp.intel.com> <9BAB5B7CAF57C3459E4636391F1071CE052CF2A1@shsmsx102.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35F1A39A@SHSMSX104.ccr.corp.intel.com> <7242A3DC72E453498E3D783BBB134C3E9DDD6843@ALA-MBD.corp.ad.wrs.com> <9BAB5B7CAF57C3459E4636391F1071CE052CF815@shsmsx102.ccr.corp.intel.com> <7242A3DC72E453498E3D783BBB134C3E9DDD6CEE@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F24815@SHSMSX104.ccr.corp.intel.com> + mailing list as I think the technical info in this thread is beneficial to the community audience as well. From: Liu, Tao [mailto:Tao.Liu at windriver.com] Sent: Monday, April 22, 2019 11:56 PM To: An, Ran1 ; Xie, Cindy ; Arevalo, Mario Alfredo C ; Sun, Austin ; Arce Moreno, Abraham ; Lara, Cesar ; Botello Ortega, Luis ; Miller, Frank Subject: RE: FM containerization collaboration Hi Ran, See comments inline: Tao From: An, Ran1 [mailto:ran1.an at intel.com] Sent: Friday, April 19, 2019 12:25 PM To: Liu, Tao; Xie, Cindy; Arevalo, Mario Alfredo C; Sun, Austin; Arce Moreno, Abraham; Lara, Cesar; Botello Ortega, Luis; Miller, Frank Subject: RE: FM containerization collaboration Hi Tao Thanks for your comments. a) Just a verify: Is it right that VIM alarm interfaces use fm_api to raise/clear alarms and fm_rest_api to audit alarms currently? So it is required to change VIM alarm interfaces using POST/PUT/DELETE fm_rest_api? TL> Currently, the VIM alarm handlers use fm_api to raise/clear/audit alarms, and nfv orchestration uses fm_rest_api. 
Yes, the alarm handlers will interface with the containerized fm_rest_api (via fmclient) to raise/clear/audit the instance alarms, and generate instances event logs. b) There is no task to trace "modify glance/neutron alarm interfaces to the containerized FM restful api". Are they not required in story 2004008? Or they are included in current task? TL> I don't think there is a requirement for glance/neutron to raise alarms. Thanks Ran From: Liu, Tao [mailto:Tao.Liu at windriver.com] Sent: Wednesday, April 17, 2019 10:42 PM To: Xie, Cindy >; An, Ran1 >; Arevalo, Mario Alfredo C >; Sun, Austin >; Arce Moreno, Abraham >; Lara, Cesar >; Botello Ortega, Luis >; Miller, Frank > Subject: RE: FM containerization collaboration Hi Ran, I added inline comments below. Tao From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, April 16, 2019 8:53 PM To: An, Ran1; Arevalo, Mario Alfredo C; Sun, Austin; Arce Moreno, Abraham; Lara, Cesar; Botello Ortega, Luis; Miller, Frank; Liu, Tao Subject: RE: FM containerization collaboration + Liu Tao from WR into the loop for the tech discussions. From: An, Ran1 Sent: Wednesday, April 17, 2019 8:13 AM To: Arevalo, Mario Alfredo C >; Xie, Cindy >; Sun, Austin >; Arce Moreno, Abraham >; Lara, Cesar >; Botello Ortega, Luis >; Frank.Miller at windriver.com Subject: RE: FM containerization collaboration Hi Mario: I investigated SB 2004008 and have following questions: 1) clarify what the final goal for containerized fm service: a) Is the goal to apply a containerized fm service with POST/PUT/GET/DELETE restful apis exported, which can serve glance, neutron and horizon and stx-nfvi to set/clear/query alarms? Is my understanding right? TL> The goal is to add the FM panel into the containerized horizon instance along with alarm banner. The VIM alarm interfaces will be modified to use the containerized FM REST API to raise/clear/audit the openstack alarms/events. b) If a) is right, what's the rest apis required? Should we map all fm_api to fm_rest_api? TL> No, VIM alarm interfaces will use the POST/PUT/GET/DELETE restful apis to raise/clear/audit alarms. c) could you let me know more about the new database plan? TL> The backend database will be configured to use mysql via helm charts. d) what does " Task 0000 Add calls from python-fmclient to fm-manager " mean in your doc? TL> I don't think this task is required. I will leave it to Mario to clarify. 2) could you share the current status of tests about helm charts and armada manifest? a) Could we apply a containerized fm service by "system application-apply" now? b) Could we launch a fm service by helm charts with tiller? Is anything I can help here? TL> Questions for Mario. Thanks Ran From: Arevalo, Mario Alfredo C Sent: Tuesday, April 16, 2019 2:15 AM To: Xie, Cindy >; An, Ran1 >; Sun, Austin >; Arce Moreno, Abraham >; Lara, Cesar >; Botello Ortega, Luis >; Frank.Miller at windriver.com Subject: RE: FM containerization collaboration Hi Cindy, Good day, unfortunately, It is not possible for me to attend the meeting, however I wonder it is possible to move it to Wednesday. Best regards. Mario. ________________________________ From: Xie, Cindy Sent: Sunday, April 14, 2019 4:26 PM To: An, Ran1; Sun, Austin; Arevalo, Mario Alfredo C; Arce Moreno, Abraham; Lara, Cesar; Botello Ortega, Luis; Frank.Miller at windriver.com Subject: FM containerization collaboration When: Tuesday, April 16, 2019 6:00 AM-7:00 AM. 
Where: Skype Meeting Cesar/Mario, Ran wants to see if it's helpful to join SB#2004008 to accelerate the remaining several tasks. Want to sync-up w/ you to avoid the overlap. Please let me know if the slot works OK. Thx. - cindy ......................................................................................................................................... --> Join Skype Meeting Trouble Joining? Try Skype Web App Join by phone +1(916)356-2663 (or your local bridge access #) Choose bridge 5. (Global) English (United States) Find a local number Conference ID: 6636013518 Forgot your dial-in PIN? |Help [!OC([1033])!] ......................................................................................................................................... Skype users: Recordings are subject to the Audio/Video Recording Policy. -------------- next part -------------- An HTML attachment was scrubbed... URL: From huang.shuquan at 99cloud.net Tue Apr 23 07:13:59 2019 From: huang.shuquan at 99cloud.net (Shuquan Huang) Date: Tue, 23 Apr 2019 15:13:59 +0800 Subject: [Starlingx-discuss] Hackathon update and Fiberhome questions In-Reply-To: <9A85D2917C58154C960D95352B22818BD0722F1F@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD0722F1F@fmsmsx123.amr.corp.intel.com> Message-ID: <6010C8E5-C74F-4912-BF23-94E90681DC0E@99cloud.net> StarlingX is the Top 2 project with 26 registers.(Top 1 is OpenStack.) We have developers/users from China Mobile, China Unicom, Fiberhome, Intel, Huawei, 99Cloud, ZTE. The StarlingX team submitted 22 bugfix and 3 bp. For further information, please refer to https://etherpad.openstack.org/p/OpenSource-Hackathon-9-Shenzhen . Besides coding and hacking, we also discussed about the use case and community building. After talking with Ildiko from OSF, We planed to set up a regular meeting for China users for requirements collection and publish some white papers about the best practices in China. On Apr 23, 2019, at 12:09 AM, Jones, Bruce E wrote: There was a very productive hack-a-thon in China last week, led by Wei and Shuquan. Can you please share an update on the event with the community? There were attendees from Fiberhome (on cc:) that have follow-on questions about the project. I’ve posted them to an etherpad [0]. Most of the questions are related to the Containerization changes – if folks from that team can help answer them, that would be amazing. Thank you! Brucej [0] https://etherpad.openstack.org/p/stx-fiberhome-questions _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.kunpeng at 99cloud.net Tue Apr 23 11:22:49 2019 From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=) Date: Tue, 23 Apr 2019 19:22:49 +0800 Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? Message-ID: <75EE28BA-C061-48A6-AF48-58DCA482894E@99cloud.net> Hi all I am trying to deploy STX in kylin system[1], an operating system base on ubuntu 16.04. I don’t know how to deploy STX in an installed system. I know there are many dependent components and softwares to be installed. I am trying to install those softwares one by one, but I don’t know whether this way is right or not. Does somebody try to deploy STX in ubuntu? Can you help me how to work for it? 
Thanks Kunpeng [1]http://en.kylinos.cn/products_detail/productId=21.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaosong_1250 at 163.com Tue Apr 23 12:00:00 2019 From: gaosong_1250 at 163.com (gao.song) Date: Tue, 23 Apr 2019 20:00:00 +0800 (CST) Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Message-ID: Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cindy.xie at intel.com Tue Apr 23 12:20:59 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 23 Apr 2019 12:20:59 +0000 Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? In-Reply-To: <75EE28BA-C061-48A6-AF48-58DCA482894E@99cloud.net> References: <75EE28BA-C061-48A6-AF48-58DCA482894E@99cloud.net> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F25393@SHSMSX104.ccr.corp.intel.com> Kunpeng The current STX version doesn’t support deployment on Ubuntu yet – Victor is working on the build for Ubuntu but the functionality of the image is not yet testable. We are interested to understand the requirements: does 99cloud have customer who is asking to have StarlingX on Ubuntu? What is the user scenario? And we are very much welcome your contribution to multi-OS effort lead by Cesar/Victor. Thx. - cindy From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Tuesday, April 23, 2019 7:23 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? Hi all I am trying to deploy STX in kylin system[1], an operating system base on ubuntu 16.04. I don’t know how to deploy STX in an installed system. I know there are many dependent components and softwares to be installed. I am trying to install those softwares one by one, but I don’t know whether this way is right or not. Does somebody try to deploy STX in ubuntu? Can you help me how to work for it? Thanks Kunpeng [1]http://en.kylinos.cn/products_detail/productId=21.html -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Tue Apr 23 13:12:26 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 23 Apr 2019 13:12:26 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190421 In-Reply-To: References: Message-ID: Maria: For the AIO-Simplex and AIO-Duplex results you indicate the application-apply failure is due to 1825045 but that LP is associated with switch ports being down. Can you share why you think the failures seen in sanity here are equivalent to this LP? I'd like to know if this is a duplicate or we have a different failure that looks similar. 
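A few triage commands that may help separate a neutron pod failure from the ovs-dpdk port issue tracked in that LP; the pod name is a placeholder and the 'openstack' namespace is the usual one for stx-openstack, but both should be confirmed on the failing system:

  system application-list                                   # state of the stx-openstack apply
  kubectl -n openstack get pods | grep -i neutron
  kubectl -n openstack describe pod <neutron-pod-name>      # placeholder pod name
  kubectl -n openstack logs <neutron-pod-name> --previous   # placeholder pod name
  ovs-vsctl show                                            # port/link state if ovs-dpdk is suspected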
Frank From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Monday, April 22, 2019 6:20 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190421 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-21 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 57] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 58 TCS [Fail : 58 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS - application-apply stx-openstack failed due to neutron pods failure - high ovs-dpdk cpu usage : https://bugs.launchpad.net/starlingx/+bug/1825045 - About virtual results we don't have results yet due to we working on integrate ceph suite For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Apr 23 13:17:05 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 23 Apr 2019 13:17:05 +0000 Subject: [Starlingx-discuss] Community Call (April 24, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A2A7F4@ALA-MBD.corp.ad.wrs.com> Reminder of tomorrow's Community call - please feel free to add to the agenda at [0]. Currently, we just have the sub-projects on the agenda, so there's room for more items. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1500_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190424T1400 From bruce.e.jones at intel.com Tue Apr 23 13:17:49 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 23 Apr 2019 13:17:49 +0000 Subject: [Starlingx-discuss] Distro.openstack meeting notes 4/23/19 Message-ID: <9A85D2917C58154C960D95352B22818BD0723CA5@fmsmsx123.amr.corp.intel.com> Meeting notes and agenda for the 4/23 meeting * No meeting next week due to Open Infra Summit. * Bug scrub! https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.openstack. No changes to any bugs. * Bruce to update the spreadsheet from Shuquan's email update. * Yongli's work on "cleanup orphan instances" is hitting an issue that there are no Cores for this part of Nova. Alex to discuss at the Summit. * Numa migration fixes ready to merge? Not yet * OpenDev team does not support running Zuul tests against our staging repos (e.g. stx-nova). This would be an issue if we planned to support these branches long term, but this is a one cycle blip. Dean will run the required tests by hand (in DevStack). -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Frank.Miller at windriver.com Tue Apr 23 13:40:53 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 23 Apr 2019 13:40:53 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! In-Reply-To: <1290207736.225.1555975867328.JavaMail.javamailuser@localhost> References: <1290207736.225.1555975867328.JavaMail.javamailuser@localhost> Message-ID: Cesar: Scott is out this week so is there someone on the Build subteam who can investigate this failure? Is it related to the repo moves to OpenDev from the weekend? Frank -----Original Message----- From: build.starlingx at gmail.com [mailto:build.starlingx at gmail.com] Sent: Monday, April 22, 2019 7:31 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! Project: STX_build_master_master Build #: 74 Status: Failure Timestamp: 20190422T233000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190422T233000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From Don.Penney at windriver.com Tue Apr 23 13:48:31 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Tue, 23 Apr 2019 13:48:31 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! In-Reply-To: References: <1290207736.225.1555975867328.JavaMail.javamailuser@localhost> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA47CC03@ALA-MBD.corp.ad.wrs.com> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190422T233000Z/logs/jenkins-STX_repo_sync-277.log The repo sync job failed. I'm going to update the job to point to the renamed manifest repo and kick it off again. -----Original Message----- From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Tuesday, April 23, 2019 9:41 AM To: 'Lara, Cesar' Cc: build.starlingx at gmail.com; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! Cesar: Scott is out this week so is there someone on the Build subteam who can investigate this failure? Is it related to the repo moves to OpenDev from the weekend? Frank -----Original Message----- From: build.starlingx at gmail.com [mailto:build.starlingx at gmail.com] Sent: Monday, April 22, 2019 7:31 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! Project: STX_build_master_master Build #: 74 Status: Failure Timestamp: 20190422T233000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190422T233000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From erich.cm.lists at yandex.com Tue Apr 23 13:51:16 2019 From: erich.cm.lists at yandex.com (Erich Cordoba) Date: Tue, 23 Apr 2019 06:51:16 -0700 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! In-Reply-To: References: <1290207736.225.1555975867328.JavaMail.javamailuser@localhost> Message-ID: <8541491556027476@myt5-f1576e7b5bad.qloud-c.yandex.net> An HTML attachment was scrubbed... 
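For anyone re-pointing a local workspace at the renamed manifest after the OpenDev migration, a minimal sketch of what that looks like with the repo tool (the manifest URL, manifest file name and MY_REPO_ROOT_DIR variable below are assumptions -- check the current build guide for the authoritative values):

# Re-initialize the workspace against the relocated manifest repository.
cd "$MY_REPO_ROOT_DIR"
repo init -u https://opendev.org/starlingx/manifest.git -m default.xml
# Force-sync so projects whose upstream URL changed are re-fetched cleanly.
repo sync --force-sync -j8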
URL: From fungi at yuggoth.org Tue Apr 23 13:53:46 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Tue, 23 Apr 2019 13:53:46 +0000 Subject: [Starlingx-discuss] Distro.openstack meeting notes 4/23/19 In-Reply-To: <9A85D2917C58154C960D95352B22818BD0723CA5@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BD0723CA5@fmsmsx123.amr.corp.intel.com> Message-ID: <20190423135346.7uce63bidiwi3mdz@yuggoth.org> On 2019-04-23 13:17:49 +0000 (+0000), Jones, Bruce E wrote: [...] > OpenDev team does not support running Zuul tests against our > staging repos (e.g. stx-nova). [...] I know this was just a summary, and I'm probably missing a lot more explanatory context from the meeting, but just to clarify the position[*] of the OpenDev Sysadmins: we can't reliably support acting as the primary CI system testing changes/pull requests for projects hosted outside of OpenDev's code review system. We do have an established pattern of acting as a "third party" CI system, reporting on proposed changes to externally-hosted dependencies of projects hosted in OpenDev (in the form of integration testing to find out whether those changes will break the way those dependencies are being used), but this is a fair amount of setup for repositories you're planning to get rid of within a year. You can also quite easily incorporate those external dependencies into tests run for projects hosted in OpenDev, by having your jobs fetch the source code from those external hosting sites. A caveat, however, is that we've observed a measurable amount of nondeterministic failure (particularly in the form of network blips and random API errors) which arise from cloning remote repositories in jobs, and this chance increases with the size of the repository. [*] http://lists.openstack.org/pipermail/openstack-infra/2019-January/006269.html -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Don.Penney at windriver.com Tue Apr 23 13:56:25 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Tue, 23 Apr 2019 13:56:25 +0000 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA47CC03@ALA-MBD.corp.ad.wrs.com> References: <1290207736.225.1555975867328.JavaMail.javamailuser@localhost> <6703202FD9FDFF4A8DA9ACF104AE129FBA47CC03@ALA-MBD.corp.ad.wrs.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA47CC69@ALA-MBD.corp.ad.wrs.com> The new build is past the point of previous failure. I'll keep an eye on it as it progresses. -----Original Message----- From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Tuesday, April 23, 2019 9:49 AM To: Miller, Frank; 'Lara, Cesar' Cc: build.starlingx at gmail.com; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190422T233000Z/logs/jenkins-STX_repo_sync-277.log The repo sync job failed. I'm going to update the job to point to the renamed manifest repo and kick it off again. -----Original Message----- From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Tuesday, April 23, 2019 9:41 AM To: 'Lara, Cesar' Cc: build.starlingx at gmail.com; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! 
Cesar: Scott is out this week so is there someone on the Build subteam who can investigate this failure? Is it related to the repo moves to OpenDev from the weekend? Frank -----Original Message----- From: build.starlingx at gmail.com [mailto:build.starlingx at gmail.com] Sent: Monday, April 22, 2019 7:31 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! Project: STX_build_master_master Build #: 74 Status: Failure Timestamp: 20190422T233000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190422T233000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From wei.w.hu at intel.com Tue Apr 23 14:59:20 2019 From: wei.w.hu at intel.com (Hu, Wei W) Date: Tue, 23 Apr 2019 14:59:20 +0000 Subject: [Starlingx-discuss] Hackathon update and Fiberhome questions In-Reply-To: <6010C8E5-C74F-4912-BF23-94E90681DC0E@99cloud.net> References: <9A85D2917C58154C960D95352B22818BD0722F1F@fmsmsx123.amr.corp.intel.com> <6010C8E5-C74F-4912-BF23-94E90681DC0E@99cloud.net> Message-ID: Thanks Shuquan for the update. Yes, we are very pleased to see that new partners to play more role in the community. One more thing is that China Mobile team has sent 5 people there for OpenStack and StarlingX. They would like to contribute from user perspective as well. From: Shuquan Huang [mailto:huang.shuquan at 99cloud.net] Sent: Tuesday, April 23, 2019 3:14 PM To: Jones, Bruce E Cc: starlingx-discuss at lists.starlingx.io; Hu, Wei W ; zhaowei7146 at fiberhome.com; hwang7073 at fiberhome.com Subject: Re: [Starlingx-discuss] Hackathon update and Fiberhome questions StarlingX is the Top 2 project with 26 registers.(Top 1 is OpenStack.) We have developers/users from China Mobile, China Unicom, Fiberhome, Intel, Huawei, 99Cloud, ZTE. The StarlingX team submitted 22 bugfix and 3 bp. For further information, please refer to https://etherpad.openstack.org/p/OpenSource-Hackathon-9-Shenzhen. Besides coding and hacking, we also discussed about the use case and community building. After talking with Ildiko from OSF, We planed to set up a regular meeting for China users for requirements collection and publish some white papers about the best practices in China. On Apr 23, 2019, at 12:09 AM, Jones, Bruce E > wrote: There was a very productive hack-a-thon in China last week, led by Wei and Shuquan. Can you please share an update on the event with the community? There were attendees from Fiberhome (on cc:) that have follow-on questions about the project. I’ve posted them to an etherpad [0]. Most of the questions are related to the Containerization changes – if folks from that team can help answer them, that would be amazing. Thank you! 
Brucej [0] https://etherpad.openstack.org/p/stx-fiberhome-questions _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Apr 23 16:08:56 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 23 Apr 2019 18:08:56 +0200 Subject: [Starlingx-discuss] OSF Lounge - project demo space In-Reply-To: <205821B0-025F-410F-AF25-5F6CFC1382BD@gmail.com> References: <205821B0-025F-410F-AF25-5F6CFC1382BD@gmail.com> Message-ID: <6C47D41E-C38F-4D1D-8380-D7369590BBEF@gmail.com> Hi, As I mentioned earlier there will be a shared booth space available at the Foundation Lounge in Denver for project demos and office hours. Please sign up for slots when you would like to use the booth area here: https://docs.google.com/spreadsheets/d/1ph5neMyLBFtl50hTwXfluZNFFWVF4EIu0a2cY-wHPO4/edit#gid=0 Please let me know if you have any questions. Thanks, Ildikó > On 2019. Apr 12., at 15:24, Ildiko Vancsa wrote: > > Hi StarlingX Community, > > I’m reaching out to let you know that we will have an OpenStack Foundation lounge at the Open Infrastructure Summit in Denver where there will be a demo spot for OSF projects. We will have a sign-up sheet to share the space among the projects to show demos and hold office hours. > > Stay tuned for further information and let me know if you have further questions at the meantime. > > Thanks, > Ildikó > > From maria.g.perez.ibarra at intel.com Tue Apr 23 17:42:03 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 23 Apr 2019 17:42:03 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190421 In-Reply-To: References: Message-ID: Hello Frank, The issue is similar because application-apply fails. The following pods are not running on duplex and simplex environments : 13:41:23 [2019-04-22T18:41:23.912Z] Kube System Services :: Check pods status and kube-system services... | FAIL | 13:41:23 [2019-04-22T18:41:23.912Z] 'openstack libvirt-libvirt-default-fwx24 0/1 Init:0/1 0 4m58s 10.10.53.3 controller-0 13:41:23 [2019-04-22T18:41:23.912Z] openstack libvirt-libvirt-default-j5bxk 0/1 Init:0/1 0 4m58s 10.10.53.4 controller-1 13:41:23 [2019-04-22T18:41:23.912Z] openstack neutron-db-init-pmmg8 0/1 Init:0/1 0 4m42s controller-0 13:41:23 [2019-04-22T18:41:23.912Z] openstack neutron-db-sync-zc97h 0/1 Init:0/1 0 4m42s controller-0 13:41:23 [2019-04-22T18:41:23.912Z] openstack neutron-dhcp-agent-controller-0-a762cb46-wjk52 0/1 Init:0/1 0 4m43s 10.10.53.3 controller-0 13:41:23 [2019-04-22T18:41:23.912Z] openstack neutron-dhcp-agent-controller-1-347ae4cb-2pmkx 0/1 Init:0/1 0 4m44s 10.10.53.4 controller-1 13:41:23 [2019-04-22T18:41:23.912Z] openstack neutron-ks-endpoints-wcc74 0/3 Init:0/1 0 4m42s As you mentioned the root cause is not the same because we are not using bond interfaces, we are double checking to discard temporal issues with our network, if the issue happens again we will create a bug. Regards Maria G. 
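For reference, a minimal sketch of how pods stuck in Init:0/1 like the ones above are usually narrowed down (the pod and namespace names are taken from the listing above; the init-container name passed to -c is an assumption -- "kubectl describe" reports the real one):

# Events at the bottom of describe usually show what the init container is waiting for.
kubectl -n openstack describe pod neutron-dhcp-agent-controller-0-a762cb46-wjk52
# Logs of the init container itself; replace "init" with the name shown by describe.
kubectl -n openstack logs neutron-dhcp-agent-controller-0-a762cb46-wjk52 -c init
# Check whether the dependency jobs the neutron pods gate on ever completed.
kubectl -n openstack get jobs | grep neutron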
From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Tuesday, April 23, 2019 8:12 AM To: Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190421 Maria: For the AIO-Simplex and AIO-Duplex results you indicate the application-apply failure is due to 1825045 but that LP is associated with switch ports being down. Can you share why you think the failures seen in sanity here are equivalent to this LP? I'd like to know if this is a duplicate or we have a different failure that looks similar. Frank From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Monday, April 22, 2019 6:20 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190421 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-21 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 57] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 58 TCS [Fail : 58 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS - application-apply stx-openstack failed due to neutron pods failure - high ovs-dpdk cpu usage : https://bugs.launchpad.net/starlingx/+bug/1825045 - About virtual results we don't have results yet due to we working on integrate ceph suite For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Tue Apr 23 19:34:23 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 23 Apr 2019 15:34:23 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_publish - Build # 190 - Failure! Message-ID: <21841565.232.1556048064684.JavaMail.javamailuser@localhost> Project: STX_publish Build #: 190 Status: Failure Timestamp: 20190423T193403Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190423T135120Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190423T135120Z OS: centos PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190423T135120Z/logs TIMESTAMP: 20190423T135120Z PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190423T135120Z/inputs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190423T135120Z/logs MASTER_JOB_NAME: STX_build_master_master PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190423T135120Z/outputs MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos From build.starlingx at gmail.com Tue Apr 23 19:34:27 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 23 Apr 2019 15:34:27 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 75 - Still Failing! 
In-Reply-To: <1443888931.223.1555975864671.JavaMail.javamailuser@localhost> References: <1443888931.223.1555975864671.JavaMail.javamailuser@localhost> Message-ID: <51449643.235.1556048068607.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 75 Status: Still Failing Timestamp: 20190423T135120Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190423T135120Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: true BUILD_CONTAINERS_STABLE: true From ada.cabrales at intel.com Tue Apr 23 20:50:13 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 23 Apr 2019 20:50:13 +0000 Subject: [Starlingx-discuss] [ Test ] meeting minutes - 04/23/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC8693@FMSMSX114.amr.corp.intel.com> Agenda for 04/23 Attendees: Elio, Bill, JP, Richo, Ada, Jose, JC, Fernando, Numan, Cristopher, Bruce 1. Release decision, timelines planned - 10 min, Ada - New dates presented - Distributed cloud will be out for this release, and will be integrated later. - Ada to create a consolidated report for the progress of feature testing (Tests, pass/fail, etc) - Ada to update the release plan with current status for feature testing activities. - Numan showed the regression test plan and the summary table. Ada to mimic it for the feature testing. 2. Containers test plan sync - 20 min, Numan, Jose - Numan has uploaded the file with their test cases (~45). Jose to check it and verify we are not duplicating tests. We have around 60 scenarios, and around 10 with steps. - Feedback has been requested through the mailing list. We are getting comments on them. 3. Automated tests sharing status (upload to the repo) - 10 min, Numan, Ada - Intel Internal process has to be followed in order to get the code out. Ada working on it. Template to be delivered by EOW. - Numan's team working on removing internal dependencies to their labs. - Tests should be adjusted to using the new Openstack commands (instead of the neutron/nova/etc CLI). 4. Opens - 20 min, All - Elio - + Automated tests - from the previous 200: 45 fixed, 11 reworked. + 103 new test cases, struggling with gnocci tests. Help requested to JP. + Help required on security test cases - Fernando already working on it, changes introduced by the containers inclusion. + Also continue the work on the test cases. + Make sure of using OpenStack commands only. - Numan - What happened with getting a testing environment in the open? + Bruce: we are working on setting an internal (intel) cloud. We already started conversations with CENGN on how to host a public cloud. Regards Ada > -----Original Message----- > From: Cabrales, Ada [mailto:ada.cabrales at intel.com] > Sent: Monday, April 22, 2019 4:32 PM > To: 'starlingx-discuss at lists.starlingx.io' > Subject: [Starlingx-discuss] [ Test ] meeting agenda - 04/23/2019 > > Agenda for 04/23 > > 1. Release decision, timelines planned - 10 min, Ada 2. Containers test plan sync > - 20 min, Numan, Jose 3. Automated tests sharing status (upload to the repo) - > 10 min, Numan, Ada 4. 
Opens - 20 min, All > > Regards > Ada > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From jose.perez.carranza at intel.com Tue Apr 23 21:32:37 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Tue, 23 Apr 2019 21:32:37 +0000 Subject: [Starlingx-discuss] [Docs} Configuration INI file structure Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A969AF2@fmsmsx101.amr.corp.intel.com> Hi Docs team Is there any wiki or document where is explained the structure and how to use the configuration INI file used on the `config_controller --config-file` ? Regards, José From vm.rod25 at gmail.com Tue Apr 23 23:01:59 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 23 Apr 2019 18:01:59 -0500 Subject: [Starlingx-discuss] How to update an installed image In-Reply-To: References: Message-ID: Hi Based on recommendations from Michael I am going to rewrite my question: I have a server, all in one with STX simplex configuration but I installed the ISO like a month ago, now I want to get the latest version of STX I think this a really important part of the project. I don't see myself as sysadmin with a new CVE fixed in the latest version of starling x and have to reinstall the full iso in all my nodes. I was pretty sure we had this component like starling x update. Any feedback more than welcome, if this is already a project under development is perfect if not, we might spend some time discussing it Regards Victor Rodriguez Something like preupg or update-manager in the case of Centos and Ubuntu On Mon, Apr 22, 2019 at 9:11 AM Victor Rodriguez wrote: > > Hi team > > I would like to know more about the image update mechanism we have in > starting X. I have a simplex system installed and I want to keep my > system updated with the latest version released in > http://mirror.starlingx.cengn.ca/mirror/ but I don't want to reinstall > the full ISO again every week. Is there any way to do a sw update in > the starling x system so I keep my infrastructure updated w/o having > to reinstall the ISO? > > Thanks a lot > > Regards > > Victor Rodriguez From vm.rod25 at gmail.com Tue Apr 23 23:43:23 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 23 Apr 2019 18:43:23 -0500 Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F25393@SHSMSX104.ccr.corp.intel.com> References: <75EE28BA-C061-48A6-AF48-58DCA482894E@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F35F25393@SHSMSX104.ccr.corp.intel.com> Message-ID: Hi Kunpeng Cindy is right, we are working very hard to make it possible as soon as possible. Right now the current state of the project is described in this mail: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-April/004145.html The current multi-os build system we have been working on lives in: https://github.com/starlingx-staging/stx-packaging It has a very clear README with videos and examples With that you can start building parts fo the flock services as DEB packages, you can also create a live ISO image that you can test. In the previous mail pointed you will see the list of TODO thins we have. 
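For readers who have not built Debian packages before, the end result stx-packaging aims for is ordinary .deb packages; as a point of reference, this is what building one with plain debhelper tooling looks like (generic Debian commands, not the stx-packaging Makefile itself -- the fm-common directory name is only an illustration, see the repo README for the real targets and layout):

# On an Ubuntu build host, with a source tree that already carries a debian/ directory:
sudo apt-get install -y build-essential devscripts debhelper
cd fm-common/                      # hypothetical flock service source tree
dpkg-buildpackage -us -uc -b       # unsigned, binary-only build
# The resulting .deb lands one directory up and can be smoke-tested with dpkg.
sudo dpkg -i ../fm-common_*_amd64.deb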
The full architecture is described in this document as well: https://drive.google.com/open?id=1ck7vGH50AIAjUx9GNrIGtowG5qg7OYUBNdJyY-5ZvDc Regarding your use case, we are more than happy to help with the developing of tools to enable the community to support other OS. In this case the Kylin OS is not in our scope, however, it is the Ubuntu OS. I am not familiar with Kylin but a nice first exploratory phase could be that you take what we have right now, and test if you can install the packages of the flock that we have ported to Ubuntu and see if they can be installed. ( no functional test has been performed yet ) If you have any questions please let me know, any feedback to our multi-os build system is more than welcome Regards Victor Rodriguez On Tue, Apr 23, 2019 at 7:26 AM Xie, Cindy wrote: > > Kunpeng > > The current STX version doesn’t support deployment on Ubuntu yet – Victor is working on the build for Ubuntu but the functionality of the image is not yet testable. > > > > We are interested to understand the requirements: does 99cloud have customer who is asking to have StarlingX on Ubuntu? What is the user scenario? And we are very much welcome your contribution to multi-OS effort lead by Cesar/Victor. > > > > Thx. - cindy > > > > From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > Sent: Tuesday, April 23, 2019 7:23 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? > > > > Hi all > > > > I am trying to deploy STX in kylin system[1], an operating system base on ubuntu 16.04. I don’t know how to deploy STX in an installed system. I know there are many dependent components and softwares to be installed. I am trying to install those softwares one by one, but I don’t know whether this way is right or not. > > Does somebody try to deploy STX in ubuntu? Can you help me how to work for it? > > > > Thanks > > Kunpeng > > > > [1]http://en.kylinos.cn/products_detail/productId=21.html > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Tue Apr 23 23:54:46 2019 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 23 Apr 2019 16:54:46 -0700 Subject: [Starlingx-discuss] StarlingX / Infra teams sync-up at PTG Message-ID: <1b8469d0-387f-4853-b487-f9519dcc5130@linux.intel.com> Infra Team, The StarlingX Project would like to request a 30-60 minute sync up to talk about some of the challenges that StarlingX faces with regards to build and test infrastructure. As StarlingX is an integration project that creates a Linux Distribution with a Cloud infrastructure on top of it, this makes it more challenging to both build and test. The current OpenStack Foundation infrastructure is good as building and testing projects such as Nova, Neutron, ... It could be used to build and test the individual components of the StarlingX Flock, such as Fault and others. It's not well suited to build the complete StarlingX ISO and test that ISO. We want to explore what the existing resources that are available and understand how and what we can add to the infrastructure to enable the build and testing that StarlingX will require. We hope that we can find an hour timeslot during the PTG that we can talk further about this with both teams. Our timeslots overlap Thrusday afternoon and Friday so we could put it on our adgendas. Please let us know if you have available time slots. 
Thanks Sau! StarlingX TSC From gaosong_1250 at 163.com Wed Apr 24 00:39:03 2019 From: gaosong_1250 at 163.com (gao.song) Date: Wed, 24 Apr 2019 08:39:03 +0800 (CST) Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Message-ID: <1f48b907.291f.16a4cc8187e.Coremail.gaosong_1250@163.com> Or, Can I just skip this rpm cause it's probably not will be used ? -------- Forwarding messages -------- From: "gao.song" Date: 2019-04-23 20:00:00 To: starlingx-discuss at lists.starlingx.io Subject: Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... 
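When a single package fails under build-pkgs like this, the per-package mock log and the kernel-devel packages visible to the buildroot are the first things worth checking; a rough sketch run inside the build container (the results path pattern is an approximation from memory -- use whatever path build-pkgs prints at the end of its run):

# Full mock log for the failing package.
less $MY_WORKSPACE/std/results/*/mlnx-ofa_kernel-4.5-*/build.log
# The configure line in that log shows which kernel it compiled against;
# it should be the 3.10.0-957.1.3.el7.1.tis kernel, not a stray 4.9.x one.
grep -- '--kernel-version' $MY_WORKSPACE/std/results/*/mlnx-ofa_kernel-4.5-*/build.log
# List every kernel-devel RPM the build could have resolved.
find $MY_REPO/cgcs-centos-repo $MY_WORKSPACE -name 'kernel-devel*.rpm' 2>/dev/null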
URL: From shuicheng.lin at intel.com Wed Apr 24 00:46:24 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Wed, 24 Apr 2019 00:46:24 +0000 Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: References: Message-ID: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is "3.10.0-957.1.3.el7.1.tis.x86_64", while in your log, it is "4.9.86-30.el7.x86_64". Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type 
BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaosong_1250 at 163.com Wed Apr 24 01:41:29 2019 From: gaosong_1250 at 163.com (gao.song) Date: Wed, 24 Apr 2019 09:41:29 +0800 (CST) Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> Message-ID: <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> Thanks for your reply. Is it build same version rpm package as mine in your env? It seems the program choose the version, where I can modify it At 2019-04-24 08:46:24, "Lin, Shuicheng" wrote: Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is “3.10.0-957.1.3.el7.1.tis.x86_64”, while in your log, it is “4.9.86-30.el7.x86_64”. Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: 
/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: From erich.cm.lists at yandex.com Wed Apr 24 01:57:26 2019 From: erich.cm.lists at yandex.com (Erich Cordoba) Date: Tue, 23 Apr 2019 18:57:26 -0700 Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> References: <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> Message-ID: <12148621556071046@myt5-02b80404fd9e.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From gaosong_1250 at 163.com Wed Apr 24 02:01:03 2019 From: gaosong_1250 at 163.com (gao.song) Date: Wed, 24 Apr 2019 10:01:03 +0800 (CST) Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <12148621556071046@myt5-02b80404fd9e.qloud-c.yandex.net> References: <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> <12148621556071046@myt5-02b80404fd9e.qloud-c.yandex.net> Message-ID: Yes, I followed the steps in the Building Guide doc,https://docs.starlingx.io/contributor/build_guides/latest/index.html, uname -a output is: Linux 2cb27d9f40b9 4.4.0-131-generic #157-Ubuntu SMP Thu Jul 12 15:51:36 UTC 2018 x86_64 x86_64 x86_64 GNU/Linux At 2019-04-24 09:57:26, "Erich Cordoba" wrote: Hi, Are you running inside the container ? What’s the output of uname -a ? -- Sent from Yandex.Mail for mobile 23.04.2019, 20:42, "gao.song" : Thanks for your reply. Is it build same version rpm package as mine in your env? It seems the program choose the version, where I can modify it At 2019-04-24 08:46:24, "Lin, Shuicheng" wrote: Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is “3.10.0-957.1.3.el7.1.tis.x86_64”, while in your log, it is “4.9.86-30.el7.x86_64”. 
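One point worth noting about the uname output above: inside the build container uname reports the Ubuntu host kernel, which is expected and has no influence on the mock build; what matters is which kernel-devel RPM ends up in the buildroot. A small sketch for checking that (variable names assume the standard build-container environment):

# Host kernel -- expected to be the Ubuntu one, safe to ignore.
uname -r
# Kernel versions packages can actually be built against come from the
# kernel-devel RPMs in the local repos; print their versions.
find "$MY_REPO" -name 'kernel-devel*.rpm' -exec rpm -qp --qf '%{NAME}-%{VERSION}-%{RELEASE}\n' {} \; 2>/dev/null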
Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To:starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... 
Any help will be appreciated! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Wed Apr 24 02:04:24 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Wed, 24 Apr 2019 02:04:24 +0000 Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> References: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FEB72BF@SHSMSX101.ccr.corp.intel.com> Hi Song, Do you have "4.9.86-30.el7.x86_64" package in your mirror? [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 9:41 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Thanks for your reply. Is it build same version rpm package as mine in your env? It seems the program choose the version, where I can modify it At 2019-04-24 08:46:24, "Lin, Shuicheng" > wrote: Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is "3.10.0-957.1.3.el7.1.tis.x86_64", while in your log, it is "4.9.86-30.el7.x86_64". 
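If that search does turn up a 4.9.x kernel-devel in the mirror, removing it and regenerating the build repo is usually enough to stop mock from resolving it; a rough sketch, assuming the mirror path quoted later in this thread (verify before deleting anything):

# Locate the stray package in the downloaded mirror.
find /import/mirrors/CentOS/stx-r1/CentOS/pike -name '*4.9.86-30.el7*'
# After removing the offending RPM(s), regenerate the repo links/metadata
# inside the build container so the stale entry disappears.
generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/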
Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... 
Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaosong_1250 at 163.com Wed Apr 24 02:11:26 2019 From: gaosong_1250 at 163.com (gao.song) Date: Wed, 24 Apr 2019 10:11:26 +0800 (CST) Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <9700A18779F35F49AF027300A49E7C765FEB72BF@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72BF@SHSMSX101.ccr.corp.intel.com> Message-ID: <66b6e17f.3fac.16a4d1caff8.Coremail.gaosong_1250@163.com> Oops,I cannot find it ,just a kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm there. And another problem, My mirror container doesn't have this "cgcs-centos-repo" directory, just : [root at 66343f53d70d pike]# ls Binary downloads Source [root at 66343f53d70d pike]# pwd /localdisk/output/stx-r1/CentOS/pike 在 2019-04-24 10:04:24,"Lin, Shuicheng" 写道: Hi Song, Do you have “4.9.86-30.el7.x86_64” package in your mirror? [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 9:41 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Thanks for your reply. Is it build same version rpm package as mine in your env? It seems the program choose the version, where I can modify it At 2019-04-24 08:46:24, "Lin, Shuicheng" wrote: Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is “3.10.0-957.1.3.el7.1.tis.x86_64”, while in your log, it is “4.9.86-30.el7.x86_64”. 
Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To:starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... 
Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Wed Apr 24 02:22:37 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Wed, 24 Apr 2019 02:22:37 +0000 Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <66b6e17f.3fac.16a4d1caff8.Coremail.gaosong_1250@163.com> References: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72BF@SHSMSX101.ccr.corp.intel.com> <66b6e17f.3fac.16a4d1caff8.Coremail.gaosong_1250@163.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FEB72EF@SHSMSX101.ccr.corp.intel.com> Hi Song, Cgcs-centos-repo is in the build container, not the mirror container. And it is generated with below cmd in the wiki: “ generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/ “ Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 10:11 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re:RE: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Oops,I cannot find it ,just a kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm there. And another problem, My mirror container doesn't have this "cgcs-centos-repo" directory, just : [root at 66343f53d70d pike]# ls Binary downloads Source [root at 66343f53d70d pike]# pwd /localdisk/output/stx-r1/CentOS/pike 在 2019-04-24 10:04:24,"Lin, Shuicheng" > 写道: Hi Song, Do you have “4.9.86-30.el7.x86_64” package in your mirror? [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 9:41 AM To: Lin, Shuicheng > Cc: starlingx-discuss at lists.starlingx.io Subject: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Thanks for your reply. Is it build same version rpm package as mine in your env? It seems the program choose the version, where I can modify it At 2019-04-24 08:46:24, "Lin, Shuicheng" > wrote: Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is “3.10.0-957.1.3.el7.1.tis.x86_64”, while in your log, it is “4.9.86-30.el7.x86_64”. 
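For completeness, the command above is part of the in-container setup sequence from the build guide; recalled from memory here (the populate_downloads.sh step in particular should be checked against the current guide):

# Inside the builder container, with the mirror mounted under /import/mirrors:
generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/
populate_downloads.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/
# Then build packages and the ISO as usual.
build-pkgs
build-iso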
Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... 
Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaosong_1250 at 163.com Wed Apr 24 02:24:45 2019 From: gaosong_1250 at 163.com (gao.song) Date: Wed, 24 Apr 2019 10:24:45 +0800 (CST) Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <9700A18779F35F49AF027300A49E7C765FEB72EF@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72BF@SHSMSX101.ccr.corp.intel.com> <66b6e17f.3fac.16a4d1caff8.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72EF@SHSMSX101.ccr.corp.intel.com> Message-ID: <494ab56c.4352.16a4d28df4f.Coremail.gaosong_1250@163.com> Then I see, I got this 4.9.8 kernel file there, But problem still happen At 2019-04-24 10:22:37, "Lin, Shuicheng" wrote: Hi Song, Cgcs-centos-repo is in the build container, not the mirror container. And it is generated with below cmd in the wiki: “ generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/ “ Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 10:11 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re:RE: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Oops,I cannot find it ,just a kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm there. And another problem, My mirror container doesn't have this "cgcs-centos-repo" directory, just : [root at 66343f53d70d pike]# ls Binary downloads Source [root at 66343f53d70d pike]# pwd /localdisk/output/stx-r1/CentOS/pike 在 2019-04-24 10:04:24,"Lin, Shuicheng" 写道: Hi Song, Do you have “4.9.86-30.el7.x86_64” package in your mirror? [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 9:41 AM To: Lin, Shuicheng Cc:starlingx-discuss at lists.starlingx.io Subject: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Thanks for your reply. Is it build same version rpm package as mine in your env? It seems the program choose the version, where I can modify it At 2019-04-24 08:46:24, "Lin, Shuicheng" wrote: Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is “3.10.0-957.1.3.el7.1.tis.x86_64”, while in your log, it is “4.9.86-30.el7.x86_64”. 
Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To:starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... 
Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Wed Apr 24 02:36:07 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Wed, 24 Apr 2019 02:36:07 +0000 Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <494ab56c.4352.16a4d28df4f.Coremail.gaosong_1250@163.com> References: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72BF@SHSMSX101.ccr.corp.intel.com> <66b6e17f.3fac.16a4d1caff8.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72EF@SHSMSX101.ccr.corp.intel.com> <494ab56c.4352.16a4d28df4f.Coremail.gaosong_1250@163.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FEB732E@SHSMSX101.ccr.corp.intel.com> Hi Song, You should not have this file. Could you check which step do you get this file? For the correct case, you should just have below 3 kernel file as me. I suspect there is some problem with your mirror. You could have a check for the packages in your mirror, the name/version info should be the same as the name listed in tools git. slin14 at slin14-nuc2:~/GIT/stx-tools/centos-mirror-tools$ grep kernel *.lst –Rsn … rpms_centos3rdparties.lst:35:kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm rpms_centos.lst:527:kernel-3.10.0-957.1.3.el7.src.rpm rpms_centos.lst:528:kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm … [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 10:25 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re:RE: Re:RE: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Then I see, I got this 4.9.8 kernel file there, But problem still happen At 2019-04-24 10:22:37, "Lin, Shuicheng" wrote: Hi Song, Cgcs-centos-repo is in the build container, not the mirror container. And it is generated with below cmd in the wiki: “ generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/ “ Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 10:11 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re:RE: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Oops,I cannot find it ,just a kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm there. And another problem, My mirror container doesn't have this "cgcs-centos-repo" directory, just : [root at 66343f53d70d pike]# ls Binary downloads Source [root at 66343f53d70d pike]# pwd /localdisk/output/stx-r1/CentOS/pike 在 2019-04-24 10:04:24,"Lin, Shuicheng" > 写道: Hi Song, Do you have “4.9.86-30.el7.x86_64” package in your mirror? [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm
Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 9:41 AM To: Lin, Shuicheng > Cc: starlingx-discuss at lists.starlingx.io Subject: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Thanks for your reply. Is it build same version rpm package as mine in your env? It seems the program choose the version, where I can modify it At 2019-04-24 08:46:24, "Lin, Shuicheng" > wrote: Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is “3.10.0-957.1.3.el7.1.tis.x86_64”, while in your log, it is “4.9.86-30.el7.x86_64”. Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb)
BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Wed Apr 24 02:37:46 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 23 Apr 2019 22:37:46 -0400 (EDT) Subject: [Starlingx-discuss] [dev] [build-report] STX_build_docker_flock_images - Build # 83 - Failure! Message-ID: <1246700480.241.1556073468264.JavaMail.javamailuser@localhost> Project: STX_build_docker_flock_images Build #: 83 Status: Failure Timestamp: 20190424T004801Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190423T195416Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190423T195416Z OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root BASE_VERSION: master-dev-20190423T195416Z PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190423T195416Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca LATEST_PREFIX: master PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190423T195416Z/logs PUBLISH_TIMESTAMP: 20190423T195416Z FLOCK_VERSION: master-centos-dev-20190423T195416Z PREFIX: master TIMESTAMP: 20190423T195416Z BUILD_STREAM: dev REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190423T195416Z/outputs REGISTRY: docker.io From build.starlingx at gmail.com Wed Apr 24 02:37:50 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 23 Apr 2019 22:37:50 -0400 (EDT) Subject: [Starlingx-discuss] [dev] [build-report] STX_build_docker_images - Build # 86 - Failure! 
Message-ID: <877719250.244.1556073472262.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 86 Status: Failure Timestamp: 20190424T003930Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190423T195416Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190423T195416Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190423T195416Z/logs MASTER_BUILD_NUMBER: 76 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190423T195416Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos PUBLISH_TIMESTAMP: 20190423T195416Z DOCKER_BUILD_ID: jenkins-master-20190423T195416Z-builder TIMESTAMP: 20190423T195416Z OS_VERSION: 7.6.1810 BUILD_STREAM: dev PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190423T195416Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190423T195416Z/outputs From gaosong_1250 at 163.com Wed Apr 24 02:43:03 2019 From: gaosong_1250 at 163.com (gao.song) Date: Wed, 24 Apr 2019 10:43:03 +0800 (CST) Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <9700A18779F35F49AF027300A49E7C765FEB732E@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72BF@SHSMSX101.ccr.corp.intel.com> <66b6e17f.3fac.16a4d1caff8.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72EF@SHSMSX101.ccr.corp.intel.com> <494ab56c.4352.16a4d28df4f.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB732E@SHSMSX101.ccr.corp.intel.com> Message-ID: <4c696a66.489b.16a4d39a276.Coremail.gaosong_1250@163.com> Hi Shuicheng: I checked those files , same location, same name and version At 2019-04-24 10:36:07, "Lin, Shuicheng" wrote: Hi Song, You should not have this file. Could you check which step do you get this file? For the correct case, you should just have below 3 kernel file as me. I suspect there is some problem with your mirror. You could have a check for the packages in your mirror, the name/version info should be the same as the name listed in tools git. slin14 at slin14-nuc2:~/GIT/stx-tools/centos-mirror-tools$ grep kernel *.lst –Rsn … rpms_centos3rdparties.lst:35:kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm rpms_centos.lst:527:kernel-3.10.0-957.1.3.el7.src.rpm rpms_centos.lst:528:kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm … [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 10:25 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re:RE: Re:RE: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Then I see, I got this 4.9.8 kernel file there, But problem still happen At 2019-04-24 10:22:37, "Lin, Shuicheng" wrote: Hi Song, Cgcs-centos-repo is in the build container, not the mirror container. 
And it is generated with below cmd in the wiki: “ generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/ “ Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 10:11 AM To: Lin, Shuicheng Cc:starlingx-discuss at lists.starlingx.io Subject: Re:RE: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Oops,I cannot find it ,just a kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm there. And another problem, My mirror container doesn't have this "cgcs-centos-repo" directory, just : [root at 66343f53d70d pike]# ls Binary downloads Source [root at 66343f53d70d pike]# pwd /localdisk/output/stx-r1/CentOS/pike 在 2019-04-24 10:04:24,"Lin, Shuicheng" 写道: Hi Song, Do you have “4.9.86-30.el7.x86_64” package in your mirror? [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 9:41 AM To: Lin, Shuicheng Cc:starlingx-discuss at lists.starlingx.io Subject: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Thanks for your reply. Is it build same version rpm package as mine in your env? It seems the program choose the version, where I can modify it At 2019-04-24 08:46:24, "Lin, Shuicheng" wrote: Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is “3.10.0-957.1.3.el7.1.tis.x86_64”, while in your log, it is “4.9.86-30.el7.x86_64”. Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To:starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o 
BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaosong_1250 at 163.com Wed Apr 24 02:53:51 2019 From: gaosong_1250 at 163.com (gao.song) Date: Wed, 24 Apr 2019 10:53:51 +0800 (CST) Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <4c696a66.489b.16a4d39a276.Coremail.gaosong_1250@163.com> References: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72BF@SHSMSX101.ccr.corp.intel.com> <66b6e17f.3fac.16a4d1caff8.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72EF@SHSMSX101.ccr.corp.intel.com> <494ab56c.4352.16a4d28df4f.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB732E@SHSMSX101.ccr.corp.intel.com> <4c696a66.489b.16a4d39a276.Coremail.gaosong_1250@163.com> Message-ID: <75246dd3.4b8f.16a4d4384c2.Coremail.gaosong_1250@163.com> I think the key point to this error is how the compile program determine the kernel version to 4.9.8. And I am checking the mlnx_ofa_kernel-4.5 rpm spec file, try to find some clues. 在 2019-04-24 10:43:03,"gao.song" 写道: Hi Shuicheng: I checked those files , same location, same name and version At 2019-04-24 10:36:07, "Lin, Shuicheng" wrote: Hi Song, You should not have this file. Could you check which step do you get this file? For the correct case, you should just have below 3 kernel file as me. I suspect there is some problem with your mirror. You could have a check for the packages in your mirror, the name/version info should be the same as the name listed in tools git. 
slin14 at slin14-nuc2:~/GIT/stx-tools/centos-mirror-tools$ grep kernel *.lst –Rsn … rpms_centos3rdparties.lst:35:kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm rpms_centos.lst:527:kernel-3.10.0-957.1.3.el7.src.rpm rpms_centos.lst:528:kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm … [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 10:25 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re:RE: Re:RE: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Then I see, I got this 4.9.8 kernel file there, But problem still happen At 2019-04-24 10:22:37, "Lin, Shuicheng" wrote: Hi Song, Cgcs-centos-repo is in the build container, not the mirror container. And it is generated with below cmd in the wiki: “ generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/ “ Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 10:11 AM To: Lin, Shuicheng Cc:starlingx-discuss at lists.starlingx.io Subject: Re:RE: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Oops,I cannot find it ,just a kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm there. And another problem, My mirror container doesn't have this "cgcs-centos-repo" directory, just : [root at 66343f53d70d pike]# ls Binary downloads Source [root at 66343f53d70d pike]# pwd /localdisk/output/stx-r1/CentOS/pike 在 2019-04-24 10:04:24,"Lin, Shuicheng" 写道: Hi Song, Do you have “4.9.86-30.el7.x86_64” package in your mirror? [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 9:41 AM To: Lin, Shuicheng Cc:starlingx-discuss at lists.starlingx.io Subject: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Thanks for your reply. Is it build same version rpm package as mine in your env? It seems the program choose the version, where I can modify it At 2019-04-24 08:46:24, "Lin, Shuicheng" wrote: Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is “3.10.0-957.1.3.el7.1.tis.x86_64”, while in your log, it is “4.9.86-30.el7.x86_64”. 
Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To:starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... 
Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Wed Apr 24 04:27:20 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 24 Apr 2019 04:27:20 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190423 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-23 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 57] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 57 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: 57 TCS [Fail : 57] Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 52 TCs [FAIL] Sanity Platform 05 TCs [FAIL] TOTAL: 57 TCS [Fail : 57] Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs ] [Fail : 56 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 56 TCs] -------------------------------------------------------------------------------- In the failing configurations, the issue was that system application-apply got stuck, but in different stages, osh-openstack-keystone and osh-openstack-neutron. We'll investigate more on these problems in the next execution. Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaosong_1250 at 163.com Wed Apr 24 10:14:06 2019 From: gaosong_1250 at 163.com (gao.song) Date: Wed, 24 Apr 2019 18:14:06 +0800 (CST) Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <75246dd3.4b8f.16a4d4384c2.Coremail.gaosong_1250@163.com> References: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72BF@SHSMSX101.ccr.corp.intel.com> <66b6e17f.3fac.16a4d1caff8.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72EF@SHSMSX101.ccr.corp.intel.com> <494ab56c.4352.16a4d28df4f.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB732E@SHSMSX101.ccr.corp.intel.com> <4c696a66.489b.16a4d39a276.Coremail.gaosong_1250@163.com> <75246dd3.4b8f.16a4d4384c2.Coremail.gaosong_1250@163.com> Message-ID: <1a46e315.c943.16a4ed69282.Coremail.gaosong_1250@163.com> Still cannot fix this, any other get some suggestion? At 2019-04-24 10:53:51, "gao.song" wrote: I think the key point to this error is how the compile program determine the kernel version to 4.9.8. And I am checking the mlnx_ofa_kernel-4.5 rpm spec file, try to find some clues. 
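One way to do that spec inspection (only a sketch; the src.rpm location and the exact spec file name are assumptions, so adjust the paths) is to unpack the source RPM and grep for how it decides which kernel to build against:

mkdir -p /tmp/mlnx-spec && cd /tmp/mlnx-spec
rpm2cpio /path/to/mlnx-ofa_kernel-4.5*.src.rpm | cpio -idm   # placeholder path to the src.rpm from the mirror
grep -nEi 'kernel.version|kver|uname -r' *.spec              # shows where the kernel version gets picked

Judging from the failing configure line, the spec appears to build against whatever kernel-devel is installed in the build root, which is why the stray 4.9.86 package matters.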
在 2019-04-24 10:43:03,"gao.song" 写道: Hi Shuicheng: I checked those files , same location, same name and version At 2019-04-24 10:36:07, "Lin, Shuicheng" wrote: Hi Song, You should not have this file. Could you check which step do you get this file? For the correct case, you should just have below 3 kernel file as me. I suspect there is some problem with your mirror. You could have a check for the packages in your mirror, the name/version info should be the same as the name listed in tools git. slin14 at slin14-nuc2:~/GIT/stx-tools/centos-mirror-tools$ grep kernel *.lst –Rsn … rpms_centos3rdparties.lst:35:kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm rpms_centos.lst:527:kernel-3.10.0-957.1.3.el7.src.rpm rpms_centos.lst:528:kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm … [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 10:25 AM To: Lin, Shuicheng Cc: starlingx-discuss at lists.starlingx.io Subject: Re:RE: Re:RE: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Then I see, I got this 4.9.8 kernel file there, But problem still happen At 2019-04-24 10:22:37, "Lin, Shuicheng" wrote: Hi Song, Cgcs-centos-repo is in the build container, not the mirror container. And it is generated with below cmd in the wiki: “ generate-cgcs-centos-repo.sh /import/mirrors/CentOS/stx-r1/CentOS/pike/ “ Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 10:11 AM To: Lin, Shuicheng Cc:starlingx-discuss at lists.starlingx.io Subject: Re:RE: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Oops,I cannot find it ,just a kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm there. And another problem, My mirror container doesn't have this "cgcs-centos-repo" directory, just : [root at 66343f53d70d pike]# ls Binary downloads Source [root at 66343f53d70d pike]# pwd /localdisk/output/stx-r1/CentOS/pike 在 2019-04-24 10:04:24,"Lin, Shuicheng" 写道: Hi Song, Do you have “4.9.86-30.el7.x86_64” package in your mirror? [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 9:41 AM To: Lin, Shuicheng Cc:starlingx-discuss at lists.starlingx.io Subject: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Thanks for your reply. Is it build same version rpm package as mine in your env? It seems the program choose the version, where I can modify it At 2019-04-24 08:46:24, "Lin, Shuicheng" wrote: Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is “3.10.0-957.1.3.el7.1.tis.x86_64”, while in your log, it is “4.9.86-30.el7.x86_64”. 
Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To:starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... 
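Picking up Shuicheng's suggestion to compare the mirror against the lst files in stx-tools, a rough cross-check (a sketch only; the mirror path is the one quoted earlier in this thread and may differ locally) could look like:

cd stx-tools/centos-mirror-tools
grep -h '^kernel' rpms_centos.lst rpms_centos3rdparties.lst | sort         # kernel packages the build expects
find /import/mirrors/CentOS/stx-r1/CentOS/pike -name 'kernel*' | sort      # kernel packages the mirror actually holds
# anything extra in the second listing, e.g. a 4.9.86-30.el7 kernel or kernel-devel, is what mock can wrongly pick up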
Any help will be appreciated! -------------- next part -------------- An HTML attachment was scrubbed... URL: From gaosong_1250 at 163.com Wed Apr 24 10:38:29 2019 From: gaosong_1250 at 163.com (gao.song) Date: Wed, 24 Apr 2019 18:38:29 +0800 (CST) Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 In-Reply-To: <1a46e315.c943.16a4ed69282.Coremail.gaosong_1250@163.com> References: <9700A18779F35F49AF027300A49E7C765FEB7273@SHSMSX101.ccr.corp.intel.com> <380facf6.37ca.16a4d0142e0.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72BF@SHSMSX101.ccr.corp.intel.com> <66b6e17f.3fac.16a4d1caff8.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB72EF@SHSMSX101.ccr.corp.intel.com> <494ab56c.4352.16a4d28df4f.Coremail.gaosong_1250@163.com> <9700A18779F35F49AF027300A49E7C765FEB732E@SHSMSX101.ccr.corp.intel.com> <4c696a66.489b.16a4d39a276.Coremail.gaosong_1250@163.com> <75246dd3.4b8f.16a4d4384c2.Coremail.gaosong_1250@163.com> <1a46e315.c943.16a4ed69282.Coremail.gaosong_1250@163.com> Message-ID: <7600370e.ce1f.16a4eece670.Coremail.gaosong_1250@163.com> Oh, forget the last mail. I nailed it by issuing a command that is RIGHT THERE in the doc, before build-pkgs. 在 2019-04-24 18:14:06,"gao.song" 写道: Still cannot fix this, any other get some suggestion? At 2019-04-24 10:53:51, "gao.song" wrote: I think the key point to this error is how the compile program determine the kernel version to 4.9.8. And I am checking the mlnx_ofa_kernel-4.5 rpm spec file, try to find some clues.
And another problem, My mirror container doesn't have this "cgcs-centos-repo" directory, just : [root at 66343f53d70d pike]# ls Binary downloads Source [root at 66343f53d70d pike]# pwd /localdisk/output/stx-r1/CentOS/pike 在 2019-04-24 10:04:24,"Lin, Shuicheng" 写道: Hi Song, Do you have “4.9.86-30.el7.x86_64” package in your mirror? [slin14 at 031aee5aad95 cgcs-centos-repo]$ find . -name "kernel*" ./Source/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm ./Source/kernel-3.10.0-957.1.3.el7.src.rpm ./Binary/x86_64/kernel-headers-3.10.0-957.1.3.el7.x86_64.rpm Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Wednesday, April 24, 2019 9:41 AM To: Lin, Shuicheng Cc:starlingx-discuss at lists.starlingx.io Subject: Re:Re: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Thanks for your reply. Is it build same version rpm package as mine in your env? It seems the program choose the version, where I can modify it At 2019-04-24 08:46:24, "Lin, Shuicheng" wrote: Hi Song, I checked my local log, the difference is the kernel version. StarlingX uses CentOS 7.6, which kernel version is “3.10.0-957.1.3.el7.1.tis.x86_64”, while in your log, it is “4.9.86-30.el7.x86_64”. Here is my configure in the log file: ./configure --build-dummy-mods --prefix=/usr --kernel-version 3.10.0-957.1.3.el7.1.tis.x86_64 --kernel-sources /usr/src/kernels/3.10.0-957.1.3.el7.1.tis.x86_64 --modules-dir /lib/modules/3.10.0-957.1.3.el7.1.tis.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j8 Best Regards Shuicheng From: gao.song [mailto:gaosong_1250 at 163.com] Sent: Tuesday, April 23, 2019 8:00 PM To:starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Build ISO for stx 2019.05 ERROR with mlnx-ofa_kernel-4.5 Hi folks: Recently I am building ISO accroding to Building Guide stx.2019.05, But I encounter this error for the step building packages with mlnx_ofa_kernel-4.5, the log can be found in the building container : ./configure --build-dummy-mods --prefix=/usr --kernel-version 4.9.86-30.el7.x86_64 --kernel-sources /usr/src/kernels/4.9.86-30.el7.x86_64 --modules-dir /lib/modules/4.9.86-30.el7.x86_64/extra/mlnx-ofa_kernel --with-core-mod --with-user_mad-mod --with-user_access-mod --with-addr_trans-mod --with-mlx4-mod --with-mlx4_en-mod --with-mlx5-mod --with-mlxfw-mod --with-ipoib-mod -j2 make[1]: Entering directory `/usr/src/kernels/4.9.86-30.el7.x86_64' CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/cls_flower.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/net/sched/act_tunnel_key.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/main.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/kthread.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/drivers/infiniband/core/packer.o CC [M] /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function '__skb_flow_dissect_tunnel_info': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:205:2: warning: passing argument 1 of 'skb_tunnel_info' discards 'const' qualifier from pointer target type [enabled by default] BUILDSTDERR: info = skb_tunnel_info(skb); BUILDSTDERR: ^ BUILDSTDERR: In file included from 
/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/include/net/dst_metadata.h:5:0, BUILDSTDERR: from /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:34: BUILDSTDERR: include/net/dst_metadata.h:25:38: note: expected 'struct sk_buff *' but argument is of type 'const struct sk_buff *' BUILDSTDERR: static inline struct ip_tunnel_info *skb_tunnel_info(struct sk_buff *skb) BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c: In function 'backport___skb_flow_dissect': BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:607:11: error: dereferencing pointer to incomplete type BUILDSTDERR: if (ops->flow_dissect && BUILDSTDERR: ^ BUILDSTDERR: /builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.c:608:12: error: dereferencing pointer to incomplete type BUILDSTDERR: !ops->flow_dissect(skb, &proto, &offset)) { BUILDSTDERR: ^ BUILDSTDERR: make[3]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat/flow_dissector.o] Error 1 BUILDSTDERR: make[2]: *** [/builddir/build/BUILD/mlnx-ofa_kernel-4.5/obj/default/compat] Error 2 BUILDSTDERR: make[2]: *** Waiting for unfinished jobs.... Any help will be appreciated! 老少皆宜,秒杀路边摊,原肉制作炭火烤肠仅18元150g! -------------- next part -------------- An HTML attachment was scrubbed... URL: From thierry at openstack.org Wed Apr 24 11:18:25 2019 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 24 Apr 2019 13:18:25 +0200 Subject: [Starlingx-discuss] [ptg] ptgbot HOWTO Message-ID: <90a0bc65-e6c6-7226-4fb5-4d25dcea9fbb@openstack.org> Hi everyone, In a few days, some contributor teams will meet in Denver for the 5th Project Teams Gathering. The event is organized around separate 'tracks' (generally tied to a specific team/group). Topics of discussion are loosely scheduled in those tracks, based on the needs of the attendance. This allows to maximize attendee productivity, but the downside is that it can make the event a bit confusing to navigate. To mitigate that issue, we are using an IRC bot to expose what's happening currently at the event at the following page: http://ptg.openstack.org/ptg.html It is therefore useful to have a volunteer in each room who makes use of the PTG bot to communicate what's happening. This is done by joining the #openstack-ptg IRC channel on Freenode and voicing commands to the bot. Usage of the bot is of course optional, but in past editions it was really useful to help attendees successfully navigate this dynamic event. How to keep attendees informed of what's being discussed in your room --------------------------------------------------------------------- To indicate what's currently being discussed, you will use the track name hashtag (found in the "Scheduled tracks" section on the above page), with the 'now' command: #TRACK now Example: #swift now brainstorming improvements to the ring You can also mention other track names to make sure to get people attention when the topic is transverse: #ops-meetup now discussing #cinder pain points There can only be one 'now' entry for a given track at a time. To indicate what will be discussed next, you can enter one or more 'next' commands: #TRACK next Example: #api-sig next at 2pm we'll be discussing pagination woes Note that in order to keep content current, entering a new 'now' command for a track will automatically erase any 'next' entry for that track. 
Finally, if you want to clear all 'now' and 'next' entries for your track, you can issue the 'clean' command: #TRACK clean Example: #ironic clean How to book reservable rooms ---------------------------- Like at every PTG, in Denver we will have additional reservable space for extra un-scheduled discussions. Two of those rooms (Ballroom 4 and Room 112) are equipped with a projector, so if your discussion would benefit from projection, you can also book time there. Finally, some of the smaller teams do not have any pre-scheduled space, and will solely be relying on this feature to book the time that makes the most sense for them. Those teams are the OpenStack release management team (#release-management) and requirements team (#requirements), the Extended Maintenance SIG (#extended-maint-sig), the Security SIG (#security-sig), the Bare Metal SIG (#baremetal-sig) and the Interoperability working group (#interop-wg). The PTG bot page shows which track is allocated to which room, as well as available reservable space, with a slot code (room name - time slot) that you can use to issue a 'book' command to the PTG bot: #TRACK book Example: #release-management book Room 112-ThuA1 Any track can book additional space and time using this system. All slots are 1h45-long. If your topic of discussion does not fall into an existing track, it is easy to add a track on the fly. Just ask PTG bot admins (ttx, diablo_rojo...) on the channel to create a track for you (which they can do by getting op rights and issuing a ~add command). For more information on the bot commands, please see: https://opendev.org/openstack/ptgbot/src/branch/master/README.rst Let me know if you have any additional questions. -- Thierry Carrez (ttx) From zhang.kunpeng at 99cloud.net Wed Apr 24 11:40:25 2019 From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=) Date: Wed, 24 Apr 2019 19:40:25 +0800 Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? In-Reply-To: References: <75EE28BA-C061-48A6-AF48-58DCA482894E@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F35F25393@SHSMSX104.ccr.corp.intel.com> Message-ID: Hi Cindy, Victor, I am very glad to receive your replies. We have the requirements to deploy STX in other linux systems. For this work, I think deployment on Ubuntu is the best start to understand the flow of starlingx installation. I hope I can do something for multi-os and I am trying to follow in your footsteps. Otherwise, I find your goal is to build DEB files and finally create an iso. If a clear ubuntu system have been installed in server, how can I to deploy starlingx for test or development? Is it right to install these DEB file by hand? Thanks Kunpeng > On Apr 24, 2019, at 07:43, Victor Rodriguez wrote: > > Hi Kunpeng > > Cindy is right, we are working very hard to make it possible as soon > as possible. Right now the current state of the project is described > in this mail: > > http://lists.starlingx.io/pipermail/starlingx-discuss/2019-April/004145.html > > The current multi-os build system we have been working on lives in: > > https://github.com/starlingx-staging/stx-packaging > > It has a very clear README with videos and examples > > With that you can start building parts fo the flock services as DEB > packages, you can also create a live ISO image that you can test. > > In the previous mail pointed you will see the list of TODO thins we have. 
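For anyone who wants to follow along with that Ubuntu/DEB flow, the starting point is simply the repository mentioned above and its README; nothing beyond the clone itself is assumed here:

git clone https://github.com/starlingx-staging/stx-packaging
cd stx-packaging
cat README*   # the README is the place to start for building flock packages as DEBs and creating a live ISO image

How far those packages get you on an already-installed Ubuntu (or Kylin) system is exactly the open question discussed in the rest of this thread.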
> > The full architecture is described in this document as well: > > https://drive.google.com/open?id=1ck7vGH50AIAjUx9GNrIGtowG5qg7OYUBNdJyY-5ZvDc > > Regarding your use case, we are more than happy to help with the > developing of tools to enable the community to support other OS. In > this case the Kylin OS is not in our scope, however, it is the Ubuntu > OS. I am not familiar with Kylin but a nice first exploratory phase > could be that you take what we have right now, and test if you can > install the packages of the flock that we have ported to Ubuntu and > see if they can be installed. ( no functional test has been performed > yet ) > > If you have any questions please let me know, any feedback to our > multi-os build system is more than welcome > > Regards > > Victor Rodriguez > > > > On Tue, Apr 23, 2019 at 7:26 AM Xie, Cindy wrote: >> >> Kunpeng >> >> The current STX version doesn't support deployment on Ubuntu yet – Victor is working on the build for Ubuntu but the functionality of the image is not yet testable. >> >> >> >> We are interested to understand the requirements: does 99cloud have customer who is asking to have StarlingX on Ubuntu? What is the user scenario? And we are very much welcome your contribution to multi-OS effort lead by Cesar/Victor. >> >> >> >> Thx. - cindy >> >> >> >> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] >> Sent: Tuesday, April 23, 2019 7:23 PM >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? >> >> >> >> Hi all >> >> >> >> I am trying to deploy STX in kylin system[1], an operating system base on ubuntu 16.04. I don't know how to deploy STX in an installed system. I know there are many dependent components and softwares to be installed. I am trying to install those softwares one by one, but I don't know whether this way is right or not. >> >> Does somebody try to deploy STX in ubuntu? Can you help me how to work for it? >> >> >> >> Thanks >> >> Kunpeng >> >> >> >> [1]http://en.kylinos.cn/products_detail/productId=21.html >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From cindy.xie at intel.com Wed Apr 24 12:09:54 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 24 Apr 2019 12:09:54 +0000 Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? In-Reply-To: References: <75EE28BA-C061-48A6-AF48-58DCA482894E@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F35F25393@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F26839@SHSMSX104.ccr.corp.intel.com> >>> If a clear ubuntu system have been installed in server, how can I to deploy starlingx for test or development? Is it right to install these DEB file by hand? I don't think you can do so - the StarlingX flock services need a lot of configuration before they can be started, so just installing the flock DEB files we build ourselves will not work. A demo is possible by using Devstack to launch those services. I believe making StarlingX fully functional is a project that still deserves more planning and digesting. This is a good PTG topic, so please have Shuquan raise it in the TSC as well. Thx.
- cindy -----Original Message----- From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] Sent: Wednesday, April 24, 2019 7:40 PM To: Victor Rodriguez Cc: Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? Hi Cindy, Victor, I am very glad to receive your replies. We have the requirements to deploy STX in other linux systems. For this work, I think deployment on Ubuntu is the best start to understand the flow of starlingx installation. I hope I can do something for multi-os and I am trying to follow in your footsteps. Otherwise, I find your goal is to build DEB files and finally create an iso. If a clear ubuntu system have been installed in server, how can I to deploy starlingx for test or development? Is it right to install these DEB file by hand? Thanks Kunpeng > On Apr 24, 2019, at 07:43, Victor Rodriguez wrote: > > Hi Kunpeng > > Cindy is right, we are working very hard to make it possible as soon > as possible. Right now the current state of the project is described > in this mail: > > http://lists.starlingx.io/pipermail/starlingx-discuss/2019-April/00414 > 5.html > > The current multi-os build system we have been working on lives in: > > https://github.com/starlingx-staging/stx-packaging > > It has a very clear README with videos and examples > > With that you can start building parts fo the flock services as DEB > packages, you can also create a live ISO image that you can test. > > In the previous mail pointed you will see the list of TODO thins we have. > > The full architecture is described in this document as well: > > https://drive.google.com/open?id=1ck7vGH50AIAjUx9GNrIGtowG5qg7OYUBNdJy > Y-5ZvDc > > Regarding your use case, we are more than happy to help with the > developing of tools to enable the community to support other OS. In > this case the Kylin OS is not in our scope, however, it is the Ubuntu > OS. I am not familiar with Kylin but a nice first exploratory phase > could be that you take what we have right now, and test if you can > install the packages of the flock that we have ported to Ubuntu and > see if they can be installed. ( no functional test has been performed > yet ) > > If you have any questions please let me know, any feedback to our > multi-os build system is more than welcome > > Regards > > Victor Rodriguez > > > > On Tue, Apr 23, 2019 at 7:26 AM Xie, Cindy wrote: >> >> Kunpeng >> >> The current STX version doesn’t support deployment on Ubuntu yet – Victor is working on the build for Ubuntu but the functionality of the image is not yet testable. >> >> >> >> We are interested to understand the requirements: does 99cloud have customer who is asking to have StarlingX on Ubuntu? What is the user scenario? And we are very much welcome your contribution to multi-OS effort lead by Cesar/Victor. >> >> >> >> Thx. - cindy >> >> >> >> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] >> Sent: Tuesday, April 23, 2019 7:23 PM >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? >> >> >> >> Hi all >> >> >> >> I am trying to deploy STX in kylin system[1], an operating system base on ubuntu 16.04. I don’t know how to deploy STX in an installed system. I know there are many dependent components and softwares to be installed. I am trying to install those softwares one by one, but I don’t know whether this way is right or not. >> >> Does somebody try to deploy STX in ubuntu? Can you help me how to work for it? 
>> >> >> >> Thanks >> >> Kunpeng >> >> >> >> [1]http://en.kylinos.cn/products_detail/productId=21.html >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Barton.Wensley at windriver.com Wed Apr 24 12:50:43 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Wed, 24 Apr 2019 12:50:43 +0000 Subject: [Starlingx-discuss] How to update an installed image In-Reply-To: References: Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8F409@ALA-MBD.corp.ad.wrs.com> Victor, StarlingX has two mechanisms for updating software on a running system: - updates (also known as patching): This allows the user to build an updated version of a set of RPMs, bundle them into a patch file and apply them to a running system. We are also working on an update mechanism for containerized applications, which will allow new versions the docker images for an application to be deployed to a running system. - upgrades: This allows the user to upgrade a running system from one StarlingX release to the next, including the OS, RPMs and applications. This mechanism will only be supported when moving between StarlingX releases - any fixes delivered to an existing release will use the update mechanism. The update mechanism would be used to deliver a CVE fix (as per your example) to a running system. Bart -----Original Message----- From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] Sent: April 23, 2019 7:02 PM To: starlingx-discuss at lists.starlingx.io; Thebeau, Michel Subject: Re: [Starlingx-discuss] How to update an installed image Hi Based on recommendations from Michael I am going to rewrite my question: I have a server, all in one with STX simplex configuration but I installed the ISO like a month ago, now I want to get the latest version of STX I think this a really important part of the project. I don't see myself as sysadmin with a new CVE fixed in the latest version of starling x and have to reinstall the full iso in all my nodes. I was pretty sure we had this component like starling x update. Any feedback more than welcome, if this is already a project under development is perfect if not, we might spend some time discussing it Regards Victor Rodriguez Something like preupg or update-manager in the case of Centos and Ubuntu On Mon, Apr 22, 2019 at 9:11 AM Victor Rodriguez wrote: > > Hi team > > I would like to know more about the image update mechanism we have in > starting X. I have a simplex system installed and I want to keep my > system updated with the latest version released in > http://mirror.starlingx.cengn.ca/mirror/ but I don't want to reinstall > the full ISO again every week. Is there any way to do a sw update in > the starling x system so I keep my infrastructure updated w/o having > to reinstall the ISO? > > Thanks a lot > > Regards > > Victor Rodriguez _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Wed Apr 24 13:39:02 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 24 Apr 2019 13:39:02 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack Distro meeting, 4/24 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F26D0C@SHSMSX104.ccr.corp.intel.com> Agenda & Notes for 4/24 meeting: - Ceph upgrade status 1. 
Ceph dev build validation status (Fernando) Pre-merged basic test ran. __________________________________________________________________________________________________ # | Test ID | Comments 1 | STOR_TIER_005 | Per Frank Miller: ...deferred until task 30351 under the HELM SB 2003909 is completed 2 | STOR_TIER_006 | Per Frank Miller: ...deferred until task 30351 under the HELM SB 2003909 is completed 3 | STOR_TIER_007 | Per Frank Miller: ...deferred until task 30351 under the HELM SB 2003909 is completed 4 | STOR_TIER_008 | Per Frank Miller: ...deferred until task 30351 under the HELM SB 2003909 is completed **SB- https://storyboard.openstack.org/#!/story/2003909 7 | STOR_PROCESS_011 | - WIP - Emails have been back and forward and test case has been re-worked. | Pending questions: | @Daniel. - Does StarlingX Ceph solution require at least ceph-mgr, or 2 (active and standby) at the same time? It seems that only one ceph-mgr also works. [Daniel]: ceph-mgr has the HA mechanism built-in so it's not depends on HA. Needs restful plug-in to restart the service. race condition for port 5001. Working on fixing the issues more testing required. root-cause identified: the services itself is not restarted only the plug-in is up. If we have multi-layer implemented, it can restart the service. Impact: minimal due to the fact if we have 2 serivies running and 1 of them is not responsing, then it still working. Ceph-mgr REST API has port 5001, race condition of 2 ceph-mgr to access the same ports. AR: Daniel to fix the issue before we call the patch can be merged. ETA: patch to be ready by tomorrow. - In the case I described above, once ceph-mgr on controller-1 took over the active role, ceph-mgr on controller-0 hardly won the chance to be "active" any more. Is it reasonable? - By "systemctl status sm.service | grep ceph-mgr", we know ceph-mgr daemon is managed by sm.service, but why sm.service did not restart ceph-mgr WHEN there is still a ceph-mgr on another controller? | @Daniel/Yong. - For STEP 8. Need guidance about what ceph folder/filename must be renamed to get the ceph monitor process killed and never come back. It seems that renaming the ceph services from path "/usr/lib/systemd/system" is not working for this quest. Fernando needs to rename the binary (path?) Daniel to provide guiance using email for specific file names. - Do you believe is this still a valid test case? We can focused on kill the service but Is this going to happen to a customer, Is this a real scenario? 8 | STOR_PROCESS_012 | - WIP - Emails have been back and forward and test case has been re-worked. | Pending questions: | This test case was executed almost completely, the only step missing is the "Step 9" when killing the osd process and never back. I need @Daniel/Yong/Tingkie feedback in order to kill ceph service to never come back once they are down. [Fernando] same quesiton as of test case 011, need to rename the correct binaries. 17 | STOR_FAULT_023 | Not Attempted 20 | STOR_PART_030 | Not Attempted AR: Fernando to run the auto-sanity for the same binary after the cengen built. ETA: end of today. Fernando please send the Sanity detail failures to the list so that we can help to do the failure analysis. 2. test plan clarification (Fernando/Daniel) - covered in (1) 3. Patch rebase (Daniel) https://review.openstack.org/#/q/topic:ceph-mimic-upgrade+(status:open+OR+status:merged) Daniel rebased all patches yeserday and working on the patches for ceph-mgr race condition issue. - QAT driver upgrade status 1. 
Dev status (base rebase, QAT driver load, ISO generation, etc) (Haitao) Still working on setting up the environment in offshore ODC and meet deployment issue in the public network. Helping ODC to deploy the ISO from SH lab. QAT devices can be shown from Horizon and we can start the VM esting to access the QAT driver from tenant. 2. validation prepration (Ricardo) Ricardo is waiting for ISO generated from ODC. test cases discussion between Ricardo and Haitao for QAT tests. - libvirt/qemu patch reduction PR merge status (Jim/Dean) 2 PR for Libvirt and 1 for Qemu posted. Dean to review the PR. - Opens (all) -----Original Message----- From: Xie, Cindy Sent: Tuesday, April 23, 2019 9:06 PM To: starlingx-discuss at lists.starlingx.io Cc: 'Rowsell, Brent' ; Wold, Saul ; Hernandez Gonzalez, Fernando ; Badea, Daniel ; Wang, Hai Tao ; Perez, Ricardo O ; Somerville, Jim ; 'Khalil, Ghada' ; Troyer, Dean Subject: Agenda: Weekly StarlingX non-OpenStack Distro meeting, 4/24 Agenda for 4/24 meeting: - Ceph upgrade status 1. Ceph dev build validation status (Fernando) 2. test plan clarification (Fernando/Daniel) 3. Patch rebase (Daniel) - QAT driver upgrade status 1. Dev status (base rebase, QAT driver load, ISO generation, etc) (Haitao) 2. validation prepration (Ricardo) - libvirt/qemu patch reduction PR merge status (Jim/Dean) - Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Waheed, Numan; Hu, Yong; starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Shang, Dehao; Wold, Saul; Lin, Shuicheng; Zhu, Vivian; Somerville, Jim; Sun, Austin; 'Khalil, Ghada'; Jones, Bruce E; Troyer, Dean; 'Rowsell, Brent' Cc: 'Poncea, Ovidiu'; Gomez, Juan P; Lara, Cesar; Fang, Liang A; Cobbley, David A; 'Chen, Jacky'; Perez Rodriguez, Humberto I; Martinez Monroy, Elio; 'Seiler, Glenn'; Arce Moreno, Abraham; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Shuquan Huang; 'Young, Ken'; Perez Carranza, Jose; Hu, Wei W; Martinez Landa, Hayde; Armstrong, Robert H Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, April 24, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From hai.tao.wang at intel.com Wed Apr 24 14:28:37 2019 From: hai.tao.wang at intel.com (Wang, Hai Tao) Date: Wed, 24 Apr 2019 14:28:37 +0000 Subject: [Starlingx-discuss] QAT upgrade testing Message-ID: <90D309A9E5805640B40D067D1B8EC8BF626963E9@SHSMSX103.ccr.corp.intel.com> Hi Numan, We are now testing the QAT upgrade on CentOS7.6. For functional test before upgrade, we start to verify guest can be launched with multiple crypto VFs. Could you share some detail step on how to create a new flavor and add extra-spec for QAT device? And it is nice if you provide the corresponding guest VM image to support the test so that we can go through the basic QAT test. 
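For reference, the flavor side of that test should just be a couple of OpenStack CLI calls; the sketch below is an assumption on our side, in particular the "qat-vf" alias name, which has to match whatever PCI alias is actually configured for the QAT virtual functions on the compute nodes:

openstack flavor create --ram 2048 --disk 10 --vcpus 2 qat.test
# Ask Nova for one QAT VF per instance through the PCI alias extra spec.
# "qat-vf:1" is an assumed alias:count pair, not a value taken from any lab.
openstack flavor set qat.test --property "pci_passthrough:alias"="qat-vf:1"
# Boot a guest with that flavor and check that the VF shows up inside the guest.
openstack server create --flavor qat.test --image <guest-image> --network <tenant-net> qat-guest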
Thanks Haitao From: Waheed, Numan [mailto:Numan.Waheed at windriver.com] Sent: Tuesday, April 16, 2019 4:50 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Upstreaming Automation Framework StarlingX community has felt the lack of an automation framework since the beginning of this project. I am excited to share that we are working on upstreaming the automation framework that Wind River has been using for over three years now. This automation framework is based on PyTest but has been customized by adding Keywords that help test case creation simple and quick for this project. PyTest was chosen as automation framework because of its maintainability, debugability, flexibility and scalability. It has simple syntax and parametrization capability that allows to scale quickly. It possesses strong support for test fixtures and state management via setup/teardown hooks. Test case selection and deselection is fairly easy with the use of Markers. As mentioned earlier, this framework has been in use for over three years. The framework and a set of test cases will become available to community in phases. In the first phase, we will be upstreaming the framework and related keywords. Next phase will include upstreaming the test case. We also plan to create a wiki for helping community members in using this framework and executing automated test cases or writing their own test cases. Stay tuned. Numan. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Apr 24 15:03:09 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 24 Apr 2019 15:03:09 +0000 Subject: [Starlingx-discuss] Community Call (April 24, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A2B433@ALA-MBD.corp.ad.wrs.com> Notes from our call today... Weekly Calls During the Summit (Ildiko) - Denver is on Mountain Time - some sessions that Ildiko would like to offer Zoom account for - per Cindy, China will be off W/Th/Fri next week - non Distro OpenStack will be cancelled - per Bruce, the Distro OpenStack - will keep the Community Call for next week - Ildiko will send out the list of when she'd like to use the bridge, Bill will help sort out which calls will happen or not OSF Lounge Shared Project Booth (Ildiko) - sign-up sheet: https://docs.google.com/spreadsheets/d/1ph5neMyLBFtl50hTwXfluZNFFWVF4EIu0a2cY-wHPO4/edit#gid=0 - use this to reserve your slot, if you want one (demos, office hours, whatever) StarlingX Supporters Page (Ildiko) - it's at https://www.starlingx.io/supporters/ - we talked about updating it in about a month - should have some new ones soon (FiberHome) - Bruce: we have a few (and hopefully more soon!) new contributors for the project - I'd like to suggest that we consider forming a "first contact" SIG consisting of community members who can help new contributors learn how to contribute effectively to the project. 
- this was prompted by new commits that came out of the Hackathon in China last week - Bill & Bruce to discuss offline - what are the logical next steps Zuul Slowness (Brent) - per Brent, Zuul was quite slow yesterday, some Zuul jobs were taking 4 hours - Dean said Zuul got overloaded yesterday, seemed to be an anomaly (they don't know what the issue actually was) - Don asked what the value is of the DevStack part of what Zuul runs - Dean said the DevStack jobs' contribution to the Zuul time is ~1% - they do run a build & install process, so there's some value - the primary value is still to come, as we add API tests - Brent mentioned that this will be discussed during the PTG (re: Unit Testing) - Dean mentioned that if anyone wants to talk to him, just reach out to him Packet.com SIG (Curtis) - https://etherpad.openstack.org/p/stx-packet-sig - looking to setup a meeting time, and can discuss more at PTG Sub-Project Updates Release (Bill/Bruce) - reached agreement on the new release dates in last week's TSC - bug count continues to go up, some discussion on that Containers (Frank) - working through sanity issues - Ada's team seeing them too, Frank's team will help triage with Ada's team Security (Victor/Bruce) - a couple of CVEs in flight, build related issues - one CVE reported on Oct release - Security team will discuss, we are not obligated to fix Ceph Upgrade (Vivian) - not out of the woods yet, very close - all patches have been uploaded, test team has found some bugs (at least one) - working to fix these before merging, and then sanity CentOS Upgrade (Cindy) - QAT upgrade - working through it Networking (Forrest) - no update Docs (Michael/Bruce) - team has been working on deployment & installation guides - looking for help, if anyone can provide - Victor asked: how do we document HW-specific features? - Curtis: agree that we need to - Dean/Bruce: put it in the Wiki, we'll make a formal doc of it if warranted Build (Cesar/Scott) - no update Test (Ada/Numan) - working through feature test - reviewed the regression test plan - working through changes so our testcases can be run in the public (Robot & Pytest) Multi-OS (Cesar/Victor) - Victor: what is necessary for the POC to be moved forward as an accepted STX project & no longer a POC - Brent: let's discuss during the PTG, it's on the list to discuss for R3 - Victor raised a concern about some workarounds they need to employ - will send an email Distro.OpenStack (Bruce) - a number of reviews are out for review - most aren't moving forward since Nova folks are prepping for their PTG -----Original Message----- From: Zvonar, Bill Sent: Tuesday, April 23, 2019 9:17 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Community Call (April 24, 2019) Reminder of tomorrow's Community call - please feel free to add to the agenda at [0]. Currently, we just have the sub-projects on the agenda, so there's room for more items. Bill... 
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1500_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190424T1400 From sgw at linux.intel.com Wed Apr 24 15:05:33 2019 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 24 Apr 2019 08:05:33 -0700 Subject: [Starlingx-discuss] How to update an installed image In-Reply-To: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8F409@ALA-MBD.corp.ad.wrs.com> References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8F409@ALA-MBD.corp.ad.wrs.com> Message-ID: On 4/24/19 5:50 AM, Wensley, Barton wrote: > Victor, > > StarlingX has two mechanisms for updating software on a running system: > - updates (also known as patching): This allows the user to build an updated version of a set of RPMs, bundle them into a patch file and apply them to a running system. We are also working on an update mechanism for containerized applications, which will allow new versions the docker images for an application to be deployed to a running system. Is this process being used to build patches between the daily/weekly builds? Can we even patch between these? If I understand this uses the "smart" package manager, or is that for the upgrade process? I understand that there is not yet a process for updating the containers, is there a specification being worked on for describing how the containerized applications will be updated? Sau! > - upgrades: This allows the user to upgrade a running system from one StarlingX release to the next, including the OS, RPMs and applications. This mechanism will only be supported when moving between StarlingX releases - any fixes delivered to an existing release will use the update mechanism. > > The update mechanism would be used to deliver a CVE fix (as per your example) to a running system. > > Bart > > -----Original Message----- > From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] > Sent: April 23, 2019 7:02 PM > To: starlingx-discuss at lists.starlingx.io; Thebeau, Michel > Subject: Re: [Starlingx-discuss] How to update an installed image > > Hi > > Based on recommendations from Michael I am going to rewrite my question: > > I have a server, all in one with STX simplex configuration but I > installed the ISO like a month ago, now I want to get the latest > version of STX > > I think this a really important part of the project. I don't see > myself as sysadmin with a new CVE fixed in the latest version of > starling x and have to reinstall the full iso in all my nodes. I was > pretty sure we had this component like starling x update. > > Any feedback more than welcome, if this is already a project under > development is perfect if not, we might spend some time discussing it > > Regards > > Victor Rodriguez > > > > > Something like preupg or update-manager in the case of Centos and Ubuntu > > On Mon, Apr 22, 2019 at 9:11 AM Victor Rodriguez wrote: >> >> Hi team >> >> I would like to know more about the image update mechanism we have in >> starting X. I have a simplex system installed and I want to keep my >> system updated with the latest version released in >> http://mirror.starlingx.cengn.ca/mirror/ but I don't want to reinstall >> the full ISO again every week. Is there any way to do a sw update in >> the starling x system so I keep my infrastructure updated w/o having >> to reinstall the ISO? 
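As a rough sketch of how that update flow is driven from the patching CLI on the active controller (the command set here is from memory and may differ between releases, and the patch file name is made up):

# Load the patch into the patching repository on the active controller,
# then mark it for installation.
sudo sw-patch upload /home/wrsroot/PATCH_0001.patch
sudo sw-patch apply PATCH_0001
# Install it on each host (a simplex system only has controller-0),
# then confirm the resulting patch state.
sudo sw-patch host-install controller-0
sudo sw-patch query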
>> >> Thanks a lot >> >> Regards >> >> Victor Rodriguez > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Frank.Miller at windriver.com Wed Apr 24 15:21:01 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 24 Apr 2019 15:21:01 +0000 Subject: [Starlingx-discuss] Update on sanity status Message-ID: Maria's sanity test email from earlier today is reporting that system application-apply stx-openstack is failing (hanging) on both virtual and bare metal environments. As discussed on the community call today the plan to debug these failures is for Ada's team to reproduce the issue and then request assistance from the containers subteam to triage the failure today. On bare metal labs at our location we are not seeing these failures as of loads built on April 18th or later. We were seeing these types of failures on virtual loads and a commit was merged today to address: https://review.opendev.org/#/c/655240/ In case this commit also addresses the failures reported earlier today in Maria's sanity test email, Don Penney has triggered a new CENGN build to pick up this commit. Frank From: Miller, Frank Sent: Thursday, April 18, 2019 5:11 PM To: starlingx-discuss at lists.starlingx.io Subject: Update on sanity status This week has seen a number of issues that impacted sanity. Two of the issues were addressed by commits earlier in the week as well as this morning. It looks like at least one issue remains that is preventing the stx-openstack application from successfully coming up on some platforms. This issue is being tracked by https://bugs.launchpad.net/starlingx/+bug/1825423 Until a solution is merged and sanity results are good, I suggest you revert to the most recent sane loads which were from the April 10 and April 11 builds: Apr 11: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190411T013000Z/outputs/iso/ Apr 10: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T013000Z/ An update will be sent on Monday. Frank P.S. The most recent sanity report is indicating LP https://bugs.launchpad.net/starlingx/+bug/1825045 is causing failures but while the symptoms look the same we do not feel this is actually the LP that is causing sanity to fail. -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Apr 24 15:36:04 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 24 Apr 2019 15:36:04 +0000 Subject: [Starlingx-discuss] No distro.openstack call next week Message-ID: <9A85D2917C58154C960D95352B22818BD0724D36@fmsmsx123.amr.corp.intel.com> There will be no distro.openstack call next week as I will be at the Open Infra Summit. brucej -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vm.rod25 at gmail.com Wed Apr 24 15:45:01 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Wed, 24 Apr 2019 10:45:01 -0500 Subject: [Starlingx-discuss] How to update an installed image In-Reply-To: References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8F409@ALA-MBD.corp.ad.wrs.com> Message-ID: On Wed, Apr 24, 2019 at 10:05 AM Saul Wold wrote: > > > > On 4/24/19 5:50 AM, Wensley, Barton wrote: > > Victor, > > > > StarlingX has two mechanisms for updating software on a running system: > > - updates (also known as patching): This allows the user to build an updated version of a set of RPMs, bundle them into a patch file and apply them to a running system. We are also working on an update mechanism for containerized applications, which will allow new versions the docker images for an application to be deployed to a running system. > Is this process being used to build patches between the daily/weekly > builds? Can we even patch between these? If I understand this uses the > "smart" package manager, or is that for the upgrade process? > > I understand that there is not yet a process for updating the > containers, is there a specification being worked on for describing how > the containerized applications will be updated? > > Sau! > > - upgrades: This allows the user to upgrade a running system from one StarlingX release to the next, including the OS, RPMs and applications. This mechanism will only be supported when moving between StarlingX releases - any fixes delivered to an existing release will use the update mechanism. > > > > The update mechanism would be used to deliver a CVE fix (as per your example) to a running system. > > > > Bart > > Thanks a lot, Bart, following Saul question, is there any place where I can get documentation about it. The actual problem that I have is that I have a dedicated HW for measuring the footprint of the STX image ( described in the performance presentation I gave 2 weeks ago ) but I really don't want to reinstall the ISO every time there is a new ISO released in CEGN that I have to measure and send the results to my personal DB to track any degradation. Curtis and I have the AR to make it work on packet infra , Curtis has been working very hard to make the packet infra, I am in charge fo the test suite. But I had the roadblock of asking myself, do I have to reinstall the ISO every time. I am glad that you clarify me that point that there is a way, now if you can point to more documentation about it I can use to unblock my AR Thanks a lot Victor Rodriguez > > -----Original Message----- > > From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] > > Sent: April 23, 2019 7:02 PM > > To: starlingx-discuss at lists.starlingx.io; Thebeau, Michel > > Subject: Re: [Starlingx-discuss] How to update an installed image > > > > Hi > > > > Based on recommendations from Michael I am going to rewrite my question: > > > > I have a server, all in one with STX simplex configuration but I > > installed the ISO like a month ago, now I want to get the latest > > version of STX > > > > I think this a really important part of the project. I don't see > > myself as sysadmin with a new CVE fixed in the latest version of > > starling x and have to reinstall the full iso in all my nodes. I was > > pretty sure we had this component like starling x update. 
> > > > Any feedback more than welcome, if this is already a project under > > development is perfect if not, we might spend some time discussing it > > > > Regards > > > > Victor Rodriguez > > > > > > > > > > Something like preupg or update-manager in the case of Centos and Ubuntu > > > > On Mon, Apr 22, 2019 at 9:11 AM Victor Rodriguez wrote: > >> > >> Hi team > >> > >> I would like to know more about the image update mechanism we have in > >> starting X. I have a simplex system installed and I want to keep my > >> system updated with the latest version released in > >> http://mirror.starlingx.cengn.ca/mirror/ but I don't want to reinstall > >> the full ISO again every week. Is there any way to do a sw update in > >> the starling x system so I keep my infrastructure updated w/o having > >> to reinstall the ISO? > >> > >> Thanks a lot > >> > >> Regards > >> > >> Victor Rodriguez > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cboylan at sapwetik.org Wed Apr 24 15:50:06 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 24 Apr 2019 11:50:06 -0400 Subject: [Starlingx-discuss] StarlingX / Infra teams sync-up at PTG In-Reply-To: <1b8469d0-387f-4853-b487-f9519dcc5130@linux.intel.com> References: <1b8469d0-387f-4853-b487-f9519dcc5130@linux.intel.com> Message-ID: On Tue, Apr 23, 2019, at 4:55 PM, Saul Wold wrote: > > > Infra Team, > > The StarlingX Project would like to request a 30-60 minute sync up to > talk about some of the challenges that StarlingX faces with regards to > build and test infrastructure. > > As StarlingX is an integration project that creates a Linux Distribution > with a Cloud infrastructure on top of it, this makes it more challenging > to both build and test. The current OpenStack Foundation infrastructure > is good as building and testing projects such as Nova, Neutron, ... It > could be used to build and test the individual components of the > StarlingX Flock, such as Fault and others. It's not well suited to build > the complete StarlingX ISO and test that ISO. > > We want to explore what the existing resources that are available and > understand how and what we can add to the infrastructure to enable the > build and testing that StarlingX will require. > > We hope that we can find an hour timeslot during the PTG that we can > talk further about this with both teams. Our timeslots overlap Thrusday > afternoon and Friday so we could put it on our adgendas. > > Please let us know if you have available time slots. I'm thinking the best day for us is Friday (we'll want to dig into our team specific items on Thursday). Does Just after the lunch break on Friday (say 1:30pm) in the Infra/QA room work? If so we'll see you there. I'll go ahead and add you to our etherpad [0] as I'm editing that today too. 
[0] https://etherpad.openstack.org/p/2019-denver-ptg-infra-planning Clark From vm.rod25 at gmail.com Wed Apr 24 16:03:03 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Wed, 24 Apr 2019 11:03:03 -0500 Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F26839@SHSMSX104.ccr.corp.intel.com> References: <75EE28BA-C061-48A6-AF48-58DCA482894E@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F35F25393@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35F26839@SHSMSX104.ccr.corp.intel.com> Message-ID: On Wed, Apr 24, 2019 at 7:09 AM Xie, Cindy wrote: > > >>> If a clear ubuntu system have been installed in server, how can I to deploy starlingx for test or development? Is it right to install these DEB file by hand? > I don't think you can do so - StarlingX flocks services needs a lot of configurations before it can be started, so just install flocks DEB files built out by ourselves cannot work. A demo is possible by using Devstack to launch those services. I believe to make StarlingX fully functional is a project deserves more planning and digesting still. This is a good PTG topics and please have Shuquan to raise it in TSC as well. > > Thx. - cindy > > -----Original Message----- > From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > Sent: Wednesday, April 24, 2019 7:40 PM > To: Victor Rodriguez > Cc: Xie, Cindy ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? > > Hi Cindy, Victor, > Hi Kunpeng My comments inline: > I am very glad to receive your replies. We have the requirements to deploy STX in other linux systems. For this work, I think deployment on Ubuntu is the best start to understand the flow of starlingx installation. I hope I can do something for multi-os and I am trying to follow in your footsteps. I am really glad to hear this, I am happy that all the work we have done to enable the multi os on ubuntu will be useful for you and your costumer. Please take the time to play witht the build sysmte we have and rise concerns and bugs on our current multi OS buidl system. > > Otherwise, I find your goal is to build DEB files and finally create an iso. So far yes If a clear ubuntu system have been installed in server, how can I to deploy starlingx for test or development? Is it right to install these DEB file by hand? Ok here comes an improtant part that makes starling x project special. We dont only have DEBs or RPMs that coudl be installed a running Linux system. We have the following components to consider : -> Installer ( during hte installation of the ISO you decide what configuration do you want ) that is hard ( not imnpossible ) to configure from a simple apt-get install -> Config controller: As described in [0] the script is used to configure the first controller in the StarlingX cluster as controller-0. The prompts are grouped by configuration area. We are still on the exploratory face on how to deal with this -> Runtime Dependencies: Trying to answering your question, Is it right to install these DEB files by hand ? yes and no, if you install one package of the flock service, it might ask for more as a runtime dependency. a solution could be to set up an STX deb repository accessible to your cloud system where apt can resolve the runtime dependencies previously compiled ( we also have to solve the build time dependencies, working hard on that ). 
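To make that repository idea concrete, a throwaway flat apt repository over the built packages should be enough for early experiments; this is only a sketch with illustrative paths, assuming the .deb files produced by the stx-packaging tooling are all copied into one directory on the Ubuntu target:

# Collect the built .deb files and index them (dpkg-dev provides dpkg-scanpackages).
mkdir -p /opt/stx-debs
cp output/*.deb /opt/stx-debs/
cd /opt/stx-debs && dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz

# Register the local repo and let apt resolve the runtime dependencies
# between the flock packages.
echo "deb [trusted=yes] file:/opt/stx-debs ./" | sudo tee /etc/apt/sources.list.d/stx-local.list
sudo apt-get update
sudo apt-get install <some-flock-package>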
That is for the flocks, there are other patches to host packages like Nova, Keystone and others that might need to be reinstalled (among them, the kernel with the extra patches described on the README I share with you ) on your Ubuntu host system. being completely honest with you are working on that as fast as we can trying to solve all these parts. Now this is only for a simplex all in one configuration, which is the scope for now of our POC, for a multi-node, multi-controller solution we have to deal with other problems ( one problem at the time ) [0] https://wiki.openstack.org/wiki/StarlingX/Installation_Guide_Virtual_Environment/Controller_Storage Regarding the use of devstack for this, I will recommend you to sync with Dean, but as a general rule and as Cindy mention, Devstack is not for production just for demo and as far as I know, the work done so far to make the flock part of devstack is not functional yet. But please double check with Dean. In the meantime, any help on this TODO list is more than welcome. I will send a mail latter to encourage the community and TSC to make this project part of the official STX plan and no longer a POC. Happy to help Regards Victor Rodriguez > > Thanks > Kunpeng > > > > On Apr 24, 2019, at 07:43, Victor Rodriguez wrote: > > > > Hi Kunpeng > > > > Cindy is right, we are working very hard to make it possible as soon > > as possible. Right now the current state of the project is described > > in this mail: > > > > http://lists.starlingx.io/pipermail/starlingx-discuss/2019-April/00414 > > 5.html > > > > The current multi-os build system we have been working on lives in: > > > > https://github.com/starlingx-staging/stx-packaging > > > > It has a very clear README with videos and examples > > > > With that you can start building parts fo the flock services as DEB > > packages, you can also create a live ISO image that you can test. > > > > In the previous mail pointed you will see the list of TODO thins we have. > > > > The full architecture is described in this document as well: > > > > https://drive.google.com/open?id=1ck7vGH50AIAjUx9GNrIGtowG5qg7OYUBNdJy > > Y-5ZvDc > > > > Regarding your use case, we are more than happy to help with the > > developing of tools to enable the community to support other OS. In > > this case the Kylin OS is not in our scope, however, it is the Ubuntu > > OS. I am not familiar with Kylin but a nice first exploratory phase > > could be that you take what we have right now, and test if you can > > install the packages of the flock that we have ported to Ubuntu and > > see if they can be installed. ( no functional test has been performed > > yet ) > > > > If you have any questions please let me know, any feedback to our > > multi-os build system is more than welcome > > > > Regards > > > > Victor Rodriguez > > > > > > > > On Tue, Apr 23, 2019 at 7:26 AM Xie, Cindy wrote: > >> > >> Kunpeng > >> > >> The current STX version doesn’t support deployment on Ubuntu yet – Victor is working on the build for Ubuntu but the functionality of the image is not yet testable. > >> > >> > >> > >> We are interested to understand the requirements: does 99cloud have customer who is asking to have StarlingX on Ubuntu? What is the user scenario? And we are very much welcome your contribution to multi-OS effort lead by Cesar/Victor. > >> > >> > >> > >> Thx. 
- cindy > >> > >> > >> > >> From: 张鲲鹏 [mailto:zhang.kunpeng at 99cloud.net] > >> Sent: Tuesday, April 23, 2019 7:23 PM > >> To: starlingx-discuss at lists.starlingx.io > >> Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? > >> > >> > >> > >> Hi all > >> > >> > >> > >> I am trying to deploy STX in kylin system[1], an operating system base on ubuntu 16.04. I don’t know how to deploy STX in an installed system. I know there are many dependent components and softwares to be installed. I am trying to install those softwares one by one, but I don’t know whether this way is right or not. > >> > >> Does somebody try to deploy STX in ubuntu? Can you help me how to work for it? > >> > >> > >> > >> Thanks > >> > >> Kunpeng > >> > >> > >> > >> [1]http://en.kylinos.cn/products_detail/productId=21.html > >> > >> _______________________________________________ > >> Starlingx-discuss mailing list > >> Starlingx-discuss at lists.starlingx.io > >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > From ada.cabrales at intel.com Wed Apr 24 16:54:18 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Wed, 24 Apr 2019 16:54:18 +0000 Subject: [Starlingx-discuss] Update on sanity status In-Reply-To: References: Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC8F82@FMSMSX114.amr.corp.intel.com> Question, Frank: Are you using CENGN ISO for your lab deployments? A. From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Wednesday, April 24, 2019 10:21 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Update on sanity status Maria's sanity test email from earlier today is reporting that system application-apply stx-openstack is failing (hanging) on both virtual and bare metal environments. As discussed on the community call today the plan to debug these failures is for Ada's team to reproduce the issue and then request assistance from the containers subteam to triage the failure today. On bare metal labs at our location we are not seeing these failures as of loads built on April 18th or later. We were seeing these types of failures on virtual loads and a commit was merged today to address: https://review.opendev.org/#/c/655240/ In case this commit also addresses the failures reported earlier today in Maria's sanity test email, Don Penney has triggered a new CENGN build to pick up this commit. Frank From: Miller, Frank Sent: Thursday, April 18, 2019 5:11 PM To: starlingx-discuss at lists.starlingx.io Subject: Update on sanity status This week has seen a number of issues that impacted sanity. Two of the issues were addressed by commits earlier in the week as well as this morning. It looks like at least one issue remains that is preventing the stx-openstack application from successfully coming up on some platforms. This issue is being tracked by https://bugs.launchpad.net/starlingx/+bug/1825423 Until a solution is merged and sanity results are good, I suggest you revert to the most recent sane loads which were from the April 10 and April 11 builds: Apr 11: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190411T013000Z/outputs/iso/ Apr 10: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T013000Z/ An update will be sent on Monday. Frank P.S. The most recent sanity report is indicating LP https://bugs.launchpad.net/starlingx/+bug/1825045 is causing failures but while the symptoms look the same we do not feel this is actually the LP that is causing sanity to fail. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Wed Apr 24 16:54:50 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Wed, 24 Apr 2019 16:54:50 +0000 Subject: [Starlingx-discuss] Update on sanity status In-Reply-To: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC8F82@FMSMSX114.amr.corp.intel.com> References: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC8F82@FMSMSX114.amr.corp.intel.com> Message-ID: Yes From: Cabrales, Ada [mailto:ada.cabrales at intel.com] Sent: Wednesday, April 24, 2019 12:54 PM To: Miller, Frank ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Update on sanity status Question, Frank: Are you using CENGN ISO for your lab deployments? A. From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Wednesday, April 24, 2019 10:21 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Update on sanity status Maria's sanity test email from earlier today is reporting that system application-apply stx-openstack is failing (hanging) on both virtual and bare metal environments. As discussed on the community call today the plan to debug these failures is for Ada's team to reproduce the issue and then request assistance from the containers subteam to triage the failure today. On bare metal labs at our location we are not seeing these failures as of loads built on April 18th or later. We were seeing these types of failures on virtual loads and a commit was merged today to address: https://review.opendev.org/#/c/655240/ In case this commit also addresses the failures reported earlier today in Maria's sanity test email, Don Penney has triggered a new CENGN build to pick up this commit. Frank From: Miller, Frank Sent: Thursday, April 18, 2019 5:11 PM To: starlingx-discuss at lists.starlingx.io Subject: Update on sanity status This week has seen a number of issues that impacted sanity. Two of the issues were addressed by commits earlier in the week as well as this morning. It looks like at least one issue remains that is preventing the stx-openstack application from successfully coming up on some platforms. This issue is being tracked by https://bugs.launchpad.net/starlingx/+bug/1825423 Until a solution is merged and sanity results are good, I suggest you revert to the most recent sane loads which were from the April 10 and April 11 builds: Apr 11: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190411T013000Z/outputs/iso/ Apr 10: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190410T013000Z/ An update will be sent on Monday. Frank P.S. The most recent sanity report is indicating LP https://bugs.launchpad.net/starlingx/+bug/1825045 is causing failures but while the symptoms look the same we do not feel this is actually the LP that is causing sanity to fail. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Apr 24 17:17:28 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 24 Apr 2019 19:17:28 +0200 Subject: [Starlingx-discuss] Meetings during the Summit next week? Message-ID: <869C2570-5676-4C18-A3ED-E0710FFC0206@gmail.com> Hi StarlingX Community, As next week is the Open Infrastructure Summit and PTG I wanted to check if all the community/project calls will be kept as I would like to re-use the Zoom account to provide remote participation options for a few sessions at the event. 
The Summit will run in Mountain Time and I would need the Zoom account at: * Tuesday (April 30) 10:40am - 12:30pm, 2:30pm - 3:20pm - Edge Forum sessions * Thursday (May 2) 9am - 6pm - Edge Wg and StarlingX PTG sessions * Friday (May 3) 9am - 6pm - StarlingX PTG session Please let me know if you there is any collision with the above mentioned slots where you still plan to run the calls and I will find another option for those slots. I believe there might be a few calls on Thursday but otherwise it should work. Thanks, Ildikó From Stefan.Dinescu at windriver.com Wed Apr 24 17:58:34 2019 From: Stefan.Dinescu at windriver.com (Dinescu, Stefan) Date: Wed, 24 Apr 2019 17:58:34 +0000 Subject: [Starlingx-discuss] Openstackclient will move to a container Message-ID: Hi everyone, As part of storyboard [0], openstackclients will move from a baremetal installation to being run inside a container. The platform openstackclient will only be able to be used for platform services (keystone, barbican). For all other services (nova, glance, cinder etc) the containerized clients must be used. To ensure a smooth transition, the submitted code will include a wrapper so that openstack commands will function as normal. The "openstack" command is aliased to this wrapper and will only be able to be used for the container services. The clients pod will be configured automatically with the correct "clouds.yaml" auth file, so no extra steps are needed to configure the pod. In order to use the platform openstack command, another alias is provided for it: "platform-openstack". You can also access the platform openstack by using the full path of the executable: "/usr/bin/openstack" For the first batch of commits, the platform clients will not be removed, but they are expected to be removed in the following weeks, so please update any automation scripts you might have for this new behavior. If you have any questions regarding this feature/change, feel free to ask me. Thanks, Stefan [0] Storyboard: https://storyboard.openstack.org/#!/story/2005312 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Wed Apr 24 17:59:11 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Wed, 24 Apr 2019 17:59:11 +0000 Subject: [Starlingx-discuss] Meetings during the Summit next week? In-Reply-To: <869C2570-5676-4C18-A3ED-E0710FFC0206@gmail.com> References: <869C2570-5676-4C18-A3ED-E0710FFC0206@gmail.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDC90E7@FMSMSX114.amr.corp.intel.com> Tuesday at 10:40 conflicts with the Testing meeting. However, we don't have burning issues right now. We can work offline for next week. No problem here. Ada > -----Original Message----- > From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] > Sent: Wednesday, April 24, 2019 12:17 PM > To: starlingx-discuss at lists.starlingx.io > Cc: Zvonar, Bill > Subject: [Starlingx-discuss] Meetings during the Summit next week? > > Hi StarlingX Community, > > As next week is the Open Infrastructure Summit and PTG I wanted to check if all > the community/project calls will be kept as I would like to re-use the Zoom > account to provide remote participation options for a few sessions at the event. 
> > The Summit will run in Mountain Time and I would need the Zoom account at: > > * Tuesday (April 30) 10:40am - 12:30pm, 2:30pm - 3:20pm - Edge Forum > sessions > * Thursday (May 2) 9am - 6pm - Edge Wg and StarlingX PTG sessions > * Friday (May 3) 9am - 6pm - StarlingX PTG session > > Please let me know if you there is any collision with the above mentioned slots > where you still plan to run the calls and I will find another option for those slots. > I believe there might be a few calls on Thursday but otherwise it should work. > > Thanks, > Ildikó > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From juan.p.gomez at intel.com Wed Apr 24 18:18:53 2019 From: juan.p.gomez at intel.com (Gomez, Juan P) Date: Wed, 24 Apr 2019 18:18:53 +0000 Subject: [Starlingx-discuss] Problem getting metrics with Gnocchi in StarlingX Message-ID: <0483622846A57742B81A944248DD69042FC4945D@fmsmsx101.amr.corp.intel.com> Hi, While trying to get metrics from cli command $ gnocchi metric list, a connection failure is displayed 1. Does anyone know if We are missing a new configuration for the recent Container architecture for Gnocchi ? Procedure: Machine Config: Config: MN-Local ISO: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190407T233001Z/outputs/ Error Log: controller-0:~$ gnocchi metric list Unable to establish connection to http://localhost:8041/v1/metric?: HTTPConnectionPool(host='localhost', port=8041): Max retries exceeded with url: /v1/metric (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) controller-0:~$ Steps: 1. Register Gnocchi service in Keystone openstack role add --project service --user ceilometer admin openstack user create --domain default --password-prompt ceilometer openstack service create --name ceilometer --description "Telemetry" metering openstack user create --domain default --password-prompt gnocchi openstack service create --name gnocchi --description "Metric Service" metric openstack role add --project service --user gnocchi admin 2. Create the Metric service API endpoints: openstack endpoint create --region RegionOne metric public http://controller:8041 openstack endpoint create --region RegionOne metric internal http://controller:8041 openstack endpoint create --region RegionOne metric admin http://controller:8041 3. Create the database for Gnocchi's indexer- Failed to create data base root at testmachine-S1200SP:/home/testmachine# mysql -u root -p The program 'mysql' can be found in the following packages: * mysql-client-core-5.7 * mariadb-client-core-10.0 Try: apt install 4. Edit the /etc/gnocchi/gnocchi.conf file and add Keystone options -(In Controller-0) >> system helm-override-update gnocchi openstack --values gnocchi-keystone.yaml >> system application-apply stx-openstack cat > gnocchi-keystone.yamll < file_basepath: /var/lib/gnocchi driver: file metricd: metric_processing_delay: 1 EOF 5. Finally execute gnocchi cli command for metrics $ gnocchi metric list , Its getting connection error. Best Regards, JP Juan Pablo Gomez Software Quality Assurance Engineer OTC Edge Computing -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vm.rod25 at gmail.com Wed Apr 24 18:32:10 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Wed, 24 Apr 2019 13:32:10 -0500 Subject: [Starlingx-discuss] Problem getting metrics with Gnocchi in StarlingX In-Reply-To: <0483622846A57742B81A944248DD69042FC4945D@fmsmsx101.amr.corp.intel.com> References: <0483622846A57742B81A944248DD69042FC4945D@fmsmsx101.amr.corp.intel.com> Message-ID: Have you consider using a container solution already proved : https://julien.danjou.info/using-gnocchi-with-docker/ I will give a try and see if I can use this for the metrics project that we have here : https://github.com/starlingx-staging/tools-contrib/tree/master/stx-metrics/footprint will let you know if this docker solution works for me regards On Wed, Apr 24, 2019 at 1:19 PM Gomez, Juan P wrote: > > Hi, > > > > While trying to get metrics from cli command $ gnocchi metric list, a connection failure is displayed > > > > 1. Does anyone know if We are missing a new configuration for the recent Container architecture for Gnocchi ? > > > > > > Procedure: > > > > Machine Config: > > Config: MN-Local > > ISO: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190407T233001Z/outputs/ > > > > Error Log: > > controller-0:~$ gnocchi metric list > > Unable to establish connection to http://localhost:8041/v1/metric?: HTTPConnectionPool(host='localhost', port=8041): Max retries exceeded with url: /v1/metric (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) > > controller-0:~$ > > > > Steps: > > > > 1. Register Gnocchi service in Keystone > > openstack role add --project service --user ceilometer admin > > openstack user create --domain default --password-prompt ceilometer > > openstack service create --name ceilometer --description "Telemetry" metering > > openstack user create --domain default --password-prompt gnocchi > > openstack service create --name gnocchi --description "Metric Service" metric > > openstack role add --project service --user gnocchi admin > > > > 2. Create the Metric service API endpoints: > > openstack endpoint create --region RegionOne metric public http://controller:8041 > > openstack endpoint create --region RegionOne metric internal http://controller:8041 > > openstack endpoint create --region RegionOne metric admin http://controller:8041 > > > > 3. Create the database for Gnocchi’s indexer- Failed to create data base > > root at testmachine-S1200SP:/home/testmachine# mysql -u root -p > > The program 'mysql' can be found in the following packages: > > * mysql-client-core-5.7 > > * mariadb-client-core-10.0 > > Try: apt install > > > > 4. Edit the /etc/gnocchi/gnocchi.conf file and add Keystone options -(In Controller-0) > > >> system helm-override-update gnocchi openstack --values gnocchi-keystone.yaml > > >> system application-apply stx-openstack > > > > cat > gnocchi-keystone.yamll < > conf: > > plugins: > > gnocchi_conf: > > api: > > auth_mode: keystone > > keystone_authtoken: > > auth_type: password > > auth_url: http://controller:5000/v3 > > project_domain_name: Default > > user_domain_name: Default > > project_name: service > > username: gnocchi > > password: Madawaska at 1 > > interface: internalURL > > region_name: RegionOne > > indexer: > > url: mysql+pymysql://gnocchi:Madawaska at 1@controller/gnocchi > > storage: > > coordination_url: file:///var/lib/gnocchi/locks > > file_basepath: /var/lib/gnocchi > > driver: file > > metricd: > > metric_processing_delay: 1 > > EOF > > > > 5. 
Finally execute gnocchi cli command for metrics $ gnocchi metric list , Its getting connection error. > > > > > > Best Regards, > > JP > > > > > > Juan Pablo Gomez > > Software Quality Assurance Engineer > > OTC Edge Computing > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Chris.Winnicki at windriver.com Wed Apr 24 18:32:23 2019 From: Chris.Winnicki at windriver.com (Winnicki, Chris) Date: Wed, 24 Apr 2019 18:32:23 +0000 Subject: [Starlingx-discuss] Issue when uploading large log files when opening launchpads Message-ID: <7E4792BA14B1DE4BAB354DF77FE0233ABC8B04C0@ALA-MBD.corp.ad.wrs.com> Uploading large log files is an issue when opening StarlingX launchpads. Running "collect" can often generate a tar file in excess of 500MB. Launchpads rejects uploads greater than (approximately) 30 or 40 MB. Current solution is to use "split" to chop up a large tar / log file, upload the smaller individual files and then reconstruct them to the original tar, ex: split -b 29M ALL_NODES_20190422.191652.tar ALL_NODES_20190422.191652.tar.part cat ALL_NODES_20190422.191652.tar.part* > ALL_NODES_20190422.191652.tar The above method is time consuming; both - for the reporter and the assignee. Below are two defects that were opened using the above method: https://bugs.launchpad.net/starlingx/+bug/1826221 https://bugs.launchpad.net/starlingx/+bug/1826227 Would anyone have a solution for uploading large log files for LPs? Do we have any shared project space that could be used for this purpose - so we can upload a large file in one shot and then simply link to it in LP ? Regards, Chris Winnicki chris.winnicki at windriver.com 613-963-1329 -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Wed Apr 24 18:42:08 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 24 Apr 2019 13:42:08 -0500 Subject: [Starlingx-discuss] Openstackclient will move to a container In-Reply-To: References: Message-ID: On Wed, Apr 24, 2019 at 12:59 PM Dinescu, Stefan wrote: > As part of storyboard [0], openstackclients will move from a baremetal installation to being run inside a container. Perhaps my understanding of the k8s architecture is lacking but I really do not understand exactly why you would want to put a CLI into a container to be called from outside the container. OSC is already slow enough on startup, adding the overhead of a docker exec (how I am imagining you will actually use it) just makes that even worse. dt -- Dean Troyer dtroyer at gmail.com From Ian.Jolliffe at windriver.com Wed Apr 24 18:51:54 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Wed, 24 Apr 2019 18:51:54 +0000 Subject: [Starlingx-discuss] [TSC] Minutes 4/18 TSC meeting Message-ID: <1D1EBE1B-9DAC-451D-9330-929D769D0968@windriver.com> Hi all; Here are the notes from the call last week: Release Plan Update (TSC) Options at https://docs.google.com/spreadsheets/d/1HUwbsaSerzFRuvXVB_qvoGdI0Chx1YiiA2WYHwvIoYI/edit#gid=0 Quick review of options Make a call on which option to pursue. TSC reached consensus - on delaying Distributed Cloud and we selected option 2 - as recommended by the release team. Release date will be 8/30/2019. 
Curtis - Creation of Packet.com infrastructure Special Interest Group Only exist for say 4-6 months as we figure out how we want to use packet A central place to discuss use, requirements, access, infra, ect Once we have standardized our use, disband the SIG as other teams will be using it Weekly meeting Additional Use case ideas community sandbox for potential collaborative troubleshooting PSA - OpenDev infrastructure update happening tomorrow - Gerrit will be down for part of the 8hour maintenance window. This was raised on Community call UTC - 1500 IRC is a good place to reach out if there are issues. ML announcement: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-April/004123.html PSA - Release 3 content ideas etherpad is available - feel free to chime in prior to PTG https://etherpad.openstack.org/p/stx-ptg-denver -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Wed Apr 24 19:34:15 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Wed, 24 Apr 2019 19:34:15 +0000 Subject: [Starlingx-discuss] Reverting commit due to load breakage Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8F814@ALA-MBD.corp.ad.wrs.com> The following commit has broken installations and is being reverted: https://review.opendev.org/#/c/652803 The failure happens after installing the first controller and unlocking it. None of the SM managed services will start. There will be an error like this in /var/log/puppet/latest/puppet.log: 2019-04-24T18:28:24.486 ^[[1;31mError: 2019-04-24 18:28:24 +0000 /Stage[main]/Platform::Kubernetes::Coredns/Exec[restrict coredns to master nodes]/returns: change from notrun to 0 failed: kubectl --kubeconfig=/etc/kubernetes/admin.conf -n kube-system patch deployment coredns -p '{"spec":{"template":{"spec":{"nodeSelector":{"node-role.kubernetes.io/master":""}}}}}' returned 1 instead of one of [0]^[[0m Bart -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Apr 24 19:54:30 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 24 Apr 2019 19:54:30 +0000 Subject: [Starlingx-discuss] Meetings during the Summit next week? In-Reply-To: <869C2570-5676-4C18-A3ED-E0710FFC0206@gmail.com> References: <869C2570-5676-4C18-A3ED-E0710FFC0206@gmail.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A2B767@ALA-MBD.corp.ad.wrs.com> Hi Ildiko, Tuesday - just 2 (Distro OpenStack & Test) both cancelled (Bruce & Ada). Thursday - I think you just need confirmation from the Build & STX in a Box meetings... - Networking (7:15am MDT): before 9am MDT - TSC (8am MDT): before 9am MDT - Build (9am MDT): Cesar? - STX in a Box (10:30am MDT): ? - Release (12pm MDT): we can skip it Friday - no regular meetings. Bill.... -----Original Message----- From: Ildiko Vancsa Sent: Wednesday, April 24, 2019 1:17 PM To: starlingx-discuss at lists.starlingx.io Cc: Zvonar, Bill Subject: Meetings during the Summit next week? Hi StarlingX Community, As next week is the Open Infrastructure Summit and PTG I wanted to check if all the community/project calls will be kept as I would like to re-use the Zoom account to provide remote participation options for a few sessions at the event. 
The Summit will run in Mountain Time and I would need the Zoom account at: * Tuesday (April 30) 10:40am - 12:30pm, 2:30pm - 3:20pm - Edge Forum sessions * Thursday (May 2) 9am - 6pm - Edge Wg and StarlingX PTG sessions * Friday (May 3) 9am - 6pm - StarlingX PTG session Please let me know if you there is any collision with the above mentioned slots where you still plan to run the calls and I will find another option for those slots. I believe there might be a few calls on Thursday but otherwise it should work. Thanks, Ildikó From Numan.Waheed at windriver.com Wed Apr 24 20:26:19 2019 From: Numan.Waheed at windriver.com (Waheed, Numan) Date: Wed, 24 Apr 2019 20:26:19 +0000 Subject: [Starlingx-discuss] QAT upgrade testing In-Reply-To: <90D309A9E5805640B40D067D1B8EC8BF626963E9@SHSMSX103.ccr.corp.intel.com> References: <90D309A9E5805640B40D067D1B8EC8BF626963E9@SHSMSX103.ccr.corp.intel.com> Message-ID: <3CAA827B7A79BA46B15B280EC82088FE4829F45A@ALA-MBD.corp.ad.wrs.com> Hi Chris, Can you please provide the steps for QAT testing. Thanks, Numan. From: Wang, Hai Tao Sent: April-24-19 10:29 AM To: Waheed, Numan ; starlingx-discuss at lists.starlingx.io Cc: Perez, Ricardo O Subject: QAT upgrade testing Hi Numan, We are now testing the QAT upgrade on CentOS7.6. For functional test before upgrade, we start to verify guest can be launched with multiple crypto VFs. Could you share some detail step on how to create a new flavor and add extra-spec for QAT device? And it is nice if you provide the corresponding guest VM image to support the test so that we can go through the basic QAT test. Thanks Haitao From: Waheed, Numan [mailto:Numan.Waheed at windriver.com] Sent: Tuesday, April 16, 2019 4:50 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Upstreaming Automation Framework StarlingX community has felt the lack of an automation framework since the beginning of this project. I am excited to share that we are working on upstreaming the automation framework that Wind River has been using for over three years now. This automation framework is based on PyTest but has been customized by adding Keywords that help test case creation simple and quick for this project. PyTest was chosen as automation framework because of its maintainability, debugability, flexibility and scalability. It has simple syntax and parametrization capability that allows to scale quickly. It possesses strong support for test fixtures and state management via setup/teardown hooks. Test case selection and deselection is fairly easy with the use of Markers. As mentioned earlier, this framework has been in use for over three years. The framework and a set of test cases will become available to community in phases. In the first phase, we will be upstreaming the framework and related keywords. Next phase will include upstreaming the test case. We also plan to create a wiki for helping community members in using this framework and executing automated test cases or writing their own test cases. Stay tuned. Numan. -------------- next part -------------- An HTML attachment was scrubbed... 
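On Haitao's question above about creating a flavor with an extra-spec for a QAT device, the generic PCI-alias flow looks roughly like this (a sketch only: the alias name "qat-vf" is an assumption and has to match whatever pci alias is actually configured for the QAT VFs in nova.conf, and the image/network names are placeholders):

# create a flavor and attach the QAT VF alias as a PCI passthrough property
openstack flavor create --vcpus 2 --ram 2048 --disk 20 qat-vf-flavor
openstack flavor set qat-vf-flavor --property "pci_passthrough:alias"="qat-vf:1"

# boot a guest with that flavor; the guest image only needs the QAT driver inside
openstack server create --flavor qat-vf-flavor --image <guest-image> --network <tenant-net> qat-test-vm

Once the guest is up, the VF should be visible inside it with lspci.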
URL: From michael.l.tullis at intel.com Wed Apr 24 20:30:37 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 24 Apr 2019 20:30:37 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 4/24/2019 Message-ID: <3808363B39586544A6839C76CF81445EA1B060CF@ORSMSX104.amr.corp.intel.com> For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings. -- Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From Marvin.Huang at windriver.com Wed Apr 24 20:33:18 2019 From: Marvin.Huang at windriver.com (Huang, Marvin) Date: Wed, 24 Apr 2019 20:33:18 +0000 Subject: [Starlingx-discuss] questions about '[Feature] Huge page management' https://storyboard.openstack.org/#!/story/2004763 Message-ID: <74D9C1EDDC44EF468303629CF9A2832C9CE484C6@ALA-MBD.corp.ad.wrs.com> Hi Austin, According to (Storyboard) 2004763, I know that you're working on the feature "Huge page management". I've some questions about the feature. Though one of its 2 tasks is still in Review (the other is shown merged), you might now have the answers already. In the description of the Story, there are requirement: "- Enable k8s huge page feature for worker nodes that do not have the openstack compute label. It should be disabled otherwise." Questions: what is this meaning to users? By 'Enable', is it meaning users can modify memory allocation on the node? (via the following): system host-modify [-2M <2M hugepages number>] [-1G <1G hugepages number>] [-f ] ... or Horizon: Admin -> Platform -> Host Inventory ... Otherwise ('disabled'), the CLIs (system host-memory-xxx) will reject any requests? Or the corresponding Horizon pages do not have any items to update the memory application? Or those were disabled? "- Automatically defaults for worker nodes with openstack compute label. Changes will be applied on the unlock. - Current 2M huge page default settings - 1-1G huge page per numa node for vswitch " Questions: in this situation, is the k8s huge page feature disabled (according to the above requirement)? And the (host-memory) CLIs will reject any requests? And a question related with VMs: If a VM using huge page (with flavor having 'hw:mem_page_size=large' or 'hw:mem_page_size=1048576') is launched, will the free memory pages decreased accordingly on the worker it's running on? That is, if the VM is consuming 1G huge-page, the number of free page of 1G size on the hosting worker should be reduced by 1. Is this still the expected behavior? This is the assumption in https://bugs.launchpad.net/starlingx/+bug/1813325. Regards, Marvin -------------- next part -------------- An HTML attachment was scrubbed... URL: From cesar.lara at intel.com Wed Apr 24 22:29:46 2019 From: cesar.lara at intel.com (Lara, Cesar) Date: Wed, 24 Apr 2019 22:29:46 +0000 Subject: [Starlingx-discuss] Meetings during the Summit next week? In-Reply-To: <869C2570-5676-4C18-A3ED-E0710FFC0206@gmail.com> References: <869C2570-5676-4C18-A3ED-E0710FFC0206@gmail.com> Message-ID: <0B566C62EC792145B40E29EFEBF1AB4710FF880F@fmsmsx123.amr.corp.intel.com> I'm canceling Build and Multi-OS meetings the summit week, so , no issues. 
Regards Cesar Lara -----Original Message----- From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] Sent: Wednesday, April 24, 2019 12:17 PM To: starlingx-discuss at lists.starlingx.io Cc: Zvonar, Bill Subject: [Starlingx-discuss] Meetings during the Summit next week? Hi StarlingX Community, As next week is the Open Infrastructure Summit and PTG I wanted to check if all the community/project calls will be kept as I would like to re-use the Zoom account to provide remote participation options for a few sessions at the event. The Summit will run in Mountain Time and I would need the Zoom account at: * Tuesday (April 30) 10:40am - 12:30pm, 2:30pm - 3:20pm - Edge Forum sessions * Thursday (May 2) 9am - 6pm - Edge Wg and StarlingX PTG sessions * Friday (May 3) 9am - 6pm - StarlingX PTG session Please let me know if you there is any collision with the above mentioned slots where you still plan to run the calls and I will find another option for those slots. I believe there might be a few calls on Thursday but otherwise it should work. Thanks, Ildikó _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Wed Apr 24 23:30:47 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 24 Apr 2019 23:30:47 +0000 Subject: [Starlingx-discuss] no non-OpenStack dist call next week Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F276C5@SHSMSX104.ccr.corp.intel.com> All, China team will be on Labor Day holiday thus I am cancelling the non-OpenStack dist call for next week. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Thu Apr 25 01:54:33 2019 From: austin.sun at intel.com (Sun, Austin) Date: Thu, 25 Apr 2019 01:54:33 +0000 Subject: [Starlingx-discuss] questions about '[Feature] Huge page management' https://storyboard.openstack.org/#!/story/2004763 In-Reply-To: <74D9C1EDDC44EF468303629CF9A2832C9CE484C6@ALA-MBD.corp.ad.wrs.com> References: <74D9C1EDDC44EF468303629CF9A2832C9CE484C6@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi, Marvin: This feature is not directly related with "system host-memory-modify" function, user still should be able to modify mem config as before. This feature is enable K8S feature-gate as describe in [1]. And K8S enabling hugepage feature is opposite to compute label ( means compute label is tagged, then k8s hugepage feature is disabled (false), if compute label is not tagged , then k8s hugepage is enabled ) About your question about VM hugepage decrease, I did not dig into VM mem , so I cannot give more comments. [1] https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/ Thanks. BR Austin Sun. From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Thursday, April 25, 2019 4:33 AM To: Sun, Austin Cc: starlingx-discuss at lists.starlingx.io Subject: questions about '[Feature] Huge page management' https://storyboard.openstack.org/#!/story/2004763 Hi Austin, According to (Storyboard) 2004763, I know that you're working on the feature "Huge page management". I've some questions about the feature. Though one of its 2 tasks is still in Review (the other is shown merged), you might now have the answers already. In the description of the Story, there are requirement: "- Enable k8s huge page feature for worker nodes that do not have the openstack compute label. It should be disabled otherwise." 
Questions: what is this meaning to users? By 'Enable', is it meaning users can modify memory allocation on the node? (via the following): system host-modify [-2M <2M hugepages number>] [-1G <1G hugepages number>] [-f ] ... or Horizon: Admin -> Platform -> Host Inventory ... Otherwise ('disabled'), the CLIs (system host-memory-xxx) will reject any requests? Or the corresponding Horizon pages do not have any items to update the memory application? Or those were disabled? "- Automatically defaults for worker nodes with openstack compute label. Changes will be applied on the unlock. - Current 2M huge page default settings - 1-1G huge page per numa node for vswitch " Questions: in this situation, is the k8s huge page feature disabled (according to the above requirement)? And the (host-memory) CLIs will reject any requests? And a question related with VMs: If a VM using huge page (with flavor having 'hw:mem_page_size=large' or 'hw:mem_page_size=1048576') is launched, will the free memory pages decreased accordingly on the worker it's running on? That is, if the VM is consuming 1G huge-page, the number of free page of 1G size on the hosting worker should be reduced by 1. Is this still the expected behavior? This is the assumption in https://bugs.launchpad.net/starlingx/+bug/1813325. Regards, Marvin -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Apr 23 13:06:12 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 23 Apr 2019 13:06:12 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack Distro meeting, 4/24 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F25414@SHSMSX104.ccr.corp.intel.com> Agenda for 4/24 meeting: - Ceph upgrade status 1. Ceph dev build validation status (Fernando) 2. test plan clarification (Fernando/Daniel) 3. Patch rebase (Daniel) - QAT driver upgrade status 1. Dev status (base rebase, QAT driver load, ISO generation, etc) (Haitao) 2. validation prepration (Ricardo) - libvirt/qemu patch reduction PR merge status (Jim/Dean) - Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Waheed, Numan; Hu, Yong; starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Shang, Dehao; Wold, Saul; Lin, Shuicheng; Zhu, Vivian; Somerville, Jim; Sun, Austin; 'Khalil, Ghada'; Jones, Bruce E; Troyer, Dean; 'Rowsell, Brent' Cc: 'Poncea, Ovidiu'; Gomez, Juan P; Lara, Cesar; Fang, Liang A; Cobbley, David A; 'Chen, Jacky'; Perez Rodriguez, Humberto I; Martinez Monroy, Elio; 'Seiler, Glenn'; Arce Moreno, Abraham; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Shuquan Huang; 'Young, Ken'; Perez Carranza, Jose; Hu, Wei W; Martinez Landa, Hayde; Armstrong, Robert H Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, April 24, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . 
Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From fernando.hernandez.gonzalez at intel.com Tue Apr 23 23:04:29 2019 From: fernando.hernandez.gonzalez at intel.com (Hernandez Gonzalez, Fernando) Date: Tue, 23 Apr 2019 23:04:29 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack Distro meeting, 4/24 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F25414@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35F25414@SHSMSX104.ccr.corp.intel.com> Message-ID: <03D458D5BAFF6041973594B00B4E58CE5D69BE7F@CRSMSX104.amr.corp.intel.com> Adding content to the agenda. Fernando Hernandez Gonzalez Software Cloud Engineer ____________________________________ Office: +52.33.16.45.01.34 inet 86450134 -----Original Message----- From: Xie, Cindy Sent: Tuesday, April 23, 2019 8:06 AM To: starlingx-discuss at lists.starlingx.io Cc: Rowsell, Brent ; Wold, Saul ; Hernandez Gonzalez, Fernando ; Badea, Daniel ; Wang, Hai Tao ; Perez, Ricardo O ; Somerville, Jim ; Khalil, Ghada ; Troyer, Dean Subject: Agenda: Weekly StarlingX non-OpenStack Distro meeting, 4/24 Agenda for 4/24 meeting: - Ceph upgrade status 1. Ceph dev build validation status (Fernando). Pre-merged basic test ran. __________________________________________________________________________________________________ # | Test ID | Comments 1 | STOR_TIER_005 | Per Frank Miller: ...deferred until task 30351 under the HELM SB 2003909 is completed 2 | STOR_TIER_006 | Per Frank Miller: ...deferred until task 30351 under the HELM SB 2003909 is completed 3 | STOR_TIER_007 | Per Frank Miller: ...deferred until task 30351 under the HELM SB 2003909 is completed 4 | STOR_TIER_008 | Per Frank Miller: ...deferred until task 30351 under the HELM SB 2003909 is completed **SB- https://storyboard.openstack.org/#!/story/2003909 7 | STOR_PROCESS_011 | - WIP - Emails have been back and forward and test case has been re-worked. | Pending questions: | @Daniel. - Does StarlingX Ceph solution require at least ceph-mgr, or 2 (active and standby) at the same time? It seems that only one ceph-mgr also works. - In the case I described above, once ceph-mgr on controller-1 took over the active role, ceph-mgr on controller-0 hardly won the chance to be "active" any more. Is it reasonable? - By "systemctl status sm.service | grep ceph-mgr", we know ceph-mgr daemon is managed by sm.service, but why sm.service did not restart ceph-mgr WHEN there is still a ceph-mgr on another controller? | @Daniel/Yong. - For STEP 8. Need guidance about what ceph folder/filename must be renamed to get the ceph monitor process killed and never come back. It seems that renaming the ceph services from path "/usr/lib/systemd/system" is not working for this quest. - Do you believe is this still a valid test case? We can focused on kill the service but Is this going to happen to a customer, Is this a real scenario? 8 | STOR_PROCESS_012 | - WIP - Emails have been back and forward and test case has been re-worked. | Pending questions: | This test case was executed almost completely, the only step missing is the "Step 9" when killing the osd process and never back. I need @Daniel/Yong/Tingkie feedback in order to kill ceph service to never come back once they are down. 17 | STOR_FAULT_023 | Not Attempted 20 | STOR_PART_030 | Not Attempted 2. test plan clarification (Fernando/Daniel) 3. Patch rebase (Daniel) - QAT driver upgrade status 1. Dev status (base rebase, QAT driver load, ISO generation, etc) (Haitao) 2. 
validation prepration (Ricardo) - libvirt/qemu patch reduction PR merge status (Jim/Dean) - Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Waheed, Numan; Hu, Yong; starlingx-discuss at lists.starlingx.io; Liu, ZhipengS; Shang, Dehao; Wold, Saul; Lin, Shuicheng; Zhu, Vivian; Somerville, Jim; Sun, Austin; 'Khalil, Ghada'; Jones, Bruce E; Troyer, Dean; 'Rowsell, Brent' Cc: 'Poncea, Ovidiu'; Gomez, Juan P; Lara, Cesar; Fang, Liang A; Cobbley, David A; 'Chen, Jacky'; Perez Rodriguez, Humberto I; Martinez Monroy, Elio; 'Seiler, Glenn'; Arce Moreno, Abraham; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Shuquan Huang; 'Young, Ken'; Perez Carranza, Jose; Hu, Wei W; Martinez Landa, Hayde; Armstrong, Robert H Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, April 24, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From maria.g.perez.ibarra at intel.com Thu Apr 25 04:11:46 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 25 Apr 2019 04:11:46 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL]| 40 TCs FAIL Sanity Platform 07 TCs [FAIL]| 07 TCs FAIL TOTAL: 57 TCS [Fail : 47] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] | 42 TCs FAIL Sanity Platform 05 TCs [FAIL] | 05 TCs FAIL TOTAL: 57 TCS [Fail : 47 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] - some pods are failing during BM sanity execution. 
https://bugs.launchpad.net/starlingx/+bug/1826308 - Sanity Bare metal was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ - Sanity Virtual was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ - Tomorrow in sanity virtual we will perform a double check with the latest ISO that includes the fixes. For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Thu Apr 25 09:41:44 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 25 Apr 2019 09:41:44 +0000 Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack distro meeting Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F28708@SHSMSX104.ccr.corp.intel.com> * Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2279 bytes Desc: not available URL: From Frank.Miller at windriver.com Thu Apr 25 13:20:25 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Thu, 25 Apr 2019 13:20:25 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 In-Reply-To: References: Message-ID: Maria: It looks like the commit referenced yesterday [1] is not addressing the issue in your BM labs. Can you set up a live debug session so that some container SMEs can investigate? 
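To speed that session up, the usual kubectl triage output would be a good starting point (a rough sketch; the openstack namespace and the pod names are examples only):

# list pods that are not Running/Completed, across all namespaces
kubectl get pods --all-namespaces -o wide | grep -vE 'Running|Completed'

# capture events and logs for each failing pod (add -c <container> for multi-container pods)
kubectl -n openstack describe pod <pod-name>
kubectl -n openstack logs <pod-name>
kubectl -n openstack logs <pod-name> --previous   # only if the container has restarted

# node conditions and recent cluster events, to spot resource pressure
kubectl describe nodes | grep -A6 'Conditions:'
kubectl get events --all-namespaces --sort-by='.metadata.creationTimestamp' | tail -50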
Frank [1] https://review.opendev.org/#/c/655240/ From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, April 25, 2019 12:12 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL]| 40 TCs FAIL Sanity Platform 07 TCs [FAIL]| 07 TCs FAIL TOTAL: 57 TCS [Fail : 47] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] | 42 TCs FAIL Sanity Platform 05 TCs [FAIL] | 05 TCs FAIL TOTAL: 57 TCS [Fail : 47 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] Standard - Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] - some pods are failing during BM sanity execution. https://bugs.launchpad.net/starlingx/+bug/1826308 - Sanity Bare metal was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ - Sanity Virtual was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ - Tomorrow in sanity virtual we will perform a double check with the latest ISO that includes the fixes. For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Thu Apr 25 13:41:50 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Thu, 25 Apr 2019 13:41:50 +0000 Subject: [Starlingx-discuss] Openstackclient will move to a container In-Reply-To: References: Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB47FC18@ALA-MBD.corp.ad.wrs.com> Dean, The openstack services are containerized and are decoupled from the host platform. Each time we move to a new openstack I do not think we want to put a dependency on the host platform to upgrade the OSC package and any dependencies, hence the container for local client access. Note this is a static container managed by k8s, not dynamically created on each OSC invocation. 
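For illustration only, local access could then stay close to today's user experience with a thin wrapper around that long-running pod (the namespace and label below are assumptions, not the final design):

# find the clients pod and alias the CLI to exec into it
CLIENTS_POD=$(kubectl -n openstack get pod -l application=clients -o jsonpath='{.items[0].metadata.name}')
alias openstack="kubectl -n openstack exec -it ${CLIENTS_POD} -- openstack"

# usage from the controller shell is then unchanged
openstack endpoint list

This keeps a single exec per command as the only added overhead, on top of the OSC startup cost Dean mentions below.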
Brent -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Wednesday, April 24, 2019 2:42 PM To: Dinescu, Stefan Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Openstackclient will move to a container On Wed, Apr 24, 2019 at 12:59 PM Dinescu, Stefan wrote: > As part of storyboard [0], openstackclients will move from a baremetal installation to being run inside a container. Perhaps my understanding of the k8s architecture is lacking but I really do not understand exactly why you would want to put a CLI into a container to be called from outside the container. OSC is already slow enough on startup, adding the overhead of a docker exec (how I am imagining you will actually use it) just makes that even worse. dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Thu Apr 25 13:44:25 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 25 Apr 2019 13:44:25 +0000 Subject: [Starlingx-discuss] How to update an installed image In-Reply-To: References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8F409@ALA-MBD.corp.ad.wrs.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA47DDE5@ALA-MBD.corp.ad.wrs.com> The software update/patching framework, as Bart notes, provides a controlled mechanism for system software updates (such as delivery of CVE fixes), bundling updated packages for user management of published fixes and enhancements. In general, this is a post-release mechanism, but can also be used by designers as part of their development in applying or removing controlled updates to a running system. For formal patches, there are considerations around patch removal, interoperability between nodes, config changes, etc. The upgrade framework provides for migration from one release to the next, supporting database migrations, config changes, etc, and is not for moving between weekly or nightly development builds. The only recommended and supported approach for migrating to a new nightly development build is a reinstallation. While some merged development changes may be patchable to a running system, there is no guarantee all updates will be (ex: api changes, db schema changes). We will get wikis published describing the software update/patching framework and build tools shortly. Cheers, Don. -----Original Message----- From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] Sent: Wednesday, April 24, 2019 11:45 AM To: Saul Wold Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to update an installed image On Wed, Apr 24, 2019 at 10:05 AM Saul Wold wrote: > > > > On 4/24/19 5:50 AM, Wensley, Barton wrote: > > Victor, > > > > StarlingX has two mechanisms for updating software on a running system: > > - updates (also known as patching): This allows the user to build an updated version of a set of RPMs, bundle them into a patch file and apply them to a running system. We are also working on an update mechanism for containerized applications, which will allow new versions the docker images for an application to be deployed to a running system. > Is this process being used to build patches between the daily/weekly > builds? Can we even patch between these? If I understand this uses the > "smart" package manager, or is that for the upgrade process? 
> > I understand that there is not yet a process for updating the > containers, is there a specification being worked on for describing how > the containerized applications will be updated? > > Sau! > > - upgrades: This allows the user to upgrade a running system from one StarlingX release to the next, including the OS, RPMs and applications. This mechanism will only be supported when moving between StarlingX releases - any fixes delivered to an existing release will use the update mechanism. > > > > The update mechanism would be used to deliver a CVE fix (as per your example) to a running system. > > > > Bart > > Thanks a lot, Bart, following Saul question, is there any place where I can get documentation about it. The actual problem that I have is that I have a dedicated HW for measuring the footprint of the STX image ( described in the performance presentation I gave 2 weeks ago ) but I really don't want to reinstall the ISO every time there is a new ISO released in CEGN that I have to measure and send the results to my personal DB to track any degradation. Curtis and I have the AR to make it work on packet infra , Curtis has been working very hard to make the packet infra, I am in charge fo the test suite. But I had the roadblock of asking myself, do I have to reinstall the ISO every time. I am glad that you clarify me that point that there is a way, now if you can point to more documentation about it I can use to unblock my AR Thanks a lot Victor Rodriguez > > -----Original Message----- > > From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] > > Sent: April 23, 2019 7:02 PM > > To: starlingx-discuss at lists.starlingx.io; Thebeau, Michel > > Subject: Re: [Starlingx-discuss] How to update an installed image > > > > Hi > > > > Based on recommendations from Michael I am going to rewrite my question: > > > > I have a server, all in one with STX simplex configuration but I > > installed the ISO like a month ago, now I want to get the latest > > version of STX > > > > I think this a really important part of the project. I don't see > > myself as sysadmin with a new CVE fixed in the latest version of > > starling x and have to reinstall the full iso in all my nodes. I was > > pretty sure we had this component like starling x update. > > > > Any feedback more than welcome, if this is already a project under > > development is perfect if not, we might spend some time discussing it > > > > Regards > > > > Victor Rodriguez > > > > > > > > > > Something like preupg or update-manager in the case of Centos and Ubuntu > > > > On Mon, Apr 22, 2019 at 9:11 AM Victor Rodriguez wrote: > >> > >> Hi team > >> > >> I would like to know more about the image update mechanism we have in > >> starting X. I have a simplex system installed and I want to keep my > >> system updated with the latest version released in > >> http://mirror.starlingx.cengn.ca/mirror/ but I don't want to reinstall > >> the full ISO again every week. Is there any way to do a sw update in > >> the starling x system so I keep my infrastructure updated w/o having > >> to reinstall the ISO? 
> >> > >> Thanks a lot > >> > >> Regards > >> > >> Victor Rodriguez > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcel at schaible-consulting.de Thu Apr 25 14:04:08 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Thu, 25 Apr 2019 16:04:08 +0200 (CEST) Subject: [Starlingx-discuss] Duplex Config: Cannot unlock host controller-0 without configuring a cluster-host interface. Message-ID: <74456724.918454.1556201048095@communicator.strato.com> Hi, after installing our new 10GB and hopefully compatible network interface Cards I am getting the following error message: [wrsroot at localhost ~(keystone_admin)]$ system host-unlock controller-0 Cannot unlock host controller-0 without configuring a cluster-host interface. Duplex config, iso from 20190325. Any idea where to look? BTW: Can you recommend a newer iso? Thanks Marcek From sgw at linux.intel.com Thu Apr 25 14:19:34 2019 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 25 Apr 2019 07:19:34 -0700 Subject: [Starlingx-discuss] StarlingX / Infra teams sync-up at PTG In-Reply-To: References: <1b8469d0-387f-4853-b487-f9519dcc5130@linux.intel.com> Message-ID: On 4/24/19 8:50 AM, Clark Boylan wrote: > > > On Tue, Apr 23, 2019, at 4:55 PM, Saul Wold wrote: >> >> >> Infra Team, >> >> The StarlingX Project would like to request a 30-60 minute sync up to >> talk about some of the challenges that StarlingX faces with regards to >> build and test infrastructure. >> >> As StarlingX is an integration project that creates a Linux Distribution >> with a Cloud infrastructure on top of it, this makes it more challenging >> to both build and test. The current OpenStack Foundation infrastructure >> is good as building and testing projects such as Nova, Neutron, ... It >> could be used to build and test the individual components of the >> StarlingX Flock, such as Fault and others. It's not well suited to build >> the complete StarlingX ISO and test that ISO. >> >> We want to explore what the existing resources that are available and >> understand how and what we can add to the infrastructure to enable the >> build and testing that StarlingX will require. >> >> We hope that we can find an hour timeslot during the PTG that we can >> talk further about this with both teams. Our timeslots overlap Thrusday >> afternoon and Friday so we could put it on our adgendas. >> >> Please let us know if you have available time slots. > > I'm thinking the best day for us is Friday (we'll want to dig into our team specific items on Thursday). Does Just after the lunch break on Friday (say 1:30pm) in the Infra/QA room work? If so we'll see you there. I'll go ahead and add you to our etherpad [0] as I'm editing that today too. > Clark, We just confirmed the time in the StarlingX TSC. 
A few of us will break off and join you in the Infra/QA Room at 1:30 for about 30 minutes. Looking forward to a great Summit / PTG and meeting with you guys. Sau! > [0] https://etherpad.openstack.org/p/2019-denver-ptg-infra-planning > > Clark > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From yang.liu at windriver.com Thu Apr 25 16:15:09 2019 From: yang.liu at windriver.com (Liu, Yang) Date: Thu, 25 Apr 2019 16:15:09 +0000 Subject: [Starlingx-discuss] QAT upgrade testing In-Reply-To: <3CAA827B7A79BA46B15B280EC82088FE4829F45A@ALA-MBD.corp.ad.wrs.com> References: <90D309A9E5805640B40D067D1B8EC8BF626963E9@SHSMSX103.ccr.corp.intel.com> <3CAA827B7A79BA46B15B280EC82088FE4829F45A@ALA-MBD.corp.ad.wrs.com> Message-ID: <19C65A6E92EA384D809B1772130CD7F8621A94FE@ALA-MBD.corp.ad.wrs.com> Steps for launching VM using QAT devices can be found in upstream docs. https://docs.openstack.org/nova/pike/admin/pci-passthrough.html Note that there's currently an helm chart override issue for nova charts, which resulted in incorrect pci alias and passthrough whitelist configs in nova.conf. https://bugs.launchpad.net/starlingx/+bug/1824831 You can try to override the helm charts to change the nova.conf to workaround it though. BR, Yang From: Waheed, Numan [mailto:Numan.Waheed at windriver.com] Sent: April-24-19 4:26 PM To: Wang, Hai Tao; starlingx-discuss at lists.starlingx.io; Winnicki, Chris Cc: Perez, Ricardo O Subject: Re: [Starlingx-discuss] QAT upgrade testing Hi Chris, Can you please provide the steps for QAT testing. Thanks, Numan. From: Wang, Hai Tao Sent: April-24-19 10:29 AM To: Waheed, Numan ; starlingx-discuss at lists.starlingx.io Cc: Perez, Ricardo O Subject: QAT upgrade testing Hi Numan, We are now testing the QAT upgrade on CentOS7.6. For functional test before upgrade, we start to verify guest can be launched with multiple crypto VFs. Could you share some detail step on how to create a new flavor and add extra-spec for QAT device? And it is nice if you provide the corresponding guest VM image to support the test so that we can go through the basic QAT test. Thanks Haitao From: Waheed, Numan [mailto:Numan.Waheed at windriver.com] Sent: Tuesday, April 16, 2019 4:50 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Upstreaming Automation Framework StarlingX community has felt the lack of an automation framework since the beginning of this project. I am excited to share that we are working on upstreaming the automation framework that Wind River has been using for over three years now. This automation framework is based on PyTest but has been customized by adding Keywords that help test case creation simple and quick for this project. PyTest was chosen as automation framework because of its maintainability, debugability, flexibility and scalability. It has simple syntax and parametrization capability that allows to scale quickly. It possesses strong support for test fixtures and state management via setup/teardown hooks. Test case selection and deselection is fairly easy with the use of Markers. As mentioned earlier, this framework has been in use for over three years. The framework and a set of test cases will become available to community in phases. In the first phase, we will be upstreaming the framework and related keywords. Next phase will include upstreaming the test case. 
We also plan to create a wiki for helping community members in using this framework and executing automated test cases or writing their own test cases. Stay tuned. Numan. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Marvin.Huang at windriver.com Thu Apr 25 16:17:29 2019 From: Marvin.Huang at windriver.com (Huang, Marvin) Date: Thu, 25 Apr 2019 16:17:29 +0000 Subject: [Starlingx-discuss] questions about '[Feature] Huge page management' https://storyboard.openstack.org/#!/story/2004763 In-Reply-To: References: <74D9C1EDDC44EF468303629CF9A2832C9CE484C6@ALA-MBD.corp.ad.wrs.com> Message-ID: <74D9C1EDDC44EF468303629CF9A2832C9CE486E5@ALA-MBD.corp.ad.wrs.com> Hi Austin, Thanks for your information! So given 'This feature is enable K8S feature-gate as describe in [1]': 1 How to tell if this 'hugepage feature' is (currently) enabled or disabled? Any user visible signs in Horizon, CLIs outputs? Or it's transparent to users? Or: a. Is it only reflected in arguments to 'kubeadm init --feature-gates='...,hugepage=enable/disable', which is called to provision a node/master? b. And/or (also) is shown in /var/lib/kubelet/config.yaml, e.g.: controller-0:~$ grep 'featureGate' /var/lib/kubelet/config.yaml -A2 featureGates: HugePages: false c. Any OS level options changes, like options inside /etc/default/grub? 2 Can changing label enable or/and disable the 'hugepage feature'? For example, now assuming worker compute-0 has label 'openstack-compute-node', hence hugepage is disabled: a. We can remove the label 'openstack-compute-node ' using CLI system host-label-remove compute-0 openstack-compute-node b. What the expected the system behavior after the label is removed? The 'hugepage feature' will be enabled after been unlocked, which can be verified using methods in 1? 3 Any user aware difference between the features enabled/disabled? Thanks! Marvin From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Wednesday, April 24, 2019 9:55 PM To: Huang, Marvin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: questions about '[Feature] Huge page management' https://storyboard.openstack.org/#!/story/2004763 Hi, Marvin: This feature is not directly related with "system host-memory-modify" function, user still should be able to modify mem config as before. This feature is enable K8S feature-gate as describe in [1]. And K8S enabling hugepage feature is opposite to compute label ( means compute label is tagged, then k8s hugepage feature is disabled (false), if compute label is not tagged , then k8s hugepage is enabled ) About your question about VM hugepage decrease, I did not dig into VM mem , so I cannot give more comments. [1] https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/ Thanks. BR Austin Sun. From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Thursday, April 25, 2019 4:33 AM To: Sun, Austin Cc: starlingx-discuss at lists.starlingx.io Subject: questions about '[Feature] Huge page management' https://storyboard.openstack.org/#!/story/2004763 Hi Austin, According to (Storyboard) 2004763, I know that you're working on the feature "Huge page management". I've some questions about the feature. Though one of its 2 tasks is still in Review (the other is shown merged), you might now have the answers already. In the description of the Story, there are requirement: "- Enable k8s huge page feature for worker nodes that do not have the openstack compute label. It should be disabled otherwise." 
Questions: what is this meaning to users? By 'Enable', is it meaning users can modify memory allocation on the node? (via the following): system host-modify [-2M <2M hugepages number>] [-1G <1G hugepages number>] [-f ] ... or Horizon: Admin -> Platform -> Host Inventory ... Otherwise ('disabled'), the CLIs (system host-memory-xxx) will reject any requests? Or the corresponding Horizon pages do not have any items to update the memory application? Or those were disabled? "- Automatically defaults for worker nodes with openstack compute label. Changes will be applied on the unlock. - Current 2M huge page default settings - 1-1G huge page per numa node for vswitch " Questions: in this situation, is the k8s huge page feature disabled (according to the above requirement)? And the (host-memory) CLIs will reject any requests? And a question related with VMs: If a VM using huge page (with flavor having 'hw:mem_page_size=large' or 'hw:mem_page_size=1048576') is launched, will the free memory pages decreased accordingly on the worker it's running on? That is, if the VM is consuming 1G huge-page, the number of free page of 1G size on the hosting worker should be reduced by 1. Is this still the expected behavior? This is the assumption in https://bugs.launchpad.net/starlingx/+bug/1813325. Regards, Marvin -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Thu Apr 25 17:49:38 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Thu, 25 Apr 2019 17:49:38 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 In-Reply-To: References: Message-ID: <63839F7A-75A9-4266-A2F5-D373391CBAD4@intel.com> Hi Frank, We had a zoom call with Al Bailey to troubleshoot the issues that we are observing. The bug where a single CPU was taking all of the workload is resolved. What we observed seems to be an issue with memory exhaust, additional information was gathered an added to this bug for further troubleshooting: https://bugs.launchpad.net/starlingx/+bug/1826308 If additional information is required, please, just let us know. Thanks & Regards, Cristopher Lemus From: "Miller, Frank" Date: Thursday, April 25, 2019 at 8:24 AM To: "Perez Ibarra, Maria G" , "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Maria: It looks like the commit referenced yesterday [1] is not addressing the issue in your BM labs. Can you set up a live debug session so that some container SMEs can investigate? 
Frank [1] https://review.opendev.org/#/c/655240/ From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, April 25, 2019 12:12 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (link) Status: RED =========================================== Sanity Test is executed in a Containers – Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL]| 40 TCs FAIL Sanity Platform 07 TCs [FAIL]| 07 TCs FAIL TOTAL: 57 TCS [Fail : 47] AIO – Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] | 42 TCs FAIL Sanity Platform 05 TCs [FAIL] | 05 TCs FAIL TOTAL: 57 TCS [Fail : 47 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] Standard – Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] Standard – Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] * some pods are failing during BM sanity execution. https://bugs.launchpad.net/starlingx/+bug/1826308 * Sanity Bare metal was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ * Sanity Virtual was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ * Tomorrow in sanity virtual we will perform a double check with the latest ISO that includes the fixes. For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Thu Apr 25 19:58:56 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Thu, 25 Apr 2019 14:58:56 -0500 Subject: [Starlingx-discuss] How to update an installed image In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA47DDE5@ALA-MBD.corp.ad.wrs.com> References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA8F409@ALA-MBD.corp.ad.wrs.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA47DDE5@ALA-MBD.corp.ad.wrs.com> Message-ID: Thanks Don I really appreciate your guidance here I was really lost, I will wait for the wiki documentation if there is any help I can provide testing feel free to let me know Related to the part of "upgrade framework provides for migration from one release to the next" you mean from last year release to new incoming release only right? like release 2 to 3 or 1 to 2 If that is the case I will be more than happy to add this test case to the starling x test suite to validate. 
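For my own notes, the update flow I understood from Bart's and Don's descriptions looks roughly like this (a sketch only, using the sw-patch CLI; the patch ID and path are placeholders and the exact sub-commands/options should be confirmed against the wikis Don mentioned):

# on the active controller, see what is currently applied
sudo sw-patch query

# upload and apply a patch produced by the patch build tools
sudo sw-patch upload /home/wrsroot/patches/PATCH_0001.patch
sudo sw-patch apply PATCH_0001

# check which hosts still need it, then install host by host
sudo sw-patch query-hosts
sudo sw-patch host-install controller-0

That covers fixes delivered to an existing release; it does not help with nightly builds, which is the gap I mention next.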
I really think we might need a feature for moving between weekly or nightly development builds, I can bring this topic to the next community meeting or if someone in the community needs to have this feature it will be nice to heard feedback from them Regards Victor Rodriguez On Thu, Apr 25, 2019 at 8:44 AM Penney, Don wrote: > > The software update/patching framework, as Bart notes, provides a controlled mechanism for system software updates (such as delivery of CVE fixes), bundling updated packages for user management of published fixes and enhancements. In general, this is a post-release mechanism, but can also be used by designers as part of their development in applying or removing controlled updates to a running system. For formal patches, there are considerations around patch removal, interoperability between nodes, config changes, etc. > > The upgrade framework provides for migration from one release to the next, supporting database migrations, config changes, etc, and is not for moving between weekly or nightly development builds. > > The only recommended and supported approach for migrating to a new nightly development build is a reinstallation. While some merged development changes may be patchable to a running system, there is no guarantee all updates will be (ex: api changes, db schema changes). > > We will get wikis published describing the software update/patching framework and build tools shortly. > > Cheers, > Don. > > -----Original Message----- > From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] > Sent: Wednesday, April 24, 2019 11:45 AM > To: Saul Wold > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] How to update an installed image > > On Wed, Apr 24, 2019 at 10:05 AM Saul Wold wrote: > > > > > > > > On 4/24/19 5:50 AM, Wensley, Barton wrote: > > > Victor, > > > > > > StarlingX has two mechanisms for updating software on a running system: > > > - updates (also known as patching): This allows the user to build an updated version of a set of RPMs, bundle them into a patch file and apply them to a running system. We are also working on an update mechanism for containerized applications, which will allow new versions the docker images for an application to be deployed to a running system. > > Is this process being used to build patches between the daily/weekly > > builds? Can we even patch between these? If I understand this uses the > > "smart" package manager, or is that for the upgrade process? > > > > I understand that there is not yet a process for updating the > > containers, is there a specification being worked on for describing how > > the containerized applications will be updated? > > > > Sau! > > > - upgrades: This allows the user to upgrade a running system from one StarlingX release to the next, including the OS, RPMs and applications. This mechanism will only be supported when moving between StarlingX releases - any fixes delivered to an existing release will use the update mechanism. > > > > > > The update mechanism would be used to deliver a CVE fix (as per your example) to a running system. > > > > > > Bart > > > > > Thanks a lot, Bart, following Saul question, is there any place where > I can get documentation about it. 
> > The actual problem that I have is that I have a dedicated HW for > measuring the footprint of the STX image ( described in the > performance presentation I gave 2 weeks ago ) but I really don't want > to reinstall the ISO every time there is a new ISO released in CEGN > that I have to measure and send the results to my personal DB to track > any degradation. Curtis and I have the AR to make it work on packet > infra , Curtis has been working very hard to make the packet infra, I > am in charge fo the test suite. But I had the roadblock of asking > myself, do I have to reinstall the ISO every time. I am glad that you > clarify me that point that there is a way, now if you can point to > more documentation about it I can use to unblock my AR > > Thanks a lot > > Victor Rodriguez > > > > -----Original Message----- > > > From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] > > > Sent: April 23, 2019 7:02 PM > > > To: starlingx-discuss at lists.starlingx.io; Thebeau, Michel > > > Subject: Re: [Starlingx-discuss] How to update an installed image > > > > > > Hi > > > > > > Based on recommendations from Michael I am going to rewrite my question: > > > > > > I have a server, all in one with STX simplex configuration but I > > > installed the ISO like a month ago, now I want to get the latest > > > version of STX > > > > > > I think this a really important part of the project. I don't see > > > myself as sysadmin with a new CVE fixed in the latest version of > > > starling x and have to reinstall the full iso in all my nodes. I was > > > pretty sure we had this component like starling x update. > > > > > > Any feedback more than welcome, if this is already a project under > > > development is perfect if not, we might spend some time discussing it > > > > > > Regards > > > > > > Victor Rodriguez > > > > > > > > > > > > > > > Something like preupg or update-manager in the case of Centos and Ubuntu > > > > > > On Mon, Apr 22, 2019 at 9:11 AM Victor Rodriguez wrote: > > >> > > >> Hi team > > >> > > >> I would like to know more about the image update mechanism we have in > > >> starting X. I have a simplex system installed and I want to keep my > > >> system updated with the latest version released in > > >> http://mirror.starlingx.cengn.ca/mirror/ but I don't want to reinstall > > >> the full ISO again every week. Is there any way to do a sw update in > > >> the starling x system so I keep my infrastructure updated w/o having > > >> to reinstall the ISO? 
> > >> > > >> Thanks a lot > > >> > > >> Regards > > >> > > >> Victor Rodriguez > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Thu Apr 25 20:21:52 2019 From: scott.little at windriver.com (Scott Little) Date: Thu, 25 Apr 2019 16:21:52 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 74 - Failure! In-Reply-To: <8541491556027476@myt5-f1576e7b5bad.qloud-c.yandex.net> References: <1290207736.225.1555975867328.JavaMail.javamailuser@localhost> <8541491556027476@myt5-f1576e7b5bad.qloud-c.yandex.net> Message-ID: Don Penney also has access. Scott On 2019-04-23 9:51 a.m., Erich Cordoba wrote: > Hi > > I can investigate this, although Scott is the only one with access to > the server in case something needs to be fixed there. > > -- > Sent from Yandex.Mail for mobile > > 23.04.2019, 08:44, "Miller, Frank" : > > Cesar: > > Scott is out this week so is there someone on the Build subteam > who can investigate this failure? Is it related to the repo moves > to OpenDev from the weekend? > > Frank > > > -----Original Message----- > From: build.starlingx at gmail.com > [mailto:build.starlingx at gmail.com ] > Sent: Monday, April 22, 2019 7:31 PM > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] [build-report] > STX_build_master_master - Build # 74 - Failure! > > Project: STX_build_master_master > Build #: 74 > Status: Failure > Timestamp: 20190422T233000Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190422T233000Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Thu Apr 25 23:56:13 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 25 Apr 2019 23:56:13 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190425 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-25 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL]| 37 TCs FAIL Sanity Platform 07 TCs [FAIL]| 05 TCs FAIL TOTAL: 57 TCS [Fail : 42] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] | 52 TCs FAIL Sanity Platform 05 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [Fail : 55 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] - BM environment : some pods are failing during sanity execution https://bugs.launchpad.net/starlingx/+bug/1826308 - Virtual environment : failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ruijing.guo at intel.com Fri Apr 26 01:03:07 2019 From: ruijing.guo at intel.com (Guo, Ruijing) Date: Fri, 26 Apr 2019 01:03:07 +0000 Subject: [Starlingx-discuss] StarlingX PTG Agenda In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB47184F@ALA-MBD.corp.ad.wrs.com> References: <2EE296D083DF2940BF4EBB91D39BB89F40C5AE84@SHSMSX104.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB47184F@ALA-MBD.corp.ad.wrs.com> Message-ID: <2EE296D083DF2940BF4EBB91D39BB89F40C5CA9D@SHSMSX104.ccr.corp.intel.com> Hi, Brent I plan to add networking items in August release. How can I add the topic in PTG? Thanks, -Ruijing From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Friday, April 19, 2019 10:50 PM To: Guo, Ruijing ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX PTG Agenda Hi Ruijing, The agenda for the PTG is still being worked on. Brent From: Guo, Ruijing [mailto:ruijing.guo at intel.com] Sent: Thursday, April 18, 2019 8:23 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX PTG Agenda Hi, All, I am looking for starlingX PTG agenda. In https://etherpad.openstack.org/p/stx-ptg-denver, I can see items but I don't see timeslot for the items. Thanks, -Ruijing -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Brent.Rowsell at windriver.com Fri Apr 26 01:27:45 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Fri, 26 Apr 2019 01:27:45 +0000 Subject: [Starlingx-discuss] StarlingX PTG Agenda In-Reply-To: <2EE296D083DF2940BF4EBB91D39BB89F40C5CA9D@SHSMSX104.ccr.corp.intel.com> References: <2EE296D083DF2940BF4EBB91D39BB89F40C5AE84@SHSMSX104.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB47184F@ALA-MBD.corp.ad.wrs.com> <2EE296D083DF2940BF4EBB91D39BB89F40C5CA9D@SHSMSX104.ccr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB482F58@ALA-MBD.corp.ad.wrs.com> Hi Ruijing, Please add to the networking section in the following etherpad, https://etherpad.openstack.org/p/stx-ptg-denver Thanks, Brent From: Guo, Ruijing [mailto:ruijing.guo at intel.com] Sent: Thursday, April 25, 2019 9:03 PM To: Rowsell, Brent ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX PTG Agenda Hi, Brent I plan to add networking items in August release. How can I add the topic in PTG? Thanks, -Ruijing From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Friday, April 19, 2019 10:50 PM To: Guo, Ruijing >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX PTG Agenda Hi Ruijing, The agenda for the PTG is still being worked on. Brent From: Guo, Ruijing [mailto:ruijing.guo at intel.com] Sent: Thursday, April 18, 2019 8:23 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX PTG Agenda Hi, All, I am looking for starlingX PTG agenda. In https://etherpad.openstack.org/p/stx-ptg-denver, I can see items but I don't see timeslot for the items. Thanks, -Ruijing -------------- next part -------------- An HTML attachment was scrubbed... URL: From cheng1.li at intel.com Fri Apr 26 01:28:50 2019 From: cheng1.li at intel.com (Li, Cheng1) Date: Fri, 26 Apr 2019 01:28:50 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 In-Reply-To: <63839F7A-75A9-4266-A2F5-D373391CBAD4@intel.com> References: <63839F7A-75A9-4266-A2F5-D373391CBAD4@intel.com> Message-ID: Actually, I had also reported the memory issue[1] days ago. Memory exhaust happens because so little 4K memory is allocated for system/software load. [1] https://bugs.launchpad.net/starlingx/+bug/1825814 Thanks, Cheng From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Friday, April 26, 2019 1:50 AM To: Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Hi Frank, We had a zoom call with Al Bailey to troubleshoot the issues that we are observing. The bug where a single CPU was taking all of the workload is resolved. What we observed seems to be an issue with memory exhaust, additional information was gathered an added to this bug for further troubleshooting: https://bugs.launchpad.net/starlingx/+bug/1826308 If additional information is required, please, just let us know. Thanks & Regards, Cristopher Lemus From: "Miller, Frank" > Date: Thursday, April 25, 2019 at 8:24 AM To: "Perez Ibarra, Maria G" >, "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Maria: It looks like the commit referenced yesterday [1] is not addressing the issue in your BM labs. Can you set up a live debug session so that some container SMEs can investigate? 
Frank [1] https://review.opendev.org/#/c/655240/ From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, April 25, 2019 12:12 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (link) Status: RED =========================================== Sanity Test is executed in a Containers – Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL]| 40 TCs FAIL Sanity Platform 07 TCs [FAIL]| 07 TCs FAIL TOTAL: 57 TCS [Fail : 47] AIO – Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] | 42 TCs FAIL Sanity Platform 05 TCs [FAIL] | 05 TCs FAIL TOTAL: 57 TCS [Fail : 47 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] Standard – Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] Standard – Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] - some pods are failing during BM sanity execution. https://bugs.launchpad.net/starlingx/+bug/1826308 - Sanity Bare metal was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ - Sanity Virtual was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ - Tomorrow in sanity virtual we will perform a double check with the latest ISO that includes the fixes. For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From erich.cordoba.malibran at intel.com Fri Apr 26 02:59:27 2019 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Fri, 26 Apr 2019 02:59:27 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 In-Reply-To: References: <63839F7A-75A9-4266-A2F5-D373391CBAD4@intel.com> Message-ID: <92F9583D-D670-442D-A437-E72761E815DB@intel.com> Hi, In this case we have: HugePages_Total: 34104 HugePages_Free: 34104 HugePages_Rsvd: 0 HugePages_Surp: 0 So, I'm not sure if it can be related with 1825814. Also, for people not seeing this issue, how much memory do you have in your baremetal systems? What's the minimum required memory for running an AIO system. Our failing system have 97 GB and free -h shows. total used free shared buff/cache available Mem: 93G 84G 3.2G 66M 5.6G 4.8G Swap: 0B 0B 0B A couple months ago I reported a similar issue[0], in that case after three days in stand-by the system started to throw Out of Memory errors. 
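One thing worth double-checking on a failing node is how much of that "used" figure is huge page reservation rather than process memory. Assuming the default huge page size here is 2 MiB (I have not confirmed the Hugepagesize on this box), the pool above alone would account for roughly 66 GiB, which free counts as "used" even though nothing has touched those pages yet:

    # illustrative check, run on the controller
    grep -E 'HugePages_Total|Hugepagesize' /proc/meminfo
    # e.g. 34104 pages x 2048 kB/page ~= 66.6 GiB reserved up front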
Has anyone performed a longevity test over several days? The working systems might also fail after a while if the memory usage keeps increasing over time. -Erich [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/002923.html From: "Li, Cheng1" Date: Thursday, April 25, 2019 at 8:29 PM To: "Lemus Contreras, Cristopher J" , "Miller, Frank" , "Perez Ibarra, Maria G" , "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Actually, I had also reported the memory issue[1] days ago. Memory exhaustion happens because so little 4K memory is allocated for the system/software load. [1] https://bugs.launchpad.net/starlingx/+bug/1825814 Thanks, Cheng From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Friday, April 26, 2019 1:50 AM To: Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Hi Frank, We had a zoom call with Al Bailey to troubleshoot the issues that we are observing. The bug where a single CPU was taking all of the workload is resolved. What we observed seems to be an issue with memory exhaustion; additional information was gathered and added to this bug for further troubleshooting: https://bugs.launchpad.net/starlingx/+bug/1826308 If additional information is required, please just let us know. Thanks & Regards, Cristopher Lemus From: "Miller, Frank" Date: Thursday, April 25, 2019 at 8:24 AM To: "Perez Ibarra, Maria G" , "mailto:starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Maria: It looks like the commit referenced yesterday [1] is not addressing the issue in your BM labs. Can you set up a live debug session so that some container SMEs can investigate?
Frank [1] https://review.opendev.org/#/c/655240/   From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, April 25, 2019 12:12 AM To: mailto:starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424   Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/)   Status: RED   ===========================================   Sanity Test is executed in a Containers – Bare Metal Environment   AIO - Simplex   Setup             Manual [PASS] Provisioning      01 TCs [PASS] Sanity OpenStack  49 TCs [FAIL]| 40 TCs FAIL Sanity Platform   07 TCs [FAIL]| 07 TCs FAIL   TOTAL: 57 TCS [Fail : 47]   AIO – Duplex   Setup             Manual [PASS] Provisioning      01 TCs [PASS] Sanity OpenStack  52 TCs [FAIL] | 42 TCs FAIL Sanity Platform   05 TCs [FAIL] | 05 TCs FAIL   TOTAL: 57 TCS [Fail : 47 TCs]   Standard - Local Storage (2+2)   Setup             Manual [PASS] Provisioning      01 TCs [PASS] Sanity OpenStack  49 TCs [PASS] Sanity Platform   07 TCs [PASS]   TOTAL: 57 TCS PASS   Standard - Dedicated Storage (2+2+2)   Setup             Manual [PASS] Provisioning      01 TCs [PASS] Sanity OpenStack  52 TCs [PASS] Sanity Platform   05 TCs [PASS]   TOTAL: 57 TCS PASS       Sanity Test is executed in a Containers - Virtual Environment   AIO - Simplex   Setup             04 TCs [PASS]  Provisioning      01 TCs [FAIL] Sanity OpenStack  49 TCs [FAIL] Sanity Platform   07 TCs [FAIL]   TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs]     AIO - Duplex   Setup             04 TCs [PASS]  Provisioning      01 TCs [FAIL] Sanity OpenStack  49 TCs [FAIL] Sanity Platform   07 TCs [FAIL]   TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs]     Standard – Local Storage   Setup             04 TCs [PASS]  Provisioning      01 TCs [FAIL] Sanity OpenStack  49 TCs [FAIL] Sanity Platform   07 TCs [FAIL]   TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs]     Standard – Dedicated Storage   Setup             04 TCs [PASS]  Provisioning      01 TCs [FAIL] Sanity OpenStack  49 TCs [FAIL] Sanity Platform   07 TCs [FAIL]   TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs]   - some pods are failing during BM sanity execution. https://bugs.launchpad.net/starlingx/+bug/1826308 - Sanity Bare metal was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ - Sanity Virtual was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ - Tomorrow in sanity virtual we will perform a double check with the latest ISO that includes the fixes.   For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack     Regards Maria G.       From vm.rod25 at gmail.com Fri Apr 26 03:33:39 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Thu, 25 Apr 2019 22:33:39 -0500 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 In-Reply-To: <92F9583D-D670-442D-A437-E72761E815DB@intel.com> References: <63839F7A-75A9-4266-A2F5-D373391CBAD4@intel.com> <92F9583D-D670-442D-A437-E72761E815DB@intel.com> Message-ID: Can we consider the track of vm used by the running proces from /proc? we can work on a script using psstop(0) or other similar tool,what do you think. 
This might help us to find the process is consuming the memory over the time I also see the same problem of consuming almost 90% of the memory not only in all in one systems but also in duplex (0) https://github.com/clearlinux/psstop Regards Victor Rodriguez On Thu, Apr 25, 2019, 21:59 Cordoba Malibran, Erich < erich.cordoba.malibran at intel.com> wrote: > Hi, > > In this case we have: > > HugePages_Total: 34104 > HugePages_Free: 34104 > HugePages_Rsvd: 0 > HugePages_Surp: 0 > > So, I'm not sure if it can be related with 1825814. > > Also, for people not seeing this issue, how much memory do you have in > your baremetal systems? What's the minimum required memory for running an > AIO system. Our failing system have 97 GB and free -h shows. > > total used free shared buff/cache > available > Mem: 93G 84G 3.2G 66M 5.6G > 4.8G > Swap: 0B 0B 0B > > > A couple months ago I reported a similar issue[0], in that case after > three days in stand-by the system started to throw Out of Memory errors. > Does anyone has performed a longevity test for some days? Maybe the working > systems might fail after a while if the memory usage keeps increasing over > time. > > -Erich > > [0] > http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/002923.html > > > > From: "Li, Cheng1" > Date: Thursday, April 25, 2019 at 8:29 PM > To: "Lemus Contreras, Cristopher J" < > cristopher.j.lemus.contreras at intel.com>, "Miller, Frank" < > Frank.Miller at windriver.com>, "Perez Ibarra, Maria G" < > maria.g.perez.ibarra at intel.com>, "starlingx-discuss at lists.starlingx.io" < > starlingx-discuss at lists.starlingx.io> > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 > > Actually, I had also reported the memory issue[1] days ago. > Memory exhaust happens because so little 4K memory is allocated for > system/software load. > > [1] https://bugs.launchpad.net/starlingx/+bug/1825814 > > Thanks, > Cheng > > From: Lemus Contreras, Cristopher J [mailto: > cristopher.j.lemus.contreras at intel.com] > Sent: Friday, April 26, 2019 1:50 AM > To: Miller, Frank ; Perez Ibarra, Maria G < > maria.g.perez.ibarra at intel.com>; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 > > Hi Frank, > > We had a zoom call with Al Bailey to troubleshoot the issues that we are > observing. The bug where a single CPU was taking all of the workload is > resolved. > > What we observed seems to be an issue with memory exhaust, additional > information was gathered an added to this bug for further troubleshooting: > https://bugs.launchpad.net/starlingx/+bug/1826308 > > If additional information is required, please, just let us know. > > Thanks & Regards, > > Cristopher Lemus > > From: "Miller, Frank" > Date: Thursday, April 25, 2019 at 8:24 AM > To: "Perez Ibarra, Maria G" , > "mailto:starlingx-discuss at lists.starlingx.io" starlingx-discuss at lists.starlingx.io> > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 > > Maria: > > It looks like the commit referenced yesterday [1] is not addressing the > issue in your BM labs. Can you set up a live debug session so that some > container SMEs can investigate? 
> > Frank > [1] https://review.opendev.org/#/c/655240/ > > From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] > Sent: Thursday, April 25, 2019 12:12 AM > To: mailto:starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 > > Status of the Sanity Test for last CENGN ISO: bootimage.iso from > 2019-APRIL-24 ( > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ > ) > > Status: RED > > =========================================== > > Sanity Test is executed in a Containers – Bare Metal Environment > > AIO - Simplex > > Setup Manual [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 49 TCs [FAIL]| 40 TCs FAIL > Sanity Platform 07 TCs [FAIL]| 07 TCs FAIL > > TOTAL: 57 TCS [Fail : 47] > > AIO – Duplex > > Setup Manual [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 52 TCs [FAIL] | 42 TCs FAIL > Sanity Platform 05 TCs [FAIL] | 05 TCs FAIL > > TOTAL: 57 TCS [Fail : 47 TCs] > > Standard - Local Storage (2+2) > > Setup Manual [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 49 TCs [PASS] > Sanity Platform 07 TCs [PASS] > > TOTAL: 57 TCS PASS > > Standard - Dedicated Storage (2+2+2) > > Setup Manual [PASS] > Provisioning 01 TCs [PASS] > Sanity OpenStack 52 TCs [PASS] > Sanity Platform 05 TCs [PASS] > > TOTAL: 57 TCS PASS > > > > Sanity Test is executed in a Containers - Virtual Environment > > AIO - Simplex > > Setup 04 TCs [PASS] > Provisioning 01 TCs [FAIL] > Sanity OpenStack 49 TCs [FAIL] > Sanity Platform 07 TCs [FAIL] > > TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] > > > AIO - Duplex > > Setup 04 TCs [PASS] > Provisioning 01 TCs [FAIL] > Sanity OpenStack 49 TCs [FAIL] > Sanity Platform 07 TCs [FAIL] > > TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] > > > Standard – Local Storage > > Setup 04 TCs [PASS] > Provisioning 01 TCs [FAIL] > Sanity OpenStack 49 TCs [FAIL] > Sanity Platform 07 TCs [FAIL] > > TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] > > > Standard – Dedicated Storage > > Setup 04 TCs [PASS] > Provisioning 01 TCs [FAIL] > Sanity OpenStack 49 TCs [FAIL] > Sanity Platform 07 TCs [FAIL] > > TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] > > - some pods are failing during BM sanity execution. > https://bugs.launchpad.net/starlingx/+bug/1826308 > - Sanity Bare metal was tested with : > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ > - Sanity Virtual was tested with : > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ > - Tomorrow in sanity virtual we will perform a double check with the > latest ISO that includes the fixes. > > For more detail of the tests: > https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack > > > Regards > Maria G. > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel.thebeau at windriver.com Fri Apr 26 15:59:50 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Fri, 26 Apr 2019 11:59:50 -0400 Subject: [Starlingx-discuss] Duplex Config: Cannot unlock host controller-0 without configuring a cluster-host interface. 
In-Reply-To: <74456724.918454.1556201048095@communicator.strato.com> References: <74456724.918454.1556201048095@communicator.strato.com> Message-ID: <1556294390.3606.146.camel@windriver.com> Hi Marcel, Please confirm which guide you are referring to for your configuration. For example: https://docs.starlingx.io/installation_guide/latest/index.html And Duplex: https://docs.starlingx.io/deployment_guides/latest/aio_duplex/index.html And perhaps: https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnAIODX Please quote the cluster host listed in the config_controller output, for example: Kubernetes Cluster Network Configuration ---------------------------------------- Cluster pod network subnet: 172.16.0.0/16 Cluster service network subnet: 10.96.0.0/12 Cluster host interface name: enp0s8 Cluster host interface: enp0s8 Cluster host interface MTU: 1500 Cluster host subnet: 192.168.206.0/24 And, give the output of 'system host-if-show controller-0 ' for the management interface. Per the document "https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnAIODX", the mgmt interface is the "cluster host" interface, which is supposed to be configured for controller-0 by the config_controller script. Based on your output we may find the answer, or I could help find someone who can dig into it. === With respect to finding a build, the approach I use is to review the "Sanity Test" emails sent to this list and find one that is green for the configuration I want to test. Unfortunately it looks like the reports are persistently RED, especially for Duplex. I recommend asking the question in a separate email with that specific subject in the subject line, and referencing the persistent RED sanity results, so that if there is any insight to offer the right eyes will see the question. M On Thu, 2019-04-25 at 16:04 +0200, Marcel Schaible wrote: > Hi, > > after installing our new 10GB and hopefully compatible network > interface Cards I am getting the following error message: > > [wrsroot at localhost ~(keystone_admin)]$ system host-unlock controller- > 0 > Cannot unlock host controller-0 without configuring a cluster-host > interface. > > Duplex config, iso from 20190325. > > Any idea where to look? > > BTW: Can you recommend a newer iso?
> > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From vm.rod25 at gmail.com Fri Apr 26 16:30:09 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Fri, 26 Apr 2019 11:30:09 -0500 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 In-Reply-To: <92F9583D-D670-442D-A437-E72761E815DB@intel.com> References: <63839F7A-75A9-4266-A2F5-D373391CBAD4@intel.com> <92F9583D-D670-442D-A437-E72761E815DB@intel.com> Message-ID: Hi team My findings so far this morning: In order to know how much memory a container is really consuming, I tested two tools: docker stats and reading /proc/<pid>/smaps directly. I wrote a simple C program that allocates X KB of memory with malloc and then frees it: https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/memory.c Reserving 5000 Kb of memory Value of String = simple_test Address = 2895619200 Waiting for 30 seconds I compiled it and copied it into my docker image: https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/Dockerfile When I run the container and monitor the memory with docker stats, it shows only 2.5 KB of memory, while from the /proc kernel info I get: vmrod at vmrod-ubuntu-devel:/tmp$ ./usr/bin/psstop | grep docker docker-containe 1857 : 0 Kb dockerd 2758 : 0 Kb docker-containe 3368 : 0 Kb docker-containe 5438 : 0 Kb docker-containe 25159 : 0 Kb docker 25105 : 48378 Kb ( first column is PID, second one is memory consumed ); in this case it shows 48378 KB vs the 5000 KB of memory that I know I requested
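For reference, a minimal sketch of the comparison (the image/container name memtest is just a placeholder; it assumes the image was built from the Dockerfile above and is checked while the test program is still sleeping):

    # build and start the test image with an explicit memory limit
    docker build -t memtest .
    docker run -d --name memtest -m 2g memtest
    # cgroup view of the container memory
    docker stats --no-stream memtest
    # kernel view of the same process
    pid=$(docker inspect -f '{{.State.Pid}}' memtest)
    grep -E 'VmRSS|VmSize' /proc/$pid/status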
>> >> -Erich >> >> [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/002923.html >> >> >> >> From: "Li, Cheng1" >> Date: Thursday, April 25, 2019 at 8:29 PM >> To: "Lemus Contreras, Cristopher J" , "Miller, Frank" , "Perez Ibarra, Maria G" , "starlingx-discuss at lists.starlingx.io" >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Actually, I had also reported the memory issue[1] days ago. >> Memory exhaust happens because so little 4K memory is allocated for system/software load. >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1825814 >> >> Thanks, >> Cheng >> >> From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] >> Sent: Friday, April 26, 2019 1:50 AM >> To: Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Hi Frank, >> >> We had a zoom call with Al Bailey to troubleshoot the issues that we are observing. The bug where a single CPU was taking all of the workload is resolved. >> >> What we observed seems to be an issue with memory exhaust, additional information was gathered an added to this bug for further troubleshooting: https://bugs.launchpad.net/starlingx/+bug/1826308 >> >> If additional information is required, please, just let us know. >> >> Thanks & Regards, >> >> Cristopher Lemus >> >> From: "Miller, Frank" >> Date: Thursday, April 25, 2019 at 8:24 AM >> To: "Perez Ibarra, Maria G" , "mailto:starlingx-discuss at lists.starlingx.io" >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Maria: >> >> It looks like the commit referenced yesterday [1] is not addressing the issue in your BM labs. Can you set up a live debug session so that some container SMEs can investigate? 
>> >> Frank >> [1] https://review.opendev.org/#/c/655240/ >> >> From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] >> Sent: Thursday, April 25, 2019 12:12 AM >> To: mailto:starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/) >> >> Status: RED >> >> =========================================== >> >> Sanity Test is executed in a Containers – Bare Metal Environment >> >> AIO - Simplex >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 49 TCs [FAIL]| 40 TCs FAIL >> Sanity Platform 07 TCs [FAIL]| 07 TCs FAIL >> >> TOTAL: 57 TCS [Fail : 47] >> >> AIO – Duplex >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 52 TCs [FAIL] | 42 TCs FAIL >> Sanity Platform 05 TCs [FAIL] | 05 TCs FAIL >> >> TOTAL: 57 TCS [Fail : 47 TCs] >> >> Standard - Local Storage (2+2) >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 49 TCs [PASS] >> Sanity Platform 07 TCs [PASS] >> >> TOTAL: 57 TCS PASS >> >> Standard - Dedicated Storage (2+2+2) >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 52 TCs [PASS] >> Sanity Platform 05 TCs [PASS] >> >> TOTAL: 57 TCS PASS >> >> >> >> Sanity Test is executed in a Containers - Virtual Environment >> >> AIO - Simplex >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> AIO - Duplex >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> Standard – Local Storage >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> Standard – Dedicated Storage >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> - some pods are failing during BM sanity execution. https://bugs.launchpad.net/starlingx/+bug/1826308 >> - Sanity Bare metal was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ >> - Sanity Virtual was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ >> - Tomorrow in sanity virtual we will perform a double check with the latest ISO that includes the fixes. >> >> For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack >> >> >> Regards >> Maria G. >> >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From mario.alfredo.c.arevalo at intel.com Fri Apr 26 17:04:26 2019 From: mario.alfredo.c.arevalo at intel.com (Arevalo, Mario Alfredo C) Date: Fri, 26 Apr 2019 17:04:26 +0000 Subject: [Starlingx-discuss] helm tool kit && DB/keystone official documentation Message-ID: <6594B51DBE477C48AAE23675314E6C46645BCCA7@fmsmsx107.amr.corp.intel.com> Does anyone know where can I find the official documentation to configure keystone and DB through Helm toolkit? 
I have found some pages which have helped like this: https://docs.openstack.org/openstack-helm/latest/devref/oslo-config.html to create the config files to oslo. And some information about the conventions behind the docker images involved in OpenStack: https://docs.openstack.org/openstack-helm/latest/devref/images.html Best regards. Mario. From jim.somerville at windriver.com Fri Apr 26 19:52:51 2019 From: jim.somerville at windriver.com (Jim Somerville) Date: Fri, 26 Apr 2019 15:52:51 -0400 Subject: [Starlingx-discuss] V1 Review Request: Story 29990: libvirt and qemu patch reduction In-Reply-To: References: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> Message-ID: <95d4d335-04f8-5990-77d0-d90428ae5a1a@windriver.com> On 2019-04-18 4:04 p.m., Dean Troyer wrote: > On Thu, Apr 18, 2019 at 10:22 AM Jim Somerville > wrote: >> I've finished reducing the patches on libvirt and qemu. I was able to >> get rid of virtually all of the RHEL patches, replacing them with just a >> minor "support for running on CentOS" patch or two. This will make our >> lives a lot easier moving to newer versions. qemu went from 97 patches >> down to 14, and libvirt from 23 to 13. The STX patches themselves >> required very little rework, this was mostly a testing exercise in the >> container realm with things changing frequently, making it quite >> challenging. > > Awesome! > >> Once you're satisfied with the review, I'll issue pull requests. Once >> you've pulled and created new branches, I'll follow up with the two >> commits, one referring to the new branches in the manifest, and the >> other with minor changes to the qemu spec file in the stx-integ repo. >> Linked so they both go in together. > > It looks like these are on the same upstream base version, correct? > We'll have to add a suffix but that isn't a problem. I'll use '-N' > for that so it doesn't look like part of the upstream version (we used > '.N' for the Nova stable branch in stx-nova, /me kicks self). I have > created stx-qemu/stx/v3.0.0-1 and stx-libvirt/stx/v4.7.0-1. Fire away > with the PRs. Hi Dean, Saul finished approving the new branch contents, so they're ready to merge into your newly created -1 branch versions, assuming you're good with them as well. -Jim > >> One issue concerns me a bit, and that is the tis patch number. It >> starts counting from the last upstream commit, and with me removing >> patches, it is now lower than it used to be. If this is a real concern >> I could just add a fixed 100 to the gitrevcount in both qemu and libvirt >> build_data files, guaranteeing package versions will not collide with >> ones in the past. Your thoughts? > > Is this that number that is supposed to be based on the patch count? > I think we should get rid of that idea and just increment it every > time it need to be incremented. Overloading things like that just > makes everything more brittle. > > Also... > > I still want to encourage folks to do dev work in the primary places > (Gerrit and starlngx-staging on GitHub), this is a very important part > of The Four Opens[0] that is fundamental to being part of the > OpenStack Foundation. In this case it isn't so much development as > cleanup but it still counts as working in the open. Updating a WIP PR > is just as doable as a WIP Gerrit review as things progress. And that > lets people find the work without having to know beforehand where it > is, even as in this case it was on GitHub anyway. 
> > [I am trying to not pick on Jim specifically here but I did recently > say something in a meeting about this particular work and I thought > this was a good place to expand on why I feel so strongly on this > topic. These principles are fundamental to StarlingX being accepted > as an OpenStack Foundation project and we _will_ be judged on things > like this. We already are (informally) in fact...] > > dt > > [0] The Four Opens: https://governance.openstack.org/tc/reference/opens.html > From dtroyer at gmail.com Fri Apr 26 20:42:02 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 26 Apr 2019 15:42:02 -0500 Subject: [Starlingx-discuss] V1 Review Request: Story 29990: libvirt and qemu patch reduction In-Reply-To: <95d4d335-04f8-5990-77d0-d90428ae5a1a@windriver.com> References: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> <95d4d335-04f8-5990-77d0-d90428ae5a1a@windriver.com> Message-ID: On Fri, Apr 26, 2019 at 2:52 PM Jim Somerville wrote: > Saul finished approving the new branch contents, so they're ready to > merge into your newly created -1 branch versions, assuming you're good > with them as well. Done. Thanks Jim dt -- Dean Troyer dtroyer at gmail.com From cristopher.j.lemus.contreras at intel.com Fri Apr 26 22:06:06 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Fri, 26 Apr 2019 22:06:06 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 In-Reply-To: References: <63839F7A-75A9-4266-A2F5-D373391CBAD4@intel.com> <92F9583D-D670-442D-A437-E72761E815DB@intel.com> Message-ID: <1F0FCBCF-A1A5-4E8D-9671-BCD5EFA15BCC@intel.com> Hi All, Some test were made to find the point where the memory is allocated: Just after `config_controller` it's using just a handful of GBs: controller-0:~$ free -h total used free shared buff/cache available Mem: 93G 3.2G 84G 47M 5.5G 88G Swap: 0B 0B 0B controller-0:~$ Right after the unlock, when the system pass from "offline" status to "intest" it jumps from using 5.1GB to 71GB, this is just with kube-system pods: total used free shared buff/cache available Mem: 93G 71G 19G 45M 1.9G 20G Swap: 0B 0B 0B NAME READY STATUS RESTARTS AGE calico-kube-controllers-84cdb6bd7c-w75rk 1/1 Running 1 36m calico-node-zp8xv 1/1 Running 1 36m coredns-84bb87857f-lp8sl 1/1 Running 1 36m coredns-84bb87857f-r6mdf 0/1 Pending 0 36m kube-apiserver-controller-0 1/1 Running 1 35m kube-controller-manager-controller-0 1/1 Running 2 35m kube-proxy-w7sfq 1/1 Running 1 36m kube-scheduler-controller-0 1/1 Running 2 35m tiller-deploy-d87d7bd75-hjb7w 1/1 Running 1 36m Bug updated with this info. 
Regards, Cristopher Lemus On 4/26/19, 11:30 AM, "Victor Rodriguez" wrote: Hi team My findings so far this morning: In order to know how much memory ( really ) a docker is consuming i tested 2 tools ( docker stat and reading from the /proc/pid/mmpas ) I create a simple C code that consumes X KB of memory by malloc and then free it: https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/memory.c Reserving 5000 Kb of memory Value of String = simple_test Address = 2895619200 Waiting for 30 seconds I compile it and cp into my docker image: https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/Dockerfile When I run the docker and monitor the memory with docker stats : It shows only 2.5 Kb of memory when from /proc kernel ifo i get : vmrod at vmrod-ubuntu-devel:/tmp$ ./usr/bin/psstop | grep docker docker-containe 1857 : 0 Kb dockerd 2758 : 0 Kb docker-containe 3368 : 0 Kb docker-containe 5438 : 0 Kb docker-containe 25159 : 0 Kb docker 25105 : 48378 Kb ( first column is PID second one is memory consumed ) , in this case, it shows 48378 kb vs 5000 kb of memory that i know that i requested In order to find the memory leak, we must rely on the tools we use to measure it, Cristopher can you help me to repeat the same experiment to know if you see the same behavior ? If so we can start to put -m on each docker image to limit the memory size ( 2GB should be enough right ? ) WIP regards On Thu, Apr 25, 2019 at 10:33 PM Victor Rodriguez wrote: > > Can we consider the track of vm used by the running proces from /proc? we can work on a script using psstop(0) or other similar tool,what do you think. This might help us to find the process is consuming the memory over the time > > I also see the same problem of consuming almost 90% of the memory not only in all in one systems but also in duplex > > (0) https://github.com/clearlinux/psstop > > Regards > Victor Rodriguez > > On Thu, Apr 25, 2019, 21:59 Cordoba Malibran, Erich wrote: >> >> Hi, >> >> In this case we have: >> >> HugePages_Total: 34104 >> HugePages_Free: 34104 >> HugePages_Rsvd: 0 >> HugePages_Surp: 0 >> >> So, I'm not sure if it can be related with 1825814. >> >> Also, for people not seeing this issue, how much memory do you have in your baremetal systems? What's the minimum required memory for running an AIO system. Our failing system have 97 GB and free -h shows. >> >> total used free shared buff/cache available >> Mem: 93G 84G 3.2G 66M 5.6G 4.8G >> Swap: 0B 0B 0B >> >> >> A couple months ago I reported a similar issue[0], in that case after three days in stand-by the system started to throw Out of Memory errors. Does anyone has performed a longevity test for some days? Maybe the working systems might fail after a while if the memory usage keeps increasing over time. >> >> -Erich >> >> [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/002923.html >> >> >> >> From: "Li, Cheng1" >> Date: Thursday, April 25, 2019 at 8:29 PM >> To: "Lemus Contreras, Cristopher J" , "Miller, Frank" , "Perez Ibarra, Maria G" , "starlingx-discuss at lists.starlingx.io" >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Actually, I had also reported the memory issue[1] days ago. >> Memory exhaust happens because so little 4K memory is allocated for system/software load. 
>> >> [1] https://bugs.launchpad.net/starlingx/+bug/1825814 >> >> Thanks, >> Cheng >> >> From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] >> Sent: Friday, April 26, 2019 1:50 AM >> To: Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Hi Frank, >> >> We had a zoom call with Al Bailey to troubleshoot the issues that we are observing. The bug where a single CPU was taking all of the workload is resolved. >> >> What we observed seems to be an issue with memory exhaust, additional information was gathered an added to this bug for further troubleshooting: https://bugs.launchpad.net/starlingx/+bug/1826308 >> >> If additional information is required, please, just let us know. >> >> Thanks & Regards, >> >> Cristopher Lemus >> >> From: "Miller, Frank" >> Date: Thursday, April 25, 2019 at 8:24 AM >> To: "Perez Ibarra, Maria G" , "mailto:starlingx-discuss at lists.starlingx.io" >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Maria: >> >> It looks like the commit referenced yesterday [1] is not addressing the issue in your BM labs. Can you set up a live debug session so that some container SMEs can investigate? >> >> Frank >> [1] https://review.opendev.org/#/c/655240/ >> >> From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] >> Sent: Thursday, April 25, 2019 12:12 AM >> To: mailto:starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/) >> >> Status: RED >> >> =========================================== >> >> Sanity Test is executed in a Containers – Bare Metal Environment >> >> AIO - Simplex >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 49 TCs [FAIL]| 40 TCs FAIL >> Sanity Platform 07 TCs [FAIL]| 07 TCs FAIL >> >> TOTAL: 57 TCS [Fail : 47] >> >> AIO – Duplex >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 52 TCs [FAIL] | 42 TCs FAIL >> Sanity Platform 05 TCs [FAIL] | 05 TCs FAIL >> >> TOTAL: 57 TCS [Fail : 47 TCs] >> >> Standard - Local Storage (2+2) >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 49 TCs [PASS] >> Sanity Platform 07 TCs [PASS] >> >> TOTAL: 57 TCS PASS >> >> Standard - Dedicated Storage (2+2+2) >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 52 TCs [PASS] >> Sanity Platform 05 TCs [PASS] >> >> TOTAL: 57 TCS PASS >> >> >> >> Sanity Test is executed in a Containers - Virtual Environment >> >> AIO - Simplex >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> AIO - Duplex >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> Standard – Local Storage >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> Standard – Dedicated Storage >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 
TCs] >> >> - some pods are failing during BM sanity execution. https://bugs.launchpad.net/starlingx/+bug/1826308 >> - Sanity Bare metal was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ >> - Sanity Virtual was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ >> - Tomorrow in sanity virtual we will perform a double check with the latest ISO that includes the fixes. >> >> For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack >> >> >> Regards >> Maria G. >> >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From austin.sun at intel.com Sat Apr 27 01:31:56 2019 From: austin.sun at intel.com (Sun, Austin) Date: Sat, 27 Apr 2019 01:31:56 +0000 Subject: [Starlingx-discuss] questions about '[Feature] Huge page management' https://storyboard.openstack.org/#!/story/2004763 In-Reply-To: <74D9C1EDDC44EF468303629CF9A2832C9CE486E5@ALA-MBD.corp.ad.wrs.com> References: <74D9C1EDDC44EF468303629CF9A2832C9CE484C6@ALA-MBD.corp.ad.wrs.com> <74D9C1EDDC44EF468303629CF9A2832C9CE486E5@ALA-MBD.corp.ad.wrs.com> Message-ID: Sorry for late, please see online. From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Friday, April 26, 2019 12:17 AM To: Sun, Austin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: questions about '[Feature] Huge page management' https://storyboard.openstack.org/#!/story/2004763 Hi Austin, Thanks for your information! So given 'This feature is enable K8S feature-gate as describe in [1]': 1 How to tell if this 'hugepage feature' is (currently) enabled or disabled? Any user visible signs in Horizon, CLIs outputs? Or it's transparent to users? Or: a. Is it only reflected in arguments to 'kubeadm init --feature-gates='...,hugepage=enable/disable', which is called to provision a node/master? [Austin] the default hugepage is false, and can be enabled by /etc/sysconfig/kubelet for node. b. And/or (also) is shown in /var/lib/kubelet/config.yaml, e.g.: controller-0:~$ grep 'featureGate' /var/lib/kubelet/config.yaml -A2 featureGates: HugePages: false [Austin] this is default setting . c. Any OS level options changes, like options inside /etc/default/grub? [Austin] As Story describe , disable 1G hugepage grub if vswitch_type is none . 2 Can changing label enable or/and disable the 'hugepage feature'? For example, now assuming worker compute-0 has label 'openstack-compute-node', hence hugepage is disabled: a. We can remove the label 'openstack-compute-node ' using CLI system host-label-remove compute-0 openstack-compute-node b. What the expected the system behavior after the label is removed? The 'hugepage feature' will be enabled after been unlocked, which can be verified using methods [Austin] If label is removed, and unlock, the hugepage should be disabled 3 Any user aware difference between the features enabled/disabled? [Austin] end user may not aware the difference. This will be used for some k8s application. Thanks! 
Marvin From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Wednesday, April 24, 2019 9:55 PM To: Huang, Marvin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: questions about '[Feature] Huge page management' https://storyboard.openstack.org/#!/story/2004763 Hi, Marvin: This feature is not directly related with "system host-memory-modify" function, user still should be able to modify mem config as before. This feature is enable K8S feature-gate as describe in [1]. And K8S enabling hugepage feature is opposite to compute label ( means compute label is tagged, then k8s hugepage feature is disabled (false), if compute label is not tagged , then k8s hugepage is enabled ) About your question about VM hugepage decrease, I did not dig into VM mem , so I cannot give more comments. [1] https://kubernetes.io/docs/reference/command-line-tools-reference/feature-gates/ Thanks. BR Austin Sun. From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Thursday, April 25, 2019 4:33 AM To: Sun, Austin > Cc: starlingx-discuss at lists.starlingx.io Subject: questions about '[Feature] Huge page management' https://storyboard.openstack.org/#!/story/2004763 Hi Austin, According to (Storyboard) 2004763, I know that you're working on the feature "Huge page management". I've some questions about the feature. Though one of its 2 tasks is still in Review (the other is shown merged), you might now have the answers already. In the description of the Story, there are requirement: "- Enable k8s huge page feature for worker nodes that do not have the openstack compute label. It should be disabled otherwise." Questions: what is this meaning to users? By 'Enable', is it meaning users can modify memory allocation on the node? (via the following): system host-modify [-2M <2M hugepages number>] [-1G <1G hugepages number>] [-f ] ... or Horizon: Admin -> Platform -> Host Inventory ... Otherwise ('disabled'), the CLIs (system host-memory-xxx) will reject any requests? Or the corresponding Horizon pages do not have any items to update the memory application? Or those were disabled? "- Automatically defaults for worker nodes with openstack compute label. Changes will be applied on the unlock. - Current 2M huge page default settings - 1-1G huge page per numa node for vswitch " Questions: in this situation, is the k8s huge page feature disabled (according to the above requirement)? And the (host-memory) CLIs will reject any requests? And a question related with VMs: If a VM using huge page (with flavor having 'hw:mem_page_size=large' or 'hw:mem_page_size=1048576') is launched, will the free memory pages decreased accordingly on the worker it's running on? That is, if the VM is consuming 1G huge-page, the number of free page of 1G size on the hosting worker should be reduced by 1. Is this still the expected behavior? This is the assumption in https://bugs.launchpad.net/starlingx/+bug/1813325. Regards, Marvin -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Sat Apr 27 01:35:24 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Sat, 27 Apr 2019 01:35:24 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190426 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-25 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL]| 37 TCs FAIL Sanity Platform 07 TCs [FAIL]| 05 TCs FAIL TOTAL: 57 TCS [Fail : 42] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [FAIL] | 52 TCs FAIL Sanity Platform 05 TCs [FAIL] | 03 TCs FAIL TOTAL: 57 TCS [Fail : 55 TCs] Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs ] [Fail : 57 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs ] [Fail : 57 TCs] - BM environment : some pods are failing during sanity execution https://bugs.launchpad.net/starlingx/+bug/1826308 - Virtual environment : failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Sat Apr 27 19:46:19 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Sat, 27 Apr 2019 19:46:19 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 In-Reply-To: <1F0FCBCF-A1A5-4E8D-9671-BCD5EFA15BCC@intel.com> References: <63839F7A-75A9-4266-A2F5-D373391CBAD4@intel.com> <92F9583D-D670-442D-A437-E72761E815DB@intel.com> <1F0FCBCF-A1A5-4E8D-9671-BCD5EFA15BCC@intel.com> Message-ID: Hi All: After a prolonged debug session on Friday by various developers, it looks like the memory issue seen in the Intel labs is due to the excessive number of nova pods being launched which is directly related to the number of cores used on the BM servers. The Intel lab servers have many more cores than most of the labs used in WindRiver labs and explains why the memory issue is much rarer in some labs. Al Bailey and Gerry Kopec worked on a solution [1] which should be available in today's builds. In addition while debugging the application-apply issues on AIO labs, in some cases timeouts were being seen either during download or applying of the stx-application. This is believed to be a result of a StoryBoard that merged two weeks ago to affine platform processes and pods to platform cores leaving the other cores available for application pods. This reduces the core processing available during application-apply. 
To alleviate this issue, two additional commits [2,3] were proposed and merged. Let's review the updated sanity results on Monday and determine if any further actions are required. Frank [1] https://review.opendev.org/#/c/656037/ [2] https://review.opendev.org/#/c/656009/ [3] https://review.opendev.org/#/c/656025/ -----Original Message----- From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Friday, April 26, 2019 6:06 PM To: Victor Rodriguez ; Cordoba Malibran, Erich Cc: Li, Cheng1 ; Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Hi All, Some test were made to find the point where the memory is allocated: Just after `config_controller` it's using just a handful of GBs: controller-0:~$ free -h total used free shared buff/cache available Mem: 93G 3.2G 84G 47M 5.5G 88G Swap: 0B 0B 0B controller-0:~$ Right after the unlock, when the system pass from "offline" status to "intest" it jumps from using 5.1GB to 71GB, this is just with kube-system pods: total used free shared buff/cache available Mem: 93G 71G 19G 45M 1.9G 20G Swap: 0B 0B 0B NAME READY STATUS RESTARTS AGE calico-kube-controllers-84cdb6bd7c-w75rk 1/1 Running 1 36m calico-node-zp8xv 1/1 Running 1 36m coredns-84bb87857f-lp8sl 1/1 Running 1 36m coredns-84bb87857f-r6mdf 0/1 Pending 0 36m kube-apiserver-controller-0 1/1 Running 1 35m kube-controller-manager-controller-0 1/1 Running 2 35m kube-proxy-w7sfq 1/1 Running 1 36m kube-scheduler-controller-0 1/1 Running 2 35m tiller-deploy-d87d7bd75-hjb7w 1/1 Running 1 36m Bug updated with this info. Regards, Cristopher Lemus On 4/26/19, 11:30 AM, "Victor Rodriguez" wrote: Hi team My findings so far this morning: In order to know how much memory ( really ) a docker is consuming i tested 2 tools ( docker stat and reading from the /proc/pid/mmpas ) I create a simple C code that consumes X KB of memory by malloc and then free it: https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/memory.c Reserving 5000 Kb of memory Value of String = simple_test Address = 2895619200 Waiting for 30 seconds I compile it and cp into my docker image: https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/Dockerfile When I run the docker and monitor the memory with docker stats : It shows only 2.5 Kb of memory when from /proc kernel ifo i get : vmrod at vmrod-ubuntu-devel:/tmp$ ./usr/bin/psstop | grep docker docker-containe 1857 : 0 Kb dockerd 2758 : 0 Kb docker-containe 3368 : 0 Kb docker-containe 5438 : 0 Kb docker-containe 25159 : 0 Kb docker 25105 : 48378 Kb ( first column is PID second one is memory consumed ) , in this case, it shows 48378 kb vs 5000 kb of memory that i know that i requested In order to find the memory leak, we must rely on the tools we use to measure it, Cristopher can you help me to repeat the same experiment to know if you see the same behavior ? If so we can start to put -m on each docker image to limit the memory size ( 2GB should be enough right ? ) WIP regards On Thu, Apr 25, 2019 at 10:33 PM Victor Rodriguez wrote: > > Can we consider the track of vm used by the running proces from /proc? we can work on a script using psstop(0) or other similar tool,what do you think. 
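
A first cut of such a script can stay very small; the sketch below just walks /proc and prints the processes with the largest resident set (VmRSS), so the numbers can be compared against what docker stats reports. It is only an illustration of the idea, not the psstop tool itself, and RSS double-counts shared pages, which is part of the discrepancy being discussed.

#!/usr/bin/env python
# Sketch: list the top memory consumers by VmRSS, psstop-style.
import os

def rss_kb(pid):
    try:
        with open("/proc/%s/status" % pid) as f:
            for line in f:
                if line.startswith("VmRSS:"):
                    return int(line.split()[1])  # value reported in kB
    except (IOError, OSError):
        pass  # process exited or permission denied
    return 0

def comm(pid):
    try:
        with open("/proc/%s/comm" % pid) as f:
            return f.read().strip()
    except (IOError, OSError):
        return "?"

if __name__ == "__main__":
    pids = [p for p in os.listdir("/proc") if p.isdigit()]
    by_rss = sorted(((rss_kb(p), p) for p in pids), reverse=True)
    for kb, pid in by_rss[:20]:
        print("%-20s %7s : %10d kB" % (comm(pid), pid, kb))
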
This might help us to find the process is consuming the memory over the time > > I also see the same problem of consuming almost 90% of the memory not only in all in one systems but also in duplex > > (0) https://github.com/clearlinux/psstop > > Regards > Victor Rodriguez > > On Thu, Apr 25, 2019, 21:59 Cordoba Malibran, Erich wrote: >> >> Hi, >> >> In this case we have: >> >> HugePages_Total: 34104 >> HugePages_Free: 34104 >> HugePages_Rsvd: 0 >> HugePages_Surp: 0 >> >> So, I'm not sure if it can be related with 1825814. >> >> Also, for people not seeing this issue, how much memory do you have in your baremetal systems? What's the minimum required memory for running an AIO system. Our failing system have 97 GB and free -h shows. >> >> total used free shared buff/cache available >> Mem: 93G 84G 3.2G 66M 5.6G 4.8G >> Swap: 0B 0B 0B >> >> >> A couple months ago I reported a similar issue[0], in that case after three days in stand-by the system started to throw Out of Memory errors. Does anyone has performed a longevity test for some days? Maybe the working systems might fail after a while if the memory usage keeps increasing over time. >> >> -Erich >> >> [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/002923.html >> >> >> >> From: "Li, Cheng1" >> Date: Thursday, April 25, 2019 at 8:29 PM >> To: "Lemus Contreras, Cristopher J" , "Miller, Frank" , "Perez Ibarra, Maria G" , "starlingx-discuss at lists.starlingx.io" >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Actually, I had also reported the memory issue[1] days ago. >> Memory exhaust happens because so little 4K memory is allocated for system/software load. >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1825814 >> >> Thanks, >> Cheng >> >> From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] >> Sent: Friday, April 26, 2019 1:50 AM >> To: Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Hi Frank, >> >> We had a zoom call with Al Bailey to troubleshoot the issues that we are observing. The bug where a single CPU was taking all of the workload is resolved. >> >> What we observed seems to be an issue with memory exhaust, additional information was gathered an added to this bug for further troubleshooting: https://bugs.launchpad.net/starlingx/+bug/1826308 >> >> If additional information is required, please, just let us know. >> >> Thanks & Regards, >> >> Cristopher Lemus >> >> From: "Miller, Frank" >> Date: Thursday, April 25, 2019 at 8:24 AM >> To: "Perez Ibarra, Maria G" , "mailto:starlingx-discuss at lists.starlingx.io" >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Maria: >> >> It looks like the commit referenced yesterday [1] is not addressing the issue in your BM labs. Can you set up a live debug session so that some container SMEs can investigate? 
>> >> Frank >> [1] https://review.opendev.org/#/c/655240/ >> >> From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] >> Sent: Thursday, April 25, 2019 12:12 AM >> To: mailto:starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/) >> >> Status: RED >> >> =========================================== >> >> Sanity Test is executed in a Containers – Bare Metal Environment >> >> AIO - Simplex >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 49 TCs [FAIL]| 40 TCs FAIL >> Sanity Platform 07 TCs [FAIL]| 07 TCs FAIL >> >> TOTAL: 57 TCS [Fail : 47] >> >> AIO – Duplex >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 52 TCs [FAIL] | 42 TCs FAIL >> Sanity Platform 05 TCs [FAIL] | 05 TCs FAIL >> >> TOTAL: 57 TCS [Fail : 47 TCs] >> >> Standard - Local Storage (2+2) >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 49 TCs [PASS] >> Sanity Platform 07 TCs [PASS] >> >> TOTAL: 57 TCS PASS >> >> Standard - Dedicated Storage (2+2+2) >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 52 TCs [PASS] >> Sanity Platform 05 TCs [PASS] >> >> TOTAL: 57 TCS PASS >> >> >> >> Sanity Test is executed in a Containers - Virtual Environment >> >> AIO - Simplex >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> AIO - Duplex >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> Standard – Local Storage >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> Standard – Dedicated Storage >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> - some pods are failing during BM sanity execution. https://bugs.launchpad.net/starlingx/+bug/1826308 >> - Sanity Bare metal was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ >> - Sanity Virtual was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ >> - Tomorrow in sanity virtual we will perform a double check with the latest ISO that includes the fixes. >> >> For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack >> >> >> Regards >> Maria G. 
>> >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From gaosong.lc at inspur.com Sun Apr 28 02:11:34 2019 From: gaosong.lc at inspur.com (=?gb2312?B?U29uZyBHYW8gc29uZyAouN/LySk=?=) Date: Sun, 28 Apr 2019 02:11:34 +0000 Subject: [Starlingx-discuss] Python 3 Error in python 2.7 env when building iso Message-ID: Hi Folks: According to the Build Guide for stx.2019.05, there is an error found in the building container, /usr/lib/python2.7/site-packages/mockbuild/package_manager.py Line 162 will interrupt buiding Process By throwing out an FileNotFoundError. While, FileNotFoundError is an Python3 built-in error, not in python2.7. Just reporting to the community. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: smime.p7s Type: application/pkcs7-signature Size: 3603 bytes Desc: not available URL: From zhang.kunpeng at 99cloud.net Mon Apr 29 07:14:07 2019 From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=) Date: Mon, 29 Apr 2019 15:14:07 +0800 Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? In-Reply-To: References: <75EE28BA-C061-48A6-AF48-58DCA482894E@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F35F25393@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35F26839@SHSMSX104.ccr.corp.intel.com> Message-ID: <7ED45F33-FAEA-460B-B089-E524C16F64E4@99cloud.net> Hi Victor, I got one error when I was building python-horizon in ubuntu 16.04 with this command "make package PKG=x.stx-upstream/openstack/python-horizon/ DISTRO=ubuntu”. Below is the last log: 0 packages upgraded, 130 newly installed, 0 to remove and 0 not upgraded. Need to get 15.8 MB/35.5 MB of archives. After unpacking 173 MB will be used. Abort. E: pbuilder-satisfydepends failed. I: Copying back the cached apt archive contents I: unmounting /usr/local/mydebs/ filesystem I: unmounting dev/pts filesystem I: unmounting run/shm filesystem I: unmounting proc filesystem I: cleaning the build env I: removing directory /var/cache/pbuilder/build/15050 and its subdirectories Makefile:5: recipe for target 'all' failed make[1]: *** [all] Error 1 make[1]: Leaving directory '/home/ubuntu/stx-packaging/x.stx-upstream/openstack/python-horizon/ubuntu' Makefile:25: recipe for target 'build_pkg_native' failed make: *** [build_pkg_native] Error 2 Is there a lack of some configurations? Thanks Kunpeng > On Apr 25, 2019, at 00:03, Victor Rodriguez wrote: > > https://wiki.openstack.org/wiki/StarlingX/Installation_Guide_Virtual_Environment/Controller_Storage -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: horizon_build.log Type: application/octet-stream Size: 13704 bytes Desc: not available URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Mon Apr 29 13:33:44 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 29 Apr 2019 13:33:44 +0000 Subject: [Starlingx-discuss] Agenda: Weekly Containerization Meeting Monday April 29 Message-ID: We will be holding a meeting today. 
Planned agenda is below - please add to the etherpad agenda if you have any additional topics to discuss: Etherpad: https://etherpad.openstack.org/p/stx-containerization Agenda: 1. Sanity status/issue discussion: - All in one systems, bare metal - Multi node systems - Virtual systems 2. Test team status & top issues 3. Feature Topics: - Overall status - Technical discussion on any outstanding features 4. Open topics - no meeting Monday May 6th - other items? ------ Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Mon Apr 29 13:42:43 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Mon, 29 Apr 2019 08:42:43 -0500 Subject: [Starlingx-discuss] Agenda: Weekly Containerization Meeting Monday April 29 In-Reply-To: References: Message-ID: Hi Frank I am in charge of the track of performance and footprint of the STX systems Due to the fact that this bug was a footprint issue I was wondering if should discuss on the meeting on what stages and part of the STX do you want me to track the memory footprint Also, I have a question regarding the chance to set up -m to each docker image so we can limit the amount of memory of each one. One of the experiments we did las Friday debugging the issue was that if we set -m and the system has SWAP it will take memory from there and keep running since we in STX do not have swap the containers fails from starvation. Regards Victor R On Mon, Apr 29, 2019 at 8:34 AM Miller, Frank wrote: > > We will be holding a meeting today. Planned agenda is below – please add to the etherpad agenda if you have any additional topics to discuss: > > > > Etherpad: https://etherpad.openstack.org/p/stx-containerization > > > > Agenda: > > 1. Sanity status/issue discussion: > > - All in one systems, bare metal > > - Multi node systems > > - Virtual systems > > > > 2. Test team status & top issues > > > > 3. Feature Topics: > > - Overall status > > - Technical discussion on any outstanding features > > > > 4. Open topics > > - no meeting Monday May 6th > > - other items? > > > > > > ------ > > Frank > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcel at schaible-consulting.de Mon Apr 29 13:45:04 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Mon, 29 Apr 2019 15:45:04 +0200 (CEST) Subject: [Starlingx-discuss] Duplex: Bringing up osh-openstack-neutron hangs Message-ID: <1099514314.1023130.1556545504733@communicator.strato.com> Hi, I am using the starlingx Image from 20190411 and trying to setup a duplex configuration. Bringing up the services "system application-apply stx-openstack" seems to hang at 65% in osh-openstack-neutron: | stx-openstack | armada-manifest | manifest.yaml | applying | processing chart: osh-openstack-neutron, overall completion: 65.0% | Output from "sudo docker exec armada_service tail -n 100 -f stx-openstack-apply.log": Any idea what can cause this behaviour and how to debug it? Thanks Marcel From michel.thebeau at windriver.com Mon Apr 29 14:01:25 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Mon, 29 Apr 2019 10:01:25 -0400 Subject: [Starlingx-discuss] Python 3 Error in python 2.7 env when building iso In-Reply-To: References: Message-ID: <1556546485.24388.6.camel@windriver.com> Hi, If memory serves me correctly, that's a version of mock that does not properly report python3 as a requirement.  
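
For anyone who wants to see why that fails: FileNotFoundError is a Python 3 builtin, so the name simply does not exist under the Python 2.7 interpreter used in the build container. A version-agnostic pattern (shown below purely as an illustration; this is not the mock source) goes through OSError/IOError and errno instead.

# Illustration of the Python 2/3 incompatibility reported above.
import errno

try:
    FileNotFoundError            # Python 3 builtin
except NameError:
    FileNotFoundError = IOError  # Python 2.7 fallback

def read_first_line(path):
    try:
        with open(path) as f:
            return f.readline()
    except (IOError, OSError) as exc:
        if exc.errno == errno.ENOENT:
            return None          # file genuinely missing
        raise
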
A month ago I fixed this on my build host by downgrading mock to a version that was not broken. Google search is agreeing with my memory: https://bugzilla.redhat.com/show_bug.cgi?id=1696234 https://bugzilla.redhat.com/show_bug.cgi?id=1686107 M On Sun, 2019-04-28 at 02:11 +0000, Song Gao song (高松) wrote: > Hi Folks: >         According to the Build Guide for stx.2019.05, there is an > error found in the building container, > /usr/lib/python2.7/site-packages/mockbuild/package_manager.py Line > 162 will interrupt buiding > Process By throwing out an FileNotFoundError. While, > FileNotFoundError is an Python3 built-in error, > not in python2.7. Just reporting to the community. > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Mon Apr 29 14:44:35 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Mon, 29 Apr 2019 14:44:35 +0000 Subject: [Starlingx-discuss] V1 Review Request: Story 29990: libvirt and qemu patch reduction In-Reply-To: References: <424305be-e43b-c48f-5d75-f0aa8f23f8c3@windriver.com> <95d4d335-04f8-5990-77d0-d90428ae5a1a@windriver.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA48A4FE@ALA-MBD.corp.ad.wrs.com> The associated starlingx updates have now merged. -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Friday, April 26, 2019 4:42 PM To: Somerville, Jim Cc: Xie, Cindy; Rowsell, Brent; Khalil, Ghada; Saul Wold; Hu, Yong; Liu, ZhipengS; starlingx Subject: Re: [Starlingx-discuss] V1 Review Request: Story 29990: libvirt and qemu patch reduction On Fri, Apr 26, 2019 at 2:52 PM Jim Somerville wrote: > Saul finished approving the new branch contents, so they're ready to > merge into your newly created -1 branch versions, assuming you're good > with them as well. Done. Thanks Jim dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Al.Bailey at windriver.com Mon Apr 29 14:57:37 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Mon, 29 Apr 2019 14:57:37 +0000 Subject: [Starlingx-discuss] Duplex: Bringing up osh-openstack-neutron hangs In-Reply-To: <1099514314.1023130.1556545504733@communicator.strato.com> References: <1099514314.1023130.1556545504733@communicator.strato.com> Message-ID: Note: Your debugging output from the armada pod is empty. The 65% osh-openstack-neutron is the compute-kit chart group. This includes nova, neutron, openvswitch, nova-api-proxy and neutron. If all those pods do not start up in minutes, the apply will fail. Probably this was the issue: https://bugs.launchpad.net/starlingx/+bug/1826592 Depending on what time the load on April 11 was built, it could also have been: https://bugs.launchpad.net/starlingx/+bug/1824567 Al -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Monday, April 29, 2019 9:45 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Duplex: Bringing up osh-openstack-neutron hangs Hi, I am using the starlingx Image from 20190411 and trying to setup a duplex configuration. 
Bringing up the services "system application-apply stx-openstack" seems to hang at 65% in osh-openstack-neutron: | stx-openstack | armada-manifest | manifest.yaml | applying | processing chart: osh-openstack-neutron, overall completion: 65.0% | Output from "sudo docker exec armada_service tail -n 100 -f stx-openstack-apply.log": Any idea what can cause this behaviour and how to debug it? Thanks Marcel _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcel at schaible-consulting.de Mon Apr 29 17:49:11 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Mon, 29 Apr 2019 19:49:11 +0200 (CEST) Subject: [Starlingx-discuss] Duplex: Controller-1 Kernel Panic Message-ID: <766150029.1037580.1556560151136@communicator.strato.com> [Hardware: ArteSyn MaxCore, 64GB RAM, 2 CPU Boards] Hi, after setting the personality of our controller-1 we get the follwoing kernel panic: … [ 19.876920] iTCO_vendor_support: vendor-support=0 [ 19.878838] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11 [ 19.878895] iTCO_wdt: I/O address 0x0460 already in use, device disabled [ 19.878902] iTCO_wdt: probe of iTCO_wdt.0.auto failed with error -16 [ 19.928431] nvme nvme0: missing or invalid SUBNQN field. [ 20.003872] nvme nvme1: missing or invalid SUBNQN field. [ 20.132408] ata1: SATA link down (SStatus 0 SControl 300) [ 20.138460] ata3: SATA link down (SStatus 0 SControl 300) [ 20.144507] ata5: SATA link down (SStatus 0 SControl 300) [ 20.150552] ata4: SATA link down (SStatus 0 SControl 300) [ 20.156587] ata6: SATA link down (SStatus 0 SControl 300) [ 20.162632] ata2: SATA link down (SStatus 0 SControl 300) [ 20.808356] scsi 0:0:0:0: Direct-Access Generic Ultra HS-COMBO 1.98 PQ: 0 ANSI: 0 [ 20.831406] sd 0:0:0:0: [sda] Attached SCSI removable disk [ 20.839066] random: fast init done [ 80.503675] nvme nvme1: I/O 22 QID 0 timeout, disable controller [ 80.511636] nvme nvme0: I/O 0 QID 0 timeout, disable controller [ 80.610660] ------------[ cut here ]------------ [ 80.615824] WARNING: CPU: 3 PID: 4793 at kernel/irq/manage.c:1355 __free_irq+0xb3/0x250 [ 80.618638] ------------[ cut here ]------------ [ 80.618642] WARNING: CPU: 1 PID: 4792 at kernel/irq/manage.c:1355 __free_irq+0xb3/0x250 [ 80.618643] Trying to free already-free IRQ 143 [ 80.618673] Modules linked in: sd_mod crc_t10dif crct10dif_generic intel_powerclamp iTCO_wdt coretemp iTCO_vendor_support kvm_intel uas kvm irqbypass crct10dif_pclmul crct10dif_common crc32_pclmul crc32c_intel ghash_clmulni_intel usb_storage aesni_intel glue_helper ablk_helper cryptd wdat_wdt ahci lpc_ich i2c_i801 libahci nvme nvme_core acpi_power_meter 8021q garp mrp stp llc sunrpc xts lrw gf128mul dm_crypt dm_round_robin dm_multipath dm_snapshot dm_bufio dm_mirror dm_region_hash dm_log dm_zero dm_mod linear raid10 raid456 async_raid6_recov async_memcpy async_pq raid6_pq libcrc32c async_xor xor async_tx raid1 raid0 iscsi_ibft iscsi_boot_sysfs tpm_tis(O) tpm_tis_core(O) tpm(O) xprtrdma(O) svcrdma(O) rpcrdma(O) nvmet_rdma(O) nvme_rdma(O) ib_srp(O) ib_isert(O) ib_iser(O) rdma_rxe(O) mlx5_ib(O) mlx5_core(O) mlxfw(O) mlx4_ib(O) mlx4_en(O) mlx4_core(O) devlink rdma_ucm(O) rdma_cm(O) iw_cm(O) ib_ucm(O) ib_uverbs(O) ib_cm(O) ib_core(O) mlx_compat(O) ixgbe(O) dca i40e(O) e1000e(O) ip_tables [ 80.618683] CPU: 1 PID: 4792 Comm: kworker/1:1H Tainted: G W OE ------------ 3.10.0-957.1.3.el7.1.tis.x86_64 #1 [ 80.618684] Hardware name: ARTESYN 
PCIE-7410/PCIE-7410, BIOS 1.7.3 5-Jun-2018 10:49 [ 80.618689] Workqueue: kblockd blk_mq_timeout_work [ 80.618690] Call Trace: [ 80.618695] [] dump_stack+0x19/0x1b [ 80.618700] [] __warn+0xd8/0x100 [ 80.618702] [] warn_slowpath_fmt+0x5f/0x80 [ 80.618703] [] __free_irq+0xb3/0x250 [ 80.618704] [] free_irq+0x39/0x90 [ 80.618708] [] nvme_dev_disable+0x113/0x4a0 [nvme] [ 80.618712] [] ? dev_warn+0x6c/0x90 [ 80.618714] [] nvme_timeout+0x204/0x2d0 [nvme] [ 80.618719] [] ? cpuacct_charge+0x61/0x70 [ 80.618720] [] ? update_curr+0x14c/0x210 [ 80.618722] [] blk_mq_rq_timed_out+0x32/0x80 [ 80.618723] [] blk_mq_check_expired+0x5c/0x60 [ 80.618725] [] bt_iter+0x54/0x60 [ 80.618726] [] blk_mq_queue_tag_busy_iter+0x11b/0x290 [ 80.618727] [] ? blk_mq_rq_timed_out+0x80/0x80 [ 80.618728] [] ? blk_mq_rq_timed_out+0x80/0x80 [ 80.618732] [] ? __sched_fork+0x250/0x260 [ 80.618733] [] blk_mq_timeout_work+0xbb/0x1c0 [ 80.618738] [] process_one_work+0x176/0x4a0 [ 80.618739] [] worker_thread+0x126/0x3b0 [ 80.618741] [] ? manage_workers.isra.28+0x2a0/0x2a0 [ 80.618743] [] kthread+0xd1/0xe0 [ 80.618744] [] ? kthread_create_on_node+0x140/0x140 [ 80.618748] [] ret_from_fork_nospec_begin+0x7/0x21 [ 80.618750] [] ? kthread_create_on_node+0x140/0x140 [ 80.618750] ---[ end trace 29827fc8c242fa85 ]--- [ 80.618756] BUG: unable to handle kernel NULL pointer dereference at 0000000000000048 [ 80.618757] IP: [] free_irq+0x39/0x90 [ 80.618758] PGD 0 [ 80.618759] Oops: 0000 [#1] PREEMPT SMP [ 80.618774] Modules linked in: sd_mod crc_t10dif crct10dif_generic intel_powerclamp iTCO_wdt coretemp iTCO_vendor_support kvm_intel uas kvm irqbypass crct10dif_pclmul crct10dif_common crc32_pclmul crc32c_intel ghash_clmulni_intel usb_storage aesni_intel glue_helper ablk_helper cryptd wdat_wdt ahci lpc_ich i2c_i801 libahci nvme nvme_core acpi_power_meter 8021q garp mrp stp llc sunrpc xts lrw gf128mul dm_crypt dm_round_robin dm_multipath dm_snapshot dm_bufio dm_mirror dm_region_hash dm_log dm_zero dm_mod linear raid10 raid456 async_raid6_recov async_memcpy async_pq raid6_pq libcrc32c async_xor xor async_tx raid1 raid0 iscsi_ibft iscsi_boot_sysfs tpm_tis(O) tpm_tis_core(O) tpm(O) xprtrdma(O) svcrdma(O) rpcrdma(O) nvmet_rdma(O) nvme_rdma(O) ib_srp(O) ib_isert(O) ib_iser(O) rdma_rxe(O) mlx5_ib(O) mlx5_core(O) mlxfw(O) mlx4_ib(O) mlx4_en(O) mlx4_core(O) devlink rdma_ucm(O) rdma_cm(O) iw_cm(O) ib_ucm(O) ib_uverbs(O) ib_cm(O) ib_core(O) mlx_compat(O) ixgbe(O) dca i40e(O) e1000e(O) ip_tables [ 80.618780] CPU: 1 PID: 4792 Comm: kworker/1:1H Tainted: G W OE ------------ 3.10.0-957.1.3.el7.1.tis.x86_64 #1 [ 80.618780] Hardware name: ARTESYN PCIE-7410/PCIE-7410, BIOS 1.7.3 5-Jun-2018 10:49 [ 80.618782] Workqueue: kblockd blk_mq_timeout_work … I suspect that the root cause is the error "nvme nvme0: missing or invalid SUBNQN field." nvme0n1 is our boot_device and rootfs_device. Asking Google I'll found something similar (https://unix.stackexchange.com/questions/470778/nvme-missing-or-invalid-subnqn-field). What is the official way to change the kernel boot parameters for pxe boot? Thanks Marcel From marcel at schaible-consulting.de Mon Apr 29 17:50:34 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Mon, 29 Apr 2019 19:50:34 +0200 (CEST) Subject: [Starlingx-discuss] Duplex: Bringing up osh-openstack-neutron hangs In-Reply-To: References: <1099514314.1023130.1556545504733@communicator.strato.com> Message-ID: <2132456036.1037617.1556560234918@communicator.strato.com> Hi Albert, thanks for clarification. Which build would you recommend? 
Thanks Marcel > "Bailey, Henry Albert (Al)" hat am 29. April 2019 um 16:57 geschrieben: > > > Note: Your debugging output from the armada pod is empty. > > The 65% osh-openstack-neutron is the compute-kit chart group. This includes nova, neutron, openvswitch, nova-api-proxy and neutron. > > If all those pods do not start up in minutes, the apply will fail. > > Probably this was the issue: > https://bugs.launchpad.net/starlingx/+bug/1826592 > > Depending on what time the load on April 11 was built, it could also have been: > https://bugs.launchpad.net/starlingx/+bug/1824567 > > Al > > > > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Monday, April 29, 2019 9:45 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Duplex: Bringing up osh-openstack-neutron hangs > > Hi, > > I am using the starlingx Image from 20190411 and trying to setup a duplex configuration. > > Bringing up the services > > "system application-apply stx-openstack" > > seems to hang at 65% in osh-openstack-neutron: > > | stx-openstack | armada-manifest | manifest.yaml | applying | processing chart: osh-openstack-neutron, overall completion: 65.0% | > > Output from "sudo docker exec armada_service tail -n 100 -f stx-openstack-apply.log": > > Any idea what can cause this behaviour and how to debug it? > > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Mon Apr 29 18:25:07 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Mon, 29 Apr 2019 18:25:07 +0000 Subject: [Starlingx-discuss] Duplex: Controller-1 Kernel Panic In-Reply-To: <766150029.1037580.1556560151136@communicator.strato.com> References: <766150029.1037580.1556560151136@communicator.strato.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA48B6FD@ALA-MBD.corp.ad.wrs.com> Hi Marcel, We don't currently have a mechanism to allow a user to add arbitrary kernel parameters to the installation boot cmdline. In order to test to see if adding this parameter resolves your issue, you could modify the /pxeboot/pxelinux.cfg/ file that corresponds to your controller-1. If it works, we can look at either adding it to the default cmdline, or add support for specifying either this value or something more flexible. Cheers, Don. -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Monday, April 29, 2019 1:49 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Duplex: Controller-1 Kernel Panic [Hardware: ArteSyn MaxCore, 64GB RAM, 2 CPU Boards] Hi, after setting the personality of our controller-1 we get the follwoing kernel panic: … [ 19.876920] iTCO_vendor_support: vendor-support=0 [ 19.878838] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11 [ 19.878895] iTCO_wdt: I/O address 0x0460 already in use, device disabled [ 19.878902] iTCO_wdt: probe of iTCO_wdt.0.auto failed with error -16 [ 19.928431] nvme nvme0: missing or invalid SUBNQN field. [ 20.003872] nvme nvme1: missing or invalid SUBNQN field. 
[ 20.132408] ata1: SATA link down (SStatus 0 SControl 300) [ 20.138460] ata3: SATA link down (SStatus 0 SControl 300) [ 20.144507] ata5: SATA link down (SStatus 0 SControl 300) [ 20.150552] ata4: SATA link down (SStatus 0 SControl 300) [ 20.156587] ata6: SATA link down (SStatus 0 SControl 300) [ 20.162632] ata2: SATA link down (SStatus 0 SControl 300) [ 20.808356] scsi 0:0:0:0: Direct-Access Generic Ultra HS-COMBO 1.98 PQ: 0 ANSI: 0 [ 20.831406] sd 0:0:0:0: [sda] Attached SCSI removable disk [ 20.839066] random: fast init done [ 80.503675] nvme nvme1: I/O 22 QID 0 timeout, disable controller [ 80.511636] nvme nvme0: I/O 0 QID 0 timeout, disable controller [ 80.610660] ------------[ cut here ]------------ [ 80.615824] WARNING: CPU: 3 PID: 4793 at kernel/irq/manage.c:1355 __free_irq+0xb3/0x250 [ 80.618638] ------------[ cut here ]------------ [ 80.618642] WARNING: CPU: 1 PID: 4792 at kernel/irq/manage.c:1355 __free_irq+0xb3/0x250 [ 80.618643] Trying to free already-free IRQ 143 [ 80.618673] Modules linked in: sd_mod crc_t10dif crct10dif_generic intel_powerclamp iTCO_wdt coretemp iTCO_vendor_support kvm_intel uas kvm irqbypass crct10dif_pclmul crct10dif_common crc32_pclmul crc32c_intel ghash_clmulni_intel usb_storage aesni_intel glue_helper ablk_helper cryptd wdat_wdt ahci lpc_ich i2c_i801 libahci nvme nvme_core acpi_power_meter 8021q garp mrp stp llc sunrpc xts lrw gf128mul dm_crypt dm_round_robin dm_multipath dm_snapshot dm_bufio dm_mirror dm_region_hash dm_log dm_zero dm_mod linear raid10 raid456 async_raid6_recov async_memcpy async_pq raid6_pq libcrc32c async_xor xor async_tx raid1 raid0 iscsi_ibft iscsi_boot_sysfs tpm_tis(O) tpm_tis_core(O) tpm(O) xprtrdma(O) svcrdma(O) rpcrdma(O) nvmet_rdma(O) nvme_rdma(O) ib_srp(O) ib_isert(O) ib_iser(O) rdma_rxe(O) mlx5_ib(O) mlx5_core(O) mlxfw(O) mlx4_ib(O) mlx4_en(O) mlx4_core(O) devlink rdma_ucm(O) rdma_cm(O) iw_cm(O) ib_ucm(O) ib_uverbs(O) ib_cm(O) ib_core(O) mlx_compat(O) ixgbe(O) dca i40e(O) e1000e(O) ip_tables [ 80.618683] CPU: 1 PID: 4792 Comm: kworker/1:1H Tainted: G W OE ------------ 3.10.0-957.1.3.el7.1.tis.x86_64 #1 [ 80.618684] Hardware name: ARTESYN PCIE-7410/PCIE-7410, BIOS 1.7.3 5-Jun-2018 10:49 [ 80.618689] Workqueue: kblockd blk_mq_timeout_work [ 80.618690] Call Trace: [ 80.618695] [] dump_stack+0x19/0x1b [ 80.618700] [] __warn+0xd8/0x100 [ 80.618702] [] warn_slowpath_fmt+0x5f/0x80 [ 80.618703] [] __free_irq+0xb3/0x250 [ 80.618704] [] free_irq+0x39/0x90 [ 80.618708] [] nvme_dev_disable+0x113/0x4a0 [nvme] [ 80.618712] [] ? dev_warn+0x6c/0x90 [ 80.618714] [] nvme_timeout+0x204/0x2d0 [nvme] [ 80.618719] [] ? cpuacct_charge+0x61/0x70 [ 80.618720] [] ? update_curr+0x14c/0x210 [ 80.618722] [] blk_mq_rq_timed_out+0x32/0x80 [ 80.618723] [] blk_mq_check_expired+0x5c/0x60 [ 80.618725] [] bt_iter+0x54/0x60 [ 80.618726] [] blk_mq_queue_tag_busy_iter+0x11b/0x290 [ 80.618727] [] ? blk_mq_rq_timed_out+0x80/0x80 [ 80.618728] [] ? blk_mq_rq_timed_out+0x80/0x80 [ 80.618732] [] ? __sched_fork+0x250/0x260 [ 80.618733] [] blk_mq_timeout_work+0xbb/0x1c0 [ 80.618738] [] process_one_work+0x176/0x4a0 [ 80.618739] [] worker_thread+0x126/0x3b0 [ 80.618741] [] ? manage_workers.isra.28+0x2a0/0x2a0 [ 80.618743] [] kthread+0xd1/0xe0 [ 80.618744] [] ? kthread_create_on_node+0x140/0x140 [ 80.618748] [] ret_from_fork_nospec_begin+0x7/0x21 [ 80.618750] [] ? 
kthread_create_on_node+0x140/0x140 [ 80.618750] ---[ end trace 29827fc8c242fa85 ]--- [ 80.618756] BUG: unable to handle kernel NULL pointer dereference at 0000000000000048 [ 80.618757] IP: [] free_irq+0x39/0x90 [ 80.618758] PGD 0 [ 80.618759] Oops: 0000 [#1] PREEMPT SMP [ 80.618774] Modules linked in: sd_mod crc_t10dif crct10dif_generic intel_powerclamp iTCO_wdt coretemp iTCO_vendor_support kvm_intel uas kvm irqbypass crct10dif_pclmul crct10dif_common crc32_pclmul crc32c_intel ghash_clmulni_intel usb_storage aesni_intel glue_helper ablk_helper cryptd wdat_wdt ahci lpc_ich i2c_i801 libahci nvme nvme_core acpi_power_meter 8021q garp mrp stp llc sunrpc xts lrw gf128mul dm_crypt dm_round_robin dm_multipath dm_snapshot dm_bufio dm_mirror dm_region_hash dm_log dm_zero dm_mod linear raid10 raid456 async_raid6_recov async_memcpy async_pq raid6_pq libcrc32c async_xor xor async_tx raid1 raid0 iscsi_ibft iscsi_boot_sysfs tpm_tis(O) tpm_tis_core(O) tpm(O) xprtrdma(O) svcrdma(O) rpcrdma(O) nvmet_rdma(O) nvme_rdma(O) ib_srp(O) ib_isert(O) ib_iser(O) rdma_rxe(O) mlx5_ib(O) mlx5_core(O) mlxfw(O) mlx4_ib(O) mlx4_en(O) mlx4_core(O) devlink rdma_ucm(O) rdma_cm(O) iw_cm(O) ib_ucm(O) ib_uverbs(O) ib_cm(O) ib_core(O) mlx_compat(O) ixgbe(O) dca i40e(O) e1000e(O) ip_tables [ 80.618780] CPU: 1 PID: 4792 Comm: kworker/1:1H Tainted: G W OE ------------ 3.10.0-957.1.3.el7.1.tis.x86_64 #1 [ 80.618780] Hardware name: ARTESYN PCIE-7410/PCIE-7410, BIOS 1.7.3 5-Jun-2018 10:49 [ 80.618782] Workqueue: kblockd blk_mq_timeout_work … I suspect that the root cause is the error "nvme nvme0: missing or invalid SUBNQN field." nvme0n1 is our boot_device and rootfs_device. Asking Google I'll found something similar (https://unix.stackexchange.com/questions/470778/nvme-missing-or-invalid-subnqn-field). What is the official way to change the kernel boot parameters for pxe boot? Thanks Marcel _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Al.Bailey at windriver.com Mon Apr 29 18:24:40 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Mon, 29 Apr 2019 18:24:40 +0000 Subject: [Starlingx-discuss] Duplex: Bringing up osh-openstack-neutron hangs In-Reply-To: <2132456036.1037617.1556560234918@communicator.strato.com> References: <1099514314.1023130.1556545504733@communicator.strato.com> <2132456036.1037617.1556560234918@communicator.strato.com> Message-ID: My understanding is that sanity is underway for the April 28 build, and is looking promising. I believe the sanity results will be published later tonight. If it passes, I think we would point the latest_green_build to that revision and that would be the build we recommend. Link to latest green build is here, it's currently indicating the April 11 load. http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_green_build/ Al -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Monday, April 29, 2019 1:51 PM To: Bailey, Henry Albert (Al); starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Duplex: Bringing up osh-openstack-neutron hangs Hi Albert, thanks for clarification. Which build would you recommend? Thanks Marcel > "Bailey, Henry Albert (Al)" hat am 29. April 2019 um 16:57 geschrieben: > > > Note: Your debugging output from the armada pod is empty. > > The 65% osh-openstack-neutron is the compute-kit chart group. 
This includes nova, neutron, openvswitch, nova-api-proxy and neutron. > > If all those pods do not start up in minutes, the apply will fail. > > Probably this was the issue: > https://bugs.launchpad.net/starlingx/+bug/1826592 > > Depending on what time the load on April 11 was built, it could also have been: > https://bugs.launchpad.net/starlingx/+bug/1824567 > > Al > > > > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Monday, April 29, 2019 9:45 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Duplex: Bringing up osh-openstack-neutron hangs > > Hi, > > I am using the starlingx Image from 20190411 and trying to setup a duplex configuration. > > Bringing up the services > > "system application-apply stx-openstack" > > seems to hang at 65% in osh-openstack-neutron: > > | stx-openstack | armada-manifest | manifest.yaml | applying | processing chart: osh-openstack-neutron, overall completion: 65.0% | > > Output from "sudo docker exec armada_service tail -n 100 -f stx-openstack-apply.log": > > Any idea what can cause this behaviour and how to debug it? > > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From elio.martinez.monroy at intel.com Mon Apr 29 19:17:21 2019 From: elio.martinez.monroy at intel.com (Martinez Monroy, Elio) Date: Mon, 29 Apr 2019 19:17:21 +0000 Subject: [Starlingx-discuss] AZs Awareness Message-ID: <1466AF2176E6F040BD63860D0A241BBD46CBE927@FMSMSX109.amr.corp.intel.com> Should we consider to evaluate AZ Awareness in our availabitity zones? BR [cid:image001.png at 01CF8BAC.3B4C5DD0] Martinez Monroy, Elio. QA Engineer. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 4914 bytes Desc: image001.png URL: From cristopher.j.lemus.contreras at intel.com Mon Apr 29 20:21:29 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Mon, 29 Apr 2019 20:21:29 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 In-Reply-To: References: <63839F7A-75A9-4266-A2F5-D373391CBAD4@intel.com> <92F9583D-D670-442D-A437-E72761E815DB@intel.com> <1F0FCBCF-A1A5-4E8D-9671-BCD5EFA15BCC@intel.com> Message-ID: <0427FED8-440B-4129-94F7-2CBF6DBBDE8E@intel.com> Hi Frank, With latest ISO, all baremetal configurations are passing sanity test (Green Status), regarding memory usage, during the unlock of controller-0, it jumps from using 5.5GB to 72GB, when we reported the bug, the usage was 71GB, almost the same as today. I'm assuming that docker reserves the memory because the pods/containers are not limited, as we can see on docker stats, almost all containers have their limit set by the total amount of physical memory on the system, Is this behavior expected? is there a way to properly track down memory usage at docker level? Ideally, something that can help to determine when memory is being heavily impacted and something that helps to provide valuable information when we report bugs. I added some outputs about memory usage at os level and what is reported by docker on the bug: https://bugs.launchpad.net/starlingx/+bug/1826308 Thanks! 
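
One low-effort way to capture that kind of evidence for a Launchpad report is to sample docker stats on a timer and attach the resulting CSV. A rough sketch follows; it only assumes the docker CLI is on the PATH, and the output file name is arbitrary.

#!/usr/bin/env python
# Sketch: periodically snapshot `docker stats` into a CSV for bug reports.
import datetime
import subprocess
import time

FORMAT = "{{.Name}},{{.MemUsage}},{{.MemPerc}},{{.CPUPerc}}"

def snapshot():
    out = subprocess.check_output(
        ["docker", "stats", "--no-stream", "--format", FORMAT])
    stamp = datetime.datetime.utcnow().isoformat()
    return ["%s,%s" % (stamp, line) for line in out.decode().splitlines()]

if __name__ == "__main__":
    with open("docker-mem-usage.csv", "a") as log:
        while True:
            for row in snapshot():
                log.write(row + "\n")
            log.flush()
            time.sleep(60)  # one sample per minute
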
Cristopher Lemus On 4/27/19, 2:46 PM, "Miller, Frank" wrote: Hi All: After a prolonged debug session on Friday by various developers, it looks like the memory issue seen in the Intel labs is due to the excessive number of nova pods being launched which is directly related to the number of cores used on the BM servers. The Intel lab servers have many more cores than most of the labs used in WindRiver labs and explains why the memory issue is much rarer in some labs. Al Bailey and Gerry Kopec worked on a solution [1] which should be available in today's builds. In addition while debugging the application-apply issues on AIO labs, in some cases timeouts were being seen either during download or applying of the stx-application. This is believed to be a result of a StoryBoard that merged two weeks ago to affine platform processes and pods to platform cores leaving the other cores available for application pods. This reduces the core processing available during application-apply. To alleviate this issue, two additional commits [2,3] were proposed and merged. Let's review the updated sanity results on Monday and determine if any further actions are required. Frank [1] https://review.opendev.org/#/c/656037/ [2] https://review.opendev.org/#/c/656009/ [3] https://review.opendev.org/#/c/656025/ -----Original Message----- From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Friday, April 26, 2019 6:06 PM To: Victor Rodriguez ; Cordoba Malibran, Erich Cc: Li, Cheng1 ; Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Hi All, Some test were made to find the point where the memory is allocated: Just after `config_controller` it's using just a handful of GBs: controller-0:~$ free -h total used free shared buff/cache available Mem: 93G 3.2G 84G 47M 5.5G 88G Swap: 0B 0B 0B controller-0:~$ Right after the unlock, when the system pass from "offline" status to "intest" it jumps from using 5.1GB to 71GB, this is just with kube-system pods: total used free shared buff/cache available Mem: 93G 71G 19G 45M 1.9G 20G Swap: 0B 0B 0B NAME READY STATUS RESTARTS AGE calico-kube-controllers-84cdb6bd7c-w75rk 1/1 Running 1 36m calico-node-zp8xv 1/1 Running 1 36m coredns-84bb87857f-lp8sl 1/1 Running 1 36m coredns-84bb87857f-r6mdf 0/1 Pending 0 36m kube-apiserver-controller-0 1/1 Running 1 35m kube-controller-manager-controller-0 1/1 Running 2 35m kube-proxy-w7sfq 1/1 Running 1 36m kube-scheduler-controller-0 1/1 Running 2 35m tiller-deploy-d87d7bd75-hjb7w 1/1 Running 1 36m Bug updated with this info. 
Regards, Cristopher Lemus On 4/26/19, 11:30 AM, "Victor Rodriguez" wrote: Hi team My findings so far this morning: In order to know how much memory ( really ) a docker is consuming i tested 2 tools ( docker stat and reading from the /proc/pid/mmpas ) I create a simple C code that consumes X KB of memory by malloc and then free it: https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/memory.c Reserving 5000 Kb of memory Value of String = simple_test Address = 2895619200 Waiting for 30 seconds I compile it and cp into my docker image: https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/Dockerfile When I run the docker and monitor the memory with docker stats : It shows only 2.5 Kb of memory when from /proc kernel ifo i get : vmrod at vmrod-ubuntu-devel:/tmp$ ./usr/bin/psstop | grep docker docker-containe 1857 : 0 Kb dockerd 2758 : 0 Kb docker-containe 3368 : 0 Kb docker-containe 5438 : 0 Kb docker-containe 25159 : 0 Kb docker 25105 : 48378 Kb ( first column is PID second one is memory consumed ) , in this case, it shows 48378 kb vs 5000 kb of memory that i know that i requested In order to find the memory leak, we must rely on the tools we use to measure it, Cristopher can you help me to repeat the same experiment to know if you see the same behavior ? If so we can start to put -m on each docker image to limit the memory size ( 2GB should be enough right ? ) WIP regards On Thu, Apr 25, 2019 at 10:33 PM Victor Rodriguez wrote: > > Can we consider the track of vm used by the running proces from /proc? we can work on a script using psstop(0) or other similar tool,what do you think. This might help us to find the process is consuming the memory over the time > > I also see the same problem of consuming almost 90% of the memory not only in all in one systems but also in duplex > > (0) https://github.com/clearlinux/psstop > > Regards > Victor Rodriguez > > On Thu, Apr 25, 2019, 21:59 Cordoba Malibran, Erich wrote: >> >> Hi, >> >> In this case we have: >> >> HugePages_Total: 34104 >> HugePages_Free: 34104 >> HugePages_Rsvd: 0 >> HugePages_Surp: 0 >> >> So, I'm not sure if it can be related with 1825814. >> >> Also, for people not seeing this issue, how much memory do you have in your baremetal systems? What's the minimum required memory for running an AIO system. Our failing system have 97 GB and free -h shows. >> >> total used free shared buff/cache available >> Mem: 93G 84G 3.2G 66M 5.6G 4.8G >> Swap: 0B 0B 0B >> >> >> A couple months ago I reported a similar issue[0], in that case after three days in stand-by the system started to throw Out of Memory errors. Does anyone has performed a longevity test for some days? Maybe the working systems might fail after a while if the memory usage keeps increasing over time. >> >> -Erich >> >> [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/002923.html >> >> >> >> From: "Li, Cheng1" >> Date: Thursday, April 25, 2019 at 8:29 PM >> To: "Lemus Contreras, Cristopher J" , "Miller, Frank" , "Perez Ibarra, Maria G" , "starlingx-discuss at lists.starlingx.io" >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Actually, I had also reported the memory issue[1] days ago. >> Memory exhaust happens because so little 4K memory is allocated for system/software load. 
>> >> [1] https://bugs.launchpad.net/starlingx/+bug/1825814 >> >> Thanks, >> Cheng >> >> From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] >> Sent: Friday, April 26, 2019 1:50 AM >> To: Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Hi Frank, >> >> We had a zoom call with Al Bailey to troubleshoot the issues that we are observing. The bug where a single CPU was taking all of the workload is resolved. >> >> What we observed seems to be an issue with memory exhaust, additional information was gathered an added to this bug for further troubleshooting: https://bugs.launchpad.net/starlingx/+bug/1826308 >> >> If additional information is required, please, just let us know. >> >> Thanks & Regards, >> >> Cristopher Lemus >> >> From: "Miller, Frank" >> Date: Thursday, April 25, 2019 at 8:24 AM >> To: "Perez Ibarra, Maria G" , "mailto:starlingx-discuss at lists.starlingx.io" >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Maria: >> >> It looks like the commit referenced yesterday [1] is not addressing the issue in your BM labs. Can you set up a live debug session so that some container SMEs can investigate? >> >> Frank >> [1] https://review.opendev.org/#/c/655240/ >> >> From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] >> Sent: Thursday, April 25, 2019 12:12 AM >> To: mailto:starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/) >> >> Status: RED >> >> =========================================== >> >> Sanity Test is executed in a Containers – Bare Metal Environment >> >> AIO - Simplex >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 49 TCs [FAIL]| 40 TCs FAIL >> Sanity Platform 07 TCs [FAIL]| 07 TCs FAIL >> >> TOTAL: 57 TCS [Fail : 47] >> >> AIO – Duplex >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 52 TCs [FAIL] | 42 TCs FAIL >> Sanity Platform 05 TCs [FAIL] | 05 TCs FAIL >> >> TOTAL: 57 TCS [Fail : 47 TCs] >> >> Standard - Local Storage (2+2) >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 49 TCs [PASS] >> Sanity Platform 07 TCs [PASS] >> >> TOTAL: 57 TCS PASS >> >> Standard - Dedicated Storage (2+2+2) >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 52 TCs [PASS] >> Sanity Platform 05 TCs [PASS] >> >> TOTAL: 57 TCS PASS >> >> >> >> Sanity Test is executed in a Containers - Virtual Environment >> >> AIO - Simplex >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> AIO - Duplex >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> Standard – Local Storage >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> Standard – Dedicated Storage >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 
TCs] >> >> - some pods are failing during BM sanity execution. https://bugs.launchpad.net/starlingx/+bug/1826308 >> - Sanity Bare metal was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ >> - Sanity Virtual was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ >> - Tomorrow in sanity virtual we will perform a double check with the latest ISO that includes the fixes. >> >> For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack >> >> >> Regards >> Maria G. >> >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bin.yang at intel.com Tue Apr 30 01:33:01 2019 From: bin.yang at intel.com (Yang, Bin) Date: Tue, 30 Apr 2019 01:33:01 +0000 Subject: [Starlingx-discuss] Agenda: Weekly Containerization Meeting Monday April 29 In-Reply-To: References: Message-ID: <1556587980.30870.13.camel@intel.com> Hi Victor, K8s always not works with "swap enabled". Refer to: https://github.com/ku bernetes/kubernetes/issues/53533 As "Angus Lees" mentioned, "Applications should configure limits.memory with the max ram they ever want to use, and requests.memory with their intended working set (including kernel buffers, etc)" I think it needs to assign enough memory for STX critical containers to keep system being stable. Please correct me if wrong. thanks, Bin On Mon, 2019-04-29 at 08:42 -0500, Victor Rodriguez wrote: > Hi Frank > > I am in charge of the track of performance and footprint of the STX systems > > Due to the fact that this bug was a footprint issue I was wondering if > should discuss on the meeting on what stages and part of the STX do > you want me to track the memory footprint > > Also, I have a question regarding the chance to set up -m to each > docker image so we can limit the amount of memory of each one. One of > the experiments we did las Friday debugging the issue was that if we > set -m and the system has SWAP it will take memory from there and keep > running since we in STX do not have swap the containers fails from > starvation. > > Regards > > Victor R > > On Mon, Apr 29, 2019 at 8:34 AM Miller, Frank > wrote: > > > > > > We will be holding a meeting today.  Planned agenda is below – please add to > > the etherpad agenda if you have any additional topics to discuss: > > > > > > > > Etherpad: https://etherpad.openstack.org/p/stx-containerization > > > > > > > > Agenda: > > > > 1. Sanity status/issue discussion: > > > >     - All in one systems, bare metal > > > >     - Multi node systems > > > >     - Virtual systems > > > > > > > > 2. Test team status & top issues > > > > > > > > 3. Feature Topics: > > > >     - Overall status > > > >     - Technical discussion on any outstanding features > > > > > > > > 4. Open topics > > > >     - no meeting Monday May 6th > > > >     - other items? 
> > > > > > > > > > > > ------ > > > > Frank > > > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From maria.g.perez.ibarra at intel.com Tue Apr 30 02:24:43 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 30 Apr 2019 02:24:43 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-28 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 03 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS [Fail : 3 TCs] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 3 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] [Fail : 3 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] ---------------------------------------------------------- - VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 - Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Tue Apr 30 03:09:36 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Tue, 30 Apr 2019 03:09:36 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 In-Reply-To: <0427FED8-440B-4129-94F7-2CBF6DBBDE8E@intel.com> References: <63839F7A-75A9-4266-A2F5-D373391CBAD4@intel.com> <92F9583D-D670-442D-A437-E72761E815DB@intel.com> <1F0FCBCF-A1A5-4E8D-9671-BCD5EFA15BCC@intel.com> <0427FED8-440B-4129-94F7-2CBF6DBBDE8E@intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB498486@ALA-MBD.corp.ad.wrs.com> Hi Christopher, Re: during the unlock of controller-0, it jumps from using 5.5GB to 72GB, when we reported the bug A portion of the memory is reserved for the infrastructure the remainder is allocated as hugepages which is used as backing store for the VM's. This is why you see the avail memory drop. 
Brent

-----Original Message-----
From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com]
Sent: Monday, April 29, 2019 4:21 PM
To: Miller, Frank ; Victor Rodriguez ; Cordoba Malibran, Erich
Cc: Li, Cheng1 ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424

Hi Frank,

With the latest ISO, all bare metal configurations are passing the sanity test (Green status). Regarding memory usage, during the unlock of controller-0 it jumps from using 5.5GB to 72GB; when we reported the bug the usage was 71GB, almost the same as today. I'm assuming that docker reserves the memory because the pods/containers are not limited; as we can see on docker stats, almost all containers have their limit set to the total amount of physical memory on the system. Is this behavior expected? Is there a way to properly track down memory usage at the docker level? Ideally, something that can help to determine when memory is being heavily impacted and that provides valuable information when we report bugs.

I added some outputs about memory usage at the OS level and what is reported by docker on the bug: https://bugs.launchpad.net/starlingx/+bug/1826308

Thanks!

Cristopher Lemus

On 4/27/19, 2:46 PM, "Miller, Frank" wrote:

Hi All:

After a prolonged debug session on Friday by various developers, it looks like the memory issue seen in the Intel labs is due to the excessive number of nova pods being launched, which is directly related to the number of cores used on the BM servers. The Intel lab servers have many more cores than most of the labs used in WindRiver labs, which explains why the memory issue is much rarer in some labs. Al Bailey and Gerry Kopec worked on a solution [1] which should be available in today's builds.

In addition, while debugging the application-apply issues on AIO labs, in some cases timeouts were being seen either during download or applying of the stx-application. This is believed to be a result of a StoryBoard that merged two weeks ago to affine platform processes and pods to platform cores, leaving the other cores available for application pods. This reduces the core processing available during application-apply. To alleviate this issue, two additional commits [2,3] were proposed and merged.

Let's review the updated sanity results on Monday and determine if any further actions are required.
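On the question above about tracking memory at the docker level: a point-in-time snapshot of per-container usage against the configured limits can be taken with standard docker CLI options (a sketch, not a full monitoring recipe):

  docker stats --no-stream --format "table {{.Name}}\t{{.MemUsage}}\t{{.MemPerc}}"
  docker inspect --format '{{.HostConfig.Memory}}' <container-id>   # 0 means no limit was set

For pods launched by kubelet the effective cap comes from the Kubernetes resources limits rather than a docker run -m flag, so an unlimited container shows the host's total memory as its limit, which matches the behavior reported above.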
Frank [1] https://review.opendev.org/#/c/656037/ [2] https://review.opendev.org/#/c/656009/ [3] https://review.opendev.org/#/c/656025/ -----Original Message----- From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Friday, April 26, 2019 6:06 PM To: Victor Rodriguez ; Cordoba Malibran, Erich Cc: Li, Cheng1 ; Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 Hi All, Some test were made to find the point where the memory is allocated: Just after `config_controller` it's using just a handful of GBs: controller-0:~$ free -h total used free shared buff/cache available Mem: 93G 3.2G 84G 47M 5.5G 88G Swap: 0B 0B 0B controller-0:~$ Right after the unlock, when the system pass from "offline" status to "intest" it jumps from using 5.1GB to 71GB, this is just with kube-system pods: total used free shared buff/cache available Mem: 93G 71G 19G 45M 1.9G 20G Swap: 0B 0B 0B NAME READY STATUS RESTARTS AGE calico-kube-controllers-84cdb6bd7c-w75rk 1/1 Running 1 36m calico-node-zp8xv 1/1 Running 1 36m coredns-84bb87857f-lp8sl 1/1 Running 1 36m coredns-84bb87857f-r6mdf 0/1 Pending 0 36m kube-apiserver-controller-0 1/1 Running 1 35m kube-controller-manager-controller-0 1/1 Running 2 35m kube-proxy-w7sfq 1/1 Running 1 36m kube-scheduler-controller-0 1/1 Running 2 35m tiller-deploy-d87d7bd75-hjb7w 1/1 Running 1 36m Bug updated with this info. Regards, Cristopher Lemus On 4/26/19, 11:30 AM, "Victor Rodriguez" wrote: Hi team My findings so far this morning: In order to know how much memory ( really ) a docker is consuming i tested 2 tools ( docker stat and reading from the /proc/pid/mmpas ) I create a simple C code that consumes X KB of memory by malloc and then free it: https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/memory.c Reserving 5000 Kb of memory Value of String = simple_test Address = 2895619200 Waiting for 30 seconds I compile it and cp into my docker image: https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/Dockerfile When I run the docker and monitor the memory with docker stats : It shows only 2.5 Kb of memory when from /proc kernel ifo i get : vmrod at vmrod-ubuntu-devel:/tmp$ ./usr/bin/psstop | grep docker docker-containe 1857 : 0 Kb dockerd 2758 : 0 Kb docker-containe 3368 : 0 Kb docker-containe 5438 : 0 Kb docker-containe 25159 : 0 Kb docker 25105 : 48378 Kb ( first column is PID second one is memory consumed ) , in this case, it shows 48378 kb vs 5000 kb of memory that i know that i requested In order to find the memory leak, we must rely on the tools we use to measure it, Cristopher can you help me to repeat the same experiment to know if you see the same behavior ? If so we can start to put -m on each docker image to limit the memory size ( 2GB should be enough right ? ) WIP regards On Thu, Apr 25, 2019 at 10:33 PM Victor Rodriguez wrote: > > Can we consider the track of vm used by the running proces from /proc? we can work on a script using psstop(0) or other similar tool,what do you think. 
This might help us to find the process is consuming the memory over the time > > I also see the same problem of consuming almost 90% of the memory not only in all in one systems but also in duplex > > (0) https://github.com/clearlinux/psstop > > Regards > Victor Rodriguez > > On Thu, Apr 25, 2019, 21:59 Cordoba Malibran, Erich wrote: >> >> Hi, >> >> In this case we have: >> >> HugePages_Total: 34104 >> HugePages_Free: 34104 >> HugePages_Rsvd: 0 >> HugePages_Surp: 0 >> >> So, I'm not sure if it can be related with 1825814. >> >> Also, for people not seeing this issue, how much memory do you have in your baremetal systems? What's the minimum required memory for running an AIO system. Our failing system have 97 GB and free -h shows. >> >> total used free shared buff/cache available >> Mem: 93G 84G 3.2G 66M 5.6G 4.8G >> Swap: 0B 0B 0B >> >> >> A couple months ago I reported a similar issue[0], in that case after three days in stand-by the system started to throw Out of Memory errors. Does anyone has performed a longevity test for some days? Maybe the working systems might fail after a while if the memory usage keeps increasing over time. >> >> -Erich >> >> [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/002923.html >> >> >> >> From: "Li, Cheng1" >> Date: Thursday, April 25, 2019 at 8:29 PM >> To: "Lemus Contreras, Cristopher J" , "Miller, Frank" , "Perez Ibarra, Maria G" , "starlingx-discuss at lists.starlingx.io" >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Actually, I had also reported the memory issue[1] days ago. >> Memory exhaust happens because so little 4K memory is allocated for system/software load. >> >> [1] https://bugs.launchpad.net/starlingx/+bug/1825814 >> >> Thanks, >> Cheng >> >> From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] >> Sent: Friday, April 26, 2019 1:50 AM >> To: Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Hi Frank, >> >> We had a zoom call with Al Bailey to troubleshoot the issues that we are observing. The bug where a single CPU was taking all of the workload is resolved. >> >> What we observed seems to be an issue with memory exhaust, additional information was gathered an added to this bug for further troubleshooting: https://bugs.launchpad.net/starlingx/+bug/1826308 >> >> If additional information is required, please, just let us know. >> >> Thanks & Regards, >> >> Cristopher Lemus >> >> From: "Miller, Frank" >> Date: Thursday, April 25, 2019 at 8:24 AM >> To: "Perez Ibarra, Maria G" , "mailto:starlingx-discuss at lists.starlingx.io" >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Maria: >> >> It looks like the commit referenced yesterday [1] is not addressing the issue in your BM labs. Can you set up a live debug session so that some container SMEs can investigate? 
>> >> Frank >> [1] https://review.opendev.org/#/c/655240/ >> >> From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] >> Sent: Thursday, April 25, 2019 12:12 AM >> To: mailto:starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 >> >> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/) >> >> Status: RED >> >> =========================================== >> >> Sanity Test is executed in a Containers – Bare Metal Environment >> >> AIO - Simplex >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 49 TCs [FAIL]| 40 TCs FAIL >> Sanity Platform 07 TCs [FAIL]| 07 TCs FAIL >> >> TOTAL: 57 TCS [Fail : 47] >> >> AIO – Duplex >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 52 TCs [FAIL] | 42 TCs FAIL >> Sanity Platform 05 TCs [FAIL] | 05 TCs FAIL >> >> TOTAL: 57 TCS [Fail : 47 TCs] >> >> Standard - Local Storage (2+2) >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 49 TCs [PASS] >> Sanity Platform 07 TCs [PASS] >> >> TOTAL: 57 TCS PASS >> >> Standard - Dedicated Storage (2+2+2) >> >> Setup Manual [PASS] >> Provisioning 01 TCs [PASS] >> Sanity OpenStack 52 TCs [PASS] >> Sanity Platform 05 TCs [PASS] >> >> TOTAL: 57 TCS PASS >> >> >> >> Sanity Test is executed in a Containers - Virtual Environment >> >> AIO - Simplex >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> AIO - Duplex >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> Standard – Local Storage >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> >> Standard – Dedicated Storage >> >> Setup 04 TCs [PASS] >> Provisioning 01 TCs [FAIL] >> Sanity OpenStack 49 TCs [FAIL] >> Sanity Platform 07 TCs [FAIL] >> >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] >> >> - some pods are failing during BM sanity execution. https://bugs.launchpad.net/starlingx/+bug/1826308 >> - Sanity Bare metal was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ >> - Sanity Virtual was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ >> - Tomorrow in sanity virtual we will perform a double check with the latest ISO that includes the fixes. >> >> For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack >> >> >> Regards >> Maria G. 
>> >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Brent.Rowsell at windriver.com Tue Apr 30 03:29:49 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Tue, 30 Apr 2019 03:29:49 +0000 Subject: [Starlingx-discuss] Agenda: Weekly Containerization Meeting Monday April 29 In-Reply-To: References: Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB49856A@ALA-MBD.corp.ad.wrs.com> Victor, See inline Brent -----Original Message----- From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] Sent: Monday, April 29, 2019 9:43 AM To: Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Agenda: Weekly Containerization Meeting Monday April 29 Hi Frank I am in charge of the track of performance and footprint of the STX systems Due to the fact that this bug was a footprint issue I was wondering if should discuss on the meeting on what stages and part of the STX do you want me to track the memory footprint Also, I have a question regarding the chance to set up -m to each docker image so we can limit the amount of memory of each one. One of the experiments we did las Friday debugging the issue was that if we set -m and the system has SWAP it will take memory from there and keep running since we in STX do not have swap the containers fails from starvation. [BR] Enabling SWAP in general is not an option for performance reasons. On top of that it will not work with k8s. Regards Victor R On Mon, Apr 29, 2019 at 8:34 AM Miller, Frank wrote: > > We will be holding a meeting today. Planned agenda is below – please add to the etherpad agenda if you have any additional topics to discuss: > > > > Etherpad: https://etherpad.openstack.org/p/stx-containerization > > > > Agenda: > > 1. Sanity status/issue discussion: > > - All in one systems, bare metal > > - Multi node systems > > - Virtual systems > > > > 2. Test team status & top issues > > > > 3. Feature Topics: > > - Overall status > > - Technical discussion on any outstanding features > > > > 4. Open topics > > - no meeting Monday May 6th > > - other items? > > > > > > ------ > > Frank > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Tue Apr 30 04:17:02 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 30 Apr 2019 04:17:02 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 In-Reply-To: References: Message-ID: Maria - the sanities are much improved and I don't think they should be labelled RED. Looks more like YELLOW to me. There are 2 issues reported in this sanity. 
Can we get updates on the status of the 2 issues from the LP primes: * Cindy for: VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 * Mingyuan for: Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 Let us know if you require assistance to triage/debug. Frank From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Monday, April 29, 2019 10:25 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-28 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 03 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS [Fail : 3 TCs] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 3 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] [Fail : 3 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] ---------------------------------------------------------- - VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 - Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Apr 30 04:25:08 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 30 Apr 2019 04:25:08 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 In-Reply-To: References: Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35F3DC92@SHSMSX104.ccr.corp.intel.com> Hi, Frank, I just assign LP#1824412 to Zhipeng. However, due to the fact that China is going to Labor Day holiday and will be black-out for 4 days, I am doubt how much progress we can make. It's appreciated if WR can have resource on both 1824412 and 1826445. Thanks. - cindy From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Tuesday, April 30, 2019 12:17 PM To: Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Maria - the sanities are much improved and I don't think they should be labelled RED. Looks more like YELLOW to me. There are 2 issues reported in this sanity. 
Can we get updates on the status of the 2 issues from the LP primes: * Cindy for: VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 * Mingyuan for: Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 Let us know if you require assistance to triage/debug. Frank From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Monday, April 29, 2019 10:25 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-28 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 03 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS [Fail : 3 TCs] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 3 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] [Fail : 3 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] ---------------------------------------------------------- - VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 - Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Tue Apr 30 05:02:39 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Tue, 30 Apr 2019 05:02:39 +0000 Subject: [Starlingx-discuss] 2nd node stuck at pxe installation Message-ID: <9700A18779F35F49AF027300A49E7C765FEC641D@SHSMSX101.ccr.corp.intel.com> Hi all, I try to do duplex deploy, and find controller-1 stuck at pxe installation step. The screen stuck at "Loading rel-19.01/installer-initrd...........ready". But package installation is not started as host-show command shown in controller-0. I meet this issue with latest code. Do you know what cause the issue, or how to debug it? Thanks. 
[wrsroot at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | | 2 | controller-1 | controller | locked | disabled | offline | +----+--------------+-------------+----------------+-------------+--------------+ [wrsroot at controller-0 ~(keystone_admin)]$ system host-show controller-1 | grep install | install_output | text | | install_state | None | | install_state_info | None | | subfunction_avail | not-installed Best Regards Shuicheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Tue Apr 30 09:17:50 2019 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 30 Apr 2019 09:17:50 +0000 Subject: [Starlingx-discuss] Build Docker images and how to use temporary image for debug in deploy env. Message-ID: Hi All: 1) When I tried to build dev images . I meet below error. "Unsupported BUILDER in /home/wrsroot/starlingx/workspace/localdisk/designer/wrsroot/starlingx/cgcs-root/stx/stx-integ/database/mariadb/centos/*.dev_docker_image:" Build command is something like (sudo ./build-stx-images.sh --os centos --stream dev --base starlingx/stx-centos:master-dev-latest --wheels ~/starlingx/wheel/stx-centos-stable-wheels.tar --only stx-fm-rest-api) Does mariadb not support for dev build ? 2) how to push image built by developer to deployed environment directly ? Do we have any wiki or guide about this ? Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Apr 30 12:54:04 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 30 Apr 2019 12:54:04 +0000 Subject: [Starlingx-discuss] Community Call (May 1, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A35802@ALA-MBD.corp.ad.wrs.com> Reminder that we will be holding the Community call tomorrow - please feel free to add to the agenda at [0]. I'm expecting a brief call, since the Summit is underway. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1500_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190501T1400 From Frank.Miller at windriver.com Tue Apr 30 13:33:16 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 30 Apr 2019 13:33:16 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F3DC92@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35F3DC92@SHSMSX104.ccr.corp.intel.com> Message-ID: Cindy: I'll look for someone to triage 1824412 and update with any findings. But for 1826445 this is only reported in a virtual lab in an Intel site. We are not able to reproduce this issue in our labs or virtual environment. I suggest we wait for your team to return from Labor Day to reproduce and investigate. Frank From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, April 30, 2019 12:25 AM To: Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Hi, Frank, I just assign LP#1824412 to Zhipeng. 
However, due to the fact that China is going to Labor Day holiday and will be black-out for 4 days, I am doubt how much progress we can make. It's appreciated if WR can have resource on both 1824412 and 1826445. Thanks. - cindy From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Tuesday, April 30, 2019 12:17 PM To: Perez Ibarra, Maria G >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Maria - the sanities are much improved and I don't think they should be labelled RED. Looks more like YELLOW to me. There are 2 issues reported in this sanity. Can we get updates on the status of the 2 issues from the LP primes: * Cindy for: VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 * Mingyuan for: Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 Let us know if you require assistance to triage/debug. Frank From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Monday, April 29, 2019 10:25 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-28 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 03 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS [Fail : 3 TCs] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 3 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] [Fail : 3 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] ---------------------------------------------------------- - VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 - Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Don.Penney at windriver.com Tue Apr 30 13:54:17 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Tue, 30 Apr 2019 13:54:17 +0000 Subject: [Starlingx-discuss] 2nd node stuck at pxe installation In-Reply-To: <9700A18779F35F49AF027300A49E7C765FEC641D@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FEC641D@SHSMSX101.ccr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA48BB86@ALA-MBD.corp.ad.wrs.com> You can set the console either via the web browser in the installation parameters, or at command-line, such as: system host-update 2 personality=controller console= The default is the serial console, "ttyS0,115200". If you're using a graphical console, you can set it blank, or console=tty0, for example. This should allow you to see if there are some errors occurring trying to load the installer. From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Tuesday, April 30, 2019 1:03 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] 2nd node stuck at pxe installation Hi all, I try to do duplex deploy, and find controller-1 stuck at pxe installation step. The screen stuck at "Loading rel-19.01/installer-initrd...........ready". But package installation is not started as host-show command shown in controller-0. I meet this issue with latest code. Do you know what cause the issue, or how to debug it? Thanks. [wrsroot at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | | 2 | controller-1 | controller | locked | disabled | offline | +----+--------------+-------------+----------------+-------------+--------------+ [wrsroot at controller-0 ~(keystone_admin)]$ system host-show controller-1 | grep install | install_output | text | | install_state | None | | install_state_info | None | | subfunction_avail | not-installed Best Regards Shuicheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tee.Ngo at windriver.com Tue Apr 30 14:11:47 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Tue, 30 Apr 2019 14:11:47 +0000 Subject: [Starlingx-discuss] Ansible bootstrap in one node configuration Message-ID: <80ED4CE81E3D8F4099306648E95DAFE44CA18ED2@ALA-MBD.corp.ad.wrs.com> Hello Ada, The One node configuration wiki has been updated with instructions to bootstrap the controller using Ansible playbook. Could you give it a try and let me know if there are any issues? Once the new bootstrap method has been successfully incorporated into your sanity workflow, we plan to proceed with the remaining three configurations and cutover to Ansible bootstrap. Config_controller will be disabled as part of the cutover. It would be great if your team can test this as soon as possible this week as we are looking to cutover to Ansible soon and get some soak time started. Thank you. Tee -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ildiko.vancsa at gmail.com Tue Apr 30 14:21:16 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 30 Apr 2019 08:21:16 -0600 Subject: [Starlingx-discuss] Zoom link for Edge Forum sessions today and PTG on Thursday - Friday Message-ID: <3767B020-781C-4780-AA49-3497F5A1E9B9@gmail.com> Hi, As I mentioned earlier I will open a Zoom call for the Edge WG Forum sessions today and the edge and StarlingX sessions at the PTG on Thursday and Friday. You can see the Summit schedule here: https://www.openstack.org/summit/denver-2019/summit-schedule/#day=2019-04-30 PTG schedule is here: https://www.openstack.org/ptg/#tab_schedule Forum session etherpads: https://wiki.openstack.org/wiki/Forum/Denver2019 Dial-in info is here: https://zoom.us/j/642623527 One tap mobile +16699006833,,642623527# US (San Jose) +16468769923,,642623527# US (New York) Dial by your location +1 669 900 6833 US (San Jose) +1 646 876 9923 US (New York) Meeting ID: 642 623 527 Find your local number: https://zoom.us/u/achaHVeO9b Please let me know if you have any questions. Thanks and Best Regards, Ildikó From jose.perez.carranza at intel.com Tue Apr 30 14:41:01 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Tue, 30 Apr 2019 14:41:01 +0000 Subject: [Starlingx-discuss] Ansible bootstrap in one node configuration In-Reply-To: <80ED4CE81E3D8F4099306648E95DAFE44CA18ED2@ALA-MBD.corp.ad.wrs.com> References: <80ED4CE81E3D8F4099306648E95DAFE44CA18ED2@ALA-MBD.corp.ad.wrs.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A96A761@fmsmsx101.amr.corp.intel.com> Hi Tee I'll do a manual try and let you know the results. For automation purposes we use a configuration file, is this option also available using ANSIBLE ? if its supported were I can find the steps to follow ? Regards, José From: Ngo, Tee [mailto:Tee.Ngo at windriver.com] Sent: Tuesday, April 30, 2019 9:12 AM To: Cabrales, Ada Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Ansible bootstrap in one node configuration Hello Ada, The One node configuration wiki has been updated with instructions to bootstrap the controller using Ansible playbook. Could you give it a try and let me know if there are any issues? Once the new bootstrap method has been successfully incorporated into your sanity workflow, we plan to proceed with the remaining three configurations and cutover to Ansible bootstrap. Config_controller will be disabled as part of the cutover. It would be great if your team can test this as soon as possible this week as we are looking to cutover to Ansible soon and get some soak time started. Thank you. Tee -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcel at schaible-consulting.de Tue Apr 30 14:46:18 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Tue, 30 Apr 2019 16:46:18 +0200 (CEST) Subject: [Starlingx-discuss] 2nd node stuck at pxe installation In-Reply-To: References: Message-ID: <2125688793.1092495.1556635578983@communicator.strato.com> Hi, we have the same problem with the image from 20190411. It Looks like that controller-1 is not finding the pxe boot image. 
Kind regards Marcel > Message: 1 > Date: Tue, 30 Apr 2019 13:54:17 +0000 > From: "Penney, Don" > To: "Lin, Shuicheng" , > "starlingx-discuss at lists.starlingx.io" > > Subject: Re: [Starlingx-discuss] 2nd node stuck at pxe installation > Message-ID: > <6703202FD9FDFF4A8DA9ACF104AE129FBA48BB86 at ALA-MBD.corp.ad.wrs.com> > Content-Type: text/plain; charset="utf-8" > > You can set the console either via the web browser in the installation parameters, or at command-line, such as: > system host-update 2 personality=controller console= > > The default is the serial console, "ttyS0,115200". If you're using a graphical console, you can set it blank, or console=tty0, for example. This should allow you to see if there are some errors occurring trying to load the installer. > > > From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] > Sent: Tuesday, April 30, 2019 1:03 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] 2nd node stuck at pxe installation > > Hi all, > I try to do duplex deploy, and find controller-1 stuck at pxe installation step. > The screen stuck at "Loading rel-19.01/installer-initrd...........ready". But package installation is not started as host-show command shown in controller-0. > I meet this issue with latest code. Do you know what cause the issue, or how to debug it? > Thanks. > > [wrsroot at controller-0 ~(keystone_admin)]$ system host-list > +----+--------------+-------------+----------------+-------------+--------------+ > | id | hostname | personality | administrative | operational | availability | > +----+--------------+-------------+----------------+-------------+--------------+ > | 1 | controller-0 | controller | unlocked | enabled | available | > | 2 | controller-1 | controller | locked | disabled | offline | > +----+--------------+-------------+----------------+-------------+--------------+ > [wrsroot at controller-0 ~(keystone_admin)]$ system host-show controller-1 | grep install > | install_output | text | > | install_state | None | > | install_state_info | None | > | subfunction_avail | not-installed > > Best Regards > Shuicheng > From Don.Penney at windriver.com Tue Apr 30 15:03:46 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Tue, 30 Apr 2019 15:03:46 +0000 Subject: [Starlingx-discuss] 2nd node stuck at pxe installation In-Reply-To: <2125688793.1092495.1556635578983@communicator.strato.com> References: <2125688793.1092495.1556635578983@communicator.strato.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA48BC76@ALA-MBD.corp.ad.wrs.com> You can also look in /var/log/daemon.log for TFTP logs showing the requests and transfers of files, and /www/var/log/lighttpd-access.log for transfer of the squashfs.img. Since the console showed "Loading rel-19.01/installer-initrd...........ready", it seems like the TFTP transfer was fine, which is why I suggested modifying the console setting to ensure you're seeing what's happening. This is what I'd expect to see if the console is set to serial, but there's no serial console attached (such as using virtual box without a serial console). 
For example, you can see the bzImage and initrd here being transferred via TFTP in daemon.log: 2019-04-26T16:27:34.000 controller-0 dnsmasq-tftp[96915]: info sent /pxeboot/rel-19.01/installer-bzImage to 192.168.202.24 2019-04-26T16:27:36.000 controller-0 dnsmasq-tftp[96915]: info sent /pxeboot/rel-19.01/installer-initrd to 192.168.202.24 followed by the squashfs.img in /www/var/log/lighttpd-access.log: 192.168.202.24 pxecontroller:8080 - [26/Apr/2019:16:28:12 +0000] "GET /feed/rel-19.01//LiveOS/squashfs.img HTTP/1.1" 200 368152576 "-" "curl/7.29.0" If only the bzImage and initrd is transferred, without seeing the logs on console, my guess would be an unsupported NIC preventing the initrd from accessing the management network in order to request the squashfs.img (which provides the installer rootfs) -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Tuesday, April 30, 2019 10:46 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] 2nd node stuck at pxe installation Hi, we have the same problem with the image from 20190411. It Looks like that controller-1 is not finding the pxe boot image. Kind regards Marcel > Message: 1 > Date: Tue, 30 Apr 2019 13:54:17 +0000 > From: "Penney, Don" > To: "Lin, Shuicheng" , > "starlingx-discuss at lists.starlingx.io" > > Subject: Re: [Starlingx-discuss] 2nd node stuck at pxe installation > Message-ID: > <6703202FD9FDFF4A8DA9ACF104AE129FBA48BB86 at ALA-MBD.corp.ad.wrs.com> > Content-Type: text/plain; charset="utf-8" > > You can set the console either via the web browser in the installation parameters, or at command-line, such as: > system host-update 2 personality=controller console= > > The default is the serial console, "ttyS0,115200". If you're using a graphical console, you can set it blank, or console=tty0, for example. This should allow you to see if there are some errors occurring trying to load the installer. > > > From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] > Sent: Tuesday, April 30, 2019 1:03 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] 2nd node stuck at pxe installation > > Hi all, > I try to do duplex deploy, and find controller-1 stuck at pxe installation step. > The screen stuck at "Loading rel-19.01/installer-initrd...........ready". But package installation is not started as host-show command shown in controller-0. > I meet this issue with latest code. Do you know what cause the issue, or how to debug it? > Thanks. 
> > [wrsroot at controller-0 ~(keystone_admin)]$ system host-list > +----+--------------+-------------+----------------+-------------+--------------+ > | id | hostname | personality | administrative | operational | availability | > +----+--------------+-------------+----------------+-------------+--------------+ > | 1 | controller-0 | controller | unlocked | enabled | available | > | 2 | controller-1 | controller | locked | disabled | offline | > +----+--------------+-------------+----------------+-------------+--------------+ > [wrsroot at controller-0 ~(keystone_admin)]$ system host-show controller-1 | grep install > | install_output | text | > | install_state | None | > | install_state_info | None | > | subfunction_avail | not-installed > > Best Regards > Shuicheng > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Tee.Ngo at windriver.com Tue Apr 30 15:37:41 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Tue, 30 Apr 2019 15:37:41 +0000 Subject: [Starlingx-discuss] Ansible bootstrap in one node configuration In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A96A761@fmsmsx101.amr.corp.intel.com> References: <80ED4CE81E3D8F4099306648E95DAFE44CA18ED2@ALA-MBD.corp.ad.wrs.com> <0A5D9A624DF90343892F8F3FE7DE525A2A96A761@fmsmsx101.amr.corp.intel.com> Message-ID: <80ED4CE81E3D8F4099306648E95DAFE44CA18F4A@ALA-MBD.corp.ad.wrs.com> Hi José, You certainly can automate with Ansible bootstrap. The default parameters can be found in host_vars/default.yml of the bootstrap playbook. Please refer to the wiki for the location of playbook. You can overwrite one or more of these default parameters using either ansible-playbook command line option -e (--extra-vars) or a user override file which is in .yml format. The default directory which Ansible looks for user override files is /home/wrsroot (this default directory can also be customized). A config override file must have the following naming convention: .yml. Inventory_hostname being the target host specified in the inventory file that is used when you run the bootstrap playbook. As documented on the wiki, the default bootstrap inventory file is /etc/ansible/hosts. You can specify a custom inventory file using ansible-playbook command line opiton -i (--inventory). I will be able to assist better if I know a bit more about your automation flow and the sample configuration file you use for AIO-SX. Let's set up a zoom session to discuss your setup. Tee From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] Sent: April-30-19 10:41 AM To: Ngo, Tee; Cabrales, Ada Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Ansible bootstrap in one node configuration Hi Tee I'll do a manual try and let you know the results. For automation purposes we use a configuration file, is this option also available using ANSIBLE ? if its supported were I can find the steps to follow ? Regards, José From: Ngo, Tee [mailto:Tee.Ngo at windriver.com] Sent: Tuesday, April 30, 2019 9:12 AM To: Cabrales, Ada Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Ansible bootstrap in one node configuration Hello Ada, The One node configuration wiki has been updated with instructions to bootstrap the controller using Ansible playbook. Could you give it a try and let me know if there are any issues? 
Once the new bootstrap method has been successfully incorporated into your sanity workflow, we plan to proceed with the remaining three configurations and cutover to Ansible bootstrap. Config_controller will be disabled as part of the cutover. It would be great if your team can test this as soon as possible this week as we are looking to cutover to Ansible soon and get some soak time started. Thank you. Tee -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Tue Apr 30 16:44:04 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 30 Apr 2019 11:44:04 -0500 Subject: [Starlingx-discuss] [MultiOS] How to deploy STX in other linux systems? In-Reply-To: <7ED45F33-FAEA-460B-B089-E524C16F64E4@99cloud.net> References: <75EE28BA-C061-48A6-AF48-58DCA482894E@99cloud.net> <2FD5DDB5A04D264C80D42CA35194914F35F25393@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35F26839@SHSMSX104.ccr.corp.intel.com> <7ED45F33-FAEA-460B-B089-E524C16F64E4@99cloud.net> Message-ID: Hi Kunpeng We are able to see the same error, we will be working on the fix asap, thanks for using the build system and report the issue Regards Victor R On Mon, Apr 29, 2019 at 2:15 AM 张鲲鹏 wrote: > > Hi Victor, > > I got one error when I was building python-horizon in ubuntu 16.04 with this command "make package PKG=x.stx-upstream/openstack/python-horizon/ DISTRO=ubuntu”. > > > Below is the last log: > > 0 packages upgraded, 130 newly installed, 0 to remove and 0 not upgraded. > Need to get 15.8 MB/35.5 MB of archives. After unpacking 173 MB will be used. > Abort. > E: pbuilder-satisfydepends failed. > I: Copying back the cached apt archive contents > I: unmounting /usr/local/mydebs/ filesystem > I: unmounting dev/pts filesystem > I: unmounting run/shm filesystem > I: unmounting proc filesystem > I: cleaning the build env > I: removing directory /var/cache/pbuilder/build/15050 and its subdirectories > Makefile:5: recipe for target 'all' failed > make[1]: *** [all] Error 1 > make[1]: Leaving directory '/home/ubuntu/stx-packaging/x.stx-upstream/openstack/python-horizon/ubuntu' > Makefile:25: recipe for target 'build_pkg_native' failed > make: *** [build_pkg_native] Error 2 > > > Is there a lack of some configurations? > > Thanks > Kunpeng > > > > > On Apr 25, 2019, at 00:03, Victor Rodriguez wrote: > > https://wiki.openstack.org/wiki/StarlingX/Installation_Guide_Virtual_Environment/Controller_Storage > > From maria.g.perez.ibarra at intel.com Tue Apr 30 17:11:24 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 30 Apr 2019 17:11:24 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 In-Reply-To: References: Message-ID: Hello Frank, I also see the improvement in these results more as yellow, however I had the instruction to classify the results as RED whenever a provisioning failed on any configuration, treating it as a critical issue. This could be a good opportunity to define the results classification within the test teams. Regards Maria G. From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Monday, April 29, 2019 11:17 PM To: Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Maria - the sanities are much improved and I don't think they should be labelled RED. Looks more like YELLOW to me. There are 2 issues reported in this sanity. 
Can we get updates on the status of the 2 issues from the LP primes: * Cindy for: VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 * Mingyuan for: Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 Let us know if you require assistance to triage/debug. Frank From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Monday, April 29, 2019 10:25 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-28 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 03 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS [Fail : 3 TCs] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 3 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] [Fail : 3 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] ---------------------------------------------------------- - VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 - Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Apr 30 17:31:13 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 30 Apr 2019 17:31:13 +0000 Subject: [Starlingx-discuss] Meetings during the Summit next week? References: <869C2570-5676-4C18-A3ED-E0710FFC0206@gmail.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A35E84@ALA-MBD.corp.ad.wrs.com> Reminder that the Release Team meeting is cancelled for this week (normally on Thursdays at 6pm UTC, 2pm EDT). Bill... -----Original Message----- From: Zvonar, Bill Sent: Wednesday, April 24, 2019 3:54 PM To: 'Ildiko Vancsa' ; starlingx-discuss at lists.starlingx.io Subject: RE: Meetings during the Summit next week? Hi Ildiko, Tuesday - just 2 (Distro OpenStack & Test) both cancelled (Bruce & Ada). Thursday - I think you just need confirmation from the Build & STX in a Box meetings... - Networking (7:15am MDT): before 9am MDT - TSC (8am MDT): before 9am MDT - Build (9am MDT): Cesar? - STX in a Box (10:30am MDT): ? - Release (12pm MDT): we can skip it Friday - no regular meetings. Bill.... 
-----Original Message----- From: Ildiko Vancsa Sent: Wednesday, April 24, 2019 1:17 PM To: starlingx-discuss at lists.starlingx.io Cc: Zvonar, Bill Subject: Meetings during the Summit next week? Hi StarlingX Community, As next week is the Open Infrastructure Summit and PTG I wanted to check if all the community/project calls will be kept as I would like to re-use the Zoom account to provide remote participation options for a few sessions at the event. The Summit will run in Mountain Time and I would need the Zoom account at: * Tuesday (April 30) 10:40am - 12:30pm, 2:30pm - 3:20pm - Edge Forum sessions * Thursday (May 2) 9am - 6pm - Edge Wg and StarlingX PTG sessions * Friday (May 3) 9am - 6pm - StarlingX PTG session Please let me know if you there is any collision with the above mentioned slots where you still plan to run the calls and I will find another option for those slots. I believe there might be a few calls on Thursday but otherwise it should work. Thanks, Ildikó From vm.rod25 at gmail.com Tue Apr 30 19:42:15 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 30 Apr 2019 14:42:15 -0500 Subject: [Starlingx-discuss] Agenda: Weekly Containerization Meeting Monday April 29 In-Reply-To: <1556587980.30870.13.camel@intel.com> References: <1556587980.30870.13.camel@intel.com> Message-ID: Thanks, Bin On Mon, Apr 29, 2019 at 8:33 PM Yang, Bin wrote: > > Hi Victor, > > K8s always not works with "swap enabled". Refer to: https://github.com/ku > bernetes/kubernetes/issues/53533 > This is a clear answer to my question with a good thread where I got good points > As "Angus Lees" mentioned, "Applications should configure limits.memory > with the max ram they ever want to use, and requests.memory with their intended > working set (including kernel buffers, etc)" > Agree , Doing a search with Cristopher we found that 2GB might be a good limit > I think it needs to assign enough memory for STX critical containers to > keep system being stable. Please correct me if wrong. My question, as not expert in docker, is if the option docker run -m, --memory bytes Memory limit could be a good solution to limit the memory bound Regards > > thanks, > Bin > > > On Mon, 2019-04-29 at 08:42 -0500, Victor Rodriguez wrote: > > Hi Frank > > > > I am in charge of the track of performance and footprint of the STX systems > > > > Due to the fact that this bug was a footprint issue I was wondering if > > should discuss on the meeting on what stages and part of the STX do > > you want me to track the memory footprint > > > > Also, I have a question regarding the chance to set up -m to each > > docker image so we can limit the amount of memory of each one. One of > > the experiments we did las Friday debugging the issue was that if we > > set -m and the system has SWAP it will take memory from there and keep > > running since we in STX do not have swap the containers fails from > > starvation. > > > > Regards > > > > Victor R > > > > On Mon, Apr 29, 2019 at 8:34 AM Miller, Frank > > wrote: > > > > > > > > > We will be holding a meeting today. Planned agenda is below – please add to > > > the etherpad agenda if you have any additional topics to discuss: > > > > > > > > > > > > Etherpad: https://etherpad.openstack.org/p/stx-containerization > > > > > > > > > > > > Agenda: > > > > > > 1. Sanity status/issue discussion: > > > > > > - All in one systems, bare metal > > > > > > - Multi node systems > > > > > > - Virtual systems > > > > > > > > > > > > 2. Test team status & top issues > > > > > > > > > > > > 3. 
Feature Topics: > > > > > > - Overall status > > > > > > - Technical discussion on any outstanding features > > > > > > > > > > > > 4. Open topics > > > > > > - no meeting Monday May 6th > > > > > > - other items? > > > > > > > > > > > > > > > > > > ------ > > > > > > Frank > > > > > > > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From vm.rod25 at gmail.com Tue Apr 30 19:50:36 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 30 Apr 2019 14:50:36 -0500 Subject: [Starlingx-discuss] Agenda: Weekly Containerization Meeting Monday April 29 In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB49856A@ALA-MBD.corp.ad.wrs.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB49856A@ALA-MBD.corp.ad.wrs.com> Message-ID: On Mon, Apr 29, 2019 at 10:29 PM Rowsell, Brent wrote: > > Victor, > > See inline > > Brent > > -----Original Message----- > From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] > Sent: Monday, April 29, 2019 9:43 AM > To: Miller, Frank > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Agenda: Weekly Containerization Meeting Monday April 29 > > Hi Frank > > I am in charge of the track of performance and footprint of the STX systems > > Due to the fact that this bug was a footprint issue I was wondering if should discuss on the meeting on what stages and part of the STX do you want me to track the memory footprint > > Also, I have a question regarding the chance to set up -m to each docker image so we can limit the amount of memory of each one. One of the experiments we did las Friday debugging the issue was that if we set -m and the system has SWAP it will take memory from there and keep running since we in STX do not have swap the containers fails from starvation. > [BR] Enabling SWAP, in general, is not an option for performance reasons. On top of that it will not work with k8s. > I agree after reading the full thread[0] that Bin share and reading docker documentation that yes swap is not working on k8, thanks for the clarification it helps me a lot. Now related to the performance. I do understand that if the scheduler sends a pod to a machine it should never use swap at all. but here my question ( as Linux performance eng ) could be. Why ? and what benchmarks prove that ?. As far as I know, please correct me if I am wrong, It’s normal and can be a good thing for Linux systems to use some swap, even if there is still available RAM. The Linux Kernel will move memory pages which are hardly ever used into swap space to ensure that even more cachable space is made available in-memory for more frequently used memory pages (a page is a piece of memory). Swap usage becomes a performance problem when the Kernel is pressured to continuously move memory pages in and out of memory and swap space. ( is this the corner case that we are preventing not to use the swap ) . Another advantage is that swap gives admins time to react to low memory issues. We will often notice the server acting slowly and upon login will notice heavy swapping. Without swap running out of memory can create much more sudden and severe chain reactions. 
Obed/Dean, do you know of any benchmark or use case that proves this ? just wondering [0] https://github.com/kubernetes/kubernetes/issues/53533 > Regards > > Victor R > > On Mon, Apr 29, 2019 at 8:34 AM Miller, Frank wrote: > > > > We will be holding a meeting today. Planned agenda is below – please add to the etherpad agenda if you have any additional topics to discuss: > > > > > > > > Etherpad: https://etherpad.openstack.org/p/stx-containerization > > > > > > > > Agenda: > > > > 1. Sanity status/issue discussion: > > > > - All in one systems, bare metal > > > > - Multi node systems > > > > - Virtual systems > > > > > > > > 2. Test team status & top issues > > > > > > > > 3. Feature Topics: > > > > - Overall status > > > > - Technical discussion on any outstanding features > > > > > > > > 4. Open topics > > > > - no meeting Monday May 6th > > > > - other items? > > > > > > > > > > > > ------ > > > > Frank > > > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From jose.perez.carranza at intel.com Tue Apr 30 19:53:44 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Tue, 30 Apr 2019 19:53:44 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F35F3DC92@SHSMSX104.ccr.corp.intel.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A96A8DA@fmsmsx101.amr.corp.intel.com> Hi Frank LP 1826445 is updated with our latest findings, this seems to be a resources issue, Christopher was able complete successfully the "application apply" on a configuration were VMs hosting nodes have 8 cores and 32GB each controllers and computes. This is a considerable increase to the values mentioned on the wiki (16 GB for controllers and 10 GB for computes). Regards, José From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Tuesday, April 30, 2019 8:33 AM To: Xie, Cindy ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Cindy: I'll look for someone to triage 1824412 and update with any findings. But for 1826445 this is only reported in a virtual lab in an Intel site. We are not able to reproduce this issue in our labs or virtual environment. I suggest we wait for your team to return from Labor Day to reproduce and investigate. Frank From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Tuesday, April 30, 2019 12:25 AM To: Miller, Frank >; Perez Ibarra, Maria G >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Hi, Frank, I just assign LP#1824412 to Zhipeng. However, due to the fact that China is going to Labor Day holiday and will be black-out for 4 days, I am doubt how much progress we can make. It's appreciated if WR can have resource on both 1824412 and 1826445. Thanks. - cindy From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Tuesday, April 30, 2019 12:17 PM To: Perez Ibarra, Maria G >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Maria - the sanities are much improved and I don't think they should be labelled RED. 
Looks more like YELLOW to me. There are 2 issues reported in this sanity. Can we get updates on the status of the 2 issues from the LP primes: · Cindy for: VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 · Mingyuan for: Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 Let us know if you require assistance to triage/debug. Frank From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Monday, April 29, 2019 10:25 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190428 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-28 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 03 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS [Fail : 3 TCs] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 3 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] [Fail : 3 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] ---------------------------------------------------------- - VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 - Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From hayde.martinez.landa at intel.com Tue Apr 30 20:37:17 2019 From: hayde.martinez.landa at intel.com (Martinez Landa, Hayde) Date: Tue, 30 Apr 2019 20:37:17 +0000 Subject: [Starlingx-discuss] Meetings during the Summit next week? Message-ID: <058C6736-BDB6-47FC-B3E0-DCBA867EC85B@intel.com> Stx in a Box meeting for this week will also be cancelled due to the Summit. Best Hayde On 4/30/19, 12:31 PM, "Zvonar, Bill" wrote: Reminder that the Release Team meeting is cancelled for this week (normally on Thursdays at 6pm UTC, 2pm EDT). Bill... -----Original Message----- From: Zvonar, Bill Sent: Wednesday, April 24, 2019 3:54 PM To: 'Ildiko Vancsa' ; starlingx-discuss at lists.starlingx.io Subject: RE: Meetings during the Summit next week? Hi Ildiko, Tuesday - just 2 (Distro OpenStack & Test) both cancelled (Bruce & Ada). Thursday - I think you just need confirmation from the Build & STX in a Box meetings... - Networking (7:15am MDT): before 9am MDT - TSC (8am MDT): before 9am MDT - Build (9am MDT): Cesar? 
- STX in a Box (10:30am MDT): ? - Release (12pm MDT): we can skip it Friday - no regular meetings. Bill.... -----Original Message----- From: Ildiko Vancsa Sent: Wednesday, April 24, 2019 1:17 PM To: starlingx-discuss at lists.starlingx.io Cc: Zvonar, Bill Subject: Meetings during the Summit next week? Hi StarlingX Community, As next week is the Open Infrastructure Summit and PTG I wanted to check if all the community/project calls will be kept as I would like to re-use the Zoom account to provide remote participation options for a few sessions at the event. The Summit will run in Mountain Time and I would need the Zoom account at: * Tuesday (April 30) 10:40am - 12:30pm, 2:30pm - 3:20pm - Edge Forum sessions * Thursday (May 2) 9am - 6pm - Edge Wg and StarlingX PTG sessions * Friday (May 3) 9am - 6pm - StarlingX PTG session Please let me know if you there is any collision with the above mentioned slots where you still plan to run the calls and I will find another option for those slots. I believe there might be a few calls on Thursday but otherwise it should work. Thanks, Ildikó _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Tue Apr 30 21:01:58 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 30 Apr 2019 17:01:58 -0400 Subject: [Starlingx-discuss] Build Docker images and how to use temporary image for debug in deploy env. In-Reply-To: References: Message-ID: CENGN isn't building dev images these days.  So it's not being tested in a while. cat stx/stx-integ/centos_dev_docker_images.inc    database/mariadb However there is no file ... stx/stx-integ/database/mariadb/centos/stx-mariadb.dev_docker_image that's definitely inconsistent.  Raise a launchpad. Scott On 2019-04-30 5:17 a.m., Sun, Austin wrote: > > Hi All: > > 1) When I tried to build dev images . I meet below error. > >     “Unsupported BUILDER in > /home/wrsroot/starlingx/workspace/localdisk/designer/wrsroot/starlingx/cgcs-root/stx/stx-integ/database/mariadb/centos/*.dev_docker_image:” > > Build command is something like (sudo ./build-stx-images.sh --os > centos --stream dev --base starlingx/stx-centos:master-dev-latest > --wheels ~/starlingx/wheel/stx-centos-stable-wheels.tar --only > stx-fm-rest-api) > > Does mariadb not support for dev build ? > > 2) how to push image built by developer to deployed environment >  directly ? > >     Do we have any wiki or guide about this ? > > Thanks. > > BR > Austin Sun. > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Tue Apr 30 21:40:33 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Tue, 30 Apr 2019 21:40:33 +0000 Subject: [Starlingx-discuss] Build Docker images and how to use temporary image for debug in deploy env. In-Reply-To: References: Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA492899@ALA-MBD.corp.ad.wrs.com> Sorry, Austin, I didn't see your email this morning. #1: The stx-mariadb image is not built in the dev build stream, much like other images. It now only builds in stable. The stx-fm-rest-api image, as well, only builds in the stable stream. 
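For what it is worth, a sketch of the stable-stream invocation this implies, reusing the flags from the command quoted above plus the --push/--user/--registry options described below; the base image tag and the user/registry values are assumptions and placeholders, not verified values.

# Rebuild the image against the stable stream instead of dev,
# then push it so it can be pulled onto a running system.
sudo ./build-stx-images.sh \
    --os centos \
    --stream stable \
    --base starlingx/stx-centos:master-stable-latest \
    --wheels ~/starlingx/wheel/stx-centos-stable-wheels.tar \
    --only stx-fm-rest-api \
    --push --user <your-dockerhub-user> --registry docker.io
# The tag master-stable-latest and the registry value above are guesses;
# check the tags actually published for your build before running this.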
The error you're getting is because there are a couple of centos_dev_docker_images.inc files with stale entries. This is just a warning to stderr and should not impact anything. #2: You can use the --push option of build-stx-images.sh, combined with --user and --registry options to tag and push the image(s) to a private registry or the docker hub. From there, I think there are instructions available on loading those images onto your controller. Gerry, are those instructions on a wiki? From: Scott Little [mailto:scott.little at windriver.com] Sent: Tuesday, April 30, 2019 5:02 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Build Docker images and how to use temporary image for debug in deploy env. CENGN isn't building dev images these days. So it's not being tested in a while. cat stx/stx-integ/centos_dev_docker_images.inc database/mariadb However there is no file ... stx/stx-integ/database/mariadb/centos/stx-mariadb.dev_docker_image that's definitely inconsistent. Raise a launchpad. Scott On 2019-04-30 5:17 a.m., Sun, Austin wrote: Hi All: 1) When I tried to build dev images . I meet below error. "Unsupported BUILDER in /home/wrsroot/starlingx/workspace/localdisk/designer/wrsroot/starlingx/cgcs-root/stx/stx-integ/database/mariadb/centos/*.dev_docker_image:" Build command is something like (sudo ./build-stx-images.sh --os centos --stream dev --base starlingx/stx-centos:master-dev-latest --wheels ~/starlingx/wheel/stx-centos-stable-wheels.tar --only stx-fm-rest-api) Does mariadb not support for dev build ? 2) how to push image built by developer to deployed environment directly ? Do we have any wiki or guide about this ? Thanks. BR Austin Sun. _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Gerry.Kopec at windriver.com Tue Apr 30 22:34:42 2019 From: Gerry.Kopec at windriver.com (Kopec, Gerald (Gerry)) Date: Tue, 30 Apr 2019 22:34:42 +0000 Subject: [Starlingx-discuss] Build Docker images and how to use temporary image for debug in deploy env. In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA492899@ALA-MBD.corp.ad.wrs.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA492899@ALA-MBD.corp.ad.wrs.com> Message-ID: <58CF5BABC9A76946A638A0E8AE48D1737183EE63@ALA-MBD.corp.ad.wrs.com> To load a private docker image onto your controller try these instructions: https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages#Testing_Image_on_Running_System Then follow recommended link to: https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ#How_do_I_make_changes_to_the_code_or_configuration_in_a_pod_for_debugging_purposes.3F Gerry From: Penney, Don Sent: Tuesday, April 30, 2019 5:41 PM To: Little, Scott; starlingx-discuss at lists.starlingx.io Cc: Kopec, Gerald (Gerry) Subject: RE: [Starlingx-discuss] Build Docker images and how to use temporary image for debug in deploy env. Sorry, Austin, I didn't see your email this morning. #1: The stx-mariadb image is not built in the dev build stream, much like other images. It now only builds in stable. The stx-fm-rest-api image, as well, only builds in the stable stream. The error you're getting is because there are a couple of centos_dev_docker_images.inc files with stale entries. This is just a warning to stderr and should not impact anything. 
#2: You can use the --push option of build-stx-images.sh, combined with --user and --registry options to tag and push the image(s) to a private registry or the docker hub. From there, I think there are instructions available on loading those images onto your controller. Gerry, are those instructions on a wiki? From: Scott Little [mailto:scott.little at windriver.com] Sent: Tuesday, April 30, 2019 5:02 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Build Docker images and how to use temporary image for debug in deploy env. CENGN isn't building dev images these days. So it's not being tested in a while. cat stx/stx-integ/centos_dev_docker_images.inc database/mariadb However there is no file ... stx/stx-integ/database/mariadb/centos/stx-mariadb.dev_docker_image that's definitely inconsistent. Raise a launchpad. Scott On 2019-04-30 5:17 a.m., Sun, Austin wrote: Hi All: 1) When I tried to build dev images . I meet below error. "Unsupported BUILDER in /home/wrsroot/starlingx/workspace/localdisk/designer/wrsroot/starlingx/cgcs-root/stx/stx-integ/database/mariadb/centos/*.dev_docker_image:" Build command is something like (sudo ./build-stx-images.sh --os centos --stream dev --base starlingx/stx-centos:master-dev-latest --wheels ~/starlingx/wheel/stx-centos-stable-wheels.tar --only stx-fm-rest-api) Does mariadb not support for dev build ? 2) how to push image built by developer to deployed environment directly ? Do we have any wiki or guide about this ? Thanks. BR Austin Sun. _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Tue Apr 30 23:26:16 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 30 Apr 2019 23:26:16 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190430 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-30 (link) Status: RED =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [FAIL] | 03 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS [Fail : 3 TCs] AIO - Duplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Standard - Local Storage (2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: 57 TCS PASS Standard - Dedicated Storage (2+2+2) Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 05 TCs [PASS] TOTAL: 57 TCS PASS Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] | 3 TCs FAIL Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] [Fail : 3 TCs] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage Setup 04 TCs [PASS] Provisioning 01 TCs [FAIL] Sanity OpenStack 49 TCs [FAIL] Sanity Platform 07 TCs [FAIL] TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] ---------------------------------------------------------- - VM resize failed by "No valid host was found" https://bugs.launchpad.net/starlingx/+bug/1824412 - Failing application-apply due error on osh-openstack-openvswitch https://bugs.launchpad.net/starlingx/+bug/1826445 For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Tue Apr 30 23:36:01 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 30 Apr 2019 18:36:01 -0500 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB498486@ALA-MBD.corp.ad.wrs.com> References: <63839F7A-75A9-4266-A2F5-D373391CBAD4@intel.com> <92F9583D-D670-442D-A437-E72761E815DB@intel.com> <1F0FCBCF-A1A5-4E8D-9671-BCD5EFA15BCC@intel.com> <0427FED8-440B-4129-94F7-2CBF6DBBDE8E@intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB498486@ALA-MBD.corp.ad.wrs.com> Message-ID: On Mon, Apr 29, 2019 at 10:09 PM Rowsell, Brent wrote: > > Hi Christopher, > > Re: during the unlock of controller-0, it jumps from using 5.5GB to 72GB, when we reported the bug > > A portion of the memory is reserved for the infrastructure the remainder is allocated as hugepages which is used as backing store for the VM's. > This is why you see the avail memory drop. Thanks a lot for the hit, Brent. Erich, Cristopher and I did a debug and find out that in a simplex how many pages do we have, we found out that they are a total of 34927 of 2 MB each one ( described in boot parameters ) which gives us: 69854 MB = 69 GB Now, I have a few questions from the architecture perspective : 1) Why do we assign that number of page tables ? was this based on experiments that show the best performance? 
if so what benchmarks were used to assign this value 2) Can we make that the script that set up the number of huge pages adjust the value if is a simplex all in one? we might not need that much amount of memory for vms if we are n a simplex AIO. Thinking on a dynamic number of huge pages according to the starling X configuration. 3) Is there any feedback on the community that can provide us with benchmarks where they see better performance by the use and reservation of this specific number/size of memory pages Thanks a lot Victor R > > Brent > > -----Original Message----- > From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] > Sent: Monday, April 29, 2019 4:21 PM > To: Miller, Frank ; Victor Rodriguez ; Cordoba Malibran, Erich > Cc: Li, Cheng1 ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 > > Hi Frank, > > With latest ISO, all baremetal configurations are passing sanity test (Green Status), regarding memory usage, during the unlock of controller-0, it jumps from using 5.5GB to 72GB, when we reported the bug, the usage was 71GB, almost the same as today. > > I'm assuming that docker reserves the memory because the pods/containers are not limited, as we can see on docker stats, almost all containers have their limit set by the total amount of physical memory on the system, Is this behavior expected? is there a way to properly track down memory usage at docker level? Ideally, something that can help to determine when memory is being heavily impacted and something that helps to provide valuable information when we report bugs. > > I added some outputs about memory usage at os level and what is reported by docker on the bug: https://bugs.launchpad.net/starlingx/+bug/1826308 > > Thanks! > > Cristopher Lemus > > > On 4/27/19, 2:46 PM, "Miller, Frank" wrote: > > Hi All: > > After a prolonged debug session on Friday by various developers, it looks like the memory issue seen in the Intel labs is due to the excessive number of nova pods being launched which is directly related to the number of cores used on the BM servers. The Intel lab servers have many more cores than most of the labs used in WindRiver labs and explains why the memory issue is much rarer in some labs. Al Bailey and Gerry Kopec worked on a solution [1] which should be available in today's builds. > > In addition while debugging the application-apply issues on AIO labs, in some cases timeouts were being seen either during download or applying of the stx-application. This is believed to be a result of a StoryBoard that merged two weeks ago to affine platform processes and pods to platform cores leaving the other cores available for application pods. This reduces the core processing available during application-apply. To alleviate this issue, two additional commits [2,3] were proposed and merged. > > Let's review the updated sanity results on Monday and determine if any further actions are required. 
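To tie this back to the hugepage numbers discussed above in this thread, a quick, hedged way to reproduce the 34927 x 2 MB accounting on a running controller (values will differ from system to system; this is only an inspection sketch, not a tuning recommendation):

# Hugepage reservation as the kernel reports it.
grep -E 'HugePages_Total|HugePages_Free|Hugepagesize' /proc/meminfo

# Total memory reserved for hugepages, in MB
# (Hugepagesize is reported in kB, hence the /1024).
awk '/HugePages_Total/ {t=$2} /Hugepagesize/ {s=$2} END {print t*s/1024 " MB"}' /proc/meminfo

# The allocation is set at boot, so the kernel command line shows where
# the number comes from on a given node.
tr ' ' '\n' < /proc/cmdline | grep -i huge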
> > Frank > [1] https://review.opendev.org/#/c/656037/ > [2] https://review.opendev.org/#/c/656009/ > [3] https://review.opendev.org/#/c/656025/ > > -----Original Message----- > From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] > Sent: Friday, April 26, 2019 6:06 PM > To: Victor Rodriguez ; Cordoba Malibran, Erich > Cc: Li, Cheng1 ; Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 > > Hi All, > > Some test were made to find the point where the memory is allocated: > > Just after `config_controller` it's using just a handful of GBs: > > controller-0:~$ free -h > total used free shared buff/cache available > Mem: 93G 3.2G 84G 47M 5.5G 88G > Swap: 0B 0B 0B > controller-0:~$ > > > Right after the unlock, when the system pass from "offline" status to "intest" it jumps from using 5.1GB to 71GB, this is just with kube-system pods: > > total used free shared buff/cache available > Mem: 93G 71G 19G 45M 1.9G 20G > Swap: 0B 0B 0B > > > > NAME READY STATUS RESTARTS AGE > calico-kube-controllers-84cdb6bd7c-w75rk 1/1 Running 1 36m > calico-node-zp8xv 1/1 Running 1 36m > coredns-84bb87857f-lp8sl 1/1 Running 1 36m > coredns-84bb87857f-r6mdf 0/1 Pending 0 36m > kube-apiserver-controller-0 1/1 Running 1 35m > kube-controller-manager-controller-0 1/1 Running 2 35m > kube-proxy-w7sfq 1/1 Running 1 36m > kube-scheduler-controller-0 1/1 Running 2 35m > tiller-deploy-d87d7bd75-hjb7w 1/1 Running 1 36m > > > > Bug updated with this info. > > Regards, > > Cristopher Lemus > > > > > On 4/26/19, 11:30 AM, "Victor Rodriguez" wrote: > > Hi team > > My findings so far this morning: > > In order to know how much memory ( really ) a docker is consuming i > tested 2 tools ( docker stat and reading from the /proc/pid/mmpas ) > > I create a simple C code that consumes X KB of memory by malloc and > then free it: > > https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/memory.c > > Reserving 5000 Kb of memory > Value of String = simple_test > Address = 2895619200 > Waiting for 30 seconds > > I compile it and cp into my docker image: > > https://github.com/VictorRodriguez/hobbies/blob/master/dev_ops/footprint/Dockerfile > > When I run the docker and monitor the memory with docker stats : > > It shows only 2.5 Kb of memory when from /proc kernel ifo i get : > > vmrod at vmrod-ubuntu-devel:/tmp$ ./usr/bin/psstop | grep docker > docker-containe 1857 : 0 Kb > dockerd 2758 : 0 Kb > docker-containe 3368 : 0 Kb > docker-containe 5438 : 0 Kb > docker-containe 25159 : 0 Kb > docker 25105 : 48378 Kb > > ( first column is PID second one is memory consumed ) , in this case, > it shows 48378 kb vs 5000 kb of memory that i know that i requested > > In order to find the memory leak, we must rely on the tools we use to > measure it, Cristopher can you help me to repeat the same experiment > to know if you see the same behavior ? If so we can start to put -m on > each docker image to limit the memory size ( 2GB should be enough > right ? ) > > WIP > > regards > > On Thu, Apr 25, 2019 at 10:33 PM Victor Rodriguez wrote: > > > > Can we consider the track of vm used by the running proces from /proc? we can work on a script using psstop(0) or other similar tool,what do you think. 
This might help us to find the process is consuming the memory over the time > > > > I also see the same problem of consuming almost 90% of the memory not only in all in one systems but also in duplex > > > > (0) https://github.com/clearlinux/psstop > > > > Regards > > Victor Rodriguez > > > > On Thu, Apr 25, 2019, 21:59 Cordoba Malibran, Erich wrote: > >> > >> Hi, > >> > >> In this case we have: > >> > >> HugePages_Total: 34104 > >> HugePages_Free: 34104 > >> HugePages_Rsvd: 0 > >> HugePages_Surp: 0 > >> > >> So, I'm not sure if it can be related with 1825814. > >> > >> Also, for people not seeing this issue, how much memory do you have in your baremetal systems? What's the minimum required memory for running an AIO system. Our failing system have 97 GB and free -h shows. > >> > >> total used free shared buff/cache available > >> Mem: 93G 84G 3.2G 66M 5.6G 4.8G > >> Swap: 0B 0B 0B > >> > >> > >> A couple months ago I reported a similar issue[0], in that case after three days in stand-by the system started to throw Out of Memory errors. Does anyone has performed a longevity test for some days? Maybe the working systems might fail after a while if the memory usage keeps increasing over time. > >> > >> -Erich > >> > >> [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/002923.html > >> > >> > >> > >> From: "Li, Cheng1" > >> Date: Thursday, April 25, 2019 at 8:29 PM > >> To: "Lemus Contreras, Cristopher J" , "Miller, Frank" , "Perez Ibarra, Maria G" , "starlingx-discuss at lists.starlingx.io" > >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 > >> > >> Actually, I had also reported the memory issue[1] days ago. > >> Memory exhaust happens because so little 4K memory is allocated for system/software load. > >> > >> [1] https://bugs.launchpad.net/starlingx/+bug/1825814 > >> > >> Thanks, > >> Cheng > >> > >> From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] > >> Sent: Friday, April 26, 2019 1:50 AM > >> To: Miller, Frank ; Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io > >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 > >> > >> Hi Frank, > >> > >> We had a zoom call with Al Bailey to troubleshoot the issues that we are observing. The bug where a single CPU was taking all of the workload is resolved. > >> > >> What we observed seems to be an issue with memory exhaust, additional information was gathered an added to this bug for further troubleshooting: https://bugs.launchpad.net/starlingx/+bug/1826308 > >> > >> If additional information is required, please, just let us know. > >> > >> Thanks & Regards, > >> > >> Cristopher Lemus > >> > >> From: "Miller, Frank" > >> Date: Thursday, April 25, 2019 at 8:24 AM > >> To: "Perez Ibarra, Maria G" , "mailto:starlingx-discuss at lists.starlingx.io" > >> Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 > >> > >> Maria: > >> > >> It looks like the commit referenced yesterday [1] is not addressing the issue in your BM labs. Can you set up a live debug session so that some container SMEs can investigate? 
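On the open question of how to track real memory consumption from /proc instead of (or alongside) docker stats, here is a rough, untested sketch of the kind of psstop-style script suggested above. It sums VmRSS per process name, so it counts resident pages only and will not match the cgroup numbers docker reports exactly; paths and figures are illustrative.

#!/bin/bash
# Sum resident memory (VmRSS) per process name from /proc.
# Run as root to see all processes; processes that exit mid-scan are skipped.
declare -A rss
for status in /proc/[0-9]*/status; do
    name=$(awk '/^Name:/  {print $2}' "$status" 2>/dev/null)
    kb=$(awk '/^VmRSS:/ {print $2}' "$status" 2>/dev/null)
    [ -n "$name" ] && [ -n "$kb" ] && rss[$name]=$(( ${rss[$name]:-0} + kb ))
done
for name in "${!rss[@]}"; do
    printf "%-25s %12d kB\n" "$name" "${rss[$name]}"
done | sort -k2 -nr | head -20

# For the per-container view, the cgroup file docker itself accounts against
# (path assumes a cgroup-v1 host; container id from `docker ps -q`):
#   cat /sys/fs/cgroup/memory/docker/<container-id>/memory.usage_in_bytes

Run periodically (from cron or a simple while/sleep loop), something like this would give the over-time view of which processes grow, which is the kind of longer-term tracking discussed in this thread.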
> >> > >> Frank > >> [1] https://review.opendev.org/#/c/655240/ > >> > >> From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] > >> Sent: Thursday, April 25, 2019 12:12 AM > >> To: mailto:starlingx-discuss at lists.starlingx.io > >> Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190424 > >> > >> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-APRIL-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/) > >> > >> Status: RED > >> > >> =========================================== > >> > >> Sanity Test is executed in a Containers – Bare Metal Environment > >> > >> AIO - Simplex > >> > >> Setup Manual [PASS] > >> Provisioning 01 TCs [PASS] > >> Sanity OpenStack 49 TCs [FAIL]| 40 TCs FAIL > >> Sanity Platform 07 TCs [FAIL]| 07 TCs FAIL > >> > >> TOTAL: 57 TCS [Fail : 47] > >> > >> AIO – Duplex > >> > >> Setup Manual [PASS] > >> Provisioning 01 TCs [PASS] > >> Sanity OpenStack 52 TCs [FAIL] | 42 TCs FAIL > >> Sanity Platform 05 TCs [FAIL] | 05 TCs FAIL > >> > >> TOTAL: 57 TCS [Fail : 47 TCs] > >> > >> Standard - Local Storage (2+2) > >> > >> Setup Manual [PASS] > >> Provisioning 01 TCs [PASS] > >> Sanity OpenStack 49 TCs [PASS] > >> Sanity Platform 07 TCs [PASS] > >> > >> TOTAL: 57 TCS PASS > >> > >> Standard - Dedicated Storage (2+2+2) > >> > >> Setup Manual [PASS] > >> Provisioning 01 TCs [PASS] > >> Sanity OpenStack 52 TCs [PASS] > >> Sanity Platform 05 TCs [PASS] > >> > >> TOTAL: 57 TCS PASS > >> > >> > >> > >> Sanity Test is executed in a Containers - Virtual Environment > >> > >> AIO - Simplex > >> > >> Setup 04 TCs [PASS] > >> Provisioning 01 TCs [FAIL] > >> Sanity OpenStack 49 TCs [FAIL] > >> Sanity Platform 07 TCs [FAIL] > >> > >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] > >> > >> > >> AIO - Duplex > >> > >> Setup 04 TCs [PASS] > >> Provisioning 01 TCs [FAIL] > >> Sanity OpenStack 49 TCs [FAIL] > >> Sanity Platform 07 TCs [FAIL] > >> > >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] > >> > >> > >> Standard – Local Storage > >> > >> Setup 04 TCs [PASS] > >> Provisioning 01 TCs [FAIL] > >> Sanity OpenStack 49 TCs [FAIL] > >> Sanity Platform 07 TCs [FAIL] > >> > >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] > >> > >> > >> Standard – Dedicated Storage > >> > >> Setup 04 TCs [PASS] > >> Provisioning 01 TCs [FAIL] > >> Sanity OpenStack 49 TCs [FAIL] > >> Sanity Platform 07 TCs [FAIL] > >> > >> TOTAL: [ 61 TCs PASS ] [Fail : 57 TCs] > >> > >> - some pods are failing during BM sanity execution. https://bugs.launchpad.net/starlingx/+bug/1826308 > >> - Sanity Bare metal was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T145954Z/ > >> - Sanity Virtual was tested with : http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190424T013000Z/ > >> - Tomorrow in sanity virtual we will perform a double check with the latest ISO that includes the fixes. > >> > >> For more detail of the tests: https://wiki.openstack.org/wiki/StarlingX/Test/SanityTests#Sanity-OpenStack > >> > >> > >> Regards > >> Maria G. 
> >> > >> > >> > >> > >> _______________________________________________ > >> Starlingx-discuss mailing list > >> Starlingx-discuss at lists.starlingx.io > >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ada.cabrales at intel.com Mon Apr 29 15:03:30 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 29 Apr 2019 15:03:30 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX Test meeting - 9:00 PDT Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CDCADD5@FMSMSX114.amr.corp.intel.com> Cancelling weekly meeting - freeing up the zoom slot a. Weekly meetings on Tuesdays at 9am PDT / 1600 UTC * Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes: https://etherpad.openstack.org/p/stx-test -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3488 bytes Desc: not available URL: From Frank.Miller at windriver.com Tue Apr 30 20:57:56 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 30 Apr 2019 20:57:56 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Weekly Containerization Meeting Message-ID: I'll be cancelling the meeting for Monday May 6th. Also will set up a new calendar invite for future meetings where no response is required so that this type of email doesn't get held up due to "awaits moderator approval". ==============> For those contributing to or interested in the Containerization subproject, the plan is to meet weekly until the containerization StoryBoards are completed. Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 7802 bytes Desc: not available URL: