From ada.cabrales at intel.com Sat Dec 1 01:05:04 2018
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Sat, 1 Dec 2018 01:05:04 +0000
Subject: [Starlingx-discuss] Sanity of Mirror Builds
In-Reply-To:
References:
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7C37647E@fmsmsx104.amr.corp.intel.com>

Great! Yes, the meeting is on. Can you attend to talk about this?

Regards
Ada

From: Young, Ken [mailto:Ken.Young at windriver.com]
Sent: Friday, November 30, 2018 12:24 PM
To: Cabrales, Ada
Cc: starlingx-discuss at lists.starlingx.io
Subject: Sanity of Mirror Builds

Ada,

I have updated the Build team Wiki to highlight the location of the nightly builds. For your convenience, you can find them here:

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/

As we discussed on the community call, I would like to discuss next steps for the usage of these builds. Are you planning to have a test meeting on Tuesday, Dec 4th @ 9 PST?

Regards,
Ken Y
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Ken.Young at windriver.com Sat Dec 1 01:54:46 2018
From: Ken.Young at windriver.com (Young, Ken)
Date: Sat, 1 Dec 2018 01:54:46 +0000
Subject: [Starlingx-discuss] Sanity of Mirror Builds
In-Reply-To: <4F6AACE4B0F173488D033B02A8BB5B7E7C37647E@fmsmsx104.amr.corp.intel.com>
References: , <4F6AACE4B0F173488D033B02A8BB5B7E7C37647E@fmsmsx104.amr.corp.intel.com>
Message-ID:

I can. I look forward to discussing this on Tuesday.

/KenY

Sent from my iPhone

On Nov 30, 2018, at 8:05 PM, Cabrales, Ada wrote:

Great! Yes, the meeting is on. Can you attend to talk about this?

Regards
Ada

From: Young, Ken [mailto:Ken.Young at windriver.com]
Sent: Friday, November 30, 2018 12:24 PM
To: Cabrales, Ada
Cc: starlingx-discuss at lists.starlingx.io
Subject: Sanity of Mirror Builds

Ada,

I have updated the Build team Wiki to highlight the location of the nightly builds. For your convenience, you can find them here:

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/

As we discussed on the community call, I would like to discuss next steps for the usage of these builds. Are you planning to have a test meeting on Tuesday, Dec 4th @ 9 PST?

Regards,
Ken Y
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yong.hu at intel.com Sun Dec 2 12:20:57 2018
From: yong.hu at intel.com (Hu, Yong)
Date: Sun, 2 Dec 2018 12:20:57 +0000
Subject: [Starlingx-discuss] When is the StarlingX Infrastructure Containerization feature released?
In-Reply-To: <000c01d4884e$194f21b0$4bed6510$@fiberhome.com>
References: <000c01d4884e$194f21b0$4bed6510$@fiberhome.com>
Message-ID: <26597007-BA9E-48C0-B119-2A53F6768714@intel.com>

You may see some related information here in the Wiki:
https://wiki.openstack.org/wiki/StarlingX/Containers

From: 赵伟
Date: Friday, 30 November 2018 at 4:49 PM
To: "starlingx-discuss at lists.starlingx.io"
Subject: [Starlingx-discuss] When is the StarlingX Infrastructure Containerization feature released?

Hello everyone,

StarlingX plans to use k8s + helm to containerize OpenStack, but I didn't see a specific R&D plan online. When will the StarlingX Infrastructure Containerization feature be released? March, July, or November 2019?

Best regards
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From quickconvey at gmail.com Sun Dec 2 17:28:12 2018
From: quickconvey at gmail.com (Quick Convey)
Date: Sun, 2 Dec 2018 22:58:12 +0530
Subject: [Starlingx-discuss] Project-wise implementation details for new contributors
Message-ID:

Hi,

Could you please tell me where I can find project-wise implementation details? Please add something to the Readme file for new contributors. Is there any doc where I can find these details?

Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Ian.Jolliffe at windriver.com Sun Dec 2 22:46:31 2018
From: Ian.Jolliffe at windriver.com (Jolliffe, Ian)
Date: Sun, 2 Dec 2018 22:46:31 +0000
Subject: [Starlingx-discuss] TSC meeting minutes - Nov 29th
Message-ID: <1A270B85-BB91-4C02-B59A-BE6C80E25647@windriver.com>

The meeting agenda is here: https://etherpad.openstack.org/p/stx-cores
Feel free to propose agenda items; also please recognize we may not get to all items each week.

Standing topics:

Pending reviews for stx-governance
https://review.openstack.org/#/q/project:openstack/stx-governance,n,z
Pending reviews for stx-specs
https://review.openstack.org/#/q/status:open+AND+project:%255Eopenstack/stx-specs
* Call to TSC members: we need some more reviews, please have a look.
* We agreed that a simple majority is acceptable and we would wait 48 hours prior to merging after achieving the simple majority.

11/29/18

Distro.openstack project - not active currently. Keep it? Disband it? Use it for driving/tracking work on patch upstreaming and code refactoring? -- brucej (5 min)
Proposal is to use this sub-project to track patch elimination and guide this work until we get to master. Rebase would not be part of this project. Agreed by TSC.
Need to appoint a PL/TL for the project - review candidates at next TSC meeting
Candidates for PL – call to community members for volunteers
Candidates for TL – call to community members for volunteers

Can all TSC members attend the proposed Jan 15-16 meetup in Phoenix? -- Bruce (5 min)
Looks good for most TSC members; an Eventbrite to follow for formal registration for the community.

Release cadence - changes needed? Align with upstream OpenStack? Time based or content based? - Bruce & Ghada (15 min)
Recommend we follow the milestones of OpenStack and have our own checklist, potentially with some offset (initially 6 weeks and revisit in the future)
Proposal is to align to two releases a year along with OpenStack. We could do a "dot" release for bugs in between. The dot release is defined already and can be done. Can pick up an OpenStack stable as required.
Action: Bruce and Ghada will refine the proposal and provide an update at next week's meeting.
Update from Brent on patch elimination as discussed on community call (10 min)
Brent shared charts - will send out after the call
Please provide feedback

Update from Miguel - cross project discussions (Neutron/Nova) (5 min)
http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000138.html
Nova response: http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000150.html
Generally positive response from cross project teams
Neutron - Miguel attended the weekly meeting with the STX networking team and the major spec agreed to at the PTG has merged (https://review.openstack.org/#/c/599980 Network Segment Range Management )
Nova - add Stein NUMA aware live migration spec - https://review.openstack.org/#/c/599587/

Next release priorities - Ian ( 20 min )
https://ethercalc.openstack.org/fafyo2729fnr
Agreed we need a focus meeting on this – sending a proposed time to the TSC – once the meeting time is confirmed it will be posted to the mailing list
Action: PL/TL from sub-projects – please review all items tagged as high priority and provide input on whether or not this initiative is staffed for your sub-project or needs to be discussed.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cindy.xie at intel.com Mon Dec 3 01:24:07 2018
From: cindy.xie at intel.com (Xie, Cindy)
Date: Mon, 3 Dec 2018 01:24:07 +0000
Subject: [Starlingx-discuss] Project-wise implementation details for new contributors
In-Reply-To:
References:
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DE8E73@SHSMSX104.ccr.corp.intel.com>

Hi,

I highly recommend you go to the StarlingX wiki page: https://wiki.openstack.org/wiki/StarlingX

Two important pages for new developers:
- https://docs.starlingx.io/developer_guide/index.html
- https://wiki.openstack.org/wiki/StarlingX/CodeSubmissionGuidelines

and you're very much welcome to attend the community meetings, which are listed here: https://wiki.openstack.org/wiki/Starlingx/Meetings

Whenever you have questions, come back to this mailing list for help.

Thx. - cindy

From: Quick Convey [mailto:quickconvey at gmail.com]
Sent: Monday, December 3, 2018 1:28 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Project-wise implementation details for new contributors

Hi,

Could you please tell me where I can find project-wise implementation details? Please add something to the Readme file for new contributors. Is there any doc where I can find these details?

Thanks.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chenjie.xu at intel.com Mon Dec 3 08:53:05 2018
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Mon, 3 Dec 2018 08:53:05 +0000
Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StarlingX upstreaming
Message-ID:

Hi Matt,

The RFE for patch 71c07d7 has been drafted and is attached. Could you please help review and comment?

Best Regards,
Xu, Chenjie
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: RFE_ADD_SUPPORT_FOR_QUERYING_QUOTAS_WITH_USAGE.docx
Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document
Size: 15904 bytes
Desc: RFE_ADD_SUPPORT_FOR_QUERYING_QUOTAS_WITH_USAGE.docx
URL:

From xuyun at jxresearch.com Mon Dec 3 09:25:42 2018
From: xuyun at jxresearch.com (徐蕴)
Date: Mon, 3 Dec 2018 17:25:42 +0800
Subject: [Starlingx-discuss] Unable to bring up controller-0
In-Reply-To: <840F9C42-9C1E-4E5F-A7D3-65BF54C4B715@jxresearch.com>
References: <840F9C42-9C1E-4E5F-A7D3-65BF54C4B715@jxresearch.com>
Message-ID: <3FE39BF2-C7EF-4274-BAB8-6659C6A59835@jxresearch.com>

Hi,

I'm trying to deploy a simplex on a bare metal server. After unlocking controller-0 following the installation_guide, two error events are reported and this node was degraded:

200.011
controller-0 experienced a configuration failure.
host=controller-0
300.004
No enabled compute host with connectivity to provider network.
service=networking.providernet=e542cf30-a07d-41f7-be7b-cc5a4e14b0d7

Would you please give me some hints how to debug this problem? Thank you!

Br,
Xu Yun
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Eric.MacDonald at windriver.com Mon Dec 3 15:02:17 2018
From: Eric.MacDonald at windriver.com (MacDonald, Eric)
Date: Mon, 3 Dec 2018 15:02:17 +0000
Subject: [Starlingx-discuss] Unable to bring up controller-0
In-Reply-To: <3FE39BF2-C7EF-4274-BAB8-6659C6A59835@jxresearch.com>
References: <840F9C42-9C1E-4E5F-A7D3-65BF54C4B715@jxresearch.com> <3FE39BF2-C7EF-4274-BAB8-6659C6A59835@jxresearch.com>
Message-ID: <210898B96CA058408C55992CCAD98676B9F7F4D8@ALA-MBD.corp.ad.wrs.com>

300.004 is a dependency alarm that should go away once controller-0 is unlocked-enabled.

Looks like an All-In-One system you're trying to provision.
Therefore this could be a controller or compute function configuration error.

I would look for the words Error and Warning in the puppet logs and see what that shows.

sudo grep -r /var/log/puppet -e Error -e Warning

Eric.

From: 徐蕴 [mailto:xuyun at jxresearch.com]
Sent: Monday, December 03, 2018 4:26 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Unable to bring up controller-0

Hi,

I'm trying to deploy a simplex on a bare metal server. After unlocking controller-0 following the installation_guide, two error events are reported and this node was degraded:

200.011
controller-0 experienced a configuration failure.
host=controller-0
300.004
No enabled compute host with connectivity to provider network.
service=networking.providernet=e542cf30-a07d-41f7-be7b-cc5a4e14b0d7

Would you please give me some hints how to debug this problem? Thank you!

Br,
Xu Yun
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
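(An aside for readers working through this thread: a slightly expanded version of the check Eric suggests, sketched with illustrative paths; it assumes the timestamped per-run directories under /var/log/puppet that appear later in this thread, e.g. 2018-12-03-08-05-45_compute.)

    ls -t /var/log/puppet/ | head -1
        # newest puppet run directory, e.g. 2018-12-03-08-05-45_compute
    sudo grep -rn -e Error -e Warning /var/log/puppet/
        # same search, but with file names and line numbers for each hit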
From cindy.xie at intel.com Mon Dec 3 15:43:04 2018
From: cindy.xie at intel.com (Xie, Cindy)
Date: Mon, 3 Dec 2018 15:43:04 +0000
Subject: [Starlingx-discuss] Centos Distro Direction
In-Reply-To: <9aed8be1-2aca-c537-2d48-25bac32354ed@linux.intel.com>
References: <9aed8be1-2aca-c537-2d48-25bac32354ed@linux.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DEAF79@SHSMSX104.ccr.corp.intel.com>

Seems like CentOS 7 just announced the 1810 release today (guess this is 7.6):
https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.1810?highlight=%28%28Manuals%7CReleaseNotes%7CCentOS7.1810%29%29

thx. - cindy

-----Original Message-----
From: Saul Wold [mailto:sgw at linux.intel.com]
Sent: Saturday, December 1, 2018 7:49 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Centos Distro Direction

Folks,

As we move forward into the spring release (Stein based), we will also be dealing with another CentOS update. RHEL has already released the 7.6 Update on Oct 30th; typically we should expect the CentOS 7.6 update shortly, about 30 days after RHEL releases.

We should do the 7.6 Update as we did the 7.5 Update, on a feature branch. It took about 2 months last time (including initial setup, rebasing, and de-fuzzing); I expect it will be shorter this time based on our past learning.

We should start out with creating the feature branches (I will work with Dean on this) for the stx-integ, stx-root, stx-tools, and stx-upstream repos. When we start the work, we need to remember to rebase the feature branches regularly and check for patch fuzzing issues.

Cindy, can you please put this on your agenda for the next Non-Openstack Distro meeting.

While on the topic of CentOS Distro updates, many of you may have heard that RHEL 8 Beta was announced on Nov 14 [0]. While this is not a CentOS release, we should start thinking about that upgrade as it will be a larger effort: it includes the 4.18 kernel (alas not the 4.19 LTS kernel) along with many other upgrades. We should start a feature branch for CentOS 8 as well to do the updates. This will help reduce some of the patch load from the backported patches. Since we don't know exactly when CentOS 8 will be available, this should be a Train-based release target (Fall 2019) (at the earliest)

[0]
https://www.redhat.com/en/blog/powering-its-future-while-preserving-present-introducing-red-hat-enterprise-linux-8-beta

Thanks
Sau!

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From sgw at linux.intel.com Mon Dec 3 16:48:34 2018
From: sgw at linux.intel.com (Saul Wold)
Date: Mon, 3 Dec 2018 08:48:34 -0800
Subject: [Starlingx-discuss] review for story 2004211 patch
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DEAEF5@SHSMSX104.ccr.corp.intel.com>
References: <56829C2A36C2E542B0CCB9854828E4D8508C2EE5@CDSMSX102.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3FEC8D@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAEAC@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3FFCF7@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAEF5@SHSMSX104.ccr.corp.intel.com>
Message-ID: <8fd50883-596c-73a4-f303-5c8addb9573e@linux.intel.com>

+starlingx-discuss

Don,

Some additional background: I created the initial storyboard and talked with Dariush, he is supposed to provide the background about a given patch and if it can be removed, I assume he would have checked on the history and talked with the authors.

Once he gave the go ahead to try removal, Martin did the work of removing the patch and testing it in the CentOS build environment.

I will wait for more details from Martin as suggested by Cindy regarding testing.

Sau!

On 12/3/18 7:21 AM, Xie, Cindy wrote:
> Agree w/ you Don. Just read the patch and see like this is a workaround
> to avoid “make check” fail in certain mock config… @ Martin, can you
> please double check w/ the patch author on this? And your test confirmed
> that without this patch, the build can be successful in this mock config
> already?
>
> Thx.
- cindy > > *From:* Penney, Don [mailto:Don.Penney at windriver.com] > *Sent:* Monday, December 3, 2018 11:14 PM > *To:* Xie, Cindy ; Chen, Haochuan Z > ; Wold, Saul ; Eslimi, > Dariush ; McKenna, Jason > > *Cc:* Liu, ZhipengS ; Lin, Shuicheng > ; Little, Scott > *Subject:* RE: review for story 2004211 patch > > I understand that goal, but this particular patch was added to deal with > a build issue at the time, presumably. Was this a consideration when > removal of this patch was decided upon? Was there any discussion with > Scott or Jason to see if the patch may still be required? > > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Monday, December 03, 2018 10:08 AM > *To:* Penney, Don; Chen, Haochuan Z; Wold, Saul; Eslimi, Dariush; > McKenna, Jason > *Cc:* Liu, ZhipengS; Lin, Shuicheng > *Subject:* RE: review for story 2004211 patch > > Don, > > The driving force is to reduce the number of patches that we have to > maintain. This is the goal for non-openstack distro sub-project team. > > Understand that we are not able to reduce all the patches for this > particular package, thus we still need to use sRPM instead of binary > RPM. However, the goodness is that we do not need to re-base those > patches when we have to upgrade CentOS next time. The less patches we > carry, the less upgrade effort will be. > > Thanks. - cindy > > *From:* Penney, Don [mailto:Don.Penney at windriver.com] > *Sent:* Monday, December 3, 2018 10:45 PM > *To:* Chen, Haochuan Z >; Wold, Saul >; Eslimi, Dariush > >; > McKenna, Jason > > *Cc:* Liu, ZhipengS >; Lin, Shuicheng > >; Xie, Cindy > > > *Subject:* RE: review for story 2004211 patch > > I’ll re-post my question here, since it wasn’t answered on the review > itself: > > What is the driver for removing this patch? Are you sure this is safe to > remove in all build environments? Presumably it was added because of a > build failure at the time. Adding Jason to comment on the history > > You don’t seem to be removing all patches from the package, so that we > could switch to just using the binary, so why remove just this one? > > *From:*Chen, Haochuan Z [mailto:haochuan.z.chen at intel.com] > *Sent:* Monday, December 03, 2018 12:40 AM > *To:* Wold, Saul; Eslimi, Dariush; Penney, Don; McKenna, Jason > *Cc:* Liu, ZhipengS; Lin, Shuicheng; Xie, Cindy > *Subject:* review for story 2004211 patch > > Hi folks > > I have submit a patch to eliminate a meta patch, which re-enable make > check in sudo package’s building. > > https://review.openstack.org/#/c/621057/ > > Please help to review. And wait for your opinion. > > Thanks! > > Martin, Chen > > SSG OTC, Software Engineer > > 021-61164330 > From sgw at linux.intel.com Mon Dec 3 17:03:03 2018 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 3 Dec 2018 09:03:03 -0800 Subject: [Starlingx-discuss] Centos Distro Direction In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DEAF79@SHSMSX104.ccr.corp.intel.com> References: <9aed8be1-2aca-c537-2d48-25bac32354ed@linux.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAF79@SHSMSX104.ccr.corp.intel.com> Message-ID: On 12/3/18 7:43 AM, Xie, Cindy wrote: > Seems like that CentOS 7 just announced 1810 release today (guess this is 7.6): > > https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.1810?highlight=%28%28Manuals%7CReleaseNotes%7CCentOS7.1810%29%29 > Yup, timing is! Cindy, can you please put this on the agenda for the next non-openstack Distro meeting. 
We also have a topic for the TSC (thanks BruceJ) the following morning; TSC members may want to start weighing in here regarding my initial proposal below, which we can talk more about on Thursday.

Thanks
Sau!

> thx. - cindy
>
> -----Original Message-----
> From: Saul Wold [mailto:sgw at linux.intel.com]
> Sent: Saturday, December 1, 2018 7:49 AM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] Centos Distro Direction
>
>
> Folks,
>
> As we move forward into the spring release (Stein based), we will also be dealing with another CentOS update. RHEL has already released the
> 7.6 Update on Oct 30th; typically we should expect the CentOS 7.6 update shortly, about 30 days after RHEL releases.
>
> We should do the 7.6 Update as we did the 7.5 Update, on a feature branch. It took about 2 months last time (including initial setup, rebasing, and de-fuzzing); I expect it will be shorter this time based on our past learning.
>
> We should start out with creating the feature branches (I will work with Dean on this) for the stx-integ, stx-root, stx-tools, and stx-upstream repos. When we start the work, we need to remember to rebase the feature branches regularly and check for patch fuzzing issues.
>
> Cindy, can you please put this on your agenda for the next Non-Openstack Distro meeting.
>
> While on the topic of CentOS Distro updates, many of you may have heard that RHEL 8 Beta was announced on Nov 14 [0]. While this is not a CentOS release, we should start thinking about that upgrade as it will be a larger effort: it includes the 4.18 kernel (alas not the 4.19 LTS
> kernel) along with many other upgrades. We should start a feature branch for CentOS 8 as well to do the updates. This will help reduce some of the patch load from the backported patches. Since we don't know exactly when CentOS 8 will be available, this should be a Train-based release target (Fall 2019) (at the earliest)
>
> [0]
> https://www.redhat.com/en/blog/powering-its-future-while-preserving-present-introducing-red-hat-enterprise-linux-8-beta
>
> Thanks
> Sau!
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>

From Don.Penney at windriver.com Mon Dec 3 17:21:04 2018
From: Don.Penney at windriver.com (Penney, Don)
Date: Mon, 3 Dec 2018 17:21:04 +0000
Subject: [Starlingx-discuss] review for story 2004211 patch
In-Reply-To: <8fd50883-596c-73a4-f303-5c8addb9573e@linux.intel.com>
References: <56829C2A36C2E542B0CCB9854828E4D8508C2EE5@CDSMSX102.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3FEC8D@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAEAC@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3FFCF7@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAEF5@SHSMSX104.ccr.corp.intel.com> <8fd50883-596c-73a4-f303-5c8addb9573e@linux.intel.com>
Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA3FFD52@ALA-MBD.corp.ad.wrs.com>

I think it's perfectly reasonable for me to ask these questions on reviews when I have them. And to repeat my questions when they're left unanswered on the review.

As for the storyboard, Dariush added the comment "Seems reasonable alternative to refactor mock. have you tried to see if it is still valid?", to which there was no answer. I don't know if he looked into the history or talked to Jason about it, and he's on vacation at the moment.

I still don't see a reason for removing just this patch.
If there are no other modifications, and we're now able to move to the binary RPM, then great. Otherwise, I don't see why we're doing this. Hence the question. I don't know what the issue was with the check, or whether we'd still be impacted. Which is why I added Jason McKenna to the review asking for comment. Cheers, Don. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Monday, December 03, 2018 11:49 AM To: Xie, Cindy; Penney, Don; Chen, Haochuan Z; Eslimi, Dariush; McKenna, Jason; starlingx-discuss at lists.starlingx.io Cc: Liu, ZhipengS; Lin, Shuicheng; Little, Scott Subject: Re: review for story 2004211 patch +starlingx-discuss Don, Some additional background: I created the initial storyboard and talked with Dariush, he is supposed to provide the background about a given patch and if it can be removed, I assume he would have checked on the history and talked with the authors. Once he gave the go ahead to try removal, Martin did the work of removing the patch and testing it in the CentOS build environment. I will wait for more details from Martin as suggested by Cindy regarding testing. Sau! On 12/3/18 7:21 AM, Xie, Cindy wrote: > Agree w/ you Don. Just read the patch and see like this is a workaround > to avoid “make check” fail in certain mock config… @ Martin, can you > please double check w/ the patch author on this? And your test confirmed > that without this patch, the build can be successful in this mock config > already? > > Thx. - cindy > > *From:* Penney, Don [mailto:Don.Penney at windriver.com] > *Sent:* Monday, December 3, 2018 11:14 PM > *To:* Xie, Cindy ; Chen, Haochuan Z > ; Wold, Saul ; Eslimi, > Dariush ; McKenna, Jason > > *Cc:* Liu, ZhipengS ; Lin, Shuicheng > ; Little, Scott > *Subject:* RE: review for story 2004211 patch > > I understand that goal, but this particular patch was added to deal with > a build issue at the time, presumably. Was this a consideration when > removal of this patch was decided upon? Was there any discussion with > Scott or Jason to see if the patch may still be required? > > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Monday, December 03, 2018 10:08 AM > *To:* Penney, Don; Chen, Haochuan Z; Wold, Saul; Eslimi, Dariush; > McKenna, Jason > *Cc:* Liu, ZhipengS; Lin, Shuicheng > *Subject:* RE: review for story 2004211 patch > > Don, > > The driving force is to reduce the number of patches that we have to > maintain. This is the goal for non-openstack distro sub-project team. > > Understand that we are not able to reduce all the patches for this > particular package, thus we still need to use sRPM instead of binary > RPM. However, the goodness is that we do not need to re-base those > patches when we have to upgrade CentOS next time. The less patches we > carry, the less upgrade effort will be. > > Thanks. - cindy > > *From:* Penney, Don [mailto:Don.Penney at windriver.com] > *Sent:* Monday, December 3, 2018 10:45 PM > *To:* Chen, Haochuan Z >; Wold, Saul >; Eslimi, Dariush > >; > McKenna, Jason > > *Cc:* Liu, ZhipengS >; Lin, Shuicheng > >; Xie, Cindy > > > *Subject:* RE: review for story 2004211 patch > > I’ll re-post my question here, since it wasn’t answered on the review > itself: > > What is the driver for removing this patch? Are you sure this is safe to > remove in all build environments? Presumably it was added because of a > build failure at the time. 
Adding Jason to comment on the history
>
> You don’t seem to be removing all patches from the package, so that we
> could switch to just using the binary, so why remove just this one?
>
> *From:* Chen, Haochuan Z [mailto:haochuan.z.chen at intel.com]
> *Sent:* Monday, December 03, 2018 12:40 AM
> *To:* Wold, Saul; Eslimi, Dariush; Penney, Don; McKenna, Jason
> *Cc:* Liu, ZhipengS; Lin, Shuicheng; Xie, Cindy
> *Subject:* review for story 2004211 patch
>
> Hi folks
>
> I have submit a patch to eliminate a meta patch, which re-enable make
> check in sudo package’s building.
>
> https://review.openstack.org/#/c/621057/
>
> Please help to review. And wait for your opinion.
>
> Thanks!
>
> Martin, Chen
>
> SSG OTC, Software Engineer
>
> 021-61164330
>

From bruce.e.jones at intel.com Mon Dec 3 17:35:27 2018
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Mon, 3 Dec 2018 17:35:27 +0000
Subject: [Starlingx-discuss] January community meeting registration is open
Message-ID: <9A85D2917C58154C960D95352B22818BB1EC148C@fmsmsx117.amr.corp.intel.com>

If you are planning to attend the community meeting in January, please register for it so we can get a count for logistics (meals, etc...): https://starlingx_jan2019meetup.eventbrite.com.

You will get valuable tickets that are sure to be keepsake items for many years to come. You don't need them to attend the event. We just want to make sure we have enough chairs and lunches available.

brucej
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jose.perez.carranza at intel.com Mon Dec 3 20:39:35 2018
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Mon, 3 Dec 2018 20:39:35 +0000
Subject: [Starlingx-discuss] Issue when working with API requests
Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A90FD6F@fmsmsx104.amr.corp.intel.com>

Hi

I'm trying to change the MTU value of an interface of a controller with API requests directly. I used the same request that the "system --debug host-if-modify controller-1 mgmt0 -m 9000" command is doing, but an error is shown [1]. Does anyone know what is missing from this request? The commands that I'm using to obtain the TOKEN and send the request are at [2].

1- " {"error_message": "{\"debuginfo\": null, \"faultcode\": \"Client\", \"faultstring\": \"Invalid input for field/attribute interface. Value: 'e65db832-4eea-45d1-bfc9-aa7dd0d860aa'. unable to convert to Interface. Error: __init__() takes exactly 1 argument (2 given)\"}"}curl: (6) Could not resolve host: PATCH; Name or service not known"

2- http://paste.openstack.org/show/736594/

Regards,
José
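(A note on the curl failure above: "curl: (6) Could not resolve host: PATCH" usually means the HTTP method was passed as a bare argument instead of via -X, so curl treated "PATCH" as a hostname. A minimal sketch of the request shape follows; the endpoint path, field name /imtu, and placeholders are illustrative assumptions rather than values taken from the paste, so compare them against the exact request printed by "system --debug host-if-modify" before using.)

    TOKEN=$(openstack token issue -f value -c id)
    curl -X PATCH \
      -H "X-Auth-Token: ${TOKEN}" \
      -H "Content-Type: application/json" \
      -d '[{"op": "replace", "path": "/imtu", "value": "9000"}]' \
      http://<controller-ip>:6385/v1/iinterfaces/<interface-uuid>

The body is a JSON-patch style list, which appears to be what the sysinv client sends for modify operations.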
From haochuan.z.chen at intel.com Tue Dec 4 00:57:39 2018
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Tue, 4 Dec 2018 00:57:39 +0000
Subject: [Starlingx-discuss] review for story 2004211 patch
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA3FFD52@ALA-MBD.corp.ad.wrs.com>
References: <56829C2A36C2E542B0CCB9854828E4D8508C2EE5@CDSMSX102.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3FEC8D@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAEAC@SHSMSX104.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3FFCF7@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAEF5@SHSMSX104.ccr.corp.intel.com> <8fd50883-596c-73a4-f303-5c8addb9573e@linux.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA3FFD52@ALA-MBD.corp.ad.wrs.com>
Message-ID: <56829C2A36C2E542B0CCB9854828E4D8508C3065@CDSMSX102.ccr.corp.intel.com>

I could make a clean build successfully after removing this patch for make check. We could double-confirm with Jason how the build error happened and the history.

After this patch is removed, I can use the sudo binary RPM; when I submit the fix for this story, I will remove all patches for this package.

https://storyboard.openstack.org/#!/story/2004212

Martin, Chen
SSG OTC, Software Engineer
021-61164330

-----Original Message-----
From: Penney, Don [mailto:Don.Penney at windriver.com]
Sent: Tuesday, December 4, 2018 1:21 AM
To: Saul Wold; Xie, Cindy; Chen, Haochuan Z; Eslimi, Dariush; McKenna, Jason; starlingx-discuss at lists.starlingx.io
Cc: Liu, ZhipengS; Lin, Shuicheng; Little, Scott
Subject: RE: review for story 2004211 patch

I think it's perfectly reasonable for me to ask these questions on reviews when I have them. And to repeat my questions when they're left unanswered on the review.

As for the storyboard, Dariush added the comment "Seems reasonable alternative to refactor mock. have you tried to see if it is still valid?", to which there was no answer. I don't know if he looked into the history or talked to Jason about it, and he's on vacation at the moment.

I still don't see a reason for removing just this patch. If there are no other modifications, and we're now able to move to the binary RPM, then great. Otherwise, I don't see why we're doing this. Hence the question.

I don't know what the issue was with the check, or whether we'd still be impacted. Which is why I added Jason McKenna to the review asking for comment.

Cheers,
Don.

-----Original Message-----
From: Saul Wold [mailto:sgw at linux.intel.com]
Sent: Monday, December 03, 2018 11:49 AM
To: Xie, Cindy; Penney, Don; Chen, Haochuan Z; Eslimi, Dariush; McKenna, Jason; starlingx-discuss at lists.starlingx.io
Cc: Liu, ZhipengS; Lin, Shuicheng; Little, Scott
Subject: Re: review for story 2004211 patch

+starlingx-discuss

Don,

Some additional background: I created the initial storyboard and talked with Dariush, he is supposed to provide the background about a given patch and if it can be removed, I assume he would have checked on the history and talked with the authors.

Once he gave the go ahead to try removal, Martin did the work of removing the patch and testing it in the CentOS build environment.

I will wait for more details from Martin as suggested by Cindy regarding testing.

Sau!

On 12/3/18 7:21 AM, Xie, Cindy wrote:
> Agree w/ you Don. Just read the patch and see like this is a
> workaround to avoid “make check” fail in certain mock config… @
> Martin, can you please double check w/ the patch author on this? And
> your test confirmed that without this patch, the build can be
> successful in this mock config already?
>
> Thx. - cindy
>
> *From:* Penney, Don [mailto:Don.Penney at windriver.com]
> *Sent:* Monday, December 3, 2018 11:14 PM
> *To:* Xie, Cindy; Chen, Haochuan Z; Wold, Saul; Eslimi,
> Dariush; McKenna, Jason
> *Cc:* Liu, ZhipengS; Lin, Shuicheng; Little, Scott
> *Subject:* RE: review for story 2004211 patch
>
> I understand that goal, but this particular patch was added to deal
> with a build issue at the time, presumably. Was this a consideration
> when removal of this patch was decided upon? Was there any discussion
> with Scott or Jason to see if the patch may still be required?
>
> *From:* Xie, Cindy [mailto:cindy.xie at intel.com]
> *Sent:* Monday, December 03, 2018 10:08 AM
> *To:* Penney, Don; Chen, Haochuan Z; Wold, Saul; Eslimi, Dariush;
> McKenna, Jason
> *Cc:* Liu, ZhipengS; Lin, Shuicheng
> *Subject:* RE: review for story 2004211 patch
>
> Don,
>
> The driving force is to reduce the number of patches that we have to
> maintain. This is the goal for the non-openstack distro sub-project team.
>
> Understand that we are not able to reduce all the patches for this
> particular package, thus we still need to use the sRPM instead of the binary
> RPM. However, the goodness is that we do not need to re-base those
> patches when we have to upgrade CentOS next time. The less patches we
> carry, the less upgrade effort will be.
>
> Thanks. - cindy
>
> *From:* Penney, Don [mailto:Don.Penney at windriver.com]
> *Sent:* Monday, December 3, 2018 10:45 PM
> *To:* Chen, Haochuan Z; Wold, Saul; Eslimi, Dariush;
> McKenna, Jason
> *Cc:* Liu, ZhipengS; Lin, Shuicheng; Xie, Cindy
> *Subject:* RE: review for story 2004211 patch
>
> I’ll re-post my question here, since it wasn’t answered on the review
> itself:
>
> What is the driver for removing this patch? Are you sure this is safe
> to remove in all build environments? Presumably it was added because
> of a build failure at the time. Adding Jason to comment on the history
>
> You don’t seem to be removing all patches from the package, so that we
> could switch to just using the binary, so why remove just this one?
>
> *From:* Chen, Haochuan Z [mailto:haochuan.z.chen at intel.com]
> *Sent:* Monday, December 03, 2018 12:40 AM
> *To:* Wold, Saul; Eslimi, Dariush; Penney, Don; McKenna, Jason
> *Cc:* Liu, ZhipengS; Lin, Shuicheng; Xie, Cindy
> *Subject:* review for story 2004211 patch
>
> Hi folks
>
> I have submit a patch to eliminate a meta patch, which re-enable make
> check in sudo package’s building.
>
> https://review.openstack.org/#/c/621057/
>
> Please help to review. And wait for your opinion.
>
> Thanks!
>
> Martin, Chen
>
> SSG OTC, Software Engineer
>
> 021-61164330
>

From changcheng.liu at intel.com Tue Dec 4 01:11:46 2018
From: changcheng.liu at intel.com (Liu, Changcheng)
Date: Tue, 4 Dec 2018 01:11:46 +0000
Subject: [Starlingx-discuss] SCM: repo projects revision control
Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F0FC68@SHSMSX104.ccr.corp.intel.com>

Hi Scott & Dean,

Is there any way for developers to get exactly the same stx source code after running the commands below?

repo init -u https://git.starlingx.io/stx-manifest -m default.xml
repo sync

Currently, some projects are set to a fixed tag/version in .repo/manifests/default.xml, e.g. kubernetes.git. However, some projects are always moving ahead without setting the "revision" item.

B.R.
Changcheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
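(One stock repo feature that can produce a reproducible checkout -- a generic repo sketch, not StarlingX-specific, and pinned.xml is an arbitrary name -- is to snapshot the currently synced revisions into a pinned manifest:)

    repo manifest -r -o pinned.xml   # write a manifest with every project's revision fixed to its current commit
    cp pinned.xml .repo/manifests/   # make it selectable by repo init
    repo init -m pinned.xml
    repo sync

With -r, each project element in the generated manifest carries an explicit revision attribute, so projects that normally float on a branch are locked to exact commits.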
From xuyun at jxresearch.com Tue Dec 4 01:30:48 2018
From: xuyun at jxresearch.com (徐蕴)
Date: Tue, 4 Dec 2018 09:30:48 +0800
Subject: [Starlingx-discuss] Unable to bring up controller-0
In-Reply-To: <210898B96CA058408C55992CCAD98676B9F7F4D8@ALA-MBD.corp.ad.wrs.com>
References: <840F9C42-9C1E-4E5F-A7D3-65BF54C4B715@jxresearch.com> <3FE39BF2-C7EF-4274-BAB8-6659C6A59835@jxresearch.com> <210898B96CA058408C55992CCAD98676B9F7F4D8@ALA-MBD.corp.ad.wrs.com>
Message-ID: <26257BBC-033D-4FAB-90A8-53241C1ED22D@jxresearch.com>

Hello,

Thanks for Eric's hint. I suspected yesterday that this problem is related to the OVS bridge setup:

controller-0:/var/log/puppet# ovs-vsctl show
a98562f6-3223-4eab-a663-68cb4d8e9ece
    Manager "ptcp:6639:127.0.0.1"
        is_connected: true
    Bridge "br-phy0"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "lldpd2a87771-fd"
            Interface "lldpd2a87771-fd"
                type: internal
        Port "phy-br-phy0"
            Interface "phy-br-phy0"
                type: patch
                options: {peer="int-br-phy0"}
        Port "eth0"
            Interface "eth0"
                type: dpdk
                options: {dpdk-devargs="0000:02:00.1", n_rxq="1"}
                error: "Error attaching device '0000:02:00.1' to DPDK"
        Port "br-phy0"
            Interface "br-phy0"
                type: internal
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "int-br-phy0"
            Interface "int-br-phy0"
                type: patch
                options: {peer="phy-br-phy0"}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.9.0"

I don't know why 'eth0' was added into the br-phy0 bridge, since there is no 'eth0' in my machine. I configured an interface named 'eno4' for my data plane.
/var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:16.443 Error: 2018-12-03 16:07:16 +0800 ovs-ofctl add-flow br-phy0 dl_dst=01:80:c2:00:00:0e,dl_type=0x88cc,hard_timeout=0,idle_timeout=0,in_port=eth0,actions=output:lldpd2a87771-fd returned 1 instead of one of [0] /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:16.678 Error: 2018-12-03 16:07:16 +0800 /Stage[main]/Platform::Vswitch::Ovs/Platform::Vswitch::Ovs::Flow[eth0]/Exec[ovs-add-flow: eth0]/returns: change from notrun to 0 failed: ovs-ofctl add-flow br-phy0 dl_dst=01:80:c2:00:00:0e,dl_type=0x88cc,hard_timeout=0,idle_timeout=0,in_port=eth0,actions=output:lldpd2a87771-fd returned 1 instead of one of [0] /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.800 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Ceph::Post/File[/var/run/.ceph_started]: Skipping because of failed dependencies /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.806 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Post/Service[crond]: Skipping because of failed dependencies /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.817 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Compute::Post/File[/etc/platform/.initial_compute_config_complete]: Skipping because of failed dependencies /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.824 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Compute::Post/File[/var/run/.compute_config_complete]: Skipping because of failed dependencies /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.833 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Post/File[/etc/platform/.initial_config_complete]: Skipping because of failed dependencies /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.839 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Post/File[/etc/platform/.config_applied]: Skipping because of failed dependencies Any further suggestion how to fix this problem? Thank you. Br, Xu Yun > 在 2018年12月3日,下午11:02,MacDonald, Eric 写道: > > 300.004 is a dependency alarm that should go away once controller-0 is unlocked-enabled. > > Looks like an All-In-One system your trying to provision. > Therefore this could be a controller or compute function configuration error. > > I would look for the words Error and Warning in the puppet logs and see what that shows. > > sudo grep จCr /var/log/puppet จCe Error จCe Warning > > Eric. > > From: ะ์ิฬ [mailto:xuyun at jxresearch.com ] > Sent: Monday, December 03, 2018 4:26 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Unable to bring up controller-0 > > Hi, > > Iกฏm trying to deploy a simplex on a bare metal server. After unlocking controller-0 following the installation_guide, two error events are reported and this node was degraded: > > 200.011 > controller-0 experienced a configuration failure. > host=controller-0 > 300.004 > No enabled compute host with connectivity to provider network. > service=networking.providernet=e542cf30-a07d-41f7-be7b-cc5a4e14b0d7 > > Would you please give me some hints how to debug this problem? Thank you! > > Br, > Xu Yun -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From austin.sun at intel.com Tue Dec 4 02:13:03 2018
From: austin.sun at intel.com (Sun, Austin)
Date: Tue, 4 Dec 2018 02:13:03 +0000
Subject: [Starlingx-discuss] Unable to bring up controller-0
In-Reply-To: <26257BBC-033D-4FAB-90A8-53241C1ED22D@jxresearch.com>
References: <840F9C42-9C1E-4E5F-A7D3-65BF54C4B715@jxresearch.com> <3FE39BF2-C7EF-4274-BAB8-6659C6A59835@jxresearch.com> <210898B96CA058408C55992CCAD98676B9F7F4D8@ALA-MBD.corp.ad.wrs.com> <26257BBC-033D-4FAB-90A8-53241C1ED22D@jxresearch.com>
Message-ID:

Hi xuyun:

eth0 is used for DPDK; it is a virtual port name, not a real NIC name.

Could you run "python /usr/share/openvswitch/scripts/dpdk-devbind.py --status" to check what type eno4 is?

Are you deploying in a virtual machine with libvirt, or on a bare metal system?

Thanks.
BR
Austin Sun.

From: 徐蕴 [mailto:xuyun at jxresearch.com]
Sent: Tuesday, December 4, 2018 9:31 AM
To: MacDonald, Eric
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Unable to bring up controller-0

Hello,

Thanks for Eric's hint. I suspected yesterday that this problem is related to the OVS bridge setup:

controller-0:/var/log/puppet# ovs-vsctl show
a98562f6-3223-4eab-a663-68cb4d8e9ece
    Manager "ptcp:6639:127.0.0.1"
        is_connected: true
    Bridge "br-phy0"
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "lldpd2a87771-fd"
            Interface "lldpd2a87771-fd"
                type: internal
        Port "phy-br-phy0"
            Interface "phy-br-phy0"
                type: patch
                options: {peer="int-br-phy0"}
        Port "eth0"
            Interface "eth0"
                type: dpdk
                options: {dpdk-devargs="0000:02:00.1", n_rxq="1"}
                error: "Error attaching device '0000:02:00.1' to DPDK"
        Port "br-phy0"
            Interface "br-phy0"
                type: internal
    Bridge br-int
        Controller "tcp:127.0.0.1:6633"
            is_connected: true
        fail_mode: secure
        Port "int-br-phy0"
            Interface "int-br-phy0"
                type: patch
                options: {peer="phy-br-phy0"}
        Port br-int
            Interface br-int
                type: internal
    ovs_version: "2.9.0"

I don't know why 'eth0' was added into the br-phy0 bridge, since there is no 'eth0' in my machine. I configured an interface named 'eno4' for my data plane.
[wrsroot at controller-0 log(keystone_admin)]$ system host-if-list -a 1
+--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+----------------------------+-------------------+
| uuid                                 | name | class    | type     | vlan id | ports     | uses i/f | used by i/f | attributes                 | provider networks |
+--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+----------------------------+-------------------+
| 548477c1-038f-4280-a4b1-d7c11d724b09 | eno4 | data     | ethernet | None    | [u'eno4'] | []       | []          | MTU=1500,accelerated=False | providernet-a     |
| 5c4760f5-a32b-49b9-91d1-73a72fdcabfb | eno3 | None     | ethernet | None    | [u'eno3'] | []       | []          | MTU=1500                   | None              |
| 7b85e213-f435-4693-a13a-364de1e06a35 | eno1 | platform | ethernet | None    | [u'eno1'] | []       | []          | MTU=1500                   | None              |
| 96e1e865-2f7c-486e-8e9e-88fcb1c71686 | lo   | platform | virtual  | None    | []        | []       | []          | MTU=1500                   | None              |
| c87ccb52-a1c7-4332-87f9-5966e6599ee6 | eno2 | None     | ethernet | None    | [u'eno2'] | []       | []          | MTU=1500                   | None              |
+--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+----------------------------+-------------------+

And the puppet log also seems to confirm that the OVS setup failed because of 'eth0':

controller-0:/var/log/puppet# grep -r /var/log/puppet/ -e Error -e Warning
/var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:16.348 Notice: 2018-12-03 16:07:16 +0800 /Stage[main]/Platform::Vswitch::Ovs/Platform::Vswitch::Ovs::Port[eth0]/Exec[ovs-add-port: eth0]/returns: ovs-vsctl: Error detected while setting up 'eth0': Error attaching device '0000:02:00.1' to DPDK. See ovs-vswitchd log for details.
/var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:16.443 Error: 2018-12-03 16:07:16 +0800 ovs-ofctl add-flow br-phy0 dl_dst=01:80:c2:00:00:0e,dl_type=0x88cc,hard_timeout=0,idle_timeout=0,in_port=eth0,actions=output:lldpd2a87771-fd returned 1 instead of one of [0]
/var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:16.678 Error: 2018-12-03 16:07:16 +0800 /Stage[main]/Platform::Vswitch::Ovs/Platform::Vswitch::Ovs::Flow[eth0]/Exec[ovs-add-flow: eth0]/returns: change from notrun to 0 failed: ovs-ofctl add-flow br-phy0 dl_dst=01:80:c2:00:00:0e,dl_type=0x88cc,hard_timeout=0,idle_timeout=0,in_port=eth0,actions=output:lldpd2a87771-fd returned 1 instead of one of [0]
/var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.800 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Ceph::Post/File[/var/run/.ceph_started]: Skipping because of failed dependencies
/var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.806 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Post/Service[crond]: Skipping because of failed dependencies
/var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.817 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Compute::Post/File[/etc/platform/.initial_compute_config_complete]: Skipping because of failed dependencies
/var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.824 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Compute::Post/File[/var/run/.compute_config_complete]: Skipping because of failed dependencies
/var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.833 Warning: 2018-12-03 16:07:28 +0800
/Stage[post]/Platform::Config::Post/File[/etc/platform/.initial_config_complete]: Skipping because of failed dependencies
/var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.839 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Post/File[/etc/platform/.config_applied]: Skipping because of failed dependencies

Any further suggestions on how to fix this problem? Thank you.

Br,
Xu Yun

On Dec 3, 2018, at 11:02 PM, MacDonald, Eric wrote:

300.004 is a dependency alarm that should go away once controller-0 is unlocked-enabled.

Looks like an All-In-One system you're trying to provision.
Therefore this could be a controller or compute function configuration error.

I would look for the words Error and Warning in the puppet logs and see what that shows.

sudo grep -r /var/log/puppet -e Error -e Warning

Eric.

From: 徐蕴 [mailto:xuyun at jxresearch.com]
Sent: Monday, December 03, 2018 4:26 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Unable to bring up controller-0

Hi,

I'm trying to deploy a simplex on a bare metal server. After unlocking controller-0 following the installation_guide, two error events are reported and this node was degraded:

200.011
controller-0 experienced a configuration failure.
host=controller-0
300.004
No enabled compute host with connectivity to provider network.
service=networking.providernet=e542cf30-a07d-41f7-be7b-cc5a4e14b0d7

Would you please give me some hints how to debug this problem? Thank you!

Br,
Xu Yun
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xuyun at jxresearch.com Tue Dec 4 02:32:24 2018
From: xuyun at jxresearch.com (徐蕴)
Date: Tue, 4 Dec 2018 10:32:24 +0800
Subject: [Starlingx-discuss] Unable to bring up controller-0
In-Reply-To:
References: <840F9C42-9C1E-4E5F-A7D3-65BF54C4B715@jxresearch.com> <3FE39BF2-C7EF-4274-BAB8-6659C6A59835@jxresearch.com> <210898B96CA058408C55992CCAD98676B9F7F4D8@ALA-MBD.corp.ad.wrs.com> <26257BBC-033D-4FAB-90A8-53241C1ED22D@jxresearch.com>
Message-ID: <2DAB1002-7DE7-4134-A08F-6BC919FF0B11@jxresearch.com>

Hi Austin,

I'm deploying an all-in-one node on bare metal (an old Dell server); maybe the NIC doesn't support DPDK? I cannot see eno4 in the output:

controller-0:/var/log/puppet# python /usr/share/openvswitch/scripts/dpdk-devbind.py --status

Network devices using DPDK-compatible driver
============================================
0000:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' drv=vfio-pci unused=

Network devices using kernel driver
===================================
0000:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3 unused=vfio-pci *Active*
0000:01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno2 drv=tg3 unused=vfio-pci
0000:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno3 drv=tg3 unused=vfio-pci

Other Network devices
=====================

Due to the size limitation of the email system, I uploaded the complete log to http://paste.openstack.org/show/736607/

BR,
Xu Yun

> On Dec 4, 2018, at 10:13 AM, Sun, Austin wrote:
>
> Hi xuyun:
> eth0 is used for DPDK; it is a virtual port name, not a real NIC name.
>
> Could you run "python /usr/share/openvswitch/scripts/dpdk-devbind.py --status" to check what type eno4 is?
>
> Are you deploying in a virtual machine with libvirt, or on a bare metal system?
>
> Thanks.
> BR
> Austin Sun.
>
> From: 徐蕴 [mailto:xuyun at jxresearch.com]
> Sent: Tuesday, December 4, 2018 9:31 AM
> To: MacDonald, Eric
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] Unable to bring up controller-0
>
> Hello,
>
> Thanks for Eric's hint. I suspected yesterday that this problem is related to the OVS bridge setup:
>
> controller-0:/var/log/puppet# ovs-vsctl show
> a98562f6-3223-4eab-a663-68cb4d8e9ece
>     Manager "ptcp:6639:127.0.0.1"
>         is_connected: true
>     Bridge "br-phy0"
>         Controller "tcp:127.0.0.1:6633"
>             is_connected: true
>         fail_mode: secure
>         Port "lldpd2a87771-fd"
>             Interface "lldpd2a87771-fd"
>                 type: internal
>         Port "phy-br-phy0"
>             Interface "phy-br-phy0"
>                 type: patch
>                 options: {peer="int-br-phy0"}
>         Port "eth0"
>             Interface "eth0"
>                 type: dpdk
>                 options: {dpdk-devargs="0000:02:00.1", n_rxq="1"}
>                 error: "Error attaching device '0000:02:00.1' to DPDK"
>         Port "br-phy0"
>             Interface "br-phy0"
>                 type: internal
>     Bridge br-int
>         Controller "tcp:127.0.0.1:6633"
>             is_connected: true
>         fail_mode: secure
>         Port "int-br-phy0"
>             Interface "int-br-phy0"
>                 type: patch
>                 options: {peer="phy-br-phy0"}
>         Port br-int
>             Interface br-int
>                 type: internal
>     ovs_version: "2.9.0"
>
> I don't know why 'eth0' was added into the br-phy0 bridge, since there is no 'eth0' in my machine. I configured an interface named 'eno4' for my data plane.
> [wrsroot at controller-0 log(keystone_admin)]$ system host-if-list -a 1
> +--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+----------------------------+-------------------+
> | uuid                                 | name | class    | type     | vlan id | ports     | uses i/f | used by i/f | attributes                 | provider networks |
> +--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+----------------------------+-------------------+
> | 548477c1-038f-4280-a4b1-d7c11d724b09 | eno4 | data     | ethernet | None    | [u'eno4'] | []       | []          | MTU=1500,accelerated=False | providernet-a     |
> | 5c4760f5-a32b-49b9-91d1-73a72fdcabfb | eno3 | None     | ethernet | None    | [u'eno3'] | []       | []          | MTU=1500                   | None              |
> | 7b85e213-f435-4693-a13a-364de1e06a35 | eno1 | platform | ethernet | None    | [u'eno1'] | []       | []          | MTU=1500                   | None              |
> | 96e1e865-2f7c-486e-8e9e-88fcb1c71686 | lo   | platform | virtual  | None    | []        | []       | []          | MTU=1500                   | None              |
> | c87ccb52-a1c7-4332-87f9-5966e6599ee6 | eno2 | None     | ethernet | None    | [u'eno2'] | []       | []          | MTU=1500                   | None              |
> +--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+----------------------------+-------------------+
>
> And the puppet log also seems to confirm that the OVS setup failed because of 'eth0':
>
> controller-0:/var/log/puppet# grep -r /var/log/puppet/ -e Error -e Warning
> /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:16.348 Notice: 2018-12-03 16:07:16 +0800 /Stage[main]/Platform::Vswitch::Ovs/Platform::Vswitch::Ovs::Port[eth0]/Exec[ovs-add-port: eth0]/returns: ovs-vsctl: Error detected while setting up 'eth0': Error attaching device '0000:02:00.1' to DPDK. See ovs-vswitchd log for details.
> /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:16.443 Error: 2018-12-03 16:07:16 +0800 ovs-ofctl add-flow br-phy0 dl_dst=01:80:c2:00:00:0e,dl_type=0x88cc,hard_timeout=0,idle_timeout=0,in_port=eth0,actions=output:lldpd2a87771-fd returned 1 instead of one of [0]
> /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:16.678 Error: 2018-12-03 16:07:16 +0800 /Stage[main]/Platform::Vswitch::Ovs/Platform::Vswitch::Ovs::Flow[eth0]/Exec[ovs-add-flow: eth0]/returns: change from notrun to 0 failed: ovs-ofctl add-flow br-phy0 dl_dst=01:80:c2:00:00:0e,dl_type=0x88cc,hard_timeout=0,idle_timeout=0,in_port=eth0,actions=output:lldpd2a87771-fd returned 1 instead of one of [0]
> /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.800 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Ceph::Post/File[/var/run/.ceph_started]: Skipping because of failed dependencies
> /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.806 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Post/Service[crond]: Skipping because of failed dependencies
> /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.817 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Compute::Post/File[/etc/platform/.initial_compute_config_complete]: Skipping because of failed dependencies
> /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.824 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Compute::Post/File[/var/run/.compute_config_complete]: Skipping because of failed dependencies
> /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.833 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Post/File[/etc/platform/.initial_config_complete]: Skipping because of failed dependencies
> /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.839 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Post/File[/etc/platform/.config_applied]: Skipping because of failed dependencies
>
> Any further suggestions on how to fix this problem? Thank you.
>
> Br,
> Xu Yun
>
> On Dec 3, 2018, at 11:02 PM, MacDonald, Eric wrote:
>
> 300.004 is a dependency alarm that should go away once controller-0 is unlocked-enabled.
>
> Looks like an All-In-One system you're trying to provision.
> Therefore this could be a controller or compute function configuration error.
>
> I would look for the words Error and Warning in the puppet logs and see what that shows.
>
> sudo grep -r /var/log/puppet -e Error -e Warning
>
> Eric.
>
> From: 徐蕴 [mailto:xuyun at jxresearch.com]
> Sent: Monday, December 03, 2018 4:26 AM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] Unable to bring up controller-0
>
> Hi,
>
> I'm trying to deploy a simplex on a bare metal server. After unlocking controller-0 following the installation_guide, two error events are reported and this node was degraded:
>
> 200.011
> controller-0 experienced a configuration failure.
> host=controller-0
> 300.004
> No enabled compute host with connectivity to provider network.
> service=networking.providernet=e542cf30-a07d-41f7-be7b-cc5a4e14b0d7
>
> Would you please give me some hints how to debug this problem? Thank you!
>
> Br,
> Xu Yun
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From xuyun at jxresearch.com Tue Dec 4 02:53:16 2018 From: xuyun at jxresearch.com (徐蕴) Date: Tue, 4 Dec 2018 10:53:16 +0800 Subject: [Starlingx-discuss] Unable to bring up controller-0 In-Reply-To: References: <840F9C42-9C1E-4E5F-A7D3-65BF54C4B715@jxresearch.com> <3FE39BF2-C7EF-4274-BAB8-6659C6A59835@jxresearch.com> <210898B96CA058408C55992CCAD98676B9F7F4D8@ALA-MBD.corp.ad.wrs.com> <26257BBC-033D-4FAB-90A8-53241C1ED22D@jxresearch.com> <2A55719D-5062-4AB2-B65D-AA3D5914A7FD@jxresearch.com> Message-ID: <6021C31D-F531-4D46-8BA9-BF13DCEBC473@jxresearch.com> Hi Austin, Is there a switch to disable DPDK for OVS? Br, Xu Yun > On Dec 4, 2018, at 10:37 AM, Sun, Austin wrote: > > Hi Xuyun: > BCM5720 is not in the DPDK support list [1]. You can find the whole list at [2]. > > > [1] http://doc.dpdk.org/guides/nics/bnxt.html > [2] https://core.dpdk.org/supported/ > > Thanks. > BR > Austin Sun. > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Tue Dec 4 03:21:57 2018 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 4 Dec 2018 03:21:57 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Infrastructure Containerization Meeting - Dec 3rd Message-ID: Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-containerization Team Meeting Agenda/Notes for Dec 3, 2018 meeting: 1. Overall project status - Major building blocks (k8s, helm, armada, ceph, docker images build recipe, tie-ins to sysinv, initial helm charts) are in place - Initial recipe is available to bring up AIO-SX and the team achieved stability on AIO-Simplex. 2. All-in-one Simplex: - Current status: Stability reached. - What remaining work is required to enable a community member to bring up a single server config with containerized services? - Wiki to bring up containerized config (available off containerization page - not quite ready for community to try out) - Public docker setup: in-progress. Expect a better update next week and a view to a forecast of when it can be made available. - Other items --> none identified, just need the above 2 items completed 3. All-in-one Duplex: - Status of CEPH on All-in-one Duplex config? CEPH code is up for gerrit review (4 commits), need to also test bringing up AIO-DX with the --kubernetes option and run through integration - When can integration start for this config? Can start as soon as the CEPH gerrit reviews are complete and code is merged. 4. Dedicated storage config: - Current status: stability issues seen and being investigated/fixed: - Swact issues: Need https://review.openstack.org/#/c/621309/ , also with this fix still having issues after switching to the new controller (some processes are not coming up, eg: ceph-mgr and patching-agent). Prime: Bob Church - VIM: basic functionality now working (will eventually need to remove some of the manual steps) - Next steps for integration: Resume running through next set of tests. Prime: Chris Friesen, Bob Church, Tyler Smith 5. Opportunities to contribute - Ramp on containerization, then try out the wiki on All-in-one Simplex once it is ready. - 4 SBs created for folks to assist. If interested please contact Frank or Brent and we'll review the SBs with you. 6. How should we track issues found during development and integration before we cutover the project to Containerization? - Should be tasks - Action: Create a main Integration SB and track issues under that SB. Action: Frank to create SB --> Closed - Update: Integration SB created: https://storyboard.openstack.org/#!/story/2004520 7. 
Open Items: - Once we are ready for cutover, we'll need to update the Docs website with the updated procedure. Plan to work with Abraham on how to achieve this. Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Tue Dec 4 06:26:49 2018 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 3 Dec 2018 22:26:49 -0800 Subject: [Starlingx-discuss] patch remove for story 2004280 In-Reply-To: <56829C2A36C2E542B0CCB9854828E4D8508C3251@CDSMSX102.ccr.corp.intel.com> References: <56829C2A36C2E542B0CCB9854828E4D8508C3251@CDSMSX102.ccr.corp.intel.com> Message-ID: <10b28142-9cdd-0031-0b33-be3cb8644bd1@linux.intel.com> +starlingx-discuss Please ensure that you post these to the starlingx-discuss list, we need to have this kind of information in the archives. Also, we really could use a reproducer for when and why the syslog call was blocking, and why the existing syslog history is not sufficient; as Dariush notes in the story, they can be removed, but we need to ensure we keep the history logs. Sau! On 12/3/18 8:35 PM, Chen, Haochuan Z wrote: > Hi Scott > > For this story, I prefer to remove two source patches for bash. > > https://storyboard.openstack.org/#!/story/2004280 > > These two patches seem to enhance the bash function for syslog history. Why > were these two patches introduced? And what’s the verification case? > > Thanks! > > Martin, Chen > > SSG OTC, Software Engineer > > 021-61164330 > From chenjie.xu at intel.com Tue Dec 4 08:40:29 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Tue, 4 Dec 2018 08:40:29 +0000 Subject: [Starlingx-discuss] Analysis of patch c3fa9d9 for StartlingX upstreaming Message-ID: Hi Matt, I'm working on patch c3fa9d9 but I'm struggling with how to use the method update_password added by this patch. Could you please provide the use case on how to use the method update_password? And I have searched the code on Github by the following link: https://github.com/search?q=org%3Astarlingx-staging+update_password&type=Code I can't find code which calls the method update_password in stx-neutronclient. Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Tue Dec 4 08:42:33 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Tue, 4 Dec 2018 08:42:33 +0000 Subject: [Starlingx-discuss] Centos Distro Direction In-Reply-To: References: <9aed8be1-2aca-c537-2d48-25bac32354ed@linux.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAF79@SHSMSX104.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FE54214@SHSMSX101.ccr.corp.intel.com> It seems just the rpm packages are released; the srpms are not released yet. I will keep checking. Best Regards Shuicheng -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Tuesday, December 4, 2018 1:03 AM To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Centos Distro Direction On 12/3/18 7:43 AM, Xie, Cindy wrote: > Seems like CentOS 7 just announced the 1810 release today (guess this is 7.6): > > https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.1810?highlight=%28%28Manuals%7CReleaseNotes%7CCentOS7.1810%29%29 > Yup, timing is everything! Cindy, can you please put this on the agenda for the next non-openstack Distro meeting. We also have a topic for the TSC (thanks BruceJ) the following morning, TSC members may want to start weighing in here regarding my initial proposal below, which we can talk more about on Thursday. Thanks Sau! > thx. 
- cindy > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Saturday, December 1, 2018 7:49 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Centos Distro Direction > > > Folks, > > As we move forward into the spring release (Stein based), we will also > be dealing with another CentOS update. RHEL has already released the > 7.6 Update on Oct 30th; typically we should expect the CentOS 7.6 update shortly, about 30 days after RHEL releases. > > We should do the 7.6 Update as we did the 7.5 Update on a feature branch, it took about 2 months last time (including initial setup, rebasing, and de-fuzzing), I expect it will be shorter this time based on our past learning. > > We should start out with creating the feature branches (I will work with Dean on this) for stx-integ, stx-root, stx-tools, and stx-upstream repos. When we start the work, we need to remember to rebase the feature branches regularly and check for patch fuzzing issues. > > Cindy, can you please put this on your agenda for the next Non-Openstack Distro meeting. > > While on the topic of CentOS Distro updates, many of you may have heard > that RHEL 8 Beta was announced on Nov 14 [0]. While this is not a > CentOS release we should start thinking about that upgrade as it will > be a larger effort as it includes the 4.18 kernel (alas not the 4.19 > LTS > kernel) along with many other upgrades. We should start a feature > branch for CentOS 8 as well to do the updates. This will help reduce > some of the patch load from the backported patches. Since we don't > know exactly when CentOS 8 will be available this should be a > Train-based release target (Fall 2019) (at the earliest) > > [0] > https://www.redhat.com/en/blog/powering-its-future-while-preserving-pr > esent-introducing-red-hat-enterprise-linux-8-beta > > Thanks > Sau! > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Matt.Peters at windriver.com Tue Dec 4 12:21:07 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Tue, 4 Dec 2018 12:21:07 +0000 Subject: [Starlingx-discuss] Analysis of patch c3fa9d9 for StartlingX upstreaming In-Reply-To: References: Message-ID: Hello Chenjie, This commit is no longer required and can be removed since this interface is not used. It was once used by ceilometer when communicating with neutron and was added to handle admin password changes. However, service accounts have replaced the requirement for admin passwords, therefore this method is no longer invoked by ceilometer and can be removed from the neutron client. Regards, Matt From: "Xu, Chenjie" Date: Tuesday, December 4, 2018 at 3:40 AM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Analysis of patch c3fa9d9 for StartlingX upstreaming Hi Matt, I’m working on patch c3fa9d9 but I’m struggling with how to use the method update_password added by this patch. Could you please provide the use case on how to use the method update_password? 
And I have searched the code on Github by the following link: https://github.com/search?q=org%3Astarlingx-staging+update_password&type=Code I can’t find code which calls the method update_password in stx-neutronclient. Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From Matt.Peters at windriver.com Tue Dec 4 13:11:32 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Tue, 4 Dec 2018 13:11:32 +0000 Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming In-Reply-To: References: Message-ID: Hi Chenjie, I would add additional use-case information to help with the justification for adding this capability. The detailed quota information is used within the StarlingX distributed cloud solution. The quota information for a given project/user is aggregated across all sub-clouds, therefore having an efficient mechanism to retrieve the quota details of all resources is required. Regards, Matt From: "Xu, Chenjie" Date: Monday, December 3, 2018 at 3:53 AM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Matt, The RFE for patch 71c07d7 has been drafted and is attached. Could you please help review and comment? Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Tue Dec 4 13:15:35 2018 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 4 Dec 2018 13:15:35 +0000 Subject: [Starlingx-discuss] Unable to bring up controller-0 In-Reply-To: <6021C31D-F531-4D46-8BA9-BF13DCEBC473@jxresearch.com> References: <840F9C42-9C1E-4E5F-A7D3-65BF54C4B715@jxresearch.com> <3FE39BF2-C7EF-4274-BAB8-6659C6A59835@jxresearch.com> <210898B96CA058408C55992CCAD98676B9F7F4D8@ALA-MBD.corp.ad.wrs.com> <26257BBC-033D-4FAB-90A8-53241C1ED22D@jxresearch.com> <2A55719D-5062-4AB2-B65D-AA3D5914A7FD@jxresearch.com> <6021C31D-F531-4D46-8BA9-BF13DCEBC473@jxresearch.com> Message-ID: Hi Xuyun: It seems only OVS-DPDK is currently supported. Just to confirm: are IOMMU and VT-x enabled in the BIOS? Thanks. BR Austin Sun. From: 徐蕴 [mailto:xuyun at jxresearch.com] Sent: Tuesday, December 4, 2018 10:53 AM To: Sun, Austin Cc: MacDonald, Eric ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Unable to bring up controller-0 Hi Austin, Is there a switch to disable DPDK for OVS? Br, Xu Yun On Dec 4, 2018, at 10:37 AM, Sun, Austin > wrote: Hi Xuyun: BCM5720 is not in the DPDK support list [1]. You can find the whole list at [2]. [1] http://doc.dpdk.org/guides/nics/bnxt.html [2] https://core.dpdk.org/supported/ Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Tue Dec 4 14:01:23 2018 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Tue, 4 Dec 2018 14:01:23 +0000 Subject: [Starlingx-discuss] Propose to add Al Bailey to core reviewers for stx-config Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA3FF5A@ALA-MBD.corp.ad.wrs.com> Hi, I'd like to add Al Bailey as a core reviewer for stx-config. Al is one of the top contributors to stx-config - both in code contribution and in doing many useful reviews: http://stackalytics.com/?project_type=all&release=all&metric=all&module=stx-config I'd like confirmation (or objections) from the existing cores please... 
Bart Wensley, Member of Technical Staff, Wind River direct 613.963.1385 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Tue Dec 4 14:54:59 2018 From: Don.Penney at windriver.com (Penney, Don) Date: Tue, 4 Dec 2018 14:54:59 +0000 Subject: [Starlingx-discuss] Propose to add Al Bailey to core reviewers for stx-config In-Reply-To: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA3FF5A@ALA-MBD.corp.ad.wrs.com> References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA3FF5A@ALA-MBD.corp.ad.wrs.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA403482@ALA-MBD.corp.ad.wrs.com> Ok with me. From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: Tuesday, December 04, 2018 9:01 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Propose to add Al Bailey to core reviewers for stx-config Hi, I'd like to add Al Bailey as a core reviewer for stx-config. Al is one of the top contributors to stx-config - both in code contribution and in doing many useful reviews: http://stackalytics.com/?project_type=all&release=all&metric=all&module=stx-config I'd like confirmation (or objections) from the existing cores please... Bart Wensley, Member of Technical Staff, Wind River direct 613.963.1385 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Robert.Church at windriver.com Tue Dec 4 14:55:28 2018 From: Robert.Church at windriver.com (Church, Robert) Date: Tue, 4 Dec 2018 14:55:28 +0000 Subject: [Starlingx-discuss] Propose to add Al Bailey to core reviewers for stx-config In-Reply-To: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA3FF5A@ALA-MBD.corp.ad.wrs.com> References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA3FF5A@ALA-MBD.corp.ad.wrs.com> Message-ID: +1 From: "Wensley, Barton" Date: Tuesday, December 4, 2018 at 8:02 AM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Propose to add Al Bailey to core reviewers for stx-config Hi, I'd like to add Al Bailey as a core reviewer for stx-config. Al is one of the top contributors to stx-config - both in code contribution and in doing many useful reviews: http://stackalytics.com/?project_type=all&release=all&metric=all&module=stx-config I'd like confirmation (or objections) from the existing cores please… Bart Wensley, Member of Technical Staff, Wind River direct 613.963.1385 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Marvin.Huang at windriver.com Tue Dec 4 15:09:31 2018 From: Marvin.Huang at windriver.com (Huang, Marvin) Date: Tue, 4 Dec 2018 15:09:31 +0000 Subject: [Starlingx-discuss] issues with devstack/stx-config Message-ID: <74D9C1EDDC44EF468303629CF9A2832C9CE131C4@ALA-MBD.corp.ad.wrs.com> Hi all, I tried to bring up a STX/OpenStack system using Devstack, following the general procedure of Devstack and wiki/StarlingX/Devstack/stx-config. I am able to bring up all services except sysinv-agent, i.e. I removed sysinv-agent from the ENABLED_SERVICES: ENABLED_SERVICES+=,tsconfig,fm-common,fm-api,fm-rest-api,fm-mgr,sysinv-api,sysinv-cond ./stack.sh was able to run to completion successfully, and all (total 70+ devstack@*services) are able to run under Systemd, except sysinv-cond, which failed. Not sure anybody else hit the same or similar issues, so I posted some issues and my temporary fixes: ISSUE 1: sysinv-api failed to start: ++/opt/stack/stx-config/devstack/lib/stx-config:start_sysinv_api:240 echo 'Waiting for sysinv-api (192.168.56.221:6385) to start...' Waiting for sysinv-api (192.168.56.221:6385) to start... 
++/opt/stack/stx-config/devstack/lib/stx-config:start_sysinv_api:241 timeout 60 sh -c 'while ! wget --no-proxy -q -O- http://192.168.56.221:6385/; do sleep 1; done' ++/opt/stack/stx-config/devstack/lib/stx-config:start_sysinv_api:242 die 242 'sysinv-api did not start' ++functions-common:die:187 local exitcode=0 [Call Trace] ./stack.sh:1380:run_phase /opt/stack/devstack/functions-common:1707:run_plugins /opt/stack/devstack/functions-common:1674:source /opt/stack/stx-config/devstack/plugin.sh:28:start_sysinv /opt/stack/stx-config/devstack/lib/stx-config:210:start_sysinv_api /opt/stack/stx-config/devstack/lib/stx-config:242:die [ERROR] /opt/stack/stx-config/devstack/lib/stx-config:242 sysinv-api did not start Error on exit stack at ubuntu16045server1:~/devstack$ Root cause: Dec 03 20:01:54 ubuntu16045server1 sysinv-api[23537]: ERROR sysinv File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 2292, in register_opt Dec 03 20:01:54 ubuntu16045server1 sysinv-api[23537]: ERROR sysinv if _is_opt_registered(self._opts, opt): Dec 03 20:01:54 ubuntu16045server1 sysinv-api[23537]: ERROR sysinv File "/usr/local/lib/python2.7/dist-packages/oslo_config/cfg.py", line 356, in _is_opt_registered Dec 03 20:01:54 ubuntu16045server1 sysinv-api[23537]: ERROR sysinv raise DuplicateOptError(opt.name) Dec 03 20:01:54 ubuntu16045server1 sysinv-api[23537]: ERROR sysinv DuplicateOptError: duplicate option: fatal_deprecations Solution to fix: Remove the following lines from ../stx-config/sysinv/sysinv/sysinv/sysinv/openstack/common/log.py:
@@ -137,9 +137,11 @@ log_opts = [
     cfg.BoolOpt('publish_errors',
                 default=False,
                 help='publish error events'),
-    cfg.BoolOpt('fatal_deprecations',
-                default=False,
-                help='make deprecations fatal'),
ISSUE 2: sysinv-agent failed to start +functions-common:service_check:1542 for service in '${ENABLED_SERVICES//,/ }' +functions-common:service_check:1544 sudo systemctl is-enabled devstack at sysinv-agent.service enabled +functions-common:service_check:1548 sudo systemctl status devstack at sysinv-agent.service --no-pager ● devstack at sysinv-agent.service - Devstack devstack at sysinv-agent.service Loaded: loaded (/etc/systemd/system/devstack at sysinv-agent.service; enabled; vendor preset: enabled) Active: failed (Result: exit-code) since Mon 2018-12-03 20:19:33 EST; 5s ago Main PID: 32164 (code=exited, status=2) Dec 03 20:19:33 ubuntu16045server1 systemd[1]: Started Devstack devstack at sysinv-agent.service. Dec 03 20:19:33 ubuntu16045server1 systemd[1]: devstack at sysinv-agent.service: Main process exited, code=exited, status=2/INVALIDARGUMENT Dec 03 20:19:33 ubuntu16045server1 systemd[1]: devstack at sysinv-agent.service: Unit entered failed state. Dec 03 20:19:33 ubuntu16045server1 systemd[1]: devstack at sysinv-agent.service: Failed with result 'exit-code'. Root cause: sysinv/sysinv-agent/sysinv-agent refers to various Titanium-specific paths and files, e.g.:
. /etc/init.d/functions
. /etc/build.info

PLATFORM_CONF="/etc/platform/platform.conf"
NODETYPE=""
DAEMON_NAME="sysinv-agent"
SYSINVAGENT="/usr/bin/${DAEMON_NAME}"
SYSINV_CONF_DIR="/etc/sysinv"
SYSINV_CONF_FILE="${SYSINV_CONF_DIR}/sysinv.conf"
SYSINV_CONF_DEFAULT_FILE="/opt/platform/sysinv/${SW_VERSION}/sysinv.conf.default"
SYSINV_READY_FLAG=/var/run/.sysinv_ready
Solution to fix: There's no easy fix to this one. It looks like the files for the sysinv-agent service need to be recreated, in a way similar to sysinv-api. 
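For reference, if those files get recreated, a minimal devstack-style starter for sysinv-agent could mirror what start_sysinv_api does. This is only a sketch under assumptions: run_process is the standard devstack helper, while the function name and the SYSINV_BIN_DIR / SYSINV_CONF_FILE variables are hypothetical placeholders, not actual stx-config plugin code:

    # hypothetical sketch, mirroring the start_sysinv_api convention in lib/stx-config
    function start_sysinv_agent {
        # devstack will run this under systemd as devstack at sysinv-agent.service
        run_process sysinv-agent "$SYSINV_BIN_DIR/sysinv-agent --config-file $SYSINV_CONF_FILE"
    }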
Some minor issues including: - Pycrypto is required by controllerconfig/common/dcmanager.py but not installed - HOST_IP needs to be defined at a very early stage Thanks! Marvin -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Tue Dec 4 15:23:06 2018 From: scott.little at windriver.com (Scott Little) Date: Tue, 4 Dec 2018 10:23:06 -0500 Subject: [Starlingx-discuss] docker hub Message-ID: <194fda2b-386b-a348-9af7-bc51ff84bd14@windriver.com> Whoever created the 'starlingx' user on docker hub, please contact me. Scott Little From erich.cordoba.malibran at intel.com Tue Dec 4 16:12:57 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Tue, 4 Dec 2018 16:12:57 +0000 Subject: [Starlingx-discuss] SCM: repo projects revision control In-Reply-To: <0D7994A90DD70040A9F5E77C4D23C57D50F0FC68@SHSMSX104.ccr.corp.intel.com> References: <0D7994A90DD70040A9F5E77C4D23C57D50F0FC68@SHSMSX104.ccr.corp.intel.com> Message-ID: <071e4856ef5407ca49f4bcad274de032acac335f.camel@intel.com> Hi Liu, I think there are two ways to check out a specific revision in all projects: 1) By using a tag/branch: This should exist in all the repositories. For example, you can get the code for the r/2018.10 release by setting the default revision, e.g. something like <default revision="refs/heads/r/2018.10" remote="starlingx"/> (the exact remote and ref names may differ). Which BTW doesn't exist in the default.xml for r/2018.10, we need to fix that. 2) Setting a revision per project: This is painful, but you can set a revision for every repository, e.g. <project revision="<tag-or-sha>" name="stx-integ" path="cgcs-root/stx/stx-integ"/> (illustrative values). Also, you can create/modify your own manifest.xml with the repo tool, using repo start and other subcommands[0]. The only requirement is that the manifest.xml should live in a git repository to use repo init. I hope this can help. -Erich [0] https://source.android.com/setup/develop/repo On Tue, 2018-12-04 at 01:11 +0000, Liu, Changcheng wrote: > Hi Scott & Dean, > Is there any way for developers to get exactly the same stx source > code after running the below commands? > repo init -u https://git.starlingx.io/stx-manifest -m default.xml > repo sync > > Currently, some projects are set to be fixed tag/version in > .repo/manifests/default.xml e.g. kubernetes.git > 1 +--- 5 lines: -------- > ------------------------ > 6 > 7 > 8 > 9 +-- 93 lines: ------ > 102 > 103 name="kubernetes.git" path="cgcs-root/stx/git/kubernetes"/> > 104 name="dashboard.git" path="cgcs-root/stx/git/kube-dashboard"/> > 105 +-- 5 lines: revision="refs/tags/1.14.8" name="dns.git" path="cgcs-root/stx/git/kube-dns"/>-- > > However, some projects are always moving ahead without setting > “revision” item. > > B.R. > Changcheng > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From abraham.arce.moreno at intel.com Tue Dec 4 16:47:31 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Tue, 4 Dec 2018 16:47:31 +0000 Subject: [Starlingx-discuss] API requests: stx-config Message-ID: stx-config team, Based on some time spent within stx-config and with the objective to align our REST API Documentation with our REST APIs, we are kindly requesting your comments for questions "?" under each section [ Section ] [Sub Section] Please assume: - The required X-Auth-Token is in place to authenticate, only URLs might be shown. - StarlingX is configured as Standard Controller: 2 Controllers, 2 Computes. 
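As a reference for reproducing the queries below, a minimal sketch, assuming admin credentials are sourced and using the OAM floating address of our setup (10.10.10.2; yours will differ):

    TOKEN=$(openstack token issue -f value -c id)
    curl -s -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:6385/v1/ | python -m json.tool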
[ Project Information ] The mismatch between documentation [0] and information via API Query is addressed under [3] and [4]. ? Heads Up! The description includes the word "interfaces"; however, as you will find below, "Interfaces" is also listed in the documentation but not intuitively found as a REST method under the API query output. More information below. [ v1/ ] Here we are showing 3 different views of what we are seeing within the stx-config project: - Our initial "Migration WADL to RST", see history here [1] - What we have documented in our "Current Official API Documentation" pages [0] - What the "API Query Output" is actually showing with curl -i http://10.10.10.2:6385/v1/... [ v1/ ] [ Migration WADL to RST ] Migration analysis from WADL to RST format gave us the REST methods below to include; we are adding in the second column what seems to be the match for the valid API REST methods: System > isystems Clusters > clusters Interfaces > ? Partitions > ! /v1/ihosts/{host_id}/partitions || /v1/partitions/{partition_id} Volume Groups > ! /v1/ihosts/{host_id}/ilvgs || /v1/ilvgs/{volumegroup_id} Physical Volumes > ! /v1/ihosts/{host_id}/ipvs || /v1/ipvs/{physicalvolume_id} Ceph Storage Functions > ! /v1/ihosts/{host_id}/istors || /v1/istors/{stor_id} Profiles > iprofile DNS > idns NTP > intp External OAM > iextoam Infrastructure Subnet > iinfra DRBD Configuration > drbdconfig SNMP Communities > icommunity SNMP Trap Destinations > itrapdest Devices > ! /v1/devices/{device_id} || /v1/ihosts/{host_id}/pci_devices Service Parameter > service_parameter SDN Controllers > sdn_controller Remote Logging > remotelogging Networks > networks Address Pools > addrpools Addresses > addresses Routes > ! /v1/ihosts/{host_id}/routes Storage Backends > storage_backend Storage Tiers > ! storage_tiers Controller Filesystem > controller_fs Ceph Monitors > ceph_mon System Certificate Configuration > ! certificate Custom Firewall Rules > firewallrules ? Are all the names and API REST methods correctly matched? ? Are all the valid API REST method names correct? ? "Interfaces" is listed under the v1/ API Version output [2] as an expected service but a REST method match was not found. Are we talking about "Interfaces" as one of the following: 1) Is it the "interface_networks" REST method? 2) Or found under "Profiles" as described under its description: "...This includes interface profiles..." 3) Or found under "SDN Controllers" as described under its description: "...SDN manager interface..." 4) Or as simple as "Networks" interfaces? [ v1/ ] [ Current Official API Documentation ] The following API REST methods documented under [0] give valid API output: - System - Clusters - DNS - NTP - External OAM - Infrastructure Subnet - DRBD Configuration - SNMP Communities - SNMP Trap Destinations - Service Parameter - SDN Controllers - Remote Logging - Networks - Address Pools - Storage Backends - Storage Tiers - Controller Filesystem - Ceph Monitors - System Certificate Configuration - Custom Firewall Rules - Partitions - Volume Groups - Physical Volumes - Ceph Storage Functions - Devices - Addresses - Routes The following API REST method documented under [0] has an invalid name: - Profiles Documentation pointing to: /v1/iprofiles and a valid v1/ endpoint: http://10.10.10.2:6385/v1/iprofile ? Is this a valid Documentation change from "iprofiles" to "iprofile"? 
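A quick check of that rename, assuming the token from the sketch above (the failure for the plural form is my expectation, not captured output):

    curl -i -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:6385/v1/iprofile   # valid endpoint
    curl -i -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:6385/v1/iprofiles  # expected to fail, e.g. 404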
[ v1/ ] [ API Query Output ] Based on our "[Starlingx-discuss] API requests: stx-ha" [3] we learned the following API REST methods from "System Inventory API v1" are assigned to stx-ha: - services - servicenodes - service_groups And in our "[Starlingx-discuss] API requests: stx-metal" [4] the following are assigned to stx-metal: - lldp_neighbours - ihosts - icpu - lldp_agents And now these API REST methods are assigned to stx-config: - isystems - clusters - ! /v1/ihosts/{host_id}/partitions || /v1/partitions/{partition_id} - ! /v1/ihosts/{host_id}/ilvgs || /v1/ilvgs/{volumegroup_id} - ! /v1/ihosts/{host_id}/ipvs || /v1/ipvs/{physicalvolume_id} - ! /v1/ihosts/{host_id}/istors || /v1/istors/{stor_id} - iprofile - idns - intp - iextoam - iinfra - drbdconfig - icommunity - itrapdest - ! /v1/devices/{device_id} || /v1/ihosts/{host_id}/pci_devices - service_parameter - sdn_controller - remotelogging - networks - addrpools - addresses - ! /v1/ihosts/{host_id}/routes - storage_backend - ! storage_tiers - controller_fs - ceph_mon - ! certificate - firewallrules Leaving the following assigned to other StarlingX components, more to come once we review the remaining StarlingX projects: - links - storage_file - storage_lvm - interface_networks - id - ptp - media_types - upgrade - imemory - storage_ceph_external - health - license - storage_ceph - storage_external - iuser - helm_charts - inode ? Do we need another level of review? ? Should we target an update to the documentation in terms of the number of services we are documenting, comparing the 3 perspectives? ? Is there anything we need to take care of? Thanks for your initial support. [0] https://docs.starlingx.io/api-ref/stx-config/index.html [1] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Analysis [2] https://docs.starlingx.io/api-ref/stx-config/api-ref-sysinv-v1-config.html?expanded=shows-details-for-configuration-api-v1-detail#api-versions [3] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-November/001868.html [4] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-November/002032.html From ildiko.vancsa at gmail.com Tue Dec 4 18:03:49 2018 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 4 Dec 2018 10:03:49 -0800 Subject: [Starlingx-discuss] Berlin Summit recap on Edge Message-ID: <2861C382-FA21-470C-AF17-239BFF6DB90D@gmail.com> Hi, I hope those of you who came to the Berlin Summit had a great event, a good trip home, got some rest and caught up with work, and that those who went on vacation had a great time. Hereby I would like to give a short summary to everyone, either as a reminder or as a package to help you catch up briefly with what happened around edge in Berlin. As you most probably know we had a dedicated track for Edge Computing with numerous presentations and panel discussions at the conference, which were recorded. If you would like to catch up or see some sessions again please visit the OpenStack website[1] for the videos. In parallel with the conference, the Forum was taking place, with 40-minute-long working sessions for developers, operators and users to meet and discuss new requirements, challenges and pain points to address. We had quite a few sessions around edge; you'll find a brief recap of them here. 
I would like to start with the sessions of the OSF Edge Computing Group, also known as the Edge WG. If you are new to the activities of this group you may want to read my notes[2] on the Denver PTG to catch up on the community's and the group's work on defining reference architectures for edge use cases. During the Forum we continued to discuss the Minimum Viable Product (MVP) architecture topic[3] that we started at the last PTG. As the group and attendees had a limited amount of time available for the topic we concluded on some basics and agreed on action items to follow up on. The session attendees agreed that the MVP architecture is an important first step and we will keep its scope limited to the current OpenStack services listed on the wiki capturing the details[4]. While there is interest in adding further services such as Ironic or Qinling, we will discuss those in this context in upcoming phases. The Edge WG is actively working on capturing edge computing use cases in order to understand the requirements better and to work together with OpenStack and StarlingX projects on design and implementation work based on the input the group has been collecting[5]. We had a session about use cases[6] to identify the ones the group should focus on with immediate action; vRAN and edge cloud, uCPE and industrial control drew the most interest in the room to work on. The group is actively working on mapping the MVP architecture options to the use cases identified by the group and on getting more details on the ones we identified during the Forum session. If you are interested in participating in these activities please see the details[7] of the group's weekly meetings. While the MVP architecture work is focusing on a minimalistic view to provide a reference architecture with the covered services prepared for edge use cases, there is work ongoing in parallel in several OpenStack projects. You can find notes on the Forum etherpads[8][9][10] on the progress of projects such as Cinder, Ironic, Kolla-Ansible and TripleO. The general consensus of the project discussions was that the services are in good shape where edge requirements are concerned and there is a good view of the way forward, like improving availability zone functionality or remote management of bare metal nodes. With all the work ongoing in the projects as well as in the Edge WG, the expectation is that we will be able to easily move to the next phases of the MVP architectures work when the working group is ready. Both the group and the projects are looking for contributors, both for identifying further requirements and use cases and for doing the implementation and testing work. Testing is an area that will be crucial for edge and we are looking into both cross-project and cross-community collaborations for that, for instance with OPNFV and Akraino. While we didn't have a Keystone-specific Forum session for edge this time, a small group of people came together to discuss next steps with federation. We are converging towards some generic feature additions to Keystone based on the Athenz plugin from Oath. You can read a Keystone summary[11] for the week in Berlin from Lance Bragstad, including plans related to edge. We had a couple of sessions at the Summit about StarlingX, both in the conference part as well as the Forum. You can check out videos such as the project update[12] and other relevant sessions[13] among the Summit videos. 
As the StarlingX community is working closely with the Edge WG as well as the relevant OpenStack project teams, at the Forum we had sessions that focused on some specific items for planning future work and understanding requirements better for the project. The team had a session on IoT[14] to talk about the list of devices to consider and the requirements systems need to address in this space. The session also identified a collaboration option between StarlingX, IoTronic[15] and Ironic when it comes to realizing and testing use cases. With more emphasis being put on containers at the edge, the team also had a session on containerized application requirements[16] with a focus on Kubernetes clusters. During the session we talked about areas like container networking, multi-tenancy, persistent storage and a few more to see what options we have for them and what is missing today to have the particular area covered. The StarlingX community is focusing more on containerization in the upcoming releases, so the feedback and ideas from the session are very important to have. One more session to mention is the 'Ask me anything about StarlingX' one at the Forum, where experts from the community offered help in general to people who are new and/or have questions about the project. The session was well attended, and the questions focused more on practical angles like footprint or memory consumption, plus a few more specific questions that went beyond a generic interest in and overview of the project. These were the activities around edge at a high level, without going into too much detail on any of the topics as that would be a way longer e-mail. :) I hope you found interesting topics and useful pointers for more information to catch up on. If you would like to participate in these activities you can dial in to the Edge WG weekly calls[17] or the weekly Use cases calls[18], or check the StarlingX sub-project team calls[19] and further material on the website[20] about how to contribute, or jump on IRC for OpenStack project team meetings[21] in the area of your interest. Please let me know if you have any questions about any of the above items. 
:) Thanks and Best Regards, Ildikó (IRC: ildikov) [1] https://www.openstack.org/videos/ [2] http://lists.openstack.org/pipermail/edge-computing/2018-September/000432.html [3] https://etherpad.openstack.org/p/BER-MVP-architecture-for-edge [4] https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures [5] https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases [6] https://etherpad.openstack.org/p/BER-edge-use-cases-and-requirements [7] https://wiki.openstack.org/wiki/Edge_Computing_Group [8] https://etherpad.openstack.org/p/BER-Cinder_at_the_Edge [9] https://etherpad.openstack.org/p/BER-ironic-edge [10] https://etherpad.openstack.org/p/BER-tripleo-undercloud-edge [11] https://www.lbragstad.com/blog/openstack-summit-berlin-recap [12] https://www.openstack.org/videos/berlin-2018/starlingx-project-update-6-months-in-the-life-of-a-new-open-source-project [13] https://www.openstack.org/videos/search?search=starlingx [14] https://etherpad.openstack.org/p/BER-integrating-iot-device-mgmt-with-edge-cloud [15] https://github.com/openstack/iotronic [16] https://etherpad.openstack.org/p/BER-containerized-app-reqmts-on-kubernetes-at-edge [17] https://www.openstack.org/assets/edge/OSF-Edge-Computing-Group-Weekly-Calls.ics [18] https://www.openstack.org/assets/edge/OSF-Edge-WG-Use-Cases-Weekly-Calls.ics [19] https://wiki.openstack.org/wiki/Starlingx/Meetings [20] https://www.starlingx.io [21] http://eavesdrop.openstack.org From austin.sun at intel.com Tue Dec 4 02:37:00 2018 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 4 Dec 2018 02:37:00 +0000 Subject: [Starlingx-discuss] Unable to bring up controller-0 In-Reply-To: <2A55719D-5062-4AB2-B65D-AA3D5914A7FD@jxresearch.com> References: <840F9C42-9C1E-4E5F-A7D3-65BF54C4B715@jxresearch.com> <3FE39BF2-C7EF-4274-BAB8-6659C6A59835@jxresearch.com> <210898B96CA058408C55992CCAD98676B9F7F4D8@ALA-MBD.corp.ad.wrs.com> <26257BBC-033D-4FAB-90A8-53241C1ED22D@jxresearch.com> <2A55719D-5062-4AB2-B65D-AA3D5914A7FD@jxresearch.com> Message-ID: Hi Xuyun: BCM5720 is not in the DPDK support list [1]. You can find the whole list at [2]. [1] http://doc.dpdk.org/guides/nics/bnxt.html [2] https://core.dpdk.org/supported/ Thanks. BR Austin Sun. From: 徐蕴 [mailto:xuyun at jxresearch.com] Sent: Tuesday, December 4, 2018 10:23 AM To: Sun, Austin Cc: MacDonald, Eric ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Unable to bring up controller-0 Hi Austin, I’m deploying an all-in-one node on a bare metal server (an old Dell server); maybe the NIC doesn’t support DPDK. 
I cannot see eno4 in the output: controller-0:/var/log/puppet# python /usr/share/openvswitch/scripts/dpdk-devbind.py --status Network devices using DPDK-compatible driver ============================================ 0000:02:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' drv=vfio-pci unused= Network devices using kernel driver =================================== 0000:01:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno1 drv=tg3 unused=vfio-pci *Active* 0000:01:00.1 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno2 drv=tg3 unused=vfio-pci 0000:02:00.0 'NetXtreme BCM5720 Gigabit Ethernet PCIe 165f' if=eno3 drv=tg3 unused=vfio-pci Other Network devices ===================== Crypto devices using DPDK-compatible driver =========================================== Crypto devices using kernel driver ================================== Other Crypto devices ==================== Eventdev devices using DPDK-compatible driver ============================================= Eventdev devices using kernel driver ==================================== Other Eventdev devices ====================== Mempool devices using DPDK-compatible driver ============================================ Mempool devices using kernel driver =================================== Other Mempool devices ===================== controller-0:/var/log/puppet# ip a 1: lo: mtu 1500 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet 127.168.204.3/24 brd 127.168.204.255 scope host lo valid_lft forever preferred_lft forever inet 169.254.202.2/24 scope global lo valid_lft forever preferred_lft forever inet 127.168.204.2/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.5/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.6/24 scope host secondary lo valid_lft forever preferred_lft forever inet 127.168.204.8/24 scope host secondary lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eno1: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether b0:83:fe:d6:78:fb brd ff:ff:ff:ff:ff:ff inet 111.111.111.1/24 brd 111.111.111.255 scope global eno1 valid_lft forever preferred_lft forever inet6 fe80::b283:feff:fed6:78fb/64 scope link valid_lft forever preferred_lft forever 3: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether b0:83:fe:d6:78:fc brd ff:ff:ff:ff:ff:ff 4: eno3: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether b0:83:fe:d6:78:fd brd ff:ff:ff:ff:ff:ff 6: br-eno4: mtu 1500 qdisc noqueue state DOWN group default qlen 1000 link/ether 00:00:00:00:00:00 brd ff:ff:ff:ff:ff:ff inet6 fe80::b283:feff:fed6:78fe/64 scope link valid_lft forever preferred_lft forever 7: ovs-netdev: mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000 link/ether 3a:66:f9:32:13:a3 brd ff:ff:ff:ff:ff:ff 8: br-phy0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether e6:e9:8a:8a:8a:4b brd ff:ff:ff:ff:ff:ff inet6 fe80::e4e9:8aff:fe8a:8a4b/64 scope link valid_lft forever preferred_lft forever 9: lldpd2a87771-fd: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether 6a:c6:ce:ff:c0:07 brd ff:ff:ff:ff:ff:ff inet6 fe80::68c6:ceff:feff:c007/64 scope link valid_lft forever preferred_lft forever 10: br-int: mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000 link/ether 4a:0c:5c:b2:bf:4e brd ff:ff:ff:ff:ff:ff 
controller-0:/var/log/puppet# sudo ovs-vsctl show a98562f6-3223-4eab-a663-68cb4d8e9ece Manager "ptcp:6639:127.0.0.1" is_connected: true Bridge "br-phy0" Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port "lldpd2a87771-fd" Interface "lldpd2a87771-fd" type: internal Port "phy-br-phy0" Interface "phy-br-phy0" type: patch options: {peer="int-br-phy0"} Port "eth0" Interface "eth0" type: dpdk options: {dpdk-devargs="0000:02:00.1", n_rxq="1"} error: "Error attaching device '0000:02:00.1' to DPDK" Port "br-phy0" Interface "br-phy0" type: internal Bridge br-int Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port "int-br-phy0" Interface "int-br-phy0" type: patch options: {peer="phy-br-phy0"} Port br-int Interface br-int type: internal ovs_version: "2.9.0" BR, Xu Yun On Dec 4, 2018, at 10:13 AM, Sun, Austin > wrote: Hi xuyun: Eth0 is used for DPDK; it is a virtual port, not a real NIC name. Could you run "python /usr/share/openvswitch/scripts/dpdk-devbind.py --status" to check what type eno4 is? Are you deploying in a virtual machine with libvirt, or on a bare metal system? Thanks. BR Austin Sun. From: 徐蕴 [mailto:xuyun at jxresearch.com] Sent: Tuesday, December 4, 2018 9:31 AM To: MacDonald, Eric > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Unable to bring up controller-0 Hello, Thanks for Eric's hint. I suspected yesterday that this problem was related to the OVS bridge setup: controller-0:/var/log/puppet# ovs-vsctl show a98562f6-3223-4eab-a663-68cb4d8e9ece Manager "ptcp:6639:127.0.0.1" is_connected: true Bridge "br-phy0" Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port "lldpd2a87771-fd" Interface "lldpd2a87771-fd" type: internal Port "phy-br-phy0" Interface "phy-br-phy0" type: patch options: {peer="int-br-phy0"} Port "eth0" Interface "eth0" type: dpdk options: {dpdk-devargs="0000:02:00.1", n_rxq="1"} error: "Error attaching device '0000:02:00.1' to DPDK" Port "br-phy0" Interface "br-phy0" type: internal Bridge br-int Controller "tcp:127.0.0.1:6633" is_connected: true fail_mode: secure Port "int-br-phy0" Interface "int-br-phy0" type: patch options: {peer="phy-br-phy0"} Port br-int Interface br-int type: internal ovs_version: "2.9.0" I don’t know why ‘eth0’ was added into the br-phy0 bridge, since there is no ‘eth0’ in my machine. I configured an interface named ‘eno4’ for my data plane. 
[wrsroot at controller-0 log(keystone_admin)]$ system host-if-list -a 1 +--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+----------------------------+-------------------+ | uuid | name | class | type | vlan id | ports | uses i/f | used by i/f | attributes | provider networks | +--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+----------------------------+-------------------+ | 548477c1-038f-4280-a4b1-d7c11d724b09 | eno4 | data | ethernet | None | [u'eno4'] | [] | [] | MTU=1500,accelerated=False | providernet-a | | 5c4760f5-a32b-49b9-91d1-73a72fdcabfb | eno3 | None | ethernet | None | [u'eno3'] | [] | [] | MTU=1500 | None | | 7b85e213-f435-4693-a13a-364de1e06a35 | eno1 | platform | ethernet | None | [u'eno1'] | [] | [] | MTU=1500 | None | | 96e1e865-2f7c-486e-8e9e-88fcb1c71686 | lo | platform | virtual | None | [] | [] | [] | MTU=1500 | None | | c87ccb52-a1c7-4332-87f9-5966e6599ee6 | eno2 | None | ethernet | None | [u'eno2'] | [] | [] | MTU=1500 | None | +--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+----------------------------+-------------------+ And the puppet log also seems to confirm that the OVS setup failed because of ‘eth0’: controller-0:/var/log/puppet# grep -r /var/log/puppet/ -e Error -e Warning /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:16.348 Notice: 2018-12-03 16:07:16 +0800 /Stage[main]/Platform::Vswitch::Ovs/Platform::Vswitch::Ovs::Port[eth0]/Exec[ovs-add-port: eth0]/returns: ovs-vsctl: Error detected while setting up 'eth0': Error attaching device '0000:02:00.1' to DPDK. See ovs-vswitchd log for details. /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:16.443 Error: 2018-12-03 16:07:16 +0800 ovs-ofctl add-flow br-phy0 dl_dst=01:80:c2:00:00:0e,dl_type=0x88cc,hard_timeout=0,idle_timeout=0,in_port=eth0,actions=output:lldpd2a87771-fd returned 1 instead of one of [0] /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:16.678 Error: 2018-12-03 16:07:16 +0800 /Stage[main]/Platform::Vswitch::Ovs/Platform::Vswitch::Ovs::Flow[eth0]/Exec[ovs-add-flow: eth0]/returns: change from notrun to 0 failed: ovs-ofctl add-flow br-phy0 dl_dst=01:80:c2:00:00:0e,dl_type=0x88cc,hard_timeout=0,idle_timeout=0,in_port=eth0,actions=output:lldpd2a87771-fd returned 1 instead of one of [0] /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.800 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Ceph::Post/File[/var/run/.ceph_started]: Skipping because of failed dependencies /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.806 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Post/Service[crond]: Skipping because of failed dependencies /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.817 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Compute::Post/File[/etc/platform/.initial_compute_config_complete]: Skipping because of failed dependencies /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.824 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Compute::Post/File[/var/run/.compute_config_complete]: Skipping because of failed dependencies /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.833 Warning: 2018-12-03 16:07:28 +0800 
/Stage[post]/Platform::Config::Post/File[/etc/platform/.initial_config_complete]: Skipping because of failed dependencies /var/log/puppet/2018-12-03-08-05-45_compute/puppet.log:2018-12-03T08:07:28.839 Warning: 2018-12-03 16:07:28 +0800 /Stage[post]/Platform::Config::Post/File[/etc/platform/.config_applied]: Skipping because of failed dependencies Any further suggestions on how to fix this problem? Thank you. Br, Xu Yun On Dec 3, 2018, at 11:02 PM, MacDonald, Eric > wrote: 300.004 is a dependency alarm that should go away once controller-0 is unlocked-enabled. Looks like an All-In-One system you're trying to provision. Therefore this could be a controller or compute function configuration error. I would look for the words Error and Warning in the puppet logs and see what that shows. sudo grep -r /var/log/puppet -e Error -e Warning Eric. From: 徐蕴 [mailto:xuyun at jxresearch.com] Sent: Monday, December 03, 2018 4:26 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Unable to bring up controller-0 Hi, I’m trying to deploy a simplex on a bare metal server. After unlocking controller-0 following the installation_guide, two error events are reported and this node was degraded: 200.011 controller-0 experienced a configuration failure. host=controller-0 300.004 No enabled compute host with connectivity to provider network. service=networking.providernet=e542cf30-a07d-41f7-be7b-cc5a4e14b0d7 Would you please give me some hints on how to debug this problem? Thank you! Br, Xu Yun -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiaoyan.li at intel.com Tue Dec 4 07:11:42 2018 From: xiaoyan.li at intel.com (Li, Xiaoyan) Date: Tue, 4 Dec 2018 07:11:42 +0000 Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching In-Reply-To: References: <2588653EBDFFA34B982FAF00F1B4844EBB25D46B@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5F8E8F@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5FAA8C@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Brent and Ovidiu, Could you kindly help to clarify the following concern? 5. needs the shadow tenant created before use => puppet / helm chart update (for --kubernetes) #5: This is mandatory, otherwise cinder's caching won't work at all. [Li, Xiaoyan] The cinder_internal_tenant_project_id and cinder_internal_tenant_user_id have to be set before enabling image caching, as this user manages the cached image volumes. Why can't it work with Kubernetes? Best wishes Lisa From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Tuesday, November 27, 2018 10:30 AM To: Poncea, Ovidiu ; Rowsell, Brent ; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Ovidiu, As far as I'm concerned, Cinder image cache is a cache mechanism. So overall, users don't need to clean it manually. Currently, when the capacity for the cache is full, it removes the cached image volumes with an LRU policy. For more details, please see the following comments. Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Monday, November 26, 2018 11:15 PM To: Li, Xiaoyan >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Cc: Miller, Frank > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Lisa, Yeah, even if we refactor raw caching, it's most likely going to be rejected by upstream due to replicating existing functionality in cinder. 
Yet, imho, we should have a working replacement before retiring raw caching and we should have some agreed mitigations in place for cinder's disadvantages (if we can't live with them, Brent please help here). See my questions below & inline. Also, please correct the text below if I made wrong assumptions, as you know cinder's caching better than me. Short comparison of the two: Raw caching Uses the --raw-cache cli option in Glance to trigger a background process that converts the image. Once cached, new volumes get created on Ceph instantly by leveraging Ceph's copy-on-write. Cache is allocated from the "images" RBD pool. Advantages: - user can select the images they want to cache - user can monitor the progress and can check used space for each image (cli + dashboard). - on image delete the cache is also cleared if there is no volume using it. Else it is cleared with the last volume keeping the cache data in-use. - no wasted space - complete control by user Disadvantages: - There is almost no way this is going to be accepted upstream. Maybe, yet with small hopes, if we refactor everything as a 3rd party glance feature, but we may need to push some hooks upstream to make it work. - Ceph only Cinder's caching Uses a "shadow" tenant to store shadow volumes. Cache is created with the first volume from that image. The next volume will be created instantly by leveraging copy-on-write if the backend provides support for it (e.g. on Ceph). Space for the cache is allocated on one of the cinder backends and has a configurable threshold. Advantages: - already upstream - works with all backends - all cached images are displayed for the "admin" if he changes to the shadow tenant and lists volumes. - admin (not user, only admin) can free cache by deleting volumes of the shadow tenant (need confirmation) Disadvantages: 1. it's either globally enabled or disabled => needs sysinv configuration option 2. it caches every image. No way to select what image to cache nor with what backend (question below) => space waste 3. cached images are not removed. It needs to hit a space provision to do that, and it will remove the oldest image, although that image cache may be important. 4. less control: Images are cached on first use and are removed when provisioned space hits the threshold. This means that the user does not have control over what images are converted and what images are in cache. So, sometimes volume creation works fast, other times it's slow. This can be a problem especially on parallel volume creation through helm charts as, if the image did not have a cache, then stack creation may timeout. Another problem may be if the cache is small and images get rotated in the cache => we need alarms when the threshold is hit. 5. needs the shadow tenant created before use => puppet / helm chart update (for --kubernetes) Mitigations of disadvantages above - possible solutions and alternatives: #1: Customers may not want to enable it, we should allow customers to choose when to enable it (it can be added as a custom capabilities parameter to "system storage-backend-add/system storage-backend-modify") [Li, Xiaoyan] Currently image cache can be enabled/disabled per backend storage. https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html I think it is enough. 
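For reference, a minimal sketch of the cinder.conf settings involved (the backend section name and the size/count values here are illustrative, not from an actual StarlingX config):

    [DEFAULT]
    # the "shadow" tenant that owns the cached image volumes
    cinder_internal_tenant_project_id = <shadow tenant project uuid>
    cinder_internal_tenant_user_id = <shadow tenant user uuid>

    [ceph]
    # enable the image-volume cache on this backend and cap its footprint
    image_volume_cache_enabled = True
    image_volume_cache_max_size_gb = 200
    image_volume_cache_max_count = 50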
#2: No workaround comes to my mind - we can probably live with it #3: A simple solution would be to implement a cron job to clean the cache periodically, or a more elaborate solution would be to remove the cache with the last volume that used that image (need a cinder upstream feature for it) [Li, Xiaoyan] From the docs, currently Cinder removes cached images from least used to recently used. Every time Cinder uses a cached image volume, it updates its last_used field. This is the normal policy for data eviction. As it is a cache and should be transparent for users, why do we need users to evict data? #4: Two options come to mind: 1. To get some control we should not limit the cache size, given that we do proper cleanup in #3. [Li, Xiaoyan] Even if we do cleanup, the limit can't be removed. 2. If we limit the cache, we have to make the limit configurable and raise an alarm once the cache gets near full so that admin takes preventive measures and either increases provisioned space or #5: This is mandatory, otherwise cinder's caching won't work at all. [Li, Xiaoyan] The cinder_internal_tenant_project_id and cinder_internal_tenant_user_id have to be set before enabling image caching, as this user manages the cached image volumes. Why can't it work with Kubernetes? Questions, (maybe if you get time to play with cinder's caching to get a better understanding): 1. How does cinder's caching behave when multiple volumes are created in parallel from a newly created image? Will it wait for the cache to be created before creating the volumes or just start all volume creations in parallel? [Li, Xiaoyan] Inside a volume service it is sequential to run volume creation tasks. But as we have HA. For image cache, it creates an entry in the cinder db at first and then creates volumes. The primary key is not image_id+backend_storage. It is possible that several entries or volumes will be created in the same backend storage. 2. What is the cinder backend that stores the cache? If it is the one used by the volume, will this lead to multiple cached volumes of the same image? Can we choose the backend? [Li, Xiaoyan] We can set whether the cache is enabled per backend. If users create a volume in backend ceph from an image, a cached image volume will be created in Ceph if it is enabled. Next time if users create a volume in IBM storage from the same image, it will create another cached image volume in IBM storage if it is enabled. 3. How is cache space provisioned? Do we need to restart cinder-volume for changes to take effect? [Li, Xiaoyan] These settings need to be done in the config file, so the cinder-volume services need to be restarted once the config is changed. 4. Is admin able to clean up individual cached images in the shadow tenant? Maybe also user? [Li, Xiaoyan] Admin and shadow tenants can both do cleanup. Ovidiu ________________________________ From: Li, Xiaoyan [xiaoyan.li at intel.com] Sent: Thursday, November 22, 2018 2:41 AM To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Brent and Ovidiu, As this email has a long history, I re-summarize the raw cache in StarlingX and the Cinder upstream image cache. Please vote on whether we can abandon the raw cache in StarlingX. StarlingX Create an image cache in ceph when Glance creates an image. And delete the cached image in ceph when deleting the original image in Glance. 
Cinder:
When creating a volume from an image in a backend storage for the first time, Cinder creates a volume from this image and uses it as the image cache. So next time, if users create another volume from this image in the same backend storage, Cinder first finds the cached image volume and clones a new volume from it. Cinder allows capacity configuration for cached images. If the space is used up, Cinder will evict the cached image volumes.

From my viewpoint, the Cinder image cache can achieve the same functionality as Raw cache in StarlingX with more enhancements. It works for all Cinder supported backend storage, not just for Ceph.

Best wishes
Lisa

From: Li, Xiaoyan
Sent: Monday, November 19, 2018 9:44 AM
To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io'
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Hi Ovidiu,

A cached image (new volume from this image) is created on a storage backend when Cinder first creates a volume in the same backend storage from the image. All the information is stored in Cinder, including volume id, image id etc.
https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1368
https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L82

A cached image is deleted when the configured space for the cache is used up. So currently Cinder doesn't delete the cached image volumes even if the image is deleted. But this can be an enhancement of the current cinder image cache.
https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L117
https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1351

Best wishes
Lisa

From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com]
Sent: Friday, November 16, 2018 4:57 PM
To: Li, Xiaoyan; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io'
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Hi Li,

Quick question: Is the cache going to be freed when an image is deleted from glance? It would be a waste to cache images that are no longer needed.

Thanks,
Ovidiu
________________________________
From: Li, Xiaoyan [xiaoyan.li at intel.com]
Sent: Tuesday, November 13, 2018 9:19 AM
To: Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io'
Subject: Re: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Hi,

About the raw cache function in StarlingX Cinder and Glance, I would like to remove it as Cinder has a similar function. Please see the following details. If I want to remove the function in StarlingX, there are two methods:
1. Submit a patch to revert the changes in Glance and Cinder.
2. Ignore these patches when upgrading StarlingX/Cinder to a new Cinder release.
Which way do we prefer?

Best wishes
Lisa

From: Li, Xiaoyan
Sent: Thursday, September 20, 2018 10:17 AM
To: Rowsell, Brent; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Hi, Brent

The following is the mechanism of the Cinder volume cache.

Creation of a cached volume: It creates a cached volume in the backend storage when creating from an image.
1. Create_from_image:
https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L890
2. Return image cache entry: If it does not exist, it creates a new entry.
https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L746
3.
Create a new image-volume and cache entry for it:
https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L872

Use a cached volume when creating a volume:
https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L723-L735

Delete the cached volume: When the capacity and number of cache entries exceed the specified limits, it deletes cache entries (cached volumes).
https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L164

Best wishes
Lisa

From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com]
Sent: Thursday, September 6, 2018 10:02 AM
To: Li, Xiaoyan; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

We would need to review this feature to ensure it provides equivalent functionality first. If it does, great, we can look at reverting and enabling this cinder functionality.

Brent

From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com]
Sent: Wednesday, September 5, 2018 9:59 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Hi all,

This email is about the Raw caching function in StarlingX. This feature caches an image in a backend storage like Ceph when we first create a volume in this backend storage.
In fact, Cinder upstream already has a similar function in the Pike release.
https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html
So I want to revert the Raw caching function in StarlingX, and use the Cinder generic image cache instead. The problem is that we need to update the Cinder config in StarlingX. Any comments?

Best wishes
Lisa
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cindy.xie at intel.com Tue Dec 4 09:23:39 2018
From: cindy.xie at intel.com (Xie, Cindy)
Date: Tue, 4 Dec 2018 09:23:39 +0000
Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack Distro meeting
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DED2BE@SHSMSX104.ccr.corp.intel.com>

Agenda for 12/5 meeting:
1. CentOS 7.6 upgrade planning (Saul)
2. non-Openstack patch refactoring status (Zhipeng)
3. Qemu 3.0 upgrade status (Ghada/Jim)
4. Ceph upgrade status update (Vivian/Dehao)
5. Opens (all)
Please let me know if we'd like to cover other topics as well. Thx. - cindy

-----Original Appointment-----
From: Xie, Cindy
Sent: Monday, November 5, 2018 2:27 PM
To: Xie, Cindy; Wold, Saul; 'Rowsell, Brent'; Jones, Bruce E; Troyer, Dean; 'Khalil, Ghada'; Waheed, Numan; Lin, Shuicheng; Zhu, Vivian; Shang, Dehao; Liu, ZhipengS; Hu, Yong; Sun, Austin; Somerville, Jim; starlingx-discuss at lists.starlingx.io
Cc: Perez Carranza, Jose; 'Hellmann, Gil'; 'Waines, Greg'; 'Chen, Jacky'; Armstrong, Robert H; Perez Rodriguez, Humberto I; Martinez Landa, Hayde; Martinez Monroy, Elio; Hu, Wei W; 'Seiler, Glenn'; 'Eslimi, Dariush'; Gomez, Juan P; Lara, Cesar; 'Young, Ken'; Arce Moreno, Abraham; Cobbley, David A
Subject: Weekly StarlingX non-OpenStack Distro meeting
When: Wednesday, December 5, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada).
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
.
Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

From abraham.arce.moreno at intel.com Tue Dec 4 18:26:54 2018
From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham)
Date: Tue, 4 Dec 2018 18:26:54 +0000
Subject: [Starlingx-discuss] API requests: stx-fault
Message-ID:

stx-fault team,

Based on some time spent within stx-fault, and with the objective to align our REST API Documentation [0] with our REST APIs, we are kindly requesting your comments for the questions "?" under each section [ Section ] [Sub Section]

Please assume:
- The required X-Auth-Token is in place to authenticate; only URLs might be shown.
- StarlingX is configured as Standard Controller: 2 Controllers, 2 Computes.

[ Project Information ]

When we look at the name and description reported out by curl -i http://10.10.10.2:18002/ there is a mismatch between the documentation [1] and the information via API query:

API Documentation:
Name: stx-fault API
Description: StarlingX Fault API allows for the management of physical servers. This includes inventory collection and configuration of hosts, ports, interfaces, CPUs, disk, memory, and system configuration. The API also supports the configuration of the cloud's SNMP interface.

Source Code via API Query:
Name: Fault Management API
Description: Fault Management is an OpenStack project which provides REST API services for alarms and logs.

? Can you please let us know where the modifications are required? API Documentation or Source Code?

[ v1/ ]

Here we are showing 3 different views of what we are seeing within the stx-fault project:
- Our initial "Migration WADL to RST", see history here [2]
- What we have documented in our "Current Official API Documentation" pages [0]
- What the "API Query Output" is actually showing with curl -i http://10.10.10.2:18002/v1/...

[ v1/ ] [ Migration WADL to RST ]

Migration analysis from WADL to RST format gave us the REST methods below to include; we are adding in the second column what seems to be the match for the valid API endpoint name:
Alarms > alarms
Event Log > event_log
Event Suppression > event_suppression
? Are all the names and API nodes correctly matched?

From this same output, "links" has a reference to:
"http://www.windriver.com/developer/fm/dev/api-spec-v1.html"
? Does this reference need to change to: https://docs.starlingx.io/api-ref/stx-fault/index.html

[ v1/ ] [ Current Official API Documentation ]

Current Official API documentation [3] includes the following 3 REST API methods under "API Versions" v1/ details:
- Alarms: http://10.10.10.2:6385/v1/alarms
- Event Log: http://10.10.10.2:6385/v1/event_log
- Event Suppression: http://10.10.10.2:6385/v1/event_suppression

[ v1/ ] [ API Query Output ]

API query output shows these API REST methods:
- alarms
- event_log
- event_suppression

? Do we need another level of review?
? Is there anything we need to take care of?

Thanks for your initial support.
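For anyone reproducing the queries above, a sketch of the probes (token handling matches the X-Auth-Token assumption stated at the top of this mail; IP and port are the ones given above):

    export TOKEN=...   # from POST /v2.0/tokens
    curl -i -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:18002/
    curl -i -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:18002/v1/
    curl -i -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:18002/v1/alarms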
[0] https://docs.starlingx.io/api-ref/stx-fault/
[1] https://docs.starlingx.io/api-ref/stx-fault/api-ref-fm-v1-fault.html?expanded=lists-information-about-fault-management-api-versions-detail#api-versions
[2] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Analysis
[3] https://docs.starlingx.io/api-ref/stx-fault/api-ref-fm-v1-fault.html?expanded=lists-information-about-fault-management-api-versions-detail,shows-details-for-fault-management-api-v1-detail#api-versions

From scott.little at windriver.com Tue Dec 4 18:29:12 2018
From: scott.little at windriver.com (Scott Little)
Date: Tue, 4 Dec 2018 13:29:12 -0500
Subject: [Starlingx-discuss] [Container] Public docker registry
Message-ID: <38d5e899-8741-18f9-88a7-47153c34f982@windriver.com>

Here is my proposal for the StarlingX docker repository.

*Docker repository location*
- hub.docker.com, as a public set of repositories under the organization 'starlingx'

*Build frequency*
- On demand for release/milestone branches
- Will probably start with daily for the master branch. Perhaps when things stabilize we'll reduce build frequency, or even use commit driven builds.

*Retention policy*
- Perhaps two weeks for master branch builds? but always one 'stable' build (see below)

*Image naming schema*

<repo>/<image-name>:<image-tag>

<repo> = starlingx
<image-name> = stx-<os>-<openstack-release>-<image>
<image-tag> = <branch> | <branch>-<qualifier>
<os> = centos | ubuntu | clear-linux
<openstack-release> = pike | queens | rocky ...
<image> = aodh | ceilometer | cinder | glance | gnocchi | heat | horizon | ironic | keystone | libvirt | magnum | murano | neutron | nova-api-proxy | nova | panko ...
<qualifier> = <timestamp> | latest | stable
<branch> = master | r2018.10 | r2018.10.0 | ...

Note: we can't have the '/' or ':' character in a branch name. So r/2018.10 would have to be shortened to 'r2018.10'.
However I think it's better to use the tag to allow for rebuilds of a release '2018.10.0'. My only concern here is that our current git tagging convention doesn't distinguish release from milestone. I would prefer a 'r' or 'm' prefix on our git tags.

Note: the 'latest' or 'stable' qualifiers would be aliases to the timestamped image. 'Stable' might be overselling it on the master branch... perhaps some other term... 'tested', 'usable'?

e.g.
starlingx/stx-centos-pike-nova:master-20181201
starlingx/stx-centos-pike-nova:master-20181202
starlingx/stx-centos-pike-nova:master-20181203
starlingx/stx-centos-pike-nova:master-latest -> master-20181203
starlingx/stx-centos-pike-nova:master-stable -> master-20181201

starlingx/stx-centos-pike-nova:r2018.10.0
starlingx/stx-centos-pike-nova:r2018.10.1
starlingx/stx-centos-pike-nova:r2018.10-latest -> r2018.10.1

Comments?

Scott
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Ken.Young at windriver.com Tue Dec 4 20:05:44 2018
From: Ken.Young at windriver.com (Young, Ken)
Date: Tue, 4 Dec 2018 20:05:44 +0000
Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack Distro meeting
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DED2BE@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DED2BE@SHSMSX104.ccr.corp.intel.com>
Message-ID: <1DD0799E-8600-4268-9996-5BA81092C6CF@windriver.com>

Cindy,

Thank you for sending the agenda. As a reminder for agenda item #1, I would like to determine if we want to do a kernel update separate from the CentOS 7.6 update or do it at the same time.
Regards,
Ken Y

On 2018-12-04, 1:16 PM, "Xie, Cindy" wrote:

Agenda for 12/5 meeting:
1. CentOS 7.6 upgrade planning (Saul)
2. non-Openstack patch refactoring status (Zhipeng)
3. Qemu 3.0 upgrade status (Ghada/Jim)
4. Ceph upgrade status update (Vivian/Dehao)
5. Opens (all)
Please let me know if we'd like to cover other topics as well. Thx. - cindy

-----Original Appointment-----
From: Xie, Cindy
Sent: Monday, November 5, 2018 2:27 PM
To: Xie, Cindy; Wold, Saul; 'Rowsell, Brent'; Jones, Bruce E; Troyer, Dean; 'Khalil, Ghada'; Waheed, Numan; Lin, Shuicheng; Zhu, Vivian; Shang, Dehao; Liu, ZhipengS; Hu, Yong; Sun, Austin; Somerville, Jim; starlingx-discuss at lists.starlingx.io
Cc: Perez Carranza, Jose; 'Hellmann, Gil'; 'Waines, Greg'; 'Chen, Jacky'; Armstrong, Robert H; Perez Rodriguez, Humberto I; Martinez Landa, Hayde; Martinez Monroy, Elio; Hu, Wei W; 'Seiler, Glenn'; 'Eslimi, Dariush'; Gomez, Juan P; Lara, Cesar; 'Young, Ken'; Arce Moreno, Abraham; Cobbley, David A
Subject: Weekly StarlingX non-OpenStack Distro meeting
When: Wednesday, December 5, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada).
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From cesar.lara at intel.com Tue Dec 4 21:12:24 2018
From: cesar.lara at intel.com (Lara, Cesar)
Date: Tue, 4 Dec 2018 21:12:24 +0000
Subject: [Starlingx-discuss] [build] [meetings] Build team meeting minutes 11/29/2018
Message-ID: <0B566C62EC792145B40E29EFEBF1AB4710576DC1@fmsmsx104.amr.corp.intel.com>

Build team meeting 11/29/2018

Attendees
Saul, Ken, Scott, Jason, Chuy, Victor, Memo, Marcela, Mario, Abraham, Eric, Cesar

Agenda
- Cengn Artifact and ISO files generation cadence
- Cengn tooling required
- Golang implementations
- Opens

Notes

Cengn Artifact and ISO files generation cadence
We established a daily cadence for the generation of ISO files; Ken will take the AR to get the validation team involved to run sanity tests on daily builds at Cengn. There is also a need to generate a dashboard to monitor the health of the ISO files being generated: whether or not they pass testing, and the bugs associated with any given ISO file.
We made the following decisions:
- the StarlingX build team will retain ISO files for 14 days
- we will also retain the build artifacts, like the installer, for only 7 days
We still have the tooling for cleaning the files pending, as well as the update of the wiki/documentation on this topic, and we are discussing whether there is a need for some kind of landing page along with the dashboard to present the available ISO files in a more user-friendly manner.

Q. Do we need to freeze some ISO files that are bound to a big release or a specific milestone? What's the retention policy for that?
We agreed to flesh out the milestone/release process and what to expect out of that, as well as figuring out the retention. AR: Saul.
Saul will investigate with Dean about this milestone/release topic and write a spec regarding it.

Q. What's a good retention time for monthly builds?
We talked about retaining those specific ISO files for between 3 and 6 months.

Cengn tooling required
Not a lot required right now; Scott will ask for help as he believes necessary.

Golang implementation
We decided to go for option number 3 [1] while we figure out the best approach by benchmarking what other distributions are doing with their own implementations of the Go language.

Opens
Q. Do we need to update the centos installer to 7.5 or go directly to the centos 7.6 installer?
We decided to go after 7.6 directly; VictorR assured that 7.5 will not work. AR: Victor to send a follow-up email to let the team know why.

We had a brief demo of the Linux builder tool; this tool will allow us to generate different ISO files based on RPM/Deb repositories and create working versions of a modified OS with our customizations on those. This tool is one of the many pieces we need in place for the MultiOS efforts. Kudos to Chuy for preparing the demo.
AR: Chuy to send a follow-up email with the location of the Linux builder code for exploration.

[1] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-November/001966.html

Regards
Cesar Lara
Software Engineering Manager
OpenSource Technology Center
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jim.somerville at windriver.com Tue Dec 4 21:13:15 2018
From: jim.somerville at windriver.com (Jim Somerville)
Date: Tue, 4 Dec 2018 16:13:15 -0500
Subject: [Starlingx-discuss] Qemu 3.0.0 is ready for your consideration
Message-ID: <0add847a-267d-1706-e57f-5ee5a6bd34b5@windriver.com>

Hi Dean,

Like with libvirt, I de-squashed all of the patches (around 100 of them) and they are at the end. Note that they consist of 3 parts: any fixes I had to further bring in from upstream (1 in this case), followed by the CentOS patches against qemu that we normally bring forward, followed by the STX specific patches. The CentOS patches against qemu are good candidates to scrutinize for potentially dropping in the future as part of a non-rebasing exercise. My approach was to limit chaos in this upstream rebasing exercise by not dropping any patches unless made obsolete by the rebasing itself. It's hard enough to debug this complex stuff without adding more uncertainties to it.

Extensive testing was done using first our regular sanity test suite, followed by our much larger nova regression test suite. Unexpected failures were found and fixed.

The update to 3.0.0 consists of two parts, a pull request for the stx-qemu repo, and the piece in stx-integ which deals with the compilation. Like last time with libvirt, we have to commit these two parts at the same time.

The new qemu is here, and I will push a new branch and issue a pull request to it once I'm done dealing with feedback.
https://github.com/jsomervi/stx-qemu/commits/working-3.0.0-noavp-12

The stx-integ part for review is here:
https://review.openstack.org/#/c/622583/

I also cc'ed anybody who I think is a directly interested party rather than just relying on the discussion list.
-Jim

From abraham.arce.moreno at intel.com Tue Dec 4 21:25:20 2018
From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham)
Date: Tue, 4 Dec 2018 21:25:20 +0000
Subject: [Starlingx-discuss] API requests: stx-nfv
Message-ID:

stx-nfv team,

As a result of time spent within stx-nfv, and with the objective to align our REST API Documentation [0] with our REST APIs, we are kindly requesting your comments for the questions "?" under each section [ Section ] [Sub Section]

Please assume:
- The required X-Auth-Token is in place to authenticate:
$ curl -i -X POST http://10.10.10.2:5000/v2.0/tokens
$ export TOKEN=...
- StarlingX is configured as Standard Controller: 2 Controllers, 2 Computes.

[ Project Information ]

When we look at the name and description reported out by curl -i http://10.10.10.2:4545/ we have the same name and description between the documentation [1] and the information via API query:
Name: nfv-vim
Description: NFV - Virtual Infrastructure Manager
? Anything to add / change to the name and / or description?

[ /api ]

Here we are showing 3 different views of what we are seeing within the stx-nfv project:
- Our initial "Migration WADL to RST", see history here [2]
- What we have documented in our "Current Official API Documentation" pages [0]
- What the "API Query Output" is actually showing with curl -i http://10.10.10.2:4545/api/...

[ /api ] [ Migration WADL to RST ]

FYI Only. Migration from WADL to RST format requested us to move "NFV VIM API v1" (NFV VIM Service REST API) into the stx-nfv repository, see [2] for the history.

[ /api ] [ Current Official API Documentation ]

Current Official API documentation [1] includes the following REST API methods under "API Versions" details:
- /
- /api
- /api/orchestration
- /api/orchestration/sw-patch
- /api/orchestration/sw-upgrade

And the only documented API REST methods are:
- [3] Patch Strategy
- [4] Upgrade Strategy

? Is "orchestration" not expected to be documented even if we have the GET method available?

[ /api ] [ API Query Output ]

API query output shows these API REST methods:
- api/orchestration
- api/openstack
- api/openstack/heat
- api/virtualised-resources
- api/virtualised-resources/computes
- api/virtualised-resources/networks
- api/virtualised-resources/images
- api/virtualised-resources/volumes

? Our "Current Official API Documentation" does not have "openstack" and "virtualised-resources", should they be added?

[ Project Repository ] [ Directory nfv-doc ]

We took a look at the project repository and we found the "nfv-doc" directory [5] with the following categories:
- Software Image Management
- Virtualised Network Resource
- Virtualised Storage Resource
- Virtualised Compute Resource

? Since we have our "Current Official API Documentation", should we put up a patch to remove this directory? Any reason to keep it?

[ Project Repository ] [ Directory nfv-tests ]

Looking at nfv-tests [6], it includes 3 categories:
- nfv_api_tests
- nfv_scenario_tests
- nfv_unit_tests

? Is there any restructuring required in this nfv-tests directory?
? Is there any need to think about a general test strategy which includes all StarlingX projects and moves its execution into another place? e.g. Zuul
? Is this directory still valid? If not, should we put up a patch to remove it?

Thanks for your initial support.
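As with the stx-fault mail, a sketch of the probes behind the listings above (IP, port and paths as given in this mail; token setup as assumed at the top):

    curl -i -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:4545/
    curl -i -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:4545/api/
    curl -i -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:4545/api/orchestration
    curl -i -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:4545/api/openstack/heat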
[0] https://docs.starlingx.io/api-ref/stx-nfv
[1] https://docs.starlingx.io/api-ref/stx-nfv/api-ref-nfv-vim-v1.html?expanded=#api-versions
[2] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Analysis
[3] https://docs.starlingx.io/api-ref/stx-nfv/api-ref-nfv-vim-v1.html?expanded=#patch-strategy
[4] https://docs.starlingx.io/api-ref/stx-nfv/api-ref-nfv-vim-v1.html?expanded=#upgrade-strategy
[5] http://git.openstack.org/cgit/openstack/stx-nfv/tree/nfv/nfv-docs
[6] http://git.openstack.org/cgit/openstack/stx-nfv/tree/nfv/nfv-tests

From scott.little at windriver.com Tue Dec 4 21:26:47 2018
From: scott.little at windriver.com (Scott Little)
Date: Tue, 4 Dec 2018 16:26:47 -0500
Subject: [Starlingx-discuss] [Container] Public docker registry
In-Reply-To: <38d5e899-8741-18f9-88a7-47153c34f982@windriver.com>
References: <38d5e899-8741-18f9-88a7-47153c34f982@windriver.com>
Message-ID: <97ea7b9b-449a-8d21-8135-9d63b9feebf3@windriver.com>

An alternate schema, and the one in current use, places the os and openstack-release under the tag section. This has the advantage of lower administrative overhead. It takes 'admin' powers to create a new <image-name>, whereas anyone with write permissions can create a new <image-tag>. Lets call this version 2.

*Image naming schema*

<repo>/<image-name>:<image-tag>

<repo> = starlingx
<image-name> = stx-<image>
<image-tag> = <branch>-<os>-<openstack-release>[-<qualifier>]
<os> = centos | ubuntu | clear-linux
<openstack-release> = pike | queens | rocky ...
<image> = aodh | ceilometer | cinder | glance | gnocchi | heat | horizon | ironic | keystone | libvirt | magnum | murano | neutron | nova-api-proxy | nova | panko ...
<qualifier> = <timestamp> | latest | stable
<branch> = dev | r2018.10 | r2018.10.0 | ...

Note: 'dev' replaces 'master'

On 18-12-04 01:29 PM, Scott Little wrote:
>
> Here is my proposal for the StarlingX docker repository.
>
> *Docker repository location*
> - hub.docker.com, as a public set of repositories under the organization 'starlingx'
>
> *Build frequency*
> - On demand for release/milestone branches
> - Will probably start with daily for master branch. Perhaps when things stabilize we'll reduce build frequency, or even use commit driven builds.
>
> *Retention policy*
> - Perhaps two weeks for master branch builds? but always one 'stable' build (see below)
>
> *Image naming schema*
>
> <repo>/<image-name>:<image-tag>
> <repo> = starlingx
> <image-name> = stx-<os>-<openstack-release>-<image>
> <image-tag> = <branch> | <branch>-<qualifier>
> <os> = centos | ubuntu | clear-linux
> <openstack-release> = pike | queens | rocky ...
> <image> = aodh | ceilometer | cinder | glance | gnocchi | heat | horizon | ironic | keystone | libvirt | magnum | murano | neutron | nova-api-proxy | nova | panko ...
> <qualifier> = <timestamp> | latest | stable
> <branch> = master | r2018.10 | r2018.10.0 | ...
>
> Note: we can't have the '/' or ':' character in a branch name. So
> r/2018.10 would have to be shortened to 'r2018.10'.
> However I think it's better to use the tag to allow for rebuilds of a
> release '2018.10.0'. My only concern here is that our current git
> tagging convention doesn't distinguish release from milestone. I
> would prefer a 'r' or 'm' prefix on our git tags.
>
> Note: the 'latest' or 'stable' qualifiers would be aliases to the
> timestamped image. 'Stable' might be overselling it on master
> branch... perhaps some other term... 'tested', 'usable'?
>
> e.g.
> starlingx/stx-centos-pike-nova:master-20181201
> starlingx/stx-centos-pike-nova:master-20181202
> starlingx/stx-centos-pike-nova:master-20181203
> starlingx/stx-centos-pike-nova:master-latest -> master-20181203
> starlingx/stx-centos-pike-nova:master-stable -> master-20181201
>
> starlingx/stx-centos-pike-nova:r2018.10.0
> starlingx/stx-centos-pike-nova:r2018.10.1
> starlingx/stx-centos-pike-nova:r2018.10-latest -> r2018.10.1
>
> Comments?
>
> Scott
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dtroyer at gmail.com Tue Dec 4 22:54:46 2018
From: dtroyer at gmail.com (Dean Troyer)
Date: Tue, 4 Dec 2018 16:54:46 -0600
Subject: [Starlingx-discuss] [Container] Public docker registry
In-Reply-To: <38d5e899-8741-18f9-88a7-47153c34f982@windriver.com>
References: <38d5e899-8741-18f9-88a7-47153c34f982@windriver.com>
Message-ID:

On Tue, Dec 4, 2018 at 12:31 PM Scott Little wrote:
> However I think it's better to use the tag to allow for rebuilds of a release '2018.10.0'. My only concern here is that our current git tagging convention doesn't distinguish release from milestone. I would prefer a 'r' or 'm' prefix on our git tags.

For other reasons (mostly to do with the change to consume upstream OpenStack from master) I am thinking we should adjust how we implement milestones. The TSC has already talked about adjusting our release schedule, and thus the milestone schedule, to align closer to the OpenStack cadence (the release team is going to dive in to this in more detail so the final proposal is TBD). If we do this the following are the changes I am anticipating:

* do not branch milestones, just tag master
* follow the OpenStack process of appending a suffix to the milestone tag to identify which milestone (ie 'b1' for milestone 1, etc: NNNNb1)

The major problem with this, and why I didn't adopt it from the start, is that we are using date-based release tags rather than semantic versioning (semver, the X.Y.Z we all know and love) so the value of the next release tag can be anticipated but not certain. For example, until a short time ago we had anticipated the next release to be 2019.03, now it is more likely to be 2019.05. That makes it hard to tag a milestone in January and have it all make sense.

dt

--
Dean Troyer
dtroyer at gmail.com

From cindy.xie at intel.com Wed Dec 5 00:08:17 2018
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 5 Dec 2018 00:08:17 +0000
Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack Distro meeting
In-Reply-To: <1DD0799E-8600-4268-9996-5BA81092C6CF@windriver.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DED2BE@SHSMSX104.ccr.corp.intel.com> <1DD0799E-8600-4268-9996-5BA81092C6CF@windriver.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DEF0ED@SHSMSX104.ccr.corp.intel.com>

Thanks Ken, let me put the kernel upgrade as a separate topic here. Thx.
- cindy

-----Original Message-----
From: Young, Ken [mailto:Ken.Young at windriver.com]
Sent: Wednesday, December 5, 2018 4:06 AM
To: Xie, Cindy; Wold, Saul; Rowsell, Brent; Lin, Shuicheng; Liu, ZhipengS; Zhu, Vivian; Shang, Dehao; Troyer, Dean; starlingx-discuss at lists.starlingx.io; Khalil, Ghada; Somerville, Jim
Subject: Re: [Starlingx-discuss] Weekly StarlingX non-OpenStack Distro meeting

Cindy,

Thank you for sending the agenda. As a reminder for agenda item #1, I would like to determine if we want to do a kernel update separate from the CentOS 7.6 update or do it at the same time.

Regards,
Ken Y

On 2018-12-04, 1:16 PM, "Xie, Cindy" wrote:

Agenda for 12/5 meeting:
1. CentOS 7.6 upgrade planning (Saul)
2. non-Openstack patch refactoring status (Zhipeng)
3. Qemu 3.0 upgrade status (Ghada/Jim)
4. Ceph upgrade status update (Vivian/Dehao)
5. Opens (all)
Please let me know if we'd like to cover other topics as well. Thx. - cindy

-----Original Appointment-----
From: Xie, Cindy
Sent: Monday, November 5, 2018 2:27 PM
To: Xie, Cindy; Wold, Saul; 'Rowsell, Brent'; Jones, Bruce E; Troyer, Dean; 'Khalil, Ghada'; Waheed, Numan; Lin, Shuicheng; Zhu, Vivian; Shang, Dehao; Liu, ZhipengS; Hu, Yong; Sun, Austin; Somerville, Jim; starlingx-discuss at lists.starlingx.io
Cc: Perez Carranza, Jose; 'Hellmann, Gil'; 'Waines, Greg'; 'Chen, Jacky'; Armstrong, Robert H; Perez Rodriguez, Humberto I; Martinez Landa, Hayde; Martinez Monroy, Elio; Hu, Wei W; 'Seiler, Glenn'; 'Eslimi, Dariush'; Gomez, Juan P; Lara, Cesar; 'Young, Ken'; Arce Moreno, Abraham; Cobbley, David A
Subject: Weekly StarlingX non-OpenStack Distro meeting
When: Wednesday, December 5, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada).
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From changcheng.liu at intel.com Wed Dec 5 01:14:23 2018
From: changcheng.liu at intel.com (Liu, Changcheng)
Date: Wed, 5 Dec 2018 09:14:23 +0800
Subject: [Starlingx-discuss] SCM: repo projects revision control
Message-ID: <20181205011422.GA70451@nstcloud.sh.intel.com>

Hi Erich,
Do you know which is the latest stable version that Ceph works well on? I need to set the "revision" item for as many projects as possible to fix the code repository base.
B.R.
Changcheng

On 00:12 Wed 05 Dec, Cordoba Malibran, Erich wrote:
> Hi Liu,
>
> I think there are two ways to checkout a specific revision in all
> projects:
>
> 1) By using a tag/branch: This should exist in all the
> repositories. For example, you can get the code for the r/2018.10
> release setting the default revision:
>
>
>
> Which BTW doesn't exist in the default.xml for r/2018.10, we need to
> fix that.
>
> 2) Setting a revision per project: This is painful, but you can set a
> revision for every repository:
>
>
>
> Also, you can create/modify your own manifest.xml with the repo tool,
> using repo start and other subcommands[0].
> The only requirement is that
> the manifest.xml should live in a git repository to use repo init.
>
> I hope this can help.
>
> -Erich
>
> [0] https://source.android.com/setup/develop/repo
>
>
> On Tue, 2018-12-04 at 01:11 +0000, Liu, Changcheng wrote:
> > Hi Scott & Dean,
> > Is there any way for developers to get exactly the same stx source
> > code after running the below commands?
> > repo init -u https://git.starlingx.io/stx-manifest -m default.xml
> > repo sync
> >
> > Currently, some projects are set to a fixed tag/version in
> > .repo/manifests/default.xml, e.g. kubernetes.git (excerpt from
> > default.xml; the XML element markup was stripped by the list archive):
> > 103 <project ... name="kubernetes.git" path="cgcs-root/stx/git/kubernetes"/>
> > 104 <project ... name="dashboard.git" path="cgcs-root/stx/git/kube-dashboard"/>
> > 105 <project ... revision="refs/tags/1.14.8" name="dns.git" path="cgcs-root/stx/git/kube-dns"/>
> >
> > However, some projects are always moving ahead without setting the
> > "revision" item.
> >
> > B.R.
> > Changcheng
> >
> > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From chenjie.xu at intel.com Wed Dec 5 03:16:11 2018
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Wed, 5 Dec 2018 03:16:11 +0000
Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StarlingX upstreaming
In-Reply-To:
References:
Message-ID:

Hi Matt,
Thank you for your reply! Looking forward to the additional use-case information.
Best Regards

From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Tuesday, December 4, 2018 9:12 PM
To: Xu, Chenjie
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Analysis of patch 71c07d7 for StarlingX upstreaming

Hi Chenjie,
I would add additional use-case information to help with the justification for adding this capability. The detailed quota information is used within the StarlingX distributed cloud solution. The quota information for a given project/user is aggregated across all sub-clouds, therefore having an efficient mechanism to retrieve the quota details of all resources is required.
Regards, Matt

From: "Xu, Chenjie"
Date: Monday, December 3, 2018 at 3:53 AM
To: "Peters, Matt"
Cc: "starlingx-discuss at lists.starlingx.io"
Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StarlingX upstreaming

Hi Matt,
The RFE for patch 71c07d7 has been drafted and is attached. Could you please help review and comment?
Best Regards,
Xu, Chenjie
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Ghada.Khalil at windriver.com Wed Dec 5 02:00:31 2018
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Wed, 5 Dec 2018 02:00:31 +0000
Subject: [Starlingx-discuss] TSC meeting minutes - Nov 29th
In-Reply-To: <1A270B85-BB91-4C02-B59A-BE6C80E25647@windriver.com>
References: <1A270B85-BB91-4C02-B59A-BE6C80E25647@windriver.com>
Message-ID: <151EE31B9FCCA54397A757BC674650F0BA493A57@ALA-MBD.corp.ad.wrs.com>

Hello TSC members,
Regarding the action for Bruce and myself, we have captured our discussion and our proposal at:
https://etherpad.openstack.org/p/stx-releases
I am also attaching a high level diagram that shows the relationship between the StarlingX branches and the openstack branches. (The diagram does not cover monthly milestones right now. Dean is thinking about how to handle those moving forward).
We can discuss further in the next TSC meeting. Any community feedback is more than welcome as well. Regards, Ghada & Bruce From: Jolliffe, Ian [mailto:Ian.Jolliffe at windriver.com] Sent: Sunday, December 02, 2018 5:47 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] TSC meeting minutes - Nov 29th The meeting agenda is here: https://etherpad.openstack.org/p/stx-cores Feel free to propose agenda items, also please recognize we may not get to all items each week. Standing topics: Pending reviews for stx-governance https://review.openstack.org/#/q/project:openstack/stx-governance,n,z Pending reviews for stx-specs https://review.openstack.org/#/q/status:open+AND+project:%255Eopenstack/stx-specs - Call to TSC members we need some more reviews, please have a look. - We agreed that a simple majority is acceptable and we would wait 48 hours prior to merging after achieving the simple majority. 11/29/18 Distro.openstack project - not active currently. Keep it? Disband it? Use it for driving/tracking work on patch upstreaming and code refactoring? -- brucej (5 min) Proposal is to use this sub-project to track patch elimination and guide this work until we get to master. Rebase would not be part of this project. Agreed by TSC Need to appoint PL/TL for the project - review candidates at next TSC meeting Candidates for PL – call to community members for volunteer Candidates for TL – call to community members for volunteer Can all TSC members attend the proposed Jan 15-16 meetup in Phoenix? -- Bruce (5 min) Looks good for most TSC members, Eventbrite to follow for formal registration for community. Release cadence - changes needed? align with upstream OpenStack? Time based or content based? - Bruce & Ghada (15 min) Recommend we follow the milestone of OpenStack and have our own check list, potentially with some offset (initially 6 weeks and revisit in the future) Proposal is to align to two releases a year along with OpenStack. We could do "dot" release for bugs in between. dot release is defined already and can be done. Can pick up an OpenStack stable as required. Action: Bruce and Ghada will refine the proposal and provide update at next week’s meeting. Update from Brent on patch elimination as discussed on community call (10 min) Brent shared charts - will send out after the call Please provide feedback Update from Miguel - cross project discussions (Neutron/Nova) (5 min) http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000138.html Nova response: http://lists.openstack.org/pipermail/openstack-discuss/2018-November/000150.html Generally positive response from cross project teams Neutron - Miguel attended the weekly meeting with the STX networking team and the major spec agreed to at the PTG has merged (https://review.openstack.org/#/c/599980 Network Segment Range Management ) Nova - add Stein Numa aware live migration spec - https://review.openstack.org/#/c/599587/ Next release priorities - Ian ( 20 min ) https://ethercalc.openstack.org/fafyo2729fnr Agreed we need a focus meeting on this – sending a proposed time to the TSC – once the meeting time is confirmed it will be posted to the mailing list Action: PL/TL from sub-projects – please review all items tagged as high priority and provide input on whether or not this initiative is staffed for your sub-project or needs to be discussed. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: stx-branch-logistics.pdf
Type: application/pdf
Size: 99282 bytes
Desc: stx-branch-logistics.pdf
URL:

From austin.sun at intel.com Wed Dec 5 07:46:29 2018
From: austin.sun at intel.com (Sun, Austin)
Date: Wed, 5 Dec 2018 07:46:29 +0000
Subject: [Starlingx-discuss] Support IBRS cpu model.
In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB327507@ALA-MBD.corp.ad.wrs.com>
References: <08e3677f-39dc-bc99-2791-c88468e17c4a@windriver.com> <2588653EBDFFA34B982FAF00F1B4844EBB327507@ALA-MBD.corp.ad.wrs.com>
Message-ID:

Hi Brent, Chris and all:
With the qemu 3.0 changes [1] and [2] cherry-picked (qemu-img version 3.0.0 (qemu-kvm-ev-3.0.0-0.tis.97)), VMs with the Skylake-Server-IBRS, Skylake-Server and passthrough cpu models can be successfully created. The PRs [3] and [4] are for the nova and glance (flavor) changes.
[1] https://github.com/jsomervi/stx-qemu/commits/working-3.0.0-noavp-12
[2] https://review.openstack.org/#/c/622583/
[3] https://github.com/starlingx-staging/stx-nova/pull/17
[4] https://github.com/starlingx-staging/stx-glance/pull/5
Thanks.
BR
Austin Sun.
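For anyone wanting to reproduce this, a sketch of how a guest cpu model is typically selected through flavor extra specs (the flavor and image names here are illustrative; hw:cpu_model is the extra spec that the stx-nova/stx-glance changes referenced above relate to):

    openstack flavor create --vcpus 2 --ram 2048 --disk 20 skylake-ibrs
    openstack flavor set skylake-ibrs --property hw:cpu_model=Skylake-Server-IBRS
    openstack server create --flavor skylake-ibrs --image centos-7 vm-ibrs-test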
Chris On 11/23/2018 12:16 AM, Sun, Austin wrote: > Hi Chris: > Due to resource limitation , I only deployed in Skylake-IBIS server , create VM enabling cpu-passthrough. > > Thanks. > BR > Austin Sun. > -----Original Message----- > From: Chris Friesen [mailto:chris.friesen at windriver.com] > Sent: Friday, November 23, 2018 1:42 AM > To: Sun, Austin ; > starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Support IBRS cpu model. > > On 11/21/2018 11:46 PM, Sun, Austin wrote: >> Hi Chris: >> Very appreciate your review >> The new PRs are >> https://github.com/starlingx-staging/stx-nova/pull/17 >> https://github.com/starlingx-staging/stx-glance/pull/5 > > What testing have you done on these changes? > > Chris > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From huifeng.le at intel.com Wed Dec 5 09:20:13 2018 From: huifeng.le at intel.com (Le, Huifeng) Date: Wed, 5 Dec 2018 09:20:13 +0000 Subject: [Starlingx-discuss] Questions about patch 87a8c625 upstreaming Message-ID: <76647BD697F40748B1FA4F56DA02AA0B4D548D70@SHSMSX104.ccr.corp.intel.com> Matt, I am looking at patch #87a8c625 (US86444: patching scripts for neutron processes) which includes 2 parts, could you please help to clarify below question? * Script to support neutron service restart: This is STX special script and no need for upstream * Metadata-proxy service lifecycle management: close metadata proxy (e.g. haproxy) process (if it is managed by dhcp client) when neutron-dhcp-agent stopped To my understanding, metadata proxy (e.g. haproxy) provides support for VM instance to query its metadata information from Neutron which is in data-path plane, neutron-dhcp-agent is responsible for configuring dhcp process (e.g. dnsmasq) or metadata proxy process which is lie in control-path plane. So it seems by design: whether the metadata-proxy process is alive is determined by the network/port status change instead of whether dhcp agent alive. Are there any special use cases which requires to stop metadata proxy process when dhcp agent stopped? In my test: * if metadata proxy is managed by l3 agent (default setting) Stopping(/Restarting) l3 agent will not stop(/restart) haproxy process and clear iptables rule, router can still work (in data path) and metadata information can still be queried inside VM instance * if metadata proxy is managed by dhcp agent (set "force_metadata = true" in /etc/neutron/dhcp_agent.ini) Stopping(/Restarting) dhcp agent will not stop(/restart) dnsmasq process and haproxy process, dhcp server can still work (in data path) and metadata information can still be queried inside VM instance. If applying the patch, Stopping dhcp agent will make the metadata query within VM instance fail while the DHCP server can still work (or does it also need to be stopped?) and metadata bridge information (e.g. 169.254.169.254/16) is also valid in dhcp interface, does it expected behavior? Thanks much! Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From John.Kung at windriver.com Wed Dec 5 14:22:07 2018 From: John.Kung at windriver.com (Kung, John) Date: Wed, 5 Dec 2018 14:22:07 +0000 Subject: [Starlingx-discuss] Propose to add Al Bailey to core reviewers for stx-config Message-ID: +1 , Ok with me to add Al Bailey as core reviewer for stx-config Thanks, John Kung From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: Tuesday, December 04, 2018 9:01 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Propose to add Al Bailey to core reviewers for stx-config Hi, I'd like to add Al Bailey as a core reviewer for stx-config. Al is one of the top contributers to stx-config - both in code contribution and in doing many useful reviews: http://stackalytics.com/?project_type=all&release=all&metric=all&module=stx-config I'd like confirmation (or objections) from the existing cores please... Bart Wensley, Member of Technical Staff, Wind River direct 613.963.1385 -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Wed Dec 5 14:34:36 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 5 Dec 2018 14:34:36 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DF048E@SHSMSX104.ccr.corp.intel.com> Agenda & Notes for 12/5 meeting: 1. minor kernel version upgrade to 3.10.0.957 (Ken) https://bugs.launchpad.net/starlingx/+bug/1805759 security concerns regarding current version, want to upgrade the kernel to 957. Want to get the upgraded kernel to master sooner. Requirement is to get to the kernel version soon in StarlingX master. This minor kernel version is the one in CentOS 7.1810 release: https://storyboard.openstack.org/#!/story/2004521 from implemenation: kernel upgrade in higher priority and needs to be done in master. Shuicheng: only rpm version available, sRPM not available yet - AR: Shuicheng to monitor the CentOS package page. out of tree driver Kernel drivers upgrade: not sure yet if those kernel drivers (drdb, QAT, etc) have been upgraded to 1810 yet. Can seperate this work as other workitem. Victor: shouldn't we re-build with the new kernel or not? The answer is YES. 2. CentOS 7.6 upgrade planning (Saul) - get a list of sRPM/RPM packages that being upgrade in CentOS7.6, size the work item (AR: Shuicheng) - Feature branch creation (AR to Dean, needs to get the rights from Dean to create feature branch. AR for Shuicheng to contact Dean. Rebase from master on weekly basis. ) CentOS7.1810 was annouced on Dec 3rd. storyboard created: https://storyboard.openstack.org/#!/story/2004522, will add tasks once sRPM identified. 3. non-Openstack patch refactoring status (Zhipeng) finished most of the tasks for init/config. Only one patch still pending for RabitMQ for final review. Saul will fire more storyboards when he finds more init/config issue. last week we find other related stories, 7 stories finished coding, 2 merged, 3 under review, 2 is invalid. 167 patch reduction in total now out of 402 patches - good job! 4. Qemu 3.0 upgrade status (Ghada/Jim) Qemu basically ready to push. AR: Saul to review the patches pushed up by Jim on staging branch. Work w/ Dean to review and if it's good to go, will push. pre-push validation has been done already. 5. Ceph upgrade status update (Vivian/Dehao) Story and task created @ storyboard. Detail doc created and including every patches. 
Break down big commits into small patches, new PR sent. 6 WR patches added to this PR. Waiting for Dean's review. Scott provided some comments on Ceph upgrade in openstack Gerrit - patch updated to address the comments. ISO image can be built and under testing. 32 patches rebased and included in this new ISO image. dedicated storage node environment setting from Dev side. If you need help, contact Ada... 6. Opens (all) 6.1 kernel driver upgrade testing status update (Ada/Ricardo/Shuicheng) We have successfully installed the Lewis Hill, QAT 8970 Adapter, X16,OEM,S,DUAL-CC - J39967-001, after some issues with our infrastructure, we finally started the proper installation of duplex configuration, today we expect to run some of the tests to see if some issue are seen or not. We are using the latest ISO provided by Shuicheng. -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Wold, Saul; 'Rowsell, Brent'; Jones, Bruce E; Troyer, Dean; 'Khalil, Ghada'; Waheed, Numan; Lin, Shuicheng; Zhu, Vivian; Shang, Dehao; Liu, ZhipengS; Hu, Yong; Sun, Austin; Somerville, Jim; starlingx-discuss at lists.starlingx.io Cc: Perez Carranza, Jose; 'Hellmann, Gil'; 'Waines, Greg'; 'Chen, Jacky'; Armstrong, Robert H; Perez Rodriguez, Humberto I; Martinez Landa, Hayde; Martinez Monroy, Elio; Hu, Wei W; 'Seiler, Glenn'; 'Eslimi, Dariush'; Gomez, Juan P; Lara, Cesar; 'Young, Ken'; Arce Moreno, Abraham; Cobbley, David A Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, December 5, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From austin.sun at intel.com Wed Dec 5 14:39:26 2018 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 5 Dec 2018 14:39:26 +0000 Subject: [Starlingx-discuss] Propose to add Al Bailey to core reviewers for stx-config In-Reply-To: References: Message-ID: +1, AI gave a lot of good suggestion when he reviewed changes. Thanks. BR Austin Sun. From: Kung, John [mailto:John.Kung at windriver.com] Sent: Wednesday, December 5, 2018 10:22 PM To: Wensley, Barton Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Propose to add Al Bailey to core reviewers for stx-config +1 , Ok with me to add Al Bailey as core reviewer for stx-config Thanks, John Kung From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: Tuesday, December 04, 2018 9:01 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Propose to add Al Bailey to core reviewers for stx-config Hi, I'd like to add Al Bailey as a core reviewer for stx-config. Al is one of the top contributers to stx-config - both in code contribution and in doing many useful reviews: http://stackalytics.com/?project_type=all&release=all&metric=all&module=stx-config I'd like confirmation (or objections) from the existing cores please... Bart Wensley, Member of Technical Staff, Wind River direct 613.963.1385 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Wed Dec 5 14:51:05 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 5 Dec 2018 08:51:05 -0600 Subject: [Starlingx-discuss] Qemu 3.0.0 is ready for your consideration In-Reply-To: <0add847a-267d-1706-e57f-5ee5a6bd34b5@windriver.com> References: <0add847a-267d-1706-e57f-5ee5a6bd34b5@windriver.com> Message-ID: On Tue, Dec 4, 2018 at 3:13 PM Jim Somerville wrote: > The update to 3.0.0 consists of two parts, a pull request for the > stx-qemu repo, and the piece in stx-integ which deals with the > compilation. Like last time with libvirt, we have to commit these two > parts at the same time. Since the 3.0 work is going in to a new branch I think we are OK to go ahead and commit that to stx-qemu before the stx-integ change unless I am forgetting something? It will be the change to stx-manifest switching to the new branch that will need to be coordinated with 622583 in stx-integ and that can be done with a Depends-On footer. > The new qemu is here, and I will push a new branch and issue a pull > request to it once I'm done dealing with feedback. > https://github.com/jsomervi/stx-qemu/commits/working-3.0.0-noavp-12 I have created stx-qemu branch stx/v3.0.0 from upstream qemu at sha 38441756b70eec5807b5f60dad11a93a91199866 "Update version for v3.0.0 release", matching what you have in [0]. Target your PRs at that and we should be good to go. > The stx-integ part for review is here: > https://review.openstack.org/#/c/622583/ As I mentioned above, we should also queue up a review to stx-manifest that adds revision="stx/v3.0.0" to stx-qemu. This should depend on 622583 and be +W before 622583, it will go through the gate test but be blocked from merging until 622583 merges so the window between them merging will be fairly small. dt [0] https://review.openstack.org/#/c/622583/1/virt/qemu/centos/build_srpm.data -- Dean Troyer dtroyer at gmail.com From Ovidiu.Poncea at windriver.com Wed Dec 5 07:41:38 2018 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Wed, 5 Dec 2018 07:41:38 +0000 Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching In-Reply-To: References: <2588653EBDFFA34B982FAF00F1B4844EBB25D46B@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5F8E8F@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5FAA8C@ALA-MBD.corp.ad.wrs.com> Message-ID: <4C60D9C5C8176C47874FFF36647AA19E9D608275@ALA-MBD.corp.ad.wrs.com> Hi Li, Thanks for providing clarifications! So, for our use cases, main problem is that glance’s raw caching is more controllable that cinder’s. If it’s not enough we need to improve it, if we can live with it then at a minimum it needs to be enabled though sysinv configuration and then remove the raw-caching from glance. See inline comments plus bellow summary and proposal, we need Brent’s input on this: I see two main solutions to the problem: A. Always enable cache, for any backend, but only cache glance images that have a certain attribute – this needs a cinder upstream change. Cache limit has to be removed (another cinder upstream change). We may also need a way to kick-start the caching in cinder & clean up cache (periodically and/or user triggered should be enough). B. Make enabling cache storage backend specific and configurable (through sysinv). Once cinder’s cache is enabled for a backend, cache everything. Size of the cache should be configurable. I would go for B. 
From Ovidiu.Poncea at windriver.com Wed Dec 5 07:41:38 2018 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Wed, 5 Dec 2018 07:41:38 +0000 Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching In-Reply-To: References: <2588653EBDFFA34B982FAF00F1B4844EBB25D46B@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5F8E8F@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5FAA8C@ALA-MBD.corp.ad.wrs.com> Message-ID: <4C60D9C5C8176C47874FFF36647AA19E9D608275@ALA-MBD.corp.ad.wrs.com> Hi Li, Thanks for providing clarifications! So, for our use cases, the main problem is that glance's raw caching is more controllable than cinder's. If it's not enough we need to improve it; if we can live with it then, at a minimum, it needs to be enabled through sysinv configuration, and then we remove the raw caching from glance. See inline comments plus the summary and proposal below; we need Brent's input on this: I see two main solutions to the problem: A. Always enable cache, for any backend, but only cache glance images that have a certain attribute – this needs a cinder upstream change. Cache limit has to be removed (another cinder upstream change). We may also need a way to kick-start the caching in cinder & clean up cache (periodically and/or user triggered should be enough). B. Make enabling cache storage backend specific and configurable (through sysinv). Once cinder's cache is enabled for a backend, cache everything. Size of the cache should be configurable. I would go for B. as it, most likely, doesn't need upstream changes. Summary of problems, TBD if we can live with them: · Images are not cached on creation – if we can't live with it we may need a trigger to cinder on image creation or a way to manually kick-start the caching process. · Since first volume creation is slow for larger volumes, this may time out (keystone token expiration) – we had a customer using 200GB qcow2 Windows images that would time out on conversion. I don't see a workaround for it, other than asking them to manually do the conversion when importing very large images to glance. · We can't provide a 100% guarantee that, once converted, successive creations won't need conversion again due to cache exhaustion. Can we live with it? Users may intermittently see slowdowns and wonder what's going on. · Cache will waste space: if the original images no longer exist there is no automated way to remove them from the cache – the admin can clean up the cache manually if desired. We can either: 1. Live with it – assume that the space allocated to the cache is for the cache only, or users can clean up the cache by themselves. 2. Clean up the cache through a cron job (although this is a cache, some caches are supposed to clean themselves up if cached data is no longer present). 3. Implement another mechanism to clean the cache when an image is deleted, not at a later time (this is way too complex to upstream). · What happens with images that users don't want to cache? Should we add a filter (glance property)? I vote for #2 as it does not seem too hard to implement. A once-a-day cron task can free up wasted space. Summary of TODOs (assuming B. is chosen) before removing raw caching (open for discussion & dependent on resolution of the above issues): · Enable caching per backend through the sysinv system storage-backend-add/modify commands, through a capabilities field (this seems the simplest solution) · Add a sysinv configuration option per storage backend to set cache size. [Clean up images in cache when size is decreased] · When first enabling: create the shadow tenant (no need to remove it when disabling cache) · Support disabling cache for a backend (clean up residual images) Regards, Ovidiu From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Tuesday, November 27, 2018 4:30 AM To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Cc: Miller, Frank Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Ovidiu, As far as I'm concerned, Cinder image cache is a cache mechanism, so overall users don't need to clean it manually. Currently, when the capacity for the cache is full, it removes the cached image volumes with an LRU policy. For more detail, please see the following comments. Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Monday, November 26, 2018 11:15 PM To: Li, Xiaoyan ; Rowsell, Brent ; 'starlingx-discuss at lists.starlingx.io' Cc: Miller, Frank Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Lisa, Yeah, even if we refactor raw caching, it's most likely going to be rejected by upstream due to replicating existing functionality in cinder. Yet, imho, we should have a working replacement before retiring raw caching and we should have some agreed mitigations in place for cinder's disadvantages (if we can't live with them, Brent please help here). See my questions below & inline.
Also, please correct the text below if I made wrong assumptions, as you know cinder's caching better than me. Short comparison of the two: Raw caching Uses the --raw-cache cli option in Glance to trigger a background process that converts the image. Once cached, new volumes get created on Ceph instantly by leveraging Ceph's copy-on-write. Cache is allocated from the "images" RBD pool. Advantages: - user can select the images they want to cache - user can monitor the progress and can check used space for each image (cli + dashboard). - on image delete the cache is also cleared if there is no volume using it. Else it is cleared with the last volume keeping the cache data in-use. - no wasted space - complete control by user Disadvantages: - There is almost no way this is going to be accepted upstream. Maybe, yet with small hopes, if we refactor everything as a 3rd party glance feature, but we may need to push some hooks upstream to make it work. - Ceph only Cinder's caching Uses a "shadow" tenant to store shadow volumes. Cache is created with the first volume from that image. The next volume will be created instantly by leveraging copy-on-write if the backend provides support for it (e.g. on Ceph). Space for the cache is allocated on one of the cinder backends and has a configurable threshold. Advantages: - already upstream - works with all backends - all cached images are displayed for the "admin" if he changes to the shadow tenant and lists volumes. - admin (not user, only admin) can free cache by deleting volumes of the shadow tenant (need confirmation) Disadvantages: 1. it's either globally enabled or disabled => needs sysinv configuration option 2. it caches every image. No way to select what image to cache nor with what backend (question below) => space waste 3. cached images are not removed. It needs to hit the provisioned space threshold to do that, and it will remove the oldest image, although that image cache may be important. 4. less control: Images are cached on first use and are removed when provisioned space hits the threshold. This means that the user does not have control over what images are converted and what images are in cache. So, sometimes volume creation works fast, other times it's slow. This can be a problem especially on parallel volume creation through helm charts as, if the image did not have a cache, then stack creation may time out. Another problem may be if the cache is small and images get rotated in the cache => we need alarms when the threshold is hit. 5. needs the shadow tenant created before use => puppet / helm chart update (for Kubernetes) Mitigations of the disadvantages above - possible solutions and alternatives: #1: Customers may not want to enable it; we should allow customers to choose when to enable it (it can be added as a custom capabilities parameter to "system storage-backend-add/system storage-backend-modify") [Li, Xiaoyan] Currently image cache can be enabled/disabled per backend storage. https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html I think it is enough. [Ovi] Nice, we need a configuration option per backend in sysinv to enable it. (most likely in the capabilities fields of the storage-backends table. See 'system storage-backend-*' commands).
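To make that concrete, the per-backend knobs from the linked cinder document look roughly like the sketch below. The section name [ceph] and the size/count values are placeholders for illustration, not StarlingX defaults:

    [DEFAULT]
    # identity of the "shadow" (internal) tenant that owns cached image-volumes
    cinder_internal_tenant_project_id = <shadow-project-uuid>
    cinder_internal_tenant_user_id = <shadow-user-uuid>

    [ceph]
    # enable the image-volume cache for this backend only
    image_volume_cache_enabled = True
    # optional eviction limits
    image_volume_cache_max_size_gb = 200
    image_volume_cache_max_count = 50

Whatever sysinv ends up exposing through the capabilities field would essentially render options like these into cinder.conf for the selected backends.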
#2: No workaround comes to my mind - we can probably live with it #3: A simple solution would be to implement a cron job to clean the cache periodically, or a more elaborate solution would be to remove the cache with the last volume that used that image (need a cinder upstream feature for it) [Li, Xiaoyan] From the doc, currently Cinder removes cached images from least recently used to most recently used. Every time Cinder uses a cached image volume, it updates its last_used field. This is the normal policy for data eviction. As it is a cache and should be transparent to users, why do we need users to evict data? [Ovi] If we conclude that this is enough from a data usage perspective then we are ok with it. #4: Two options come to mind: 1. To get some control we should not limit the cache size, given that we do proper cleanup in #3. [Li, Xiaoyan] Even if we do cleanup, the limit can't be removed. [Ovi] We may need to enhance this. 2. If we limit the cache, we have to make the limit configurable and raise an alarm once the cache gets near full so that the admin takes preventive measures and either increases provisioned space or frees up the cache. #5: This is mandatory, otherwise cinder's caching won't work at all. [Li, Xiaoyan] It has to set cinder_internal_tenant_project_id and cinder_internal_tenant_user_id before enabling cache images, as this user can manage these cached image volumes. Why can't it work with Kubernetes? [Ovi] I did not say it won't work with Kubernetes ☺ What I said is that we need to provision the shadow tenant automatically when the feature is enabled. Questions (maybe if you get time to play with cinder's caching to get a better understanding): 1. How does cinder's caching behave when multiple volumes are created in parallel from a newly created image? Will it wait for the cache to be created before creating the volumes or just start all volume creations in parallel? [Li, Xiaoyan] Inside a volume service it is sequential to run volume creation tasks, but we have HA, so multiple volume services may run. For the image cache, it creates an entry in the cinder DB first and then creates volumes. The primary key is not image_id+backend_storage. It is possible that several entries or volumes will be created in the same backend storage. [Ovi] So, only the first volume creation is going to be slow? If that's the case then parallel volume creation will work ok, as only the first volume creation will be slow. 2. What is the cinder backend that stores the cache? If it is the one used by the volume, will this lead to multiple cached volumes of the same image? Can we choose the backend? [Li, Xiaoyan] We can set whether cache is enabled per backend. If users create a volume in backend Ceph from an image, a cached image volume will be created in Ceph if it is enabled. Next time, if users create a volume in IBM storage from the same image, it will create another cached image volume in IBM storage if it is enabled. [Ovi] Then we need to enable it and configure cache size per backend, I guess. 3. How is cache space provisioned? Do we need to restart cinder-volume for changes to take effect? [Li, Xiaoyan] These settings are made in the config file, so the cinder volume services need to be restarted once the config is changed. [Ovi] So after we make the changes, we re-apply the manifests and restart the services (reload the helm charts for k8s deployments) 4. Is the admin able to clean up individual cached images in the shadow tenant? Maybe also the user? [Li, Xiaoyan] Admin and shadow tenants can both do cleanup.
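As a hedged illustration of that last point, an admin with suitable credentials could inspect and prune the cache by hand along these lines; the project id is the internal tenant configured in cinder.conf, and the ids below are placeholders, not real values:

    # list the cached image-volumes owned by the shadow tenant
    openstack volume list --project <shadow-project-id>
    # evict one cached image manually
    openstack volume delete <cached-image-volume-id>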
Ovidiu ________________________________ From: Li, Xiaoyan [xiaoyan.li at intel.com] Sent: Thursday, November 22, 2018 2:41 AM To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Brent and Ovidiu, As this email has a long history, I will re-summarize the raw cache in StarlingX and the Cinder upstream image cache. Please vote on whether we can abandon raw cache in StarlingX. StarlingX: Creates an image cache in Ceph when Glance creates an image, and deletes the cached image in Ceph when the original image is deleted in Glance. Cinder: When creating a volume from an image in a backend storage for the first time, Cinder creates a volume from this image and uses it as the image cache. So next time, if users create another volume from this image in the same backend storage, Cinder first finds the cached image volume and clones a new volume from it. Cinder allows capacity configuration for cached images. If the space is used up, Cinder will evict the cached image volumes. From my viewpoint, Cinder image cache can achieve the same functionality as raw cache in StarlingX, with more enhancements. It works for all Cinder-supported backend storage, not just for Ceph. Best wishes Lisa From: Li, Xiaoyan Sent: Monday, November 19, 2018 9:44 AM To: Poncea, Ovidiu >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Ovidiu, A cached image (a new volume from this image) is created on a storage backend when Cinder first creates a volume in the same backend storage from the image. All the information is stored in Cinder, including volume id, image id etc. https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1368 https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L82 A cached image is deleted when the configured space for the cache is used up. So currently Cinder doesn't delete the cached image volumes even if the image is deleted. But this can be an enhancement of the current cinder image cache. https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L117 https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1351 Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Friday, November 16, 2018 4:57 PM To: Li, Xiaoyan >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Li, Quick question: Is the cache going to be freed when an image is deleted from glance? It would be a waste to cache images that are no longer needed. Thanks, Ovidiu ________________________________ From: Li, Xiaoyan [xiaoyan.li at intel.com] Sent: Tuesday, November 13, 2018 9:19 AM To: Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi, About the raw cache function in StarlingX Cinder and Glance, I would like to remove it as Cinder has a similar function. Please see the following details. And if I would like to remove the function in StarlingX, there are two methods: 1. Submit a patch to revert the changes in Glance and Cinder. 2. Ignore these patches during upgrading StarlingX/Cinder to a new Cinder release. Which way do we prefer?
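A quick way to see the behavior Lisa describes from the command line; this is purely illustrative, with the image name and size made up, and it assumes the image-volume cache is enabled on the backend serving the request:

    # first create from a given image: pays the download/convert cost and seeds the cache
    openstack volume create --image cirros --size 10 vol-a
    # second create from the same image: cloned from the cached image-volume
    # (near-instant on Ceph thanks to copy-on-write)
    openstack volume create --image cirros --size 10 vol-b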
Best wishes Lisa From: Li, Xiaoyan Sent: Thursday, September 20, 2018 10:17 AM To: Rowsell, Brent >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi, Brent The following is the mechanism of the Cinder volume cache. Creation of a cached volume: It creates a cached volume in the backend storage when creating from an image. 1. Create_from_image: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L890 2. Return image cache entry: If one does not exist, it creates a new entry. https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L746 3. Create a new image-volume and cache entry for it: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L872 Use a cached volume when creating a volume: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L723-L735 Delete the cached volume: When the capacity and number of cache entries exceed the specified limits, it deletes cache entries (cached volumes). https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L164 Best wishes Lisa From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Thursday, September 6, 2018 10:02 AM To: Li, Xiaoyan >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching We would need to review this feature to ensure it provides equivalent functionality first. If it does, great, we can look at reverting and enabling this cinder functionality. Brent From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Wednesday, September 5, 2018 9:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi all, This email is about the raw caching function in StarlingX. This feature caches an image in a backend storage like Ceph when we first create a volume in this backend storage. In fact, upstream Cinder has already had a similar function since the Pike release. https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html So I want to revert the raw caching function in StarlingX and use the Cinder generic image cache instead. The problem is that we need to update the Cinder config in StarlingX. Any comments? Best wishes Lisa -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Wed Dec 5 15:43:09 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 5 Dec 2018 15:43:09 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F35DF048E@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DF0655@SHSMSX104.ccr.corp.intel.com> Victor, Your comments well received, updated minutes here: https://etherpad.openstack.org/p/stx-distro-other + starlingX ML for the correction. thx. - cindy -----Original Message----- From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] Sent: Wednesday, December 5, 2018 11:28 PM To: Xie, Cindy Subject: Re: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 On Wed, Dec 5, 2018 at 8:35 AM Xie, Cindy wrote: > > Agenda & Notes for 12/5 meeting: > 1.
minor kernel version upgrade to 3.10.0.957 (Ken) > https://bugs.launchpad.net/starlingx/+bug/1805759 > security concerns regarding the current version; want to upgrade the kernel to 957. Want to get the upgraded kernel into master sooner. The requirement is to get to this kernel version soon in StarlingX master. > This minor kernel version is the one in the CentOS 7.1810 release: https://storyboard.openstack.org/#!/story/2004521 > from implementation: kernel upgrade is higher priority and needs to be done in master. > Shuicheng: only the rpm version is available, sRPM not available yet - AR: Shuicheng to monitor the CentOS package page. > out-of-tree kernel drivers upgrade: not sure yet if those kernel drivers (drbd, QAT, etc.) have been upgraded to 1810 yet. Can separate this work into another work item. > Victor: shouldn't we re-build with the new kernel or not? The answer is YES. I said: Should we rebuild the drivers with the new kernel? Please send a correction to the notes. All the drivers should be built against the kernel source code, otherwise the ELF files will not match the kernel symbols > 2. CentOS 7.6 upgrade planning (Saul) > - get a list of sRPM/RPM packages that are being upgraded in CentOS 7.6, size the work item (AR: Shuicheng) > - Feature branch creation (AR to Dean, needs to get the rights from Dean to create the feature branch. AR for Shuicheng to contact Dean. Rebase from master on a weekly basis. ) > CentOS 7.1810 was announced on Dec 3rd. > storyboard created: https://storyboard.openstack.org/#!/story/2004522, will add tasks once sRPMs are identified. > > 3. non-OpenStack patch refactoring status (Zhipeng) > finished most of the tasks for init/config. Only one patch still pending for RabbitMQ for final review. > Saul will file more storyboards when he finds more init/config issues. > last week we found other related stories: 7 stories finished coding, 2 merged, 3 under review, 2 are invalid. > 167 patch reduction in total now out of 402 patches - good job! > > 4. Qemu 3.0 upgrade status (Ghada/Jim) > Qemu basically ready to push. > AR: Saul to review the patches pushed up by Jim on the staging branch. Work w/ Dean to review and if it's good to go, will push. > pre-push validation has been done already. > > 5. Ceph upgrade status update (Vivian/Dehao) > Story and task created @ storyboard. Detailed doc created, including every patch. Break down big commits into small patches, new PR sent. 6 WR patches added to this PR. Waiting for Dean's review. > Scott provided some comments on the Ceph upgrade in openstack Gerrit - patch updated to address the comments. > ISO image can be built and is under testing. 32 patches rebased and included in this new ISO image. > Dedicated storage node environment setup is needed from the Dev side. If you need help, contact Ada... > > 6. Opens (all) > 6.1 kernel driver upgrade testing status update (Ada/Ricardo/Shuicheng) We have successfully installed the Lewis Hill, QAT 8970 Adapter, X16,OEM,S,DUAL-CC - J39967-001. After some issues with our infrastructure, we finally started the proper installation of the duplex configuration; today we expect to run some of the tests to see whether any issues are seen. We are using the latest ISO provided by Shuicheng.
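As a hedged illustration of the symbol-matching point above (the module name and path are examples only, not the actual StarlingX out-of-tree driver list), one can compare the kernel a prebuilt module was built for against the running kernel:

    # vermagic records the kernel version the module was built against
    modinfo -F vermagic /lib/modules/$(uname -r)/extra/i40e.ko
    uname -r
    # if the two disagree, the module must be rebuilt against the new kernel source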
> > -----Original Appointment----- > From: Xie, Cindy > Sent: Monday, November 5, 2018 2:27 PM > To: Xie, Cindy; Wold, Saul; 'Rowsell, Brent'; Jones, Bruce E; Troyer, > Dean; 'Khalil, Ghada'; Waheed, Numan; Lin, Shuicheng; Zhu, Vivian; > Shang, Dehao; Liu, ZhipengS; Hu, Yong; Sun, Austin; Somerville, Jim; > starlingx-discuss at lists.starlingx.io > Cc: Perez Carranza, Jose; 'Hellmann, Gil'; 'Waines, Greg'; 'Chen, > Jacky'; Armstrong, Robert H; Perez Rodriguez, Humberto I; Martinez > Landa, Hayde; Martinez Monroy, Elio; Hu, Wei W; 'Seiler, Glenn'; > 'Eslimi, Dariush'; Gomez, Juan P; Lara, Cesar; 'Young, Ken'; Arce > Moreno, Abraham; Cobbley, David A > Subject: Weekly StarlingX non-OpenStack Distro meeting > When: Wednesday, December 5, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). > Where: https://zoom.us/j/342730236 > > > . Cadence and time slot: > o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . > Call Details: > o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: > o Dial(for higher quality, dial a number based on your current > location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 > 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ > . Meeting Agenda and Minutes: > o https://etherpad.openstack.org/p/stx-distro-other > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Wed Dec 5 15:34:01 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 5 Dec 2018 15:34:01 +0000 Subject: [Starlingx-discuss] Community meeting Dec 5th agenda and notes Message-ID: <9A85D2917C58154C960D95352B22818BB1EC29CC@fmsmsx117.amr.corp.intel.com> Agenda and notes - Dec 5th call * Please register for the January community meeting so we can get a count for logistics (meals, etc...): https://starlingx_jan2019meetup.eventbrite.com * Discuss planning process and preparation for the community meet-up. o Should the agenda be focused on what the teams are doing? On what features we want to deliver? Or both? ? Ian - split the agenda into: Short term items, big long term items, CI/CD/test, Process & Governance ? Bruce - my goal for the meeting is to finalize the release plan - content and schedule ? Ian - another goal is for the TSC to align on the vision for the project * How do we manage / do planning for features that span sub-project? How do we track the progress of large complex multi-story features? ? Adopting a higher level planning tool - e.g. Jira, Trello, etc...? ? For now using multiple Etherpads to manage and track complex projects o Discuss alignment with OpenStack milestones and the planning assumptions built into that model. Release plan to be reviewed with TSC tomorrow. * Discuss Multi-OS and Zuul project team status since we ran out of time in the Nov 28th meeting. o Multi-OS status: (Bruce) ? Spec review status * Specs still pending: https://review.openstack.org/#/q/status:open+AND+project:%255Eopenstack/stx-specs o https://review.openstack.org/#/c/619801/ o https://review.openstack.org/#/c/621033/ ? Victor to fix the tox failures (trailing white space/etc...). ? DevStack status * https://review.openstack.org/#/c/620988/ - needs reviewers * https://review.openstack.org/#/c/620806/ - Has self WF-1 from Yi while he runs some last minute checks * https://review.openstack.org/#/c/616402/ - Mingyuan to respond to Dean's feedback ? 
Zuul status (Cesar) * Has been a lower priority. There has been some background activity. The team is not meeting. Cesar is looking for volunteers to join the team! * Should this be part of the Test team? * We should add Test Strategy to the community meeting agenda and have a comprehensive discussion. * Question from the team working on the FlexRAN demo: By default, Ironic services are NOT enabled in a StarlingX system. What were the reasons behind this in the first place? BTW, they were not enabled by default in Titanium R5, according to the feedback from the FlexRAN team. o This was a decision made at the seed code release to make it an optional feature to cut down on footprint. Only useful for bare metal workloads. This will all be changing in the next release with the new container architecture, which will reduce the need for bare metal support (use containers instead). Bruce to follow up with the team to make sure they are OK with manual steps. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Wed Dec 5 15:49:19 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 5 Dec 2018 15:49:19 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 References: <2FD5DDB5A04D264C80D42CA35194914F35DF048E@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DF0691@SHSMSX104.ccr.corp.intel.com> BTW, Victor, for the current build system, because we are building from source for every build, this is not a problem - it will be guaranteed. However, you brought up a good point that this needs to be kept in mind once our build system moves to Koji and RPMs are retained in the Koji repo (like today's mirror). In that case, we have to keep in mind that when certain key components get upgraded (e.g. kernel, toolchain, etc.), we have to re-build the impacted pkgs. Thx. - cindy -----Original Message----- From: Xie, Cindy Sent: Wednesday, December 5, 2018 11:43 PM To: 'Victor Rodriguez' ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 Victor, Your comments well received, updated minutes here: https://etherpad.openstack.org/p/stx-distro-other + starlingX ML for the correction. thx. - cindy -----Original Message----- From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] Sent: Wednesday, December 5, 2018 11:28 PM To: Xie, Cindy Subject: Re: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 On Wed, Dec 5, 2018 at 8:35 AM Xie, Cindy wrote: > > Agenda & Notes for 12/5 meeting: > 1. minor kernel version upgrade to 3.10.0.957 (Ken) > https://bugs.launchpad.net/starlingx/+bug/1805759 > security concerns regarding current version, want to upgrade the kernel to 957. Want to get the upgraded kernel to master sooner. Requirement is to get to the kernel version soon in StarlingX master. > This minor kernel version is the one in CentOS 7.1810 release: https://storyboard.openstack.org/#!/story/2004521 > from implemenation: kernel upgrade in higher priority and needs to be done in master. > Shuicheng: only rpm version available, sRPM not available yet - AR: Shuicheng to monitor the CentOS package page. > out of tree driver Kernel drivers upgrade: not sure yet if those kernel drivers (drdb, QAT, etc) have been upgraded to 1810 yet. Can seperate this work as other workitem. > Victor: shouldn't we re-build with the new kernel or not? The answer is YES.
I said : Should we rebuild the drivers with the new kernel , please send correction to notes All the drivers shoudl be build agains the kernel source code otherwise the ELF file will not patch with the symbols > 2. CentOS 7.6 upgrade planning (Saul) > - get a list of sRPM/RPM packages that being upgrade in CentOS7.6, size the work item (AR: Shuicheng) > - Feature branch creation (AR to Dean, needs to get the rights from Dean to create feature branch. AR for Shuicheng to contact Dean. Rebase from master on weekly basis. ) > CentOS7.1810 was annouced on Dec 3rd. > storyboard created: https://storyboard.openstack.org/#!/story/2004522, will add tasks once sRPM identified. > > 3. non-Openstack patch refactoring status (Zhipeng) > finished most of the tasks for init/config. Only one patch still pending for RabitMQ for final review. > Saul will fire more storyboards when he finds more init/config issue. > last week we find other related stories, 7 stories finished coding, 2 merged, 3 under review, 2 is invalid. > 167 patch reduction in total now out of 402 patches - good job! > > 4. Qemu 3.0 upgrade status (Ghada/Jim) > Qemu basically ready to push. > AR: Saul to review the patches pushed up by Jim on staging branch. Work w/ Dean to review and if it's good to go, will push. > pre-push validation has been done already. > > 5. Ceph upgrade status update (Vivian/Dehao) > Story and task created @ storyboard. Detail doc created and including every patches. Break down big commits into small patches, new PR sent. 6 WR patches added to this PR. Waiting for Dean's review. > Scott provided some comments on Ceph upgrade in openstack Gerrit - patch updated to address the comments. > ISO image can be built and under testing. 32 patches rebased and included in this new ISO image. > dedicated storage node environment setting from Dev side. If you need help, contact Ada... > > 6. Opens (all) > 6.1 kernel driver upgrade testing status update (Ada/Ricardo/Shuicheng) We have successfully installed the Lewis Hill, QAT 8970 Adapter, X16,OEM,S,DUAL-CC - J39967-001, after some issues with our infrastructure, we finally started the proper installation of duplex configuration, today we expect to run some of the tests to see if some issue are seen or not. We are using the latest ISO provided by Shuicheng. > > -----Original Appointment----- > From: Xie, Cindy > Sent: Monday, November 5, 2018 2:27 PM > To: Xie, Cindy; Wold, Saul; 'Rowsell, Brent'; Jones, Bruce E; Troyer, > Dean; 'Khalil, Ghada'; Waheed, Numan; Lin, Shuicheng; Zhu, Vivian; > Shang, Dehao; Liu, ZhipengS; Hu, Yong; Sun, Austin; Somerville, Jim; > starlingx-discuss at lists.starlingx.io > Cc: Perez Carranza, Jose; 'Hellmann, Gil'; 'Waines, Greg'; 'Chen, > Jacky'; Armstrong, Robert H; Perez Rodriguez, Humberto I; Martinez > Landa, Hayde; Martinez Monroy, Elio; Hu, Wei W; 'Seiler, Glenn'; > 'Eslimi, Dariush'; Gomez, Juan P; Lara, Cesar; 'Young, Ken'; Arce > Moreno, Abraham; Cobbley, David A > Subject: Weekly StarlingX non-OpenStack Distro meeting > When: Wednesday, December 5, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). > Where: https://zoom.us/j/342730236 > > > . Cadence and time slot: > o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . 
> Call Details: > o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: > o Dial(for higher quality, dial a number based on your current > location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 > 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ > . Meeting Agenda and Minutes: > o https://etherpad.openstack.org/p/stx-distro-other > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From jim.somerville at windriver.com Wed Dec 5 16:17:38 2018 From: jim.somerville at windriver.com (Jim Somerville) Date: Wed, 5 Dec 2018 11:17:38 -0500 Subject: [Starlingx-discuss] Qemu 3.0.0 is ready for your consideration In-Reply-To: References: <0add847a-267d-1706-e57f-5ee5a6bd34b5@windriver.com> Message-ID: <4a538cb6-2d94-5799-70dd-9d0cca2e6682@windriver.com> On 2018-12-05 9:51 a.m., Dean Troyer wrote: > On Tue, Dec 4, 2018 at 3:13 PM Jim Somerville > wrote: >> The update to 3.0.0 consists of two parts, a pull request for the >> stx-qemu repo, and the piece in stx-integ which deals with the >> compilation. Like last time with libvirt, we have to commit these two >> parts at the same time. > > Since the 3.0 work is going into a new branch I think we are OK to go > ahead and commit that to stx-qemu before the stx-integ change unless I > am forgetting something? It will be the change to stx-manifest > switching to the new branch that will need to be coordinated with > 622583 in stx-integ and that can be done with a Depends-On footer. That's correct; the branch creation is fine, it is the switching over to it that has to be coordinated. > >> The new qemu is here, and I will push a new branch and issue a pull >> request to it once I'm done dealing with feedback. >> https://github.com/jsomervi/stx-qemu/commits/working-3.0.0-noavp-12 > > I have created stx-qemu branch stx/v3.0.0 from upstream qemu at sha > 38441756b70eec5807b5f60dad11a93a91199866 "Update version for v3.0.0 > release", matching what you have in [0]. Target your PRs at that and > we should be good to go. Thanks, will do. > >> The stx-integ part for review is here: >> https://review.openstack.org/#/c/622583/ > > As I mentioned above, we should also queue up a review to stx-manifest > that adds revision="stx/v3.0.0" to stx-qemu. This should depend on > 622583 and be +W before 622583, it will go through the gate test but > be blocked from merging until 622583 merges so the window between them > merging will be fairly small. OK, I'll go ahead and do that. -Jim > > dt > > [0] https://review.openstack.org/#/c/622583/1/virt/qemu/centos/build_srpm.data > From Brent.Rowsell at windriver.com Wed Dec 5 16:32:32 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 5 Dec 2018 16:32:32 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DF0691@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35DF048E@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DF0691@SHSMSX104.ccr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB343903@ALA-MBD.corp.ad.wrs.com> I would think our build system moving forward would still have the ability to rebuild dependencies as it does today. Can you clarify the second sentence?
Thanks, Brent -----Original Message----- From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, December 5, 2018 10:49 AM To: Victor Rodriguez ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 BTW, Victor, for the current build system, because we are building from source for every build, this is not a problem - it will be guaranteed. However, you brought up a good points that this needs to be kept in mind once our build system moving to Koji, and RPM will be retained in Koji repo (like today's mirror). In this case, we have to keep in mind that when certain key components got upgraded (e.g kernel, toolchain, etc), then we have to re-build the impacted pkgs. Thx. - cindy -----Original Message----- From: Xie, Cindy Sent: Wednesday, December 5, 2018 11:43 PM To: 'Victor Rodriguez' ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 Victor, Your comments well received, updated minutes here: https://etherpad.openstack.org/p/stx-distro-other + starlingX ML for the correction. thx. - cindy -----Original Message----- From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] Sent: Wednesday, December 5, 2018 11:28 PM To: Xie, Cindy Subject: Re: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 On Wed, Dec 5, 2018 at 8:35 AM Xie, Cindy wrote: > > Agenda & Notes for 12/5 meeting: > 1. minor kernel version upgrade to 3.10.0.957 (Ken) > https://bugs.launchpad.net/starlingx/+bug/1805759 > security concerns regarding current version, want to upgrade the kernel to 957. Want to get the upgraded kernel to master sooner. Requirement is to get to the kernel version soon in StarlingX master. > This minor kernel version is the one in CentOS 7.1810 release: https://storyboard.openstack.org/#!/story/2004521 > from implemenation: kernel upgrade in higher priority and needs to be done in master. > Shuicheng: only rpm version available, sRPM not available yet - AR: Shuicheng to monitor the CentOS package page. > out of tree driver Kernel drivers upgrade: not sure yet if those kernel drivers (drdb, QAT, etc) have been upgraded to 1810 yet. Can seperate this work as other workitem. > Victor: shouldn't we re-build with the new kernel or not? The answer is YES. I said : Should we rebuild the drivers with the new kernel , please send correction to notes All the drivers shoudl be build agains the kernel source code otherwise the ELF file will not patch with the symbols > 2. CentOS 7.6 upgrade planning (Saul) > - get a list of sRPM/RPM packages that being upgrade in CentOS7.6, size the work item (AR: Shuicheng) > - Feature branch creation (AR to Dean, needs to get the rights from Dean to create feature branch. AR for Shuicheng to contact Dean. Rebase from master on weekly basis. ) > CentOS7.1810 was annouced on Dec 3rd. > storyboard created: https://storyboard.openstack.org/#!/story/2004522, will add tasks once sRPM identified. > > 3. non-Openstack patch refactoring status (Zhipeng) > finished most of the tasks for init/config. Only one patch still pending for RabitMQ for final review. > Saul will fire more storyboards when he finds more init/config issue. > last week we find other related stories, 7 stories finished coding, 2 merged, 3 under review, 2 is invalid. > 167 patch reduction in total now out of 402 patches - good job! > > 4. Qemu 3.0 upgrade status (Ghada/Jim) > Qemu basically ready to push. 
> AR: Saul to review the patches pushed up by Jim on staging branch. Work w/ Dean to review and if it's good to go, will push. > pre-push validation has been done already. > > 5. Ceph upgrade status update (Vivian/Dehao) > Story and task created @ storyboard. Detail doc created and including every patches. Break down big commits into small patches, new PR sent. 6 WR patches added to this PR. Waiting for Dean's review. > Scott provided some comments on Ceph upgrade in openstack Gerrit - patch updated to address the comments. > ISO image can be built and under testing. 32 patches rebased and included in this new ISO image. > dedicated storage node environment setting from Dev side. If you need help, contact Ada... > > 6. Opens (all) > 6.1 kernel driver upgrade testing status update (Ada/Ricardo/Shuicheng) We have successfully installed the Lewis Hill, QAT 8970 Adapter, X16,OEM,S,DUAL-CC - J39967-001, after some issues with our infrastructure, we finally started the proper installation of duplex configuration, today we expect to run some of the tests to see if some issue are seen or not. We are using the latest ISO provided by Shuicheng. > > -----Original Appointment----- > From: Xie, Cindy > Sent: Monday, November 5, 2018 2:27 PM > To: Xie, Cindy; Wold, Saul; 'Rowsell, Brent'; Jones, Bruce E; Troyer, > Dean; 'Khalil, Ghada'; Waheed, Numan; Lin, Shuicheng; Zhu, Vivian; > Shang, Dehao; Liu, ZhipengS; Hu, Yong; Sun, Austin; Somerville, Jim; > starlingx-discuss at lists.starlingx.io > Cc: Perez Carranza, Jose; 'Hellmann, Gil'; 'Waines, Greg'; 'Chen, > Jacky'; Armstrong, Robert H; Perez Rodriguez, Humberto I; Martinez > Landa, Hayde; Martinez Monroy, Elio; Hu, Wei W; 'Seiler, Glenn'; > 'Eslimi, Dariush'; Gomez, Juan P; Lara, Cesar; 'Young, Ken'; Arce > Moreno, Abraham; Cobbley, David A > Subject: Weekly StarlingX non-OpenStack Distro meeting > When: Wednesday, December 5, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). > Where: https://zoom.us/j/342730236 > > > . Cadence and time slot: > o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . > Call Details: > o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: > o Dial(for higher quality, dial a number based on your current > location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 > 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ > . 
Meeting Agenda and Minutes: > o https://etherpad.openstack.org/p/stx-distro-other > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From vm.rod25 at gmail.com Wed Dec 5 16:58:48 2018 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Wed, 5 Dec 2018 10:58:48 -0600 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB343903@ALA-MBD.corp.ad.wrs.com> References: <2FD5DDB5A04D264C80D42CA35194914F35DF048E@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DF0691@SHSMSX104.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB343903@ALA-MBD.corp.ad.wrs.com> Message-ID: On Wed, Dec 5, 2018 at 10:33 AM Rowsell, Brent wrote: > > I would think our build system moving forward would still have the ability to rebuild dependencies as it does today. > Can you clarify the second sentence ? > > Thanks, > > Brent > You are right, Brent: despite what direction we move in the future, the system will still have the ability to rebuild dependencies as it does today. > -----Original Message----- > From: Xie, Cindy [mailto:cindy.xie at intel.com] > Sent: Wednesday, December 5, 2018 10:49 AM > To: Victor Rodriguez ; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 > > BTW, Victor, for the current build system, because we are building from source for every build, this is not a problem - it will be guaranteed. > This is not what I meant. Kernel modules need to be compiled a bit differently from regular userspace apps. For example, a simple Makefile for a module could be:

    obj-m += hello-1.o

    all:
            make -C /lib/modules/$(shell uname -r)/build M=$(PWD) modules

where you need to have the source code of the kernel as a reference. That was my question, and it was clear in the meeting that it will be done. > However, you brought up a good point that this needs to be kept in mind once our build system moves to Koji and RPMs are retained in the Koji repo (like today's mirror). In that case, we have to keep in mind that when certain key components get upgraded (e.g. kernel, toolchain, etc.), we have to re-build the impacted pkgs. We can treat this in a separate mail so as not to add more noise to these meeting notes. In the build meeting we clarified this point of generating the necessary tools to deal with dependencies if we choose the Koji option in the future. Sorry for the confusion, Brent and others. > > Thx. - cindy > > -----Original Message----- > From: Xie, Cindy > Sent: Wednesday, December 5, 2018 11:43 PM > To: 'Victor Rodriguez' ; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 > > Victor, > Your comments well received, updated minutes here: > https://etherpad.openstack.org/p/stx-distro-other > > + starlingX ML for the correction. > > thx.
- cindy > > -----Original Message----- > From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] > Sent: Wednesday, December 5, 2018 11:28 PM > To: Xie, Cindy > Subject: Re: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5 > > On Wed, Dec 5, 2018 at 8:35 AM Xie, Cindy wrote: > > > > Agenda & Notes for 12/5 meeting: > > 1. minor kernel version upgrade to 3.10.0.957 (Ken) > > https://bugs.launchpad.net/starlingx/+bug/1805759 > > security concerns regarding current version, want to upgrade the kernel to 957. Want to get the upgraded kernel to master sooner. Requirement is to get to the kernel version soon in StarlingX master. > > This minor kernel version is the one in CentOS 7.1810 release: https://storyboard.openstack.org/#!/story/2004521 > > from implemenation: kernel upgrade in higher priority and needs to be done in master. > > Shuicheng: only rpm version available, sRPM not available yet - AR: Shuicheng to monitor the CentOS package page. > > out of tree driver Kernel drivers upgrade: not sure yet if those kernel drivers (drdb, QAT, etc) have been upgraded to 1810 yet. Can seperate this work as other workitem. > > Victor: shouldn't we re-build with the new kernel or not? The answer is YES. > I said : > > Should we rebuild the drivers with the new kernel , please send correction to notes > > All the drivers shoudl be build agains the kernel source code otherwise the ELF file will not patch with the symbols > > > 2. CentOS 7.6 upgrade planning (Saul) > > - get a list of sRPM/RPM packages that being upgrade in CentOS7.6, size the work item (AR: Shuicheng) > > - Feature branch creation (AR to Dean, needs to get the rights from Dean to create feature branch. AR for Shuicheng to contact Dean. Rebase from master on weekly basis. ) > > CentOS7.1810 was annouced on Dec 3rd. > > storyboard created: https://storyboard.openstack.org/#!/story/2004522, will add tasks once sRPM identified. > > > > 3. non-Openstack patch refactoring status (Zhipeng) > > finished most of the tasks for init/config. Only one patch still pending for RabitMQ for final review. > > Saul will fire more storyboards when he finds more init/config issue. > > last week we find other related stories, 7 stories finished coding, 2 merged, 3 under review, 2 is invalid. > > 167 patch reduction in total now out of 402 patches - good job! > > > > 4. Qemu 3.0 upgrade status (Ghada/Jim) > > Qemu basically ready to push. > > AR: Saul to review the patches pushed up by Jim on staging branch. Work w/ Dean to review and if it's good to go, will push. > > pre-push validation has been done already. > > > > 5. Ceph upgrade status update (Vivian/Dehao) > > Story and task created @ storyboard. Detail doc created and including every patches. Break down big commits into small patches, new PR sent. 6 WR patches added to this PR. Waiting for Dean's review. > > Scott provided some comments on Ceph upgrade in openstack Gerrit - patch updated to address the comments. > > ISO image can be built and under testing. 32 patches rebased and included in this new ISO image. > > dedicated storage node environment setting from Dev side. If you need help, contact Ada... > > > > 6. 
Opens (all) > > 6.1 kernel driver upgrade testing status update (Ada/Ricardo/Shuicheng) We have successfully installed the Lewis Hill, QAT 8970 Adapter, X16,OEM,S,DUAL-CC - J39967-001, after some issues with our infrastructure, we finally started the proper installation of duplex configuration, today we expect to run some of the tests to see if some issue are seen or not. We are using the latest ISO provided by Shuicheng. > > > > -----Original Appointment----- > > From: Xie, Cindy > > Sent: Monday, November 5, 2018 2:27 PM > > To: Xie, Cindy; Wold, Saul; 'Rowsell, Brent'; Jones, Bruce E; Troyer, > > Dean; 'Khalil, Ghada'; Waheed, Numan; Lin, Shuicheng; Zhu, Vivian; > > Shang, Dehao; Liu, ZhipengS; Hu, Yong; Sun, Austin; Somerville, Jim; > > starlingx-discuss at lists.starlingx.io > > Cc: Perez Carranza, Jose; 'Hellmann, Gil'; 'Waines, Greg'; 'Chen, > > Jacky'; Armstrong, Robert H; Perez Rodriguez, Humberto I; Martinez > > Landa, Hayde; Martinez Monroy, Elio; Hu, Wei W; 'Seiler, Glenn'; > > 'Eslimi, Dariush'; Gomez, Juan P; Lara, Cesar; 'Young, Ken'; Arce > > Moreno, Abraham; Cobbley, David A > > Subject: Weekly StarlingX non-OpenStack Distro meeting > > When: Wednesday, December 5, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). > > Where: https://zoom.us/j/342730236 > > > > > > . Cadence and time slot: > > o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . > > Call Details: > > o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: > > o Dial(for higher quality, dial a number based on your current > > location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 > > 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ > > . Meeting Agenda and Minutes: > > o https://etherpad.openstack.org/p/stx-distro-other > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From erich.cordoba.malibran at intel.com Wed Dec 5 17:47:52 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Wed, 5 Dec 2018 17:47:52 +0000 Subject: [Starlingx-discuss] Centos Distro Direction In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE54214@SHSMSX101.ccr.corp.intel.com> References: <9aed8be1-2aca-c537-2d48-25bac32354ed@linux.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAF79@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FE54214@SHSMSX101.ccr.corp.intel.com> Message-ID: BTW, I created this story to update the installer from CentOS 7.4 to CentOS 7.6, in case anyone wants to participate. https://storyboard.openstack.org/#!/story/2004516 -Erich On Tue, 2018-12-04 at 08:42 +0000, Lin, Shuicheng wrote: > It seems just rpm package is released, but srpm is not released yet. > I will keep check it recently. 
> > Best Regards > Shuicheng > > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Tuesday, December 4, 2018 1:03 AM > To: Xie, Cindy ; starlingx-discuss at lists.starlin > gx.io > Subject: Re: [Starlingx-discuss] Centos Distro Direction > > > > On 12/3/18 7:43 AM, Xie, Cindy wrote: > > Seems like that CentOS 7 just announced 1810 release today (guess > > this is 7.6): > > > > > https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.1810?highlight=% > 28%28Manuals%7CReleaseNotes%7CCentOS7.1810%29%29 > > > > Yup, timing is! > > Cindy, can you please put this on the agenda for the next non- > openstack Distro meeting. > > We also have a topic for the TSC (thanks BruceJ) the following > morning, TSC members may want to start weighing in here regarding my > initial proposal below, which we can talk more about on Thursday. > > > Thanks > Sau! > > > > thx. - cindy > > > > -----Original Message----- > > From: Saul Wold [mailto:sgw at linux.intel.com] > > Sent: Saturday, December 1, 2018 7:49 AM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Centos Distro Direction > > > > > > Folks, > > > > As we move forward into the spring release (Stein based), we will > > also > > be dealing with another CentOS update. RHEL has already released > > the > > 7.6 Update on Oct 30th, typically we should expect the CentOS 7.6 > > update shortl, about 30 days after RHEL releases. > > > > We should do the 7.6 Update as we did the 7.5 Update on a feature > > branch, it took about 2 months last time (including initial setup, > > rebasing, and de-fuzzing), I expect it will be shorter this time > > based on our past learning. > > > > We should start out with creating the feature branches (I will work > > with Dean on this) for stx-integ, stx-root, stx-tools, and stx- > > upstream repos. When we start the work, we need to remember to > > rebase the feature branches regularly and check for patch fuzzing > > issues. > > > > Cindy, can you please put this on your agenda for the next Non- > > Openstack Distro meeting. > > > > While on the topic of Cento Distro updates, many of you may have > > heard > > that RHEL 8 Beta was announced on Nov 14 [0], while this is not a > > CentOS release we should start thinking about that upgrade as it > > will > > be a larger effort as it includes the 4.18 kernel (alas not the > > 4.19 > > LTS > > kernel) along with many other upgrades. We should start a feature > > branch for CentOS 8 as well to do the updates, This will help > > reduce > > some of the patch load from the backported patches. Since we > > don't > > know exactly when CentOS 8 will be available this should be a > > Train-based release target (Fall 2019) (at the earliest) > > > > [0] > > https://www.redhat.com/en/blog/powering-its-future-while-preserving > > -pr > > esent-introducing-red-hat-enterprise-linux-8-beta > > > > Thanks > > Sau! 
> > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus > > s > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Wed Dec 5 17:48:38 2018 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 5 Dec 2018 09:48:38 -0800 Subject: [Starlingx-discuss] [Container] Public docker registry In-Reply-To: <97ea7b9b-449a-8d21-8135-9d63b9feebf3@windriver.com> References: <38d5e899-8741-18f9-88a7-47153c34f982@windriver.com> <97ea7b9b-449a-8d21-8135-9d63b9feebf3@windriver.com> Message-ID: <2f7ec742-c9be-f5ed-5b68-6dab7063582b@linux.intel.com> On 12/4/18 1:26 PM, Scott Little wrote: > An alternate schema, and the one in current use, places the os and > openstack-release under the tag section. > > This has the advantage of lower administrative overhead.  It takes > 'admin' powers to create a new , whereas anyone with write > permissions can create a new . > > Lets call this version 2. > Seems like a good plan with one suggestion below. > * > * > > *Image naming schema* > > =/: > > =starlingx > > =stx- > > =--[-] > I am wondering if it would make more sense to keep the item in the part. It's going to be more stable over time than the other parts. my 2 cents Sau! > =centos | ubuntu | clear-linux > > =pike | queens | rocky ... > > =aodh | ceilometer| cinder | glance | gnocchi | heat | horizon | ironic | keystone | libvirt | magnum | murano | neutron | nova-api-proxy | nova | panko ... > > = | latest | stable > > =dev | r2018.10 | r2018.10.0 | ... > > Note: 'dev' replaces 'master' > > > On 18-12-04 01:29 PM, Scott Little wrote: >> >> Here is my proposal for the StarlingX docker repository. >> >> **Docker repository location** >> >> - hub.docker.com, as a public set of repositories under the >> organization 'starlingx'** >> ** >> >> *Build frequency* >> >> - On demand for release/milestone branches >> >> - Will probably start with daily for master branch.  Perhaps when >> things stabilize we'll reduce build frequency, or even use commit >> driven builds. >> >> *Retention policy >> * >> >> - Perhaps two weeks for master branch builds?  but always one 'stable' >> build (see below) >> >> - Will start with daily for master branch.  Perhaps when things >> stabilize we'll reduce build frequency, or even use commit driven builds. >> >> *Image naming schema* >> >> =/: >> =starlingx >> =stx--- >> >> = | [-] >> >> =centos | ubuntu | clear-linux >> >> =pike | queens | rocky ... >> =aodh | ceilometer| cinder | glance | gnocchi | heat | horizon | ironic | keystone | libvirt | magnum | murano | neutron | nova-api-proxy | nova | panko ... >> >> = | latest | stable >> >> =master | r2018.10 | r2018.10.0 | ... >> >> Note: we can't have the '/' or ':' character in a branch name. So >> r/2018.10 would have to be shortened to 'r2018.10'. >> However i think it's better to use the tag to allow for rebuilds of a >> release '2018.10.0'. My only concern here is that our current git >> tagging convention doesn't distinguish release from milestone.  I >> would prefer a 'r' or 'm' prefix on our git tags. 
>> >> Note: the 'latest' or 'stable' qualifiers would be aliases to the >> timestamped image.  'Stable' might be over selling it on master >> branch... perhaps some other term... 'tested', 'usable'? >> >> >> e.g. >> >> starlingx/stx-centos-pike-nova:master-20181201 >> starlingx/stx-centos-pike-nova:master-20181202 >> starlingx/stx-centos-pike-nova:master-20181203 >> starlingx/stx-centos-pike-nova:master-latest -> master-20181203 >> starlingx/stx-centos-pike-nova:master-stable -> master-20181201 >> >> starlingx/stx-centos-pike-nova:r2018.10.0 >> starlingx/stx-centos-pike-nova:r2018.10.1 >> starlingx/stx-centos-pike-nova:r2018.10-latest -> r2018.10.1 >> >> Comments? >> >> Scott >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Don.Penney at windriver.com Wed Dec 5 17:55:26 2018 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 5 Dec 2018 17:55:26 +0000 Subject: [Starlingx-discuss] [Container] Public docker registry In-Reply-To: <2f7ec742-c9be-f5ed-5b68-6dab7063582b@linux.intel.com> References: <38d5e899-8741-18f9-88a7-47153c34f982@windriver.com> <97ea7b9b-449a-8d21-8135-9d63b9feebf3@windriver.com> <2f7ec742-c9be-f5ed-5b68-6dab7063582b@linux.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA405710@ALA-MBD.corp.ad.wrs.com> Having the openstack release and os as components of the tag vs the image name is consistent with loci and openstackhelm. I don't know if there are upgrades or patching considerations to keeping the same image name, but that could also be an argument. -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Wednesday, December 05, 2018 12:49 PM To: Little, Scott; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Container] Public docker registry On 12/4/18 1:26 PM, Scott Little wrote: > An alternate schema, and the one in current use, places the os and > openstack-release under the tag section. > > This has the advantage of lower administrative overhead.  It takes > 'admin' powers to create a new , whereas anyone with write > permissions can create a new . > > Lets call this version 2. > Seems like a good plan with one suggestion below. > * > * > > *Image naming schema* > > =/: > > =starlingx > > =stx- > > =--[-] > I am wondering if it would make more sense to keep the item in the part. It's going to be more stable over time than the other parts. my 2 cents Sau! > =centos | ubuntu | clear-linux > > =pike | queens | rocky ... > > =aodh | ceilometer| cinder | glance | gnocchi | heat | horizon | ironic | keystone | libvirt | magnum | murano | neutron | nova-api-proxy | nova | panko ... > > = | latest | stable > > =dev | r2018.10 | r2018.10.0 | ... > > Note: 'dev' replaces 'master' > > > On 18-12-04 01:29 PM, Scott Little wrote: >> >> Here is my proposal for the StarlingX docker repository. >> >> **Docker repository location** >> >> - hub.docker.com, as a public set of repositories under the >> organization 'starlingx'** >> ** >> >> *Build frequency* >> >> - On demand for release/milestone branches >> >> - Will probably start with daily for master branch.  Perhaps when >> things stabilize we'll reduce build frequency, or even use commit >> driven builds. 
>> >> *Retention policy* >> >> - Perhaps two weeks for master branch builds?  but always one 'stable' >> build (see below) >> >> - Will start with daily for master branch.  Perhaps when things >> stabilize we'll reduce build frequency, or even use commit driven builds. >>
>> *Image naming schema*
>> <image-name>=<repo>/<image>:<tag>
>> <repo>=starlingx
>> <image>=stx-<os>-<openstack-release>-<service>
>> <tag>=<branch> | <branch>[-<build-info>]
>> <os>=centos | ubuntu | clear-linux
>> <openstack-release>=pike | queens | rocky ...
>> <service>=aodh | ceilometer | cinder | glance | gnocchi | heat | horizon | ironic | keystone | libvirt | magnum | murano | neutron | nova-api-proxy | nova | panko ...
>> <build-info>=<timestamp> | latest | stable
>> <branch>=master | r2018.10 | r2018.10.0 | ...
>>
>> Note: we can't have the '/' or ':' character in a branch name. So >> r/2018.10 would have to be shortened to 'r2018.10'. >> However I think it's better to use the tag to allow for rebuilds of a >> release '2018.10.0'. My only concern here is that our current git >> tagging convention doesn't distinguish release from milestone.  I >> would prefer an 'r' or 'm' prefix on our git tags. >>
>> Note: the 'latest' or 'stable' qualifiers would be aliases to the >> timestamped image.  'Stable' might be overselling it on the master >> branch... perhaps some other term... 'tested', 'usable'? >> >> >> e.g. >>
>> starlingx/stx-centos-pike-nova:master-20181201
>> starlingx/stx-centos-pike-nova:master-20181202
>> starlingx/stx-centos-pike-nova:master-20181203
>> starlingx/stx-centos-pike-nova:master-latest -> master-20181203
>> starlingx/stx-centos-pike-nova:master-stable -> master-20181201
>>
>> starlingx/stx-centos-pike-nova:r2018.10.0
>> starlingx/stx-centos-pike-nova:r2018.10.1
>> starlingx/stx-centos-pike-nova:r2018.10-latest -> r2018.10.1
>>
>> Comments? >> >> Scott >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >
From vm.rod25 at gmail.com Wed Dec 5 19:07:52 2018 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Wed, 5 Dec 2018 13:07:52 -0600 Subject: [Starlingx-discuss] Centos Distro Direction In-Reply-To: References: <9aed8be1-2aca-c537-2d48-25bac32354ed@linux.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAF79@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FE54214@SHSMSX101.ccr.corp.intel.com> Message-ID:
On Wed, Dec 5, 2018 at 11:48 AM Cordoba Malibran, Erich wrote: > > BTW, I created this story to update the installer from CentOS 7.4 to > CentOS 7.6, in case anyone wants to participate. > > https://storyboard.openstack.org/#!/story/2004516 >
Thanks for making this, Erich. This is a topic that we discussed in last week's meeting, and I had the AR to send the mail about the need for 7.6 installers instead of 7.4. I am happy to see you share my point :) and took the initiative to create the story (good teamwork). Now, to clarify the other point of why Erich created the story with this justification: The installer is using CentOS 7.4 files; the upgrade to 7.6 is needed.
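As a quick sketch of what that manual initrd update involves (an illustration only, before the README below describes the files themselves -- the file names, module paths and compression here are assumptions, so check the actual format first with 'file initrd.img'):

  mkdir initrd-root && cd initrd-root
  # unpack, assuming an xz-compressed cpio archive (use zcat for gzip)
  xzcat ../initrd.img | cpio -idm
  # drop in the module built against the matching installer kernel
  cp /path/to/new/driver.ko usr/lib/modules/<kernel-version>/extra/
  depmod -b . <kernel-version>
  # repack; the kernel only accepts crc32 checksums for xz initrds
  find . | cpio -o -H newc | xz --check=crc32 -c > ../initrd.img.new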
If we read the document at https://git.starlingx.io/cgit/stx-metal/tree/installer/initrd/README we can see: There are three prebuilt files that we can update when we need to make changes to the installer:
- vmlinuz - The kernel
- initrd.img - Initial initrd loaded when the installer boots. Has kernel modules, etc, and loads the squashfs.img
- squashfs.img - Provides the rootfs for the installer, which includes components like anaconda
When we update the kernel and kernel modules for the installer, we need to update the initrd.img. This is a manual procedure currently and must be done. The initrd is an initial root file system that is mounted prior to when the real root file system is available. The initrd is bound to the kernel and loaded as part of the kernel boot procedure. The kernel then mounts this initrd as part of the two-stage boot process to load the modules to make the real file systems available and get at the real root file system. If we don't do this, the symbols will not link correctly [0]. The initrd contains a minimal set of directories and executables to achieve this, such as the insmod tool to install kernel modules into the kernel.
My concern at last week's build meeting was that by using an initrd.img with modules linked against the CentOS 7.5 kernel while the boot installer later tries to load the CentOS 7.6 kernel, we might hit problems that are hard to debug; it would be cleaner to use CentOS 7.6 with its own installer.
[0] https://www.ibm.com/developerworks/library/l-initrd/index.html
Regards Victor Rodriguez
> -Erich > > On Tue, 2018-12-04 at 08:42 +0000, Lin, Shuicheng wrote: > > It seems just the rpm packages are released, but the srpms are not released yet. > > I will keep checking on it. > > > > Best Regards > > Shuicheng > > > > > > -----Original Message----- > > From: Saul Wold [mailto:sgw at linux.intel.com] > > Sent: Tuesday, December 4, 2018 1:03 AM > > To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] Centos Distro Direction > > > > > > > > On 12/3/18 7:43 AM, Xie, Cindy wrote: > > > Seems like CentOS 7 just announced the 1810 release today (guess > > > this is 7.6): > > > > > > > > https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.1810?highlight=%28%28Manuals%7CReleaseNotes%7CCentOS7.1810%29%29 > > > > > > > Yup, timing is! > > > > Cindy, can you please put this on the agenda for the next non-openstack Distro meeting. > > > > We also have a topic for the TSC (thanks BruceJ) the following > > morning, TSC members may want to start weighing in here regarding my > > initial proposal below, which we can talk more about on Thursday. > > > > > > Thanks > > Sau! > > > > > > > thx. - cindy > > > > > > -----Original Message----- > > > From: Saul Wold [mailto:sgw at linux.intel.com] > > > Sent: Saturday, December 1, 2018 7:49 AM > > > To: starlingx-discuss at lists.starlingx.io > > > Subject: [Starlingx-discuss] Centos Distro Direction > > > > > > > > > Folks, > > > > > > As we move forward into the spring release (Stein based), we will > > > also be dealing with another CentOS update. RHEL has already released the > > > 7.6 Update on Oct 30th; typically we should expect the CentOS 7.6 > > > update shortly, about 30 days after RHEL releases. > > > > > > We should do the 7.6 Update as we did the 7.5 Update on a feature > > > branch; it took about 2 months last time (including initial setup, > > > rebasing, and de-fuzzing), I expect it will be shorter this time > > > based on our past learning.
> > > > We should start out with creating the feature branches (I will work > > > with Dean on this) for the stx-integ, stx-root, stx-tools, and stx-upstream repos. When we start the work, we need to remember to > > > rebase the feature branches regularly and check for patch fuzzing > > > issues. > > > > > > Cindy, can you please put this on your agenda for the next Non-Openstack Distro meeting. > > > > > > While on the topic of CentOS Distro updates, many of you may have heard > > > that RHEL 8 Beta was announced on Nov 14 [0]; while this is not a > > > CentOS release, we should start thinking about that upgrade as it will > > > be a larger effort, as it includes the 4.18 kernel (alas not the 4.19 LTS > > > kernel) along with many other upgrades. We should start a feature > > > branch for CentOS 8 as well to do the updates. This will help reduce > > > some of the patch load from the backported patches.  Since we don't > > > know exactly when CentOS 8 will be available, this should be a > > > Train-based release target (Fall 2019, at the earliest). > > > > > > [0] > > > https://www.redhat.com/en/blog/powering-its-future-while-preserving-present-introducing-red-hat-enterprise-linux-8-beta > > > > > > Thanks > > > Sau! > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From ada.cabrales at intel.com Wed Dec 5 22:27:25 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Wed, 5 Dec 2018 22:27:25 +0000 Subject: [Starlingx-discuss] [ Test ] Discussion of the Testing strategy Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7C37A44E@fmsmsx104.amr.corp.intel.com>
Hello, This meeting is for having a healthy discussion about the testing strategy for StarlingX. Everyone is welcome.
Ada
Zoom link: https://zoom.us/j/342730236
* Dialing in from phone:
* Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
* Meeting ID: 342 730 236
* International numbers available: https://zoom.us/u/ed95sU7aQ
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2248 bytes Desc: not available URL:
From claire at openstack.org Wed Dec 5 22:31:51 2018 From: claire at openstack.org (Claire Massey) Date: Wed, 5 Dec 2018 16:31:51 -0600 Subject: [Starlingx-discuss] [Newsletter] What's Happening in the Open Infrastructure Community Message-ID: <6EDAA5ED-07E5-47C5-A2DD-A3D03B1ED5BB@openstack.org>
Hi everyone, This week, the OSF team distributed the first Open Infrastructure Community Newsletter.
Our goal with the bi-weekly newsletter is to provide a digest of the latest developments and activities across open infrastructure projects, events and users. This week, the newsletter highlighted the Berlin Summit as well as brief updates from StarlingX and other OSF projects - OpenStack, Airship, Kata Containers and Zuul. You can check out the full newsletter on Superuser [1], and if you are interested in receiving the upcoming newsletters, you can subscribe here [2]. If you would like to contribute to a future newsletter or have feedback, please reach out to community at openstack.org.
Thanks, Claire
[1] http://superuser.openstack.org/articles/inside-open-infrastructure-newsletter1
[2] https://www.openstack.org/community/email-signup
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From cesar.lara at intel.com Wed Dec 5 22:52:36 2018 From: cesar.lara at intel.com (Lara, Cesar) Date: Wed, 5 Dec 2018 22:52:36 +0000 Subject: [Starlingx-discuss] [build][meetings] Build team meeting Agenda 12/6/2018 Message-ID: <0B566C62EC792145B40E29EFEBF1AB4710578584@fmsmsx104.amr.corp.intel.com>
Build team meeting Agenda 12/6/2018
- Change logs and release notes
- Follow up on ISO for releases and milestones
- StarlingX Docker repository
- opens
Regards Cesar Lara Software Engineering Manager OpenSource Technology Center
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From sgw at linux.intel.com Thu Dec 6 00:22:52 2018 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 5 Dec 2018 16:22:52 -0800 Subject: [Starlingx-discuss] [Container] Public docker registry In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA405710@ALA-MBD.corp.ad.wrs.com> References: <38d5e899-8741-18f9-88a7-47153c34f982@windriver.com> <97ea7b9b-449a-8d21-8135-9d63b9feebf3@windriver.com> <2f7ec742-c9be-f5ed-5b68-6dab7063582b@linux.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA405710@ALA-MBD.corp.ad.wrs.com> Message-ID: <891ce20f-f2ab-d6fd-3469-93c8823ceeac@linux.intel.com>
On 12/5/18 9:55 AM, Penney, Don wrote: > Having the openstack release and os as components of the tag vs the image name is consistent with loci and openstack-helm. I don't know if there are upgrade or patching considerations to keeping the same image name, but that could also be an argument. >
Ok, if there is an existing suggested format, I am OK with that. Will this be put into Specification format for TSC approval? Since we have some updated tag suggestions from Dean, that will affect the existing publishing specification also.
Sau!
> -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Wednesday, December 05, 2018 12:49 PM > To: Little, Scott; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Container] Public docker registry > > > > On 12/4/18 1:26 PM, Scott Little wrote: >> An alternate schema, and the one in current use, places the os and >> openstack-release under the tag section. >> >> This has the advantage of lower administrative overhead.  It takes >> 'admin' powers to create a new <image>, whereas anyone with write >> permissions can create a new <tag>. >> >> Let's call this version 2. >> > Seems like a good plan with one suggestion below. >>
>> *Image naming schema*
>> <image-name>=<repo>/<image>:<tag>
>> <repo>=starlingx
>> <image>=stx-<service>
>> <tag>=<os>-<openstack-release>-<branch>[-<build-info>]
> I am wondering if it would make more sense to keep the <openstack-release> item in the <image> part. It's going to be more > stable over time than the other parts. > > my 2 cents > > Sau!
> >> <os>=centos | ubuntu | clear-linux
>> <openstack-release>=pike | queens | rocky ...
>> <service>=aodh | ceilometer | cinder | glance | gnocchi | heat | horizon | ironic | keystone | libvirt | magnum | murano | neutron | nova-api-proxy | nova | panko ...
>> <build-info>=<timestamp> | latest | stable
>> <branch>=dev | r2018.10 | r2018.10.0 | ...
>> Note: 'dev' replaces 'master'
>>
>> On 18-12-04 01:29 PM, Scott Little wrote: >>> >>> Here is my proposal for the StarlingX docker repository. >>> >>> **Docker repository location** >>> >>> - hub.docker.com, as a public set of repositories under the >>> organization 'starlingx' >>> >>> *Build frequency* >>> >>> - On demand for release/milestone branches >>> >>> - Will probably start with daily for master branch.  Perhaps when >>> things stabilize we'll reduce build frequency, or even use commit >>> driven builds. >>> >>> *Retention policy* >>> >>> - Perhaps two weeks for master branch builds?  but always one 'stable' >>> build (see below) >>> >>> - Will start with daily for master branch.  Perhaps when things >>> stabilize we'll reduce build frequency, or even use commit driven builds. >>>
>>> *Image naming schema*
>>> <image-name>=<repo>/<image>:<tag>
>>> <repo>=starlingx
>>> <image>=stx-<os>-<openstack-release>-<service>
>>> <tag>=<branch> | <branch>[-<build-info>]
>>> <os>=centos | ubuntu | clear-linux
>>> <openstack-release>=pike | queens | rocky ...
>>> <service>=aodh | ceilometer | cinder | glance | gnocchi | heat | horizon | ironic | keystone | libvirt | magnum | murano | neutron | nova-api-proxy | nova | panko ...
>>> <build-info>=<timestamp> | latest | stable
>>> <branch>=master | r2018.10 | r2018.10.0 | ...
>>>
>>> Note: we can't have the '/' or ':' character in a branch name. So >>> r/2018.10 would have to be shortened to 'r2018.10'. >>> However I think it's better to use the tag to allow for rebuilds of a >>> release '2018.10.0'. My only concern here is that our current git >>> tagging convention doesn't distinguish release from milestone.  I >>> would prefer an 'r' or 'm' prefix on our git tags. >>> >>> Note: the 'latest' or 'stable' qualifiers would be aliases to the >>> timestamped image.  'Stable' might be overselling it on the master >>> branch... perhaps some other term... 'tested', 'usable'? >>> >>> >>> e.g. >>>
>>> starlingx/stx-centos-pike-nova:master-20181201
>>> starlingx/stx-centos-pike-nova:master-20181202
>>> starlingx/stx-centos-pike-nova:master-20181203
>>> starlingx/stx-centos-pike-nova:master-latest -> master-20181203
>>> starlingx/stx-centos-pike-nova:master-stable -> master-20181201
>>>
>>> starlingx/stx-centos-pike-nova:r2018.10.0
>>> starlingx/stx-centos-pike-nova:r2018.10.1
>>> starlingx/stx-centos-pike-nova:r2018.10-latest -> r2018.10.1
>>>
>>> Comments?
>>> >>> Scott >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From bruce.e.jones at intel.com Wed Dec 5 22:08:02 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 5 Dec 2018 22:08:02 +0000 Subject: [Starlingx-discuss] "Brent's Patch Reduction Plan" - Networking items Message-ID: <9A85D2917C58154C960D95352B22818BB1EC2DAC@fmsmsx117.amr.corp.intel.com>
I have been working on turning Brent's plan for patch reduction into actionable work items. As part of this effort, I discussed with Brent, Bill and Derek ways we can improve our tracking. We agreed to push all tracking out into the open, and until a better tool becomes available, to use Etherpads. You can find the Etherpad for the Networking part of the Patch Reduction Plan here: https://etherpad.openstack.org/p/stx-openstack-patch-refactoring-neutron. Please review and update as needed. I have updated this with the latest status I have from Forrest. I do not have the latest status from the Networking team beyond that, so there may be additional updates needed.
I don't know if we want to track this work within the Networking sub-project or within the distro.openstack sub-project. I'm setting up a global Etherpad [0] with links to the sub-project pads, so as long as we agree to use these pads I'm not sure it matters which sub-project owns the work. We will be able to track all of it across the sub-projects. We should decide, of course. Feedback graciously welcomed!
Brucej
[0] https://etherpad.openstack.org/p/stx-openstack-patch-refactoring
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From Ghada.Khalil at windriver.com Wed Dec 5 23:06:13 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 5 Dec 2018 23:06:13 +0000 Subject: [Starlingx-discuss] "Brent's Patch Reduction Plan" - Networking items In-Reply-To: <9A85D2917C58154C960D95352B22818BB1EC2DAC@fmsmsx117.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BB1EC2DAC@fmsmsx117.amr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4941BD@ALA-MBD.corp.ad.wrs.com>
Hi Bruce, I added information in the etherpad for the following items:
Title: Host State Management
Title: DHCP agent rescheduling / rebalancing
Title: L3 agent rescheduling / rebalancing
Title: Modeling the provider networks in sysinv
Title: Patching script rework
I also moved the last two items to the re-factoring section since they involve STX development versus upstream neutron.
Question: I assume the "Current Date" field means forecast date. So to start, it would be the same as the Planned Date and then gets updated if there are changes to the plan. Is that correct?
Thanks, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, December 05, 2018 5:08 PM To: Khalil, Ghada; Rowsell, Brent; Zhao, Forrest; Peters, Matt; Guo, Ruijing; Rowsell, Brent; Legacy, Allain; Webster, Steven; Richard, Joseph; Ho, Teresa; Bonnell, Patrick; Qin, Kailun; Le, Huifeng; Xu, Chenjie; Zhao, Forrest; Chilcote Bacco, Derek A; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: "Brent's Patch Reduction Plan" - Networking items I have been working on turning Brent's plan for patch reduction into actionable work items. As part of this effort, I discussed with Brent, Bill and Derek ways we can improve our tracking. We agreed to push all tracking out into the open, and until a better tool becomes available, to use Etherpads. You can find the Etherpad for the Networking part of the Patch Reduction Plan here: https://etherpad.openstack.org/p/stx-openstack-patch-refactoring-neutron. Please review and update as needed. I have updated this with the latest status I have from Forrest. I do not have the latest status from the Networking team beyond that, so there may be additional updates needed. I don't know if we want to track this work within the Networking sub-project or within the distro.openstack sub-project. I'm setting up a global Etherpad [0] with links to the sub-project pads, so as long as we agree to use these pads I'm not sure it matters which sub-project owns the work. We will be able to track all of it across the sub-projects. We should decide, of course. Feedback graciously welcomed! Brucej [0] https://etherpad.openstack.org/p/stx-openstack-patch-refactoring -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Dec 5 23:27:14 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 5 Dec 2018 23:27:14 +0000 Subject: [Starlingx-discuss] "Brent's Patch Reduction Plan" - Networking items In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA4941BD@ALA-MBD.corp.ad.wrs.com> References: <9A85D2917C58154C960D95352B22818BB1EC2DAC@fmsmsx117.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4941BD@ALA-MBD.corp.ad.wrs.com> Message-ID: <9A85D2917C58154C960D95352B22818BB1EC2E98@fmsmsx117.amr.corp.intel.com> > Question: I assume the "Current Date" field means forecast date. So to start, it would be the same as the Planned Date and then gets updated if there are changes to the plan. Is that correct? Yes, that is correct. I added those at Bill's suggestion so we can see if we're on track. brucej From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Wednesday, December 5, 2018 3:06 PM To: Jones, Bruce E ; Rowsell, Brent ; Zhao, Forrest ; Peters, Matt ; Guo, Ruijing ; Rowsell, Brent ; Legacy, Allain ; Webster, Steven ; Richard, Joseph ; Ho, Teresa ; Bonnell, Patrick ; Qin, Kailun ; Le, Huifeng ; Xu, Chenjie ; Zhao, Forrest ; Chilcote Bacco, Derek A ; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: RE: "Brent's Patch Reduction Plan" - Networking items Hi Bruce, I added information in the etherpad for the following items: Title: Host State Management Title: DHCP agent rescheduling / rebalancing Title:L3 agent rescheduling / rebalancing Title: Modeling the provider networks in sysinv Title: Patching script rework I also moved the last two items to the re-factoring section since they involve STX development versus upstream neutron. Question: I assume the "Current Date" field means forecast date. 
So to start, it would be the same as the Planned Date and then gets updated if there are changes to the plan. Is that correct? Yes, that is correct. I added those at Bill's suggestion so we can see if we're on track.
brucej
From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Wednesday, December 5, 2018 3:06 PM To: Jones, Bruce E; Rowsell, Brent; Zhao, Forrest; Peters, Matt; Guo, Ruijing; Rowsell, Brent; Legacy, Allain; Webster, Steven; Richard, Joseph; Ho, Teresa; Bonnell, Patrick; Qin, Kailun; Le, Huifeng; Xu, Chenjie; Zhao, Forrest; Chilcote Bacco, Derek A; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: RE: "Brent's Patch Reduction Plan" - Networking items
Hi Bruce, I added information in the etherpad for the following items:
Title: Host State Management
Title: DHCP agent rescheduling / rebalancing
Title: L3 agent rescheduling / rebalancing
Title: Modeling the provider networks in sysinv
Title: Patching script rework
I also moved the last two items to the re-factoring section since they involve STX development versus upstream neutron.
Question: I assume the "Current Date" field means forecast date. So to start, it would be the same as the Planned Date and then gets updated if there are changes to the plan. Is that correct?
Thanks, Ghada
From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, December 05, 2018 5:08 PM To: Khalil, Ghada; Rowsell, Brent; Zhao, Forrest; Peters, Matt; Guo, Ruijing; Rowsell, Brent; Legacy, Allain; Webster, Steven; Richard, Joseph; Ho, Teresa; Bonnell, Patrick; Qin, Kailun; Le, Huifeng; Xu, Chenjie; Zhao, Forrest; Chilcote Bacco, Derek A; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: "Brent's Patch Reduction Plan" - Networking items
I have been working on turning Brent's plan for patch reduction into actionable work items. As part of this effort, I discussed with Brent, Bill and Derek ways we can improve our tracking. We agreed to push all tracking out into the open, and until a better tool becomes available, to use Etherpads. You can find the Etherpad for the Networking part of the Patch Reduction Plan here: https://etherpad.openstack.org/p/stx-openstack-patch-refactoring-neutron. Please review and update as needed. I have updated this with the latest status I have from Forrest. I do not have the latest status from the Networking team beyond that, so there may be additional updates needed.
I don't know if we want to track this work within the Networking sub-project or within the distro.openstack sub-project. I'm setting up a global Etherpad [0] with links to the sub-project pads, so as long as we agree to use these pads I'm not sure it matters which sub-project owns the work. We will be able to track all of it across the sub-projects. We should decide, of course. Feedback graciously welcomed!
Brucej
[0] https://etherpad.openstack.org/p/stx-openstack-patch-refactoring
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From wei.w.hu at intel.com Thu Dec 6 01:59:51 2018 From: wei.w.hu at intel.com (Hu, Wei W) Date: Thu, 6 Dec 2018 01:59:51 +0000 Subject: [Starlingx-discuss] StarlingX public roadmap and latest test results Message-ID:
Hi, We are helping one partner in China to evaluate StarlingX. They have asked if there is any public StarlingX roadmap and latest test results. I found some test results at https://wiki.openstack.org/wiki/StarlingX/Test#Latest_ISO_image_Sanity_Summary and the project plan at https://wiki.openstack.org/wiki/StarlingX/Project_Priorities (Is there any place where we put a roadmap?)
-Wei Hu
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From xiaoyan.li at intel.com Thu Dec 6 05:52:26 2018 From: xiaoyan.li at intel.com (Li, Xiaoyan) Date: Thu, 6 Dec 2018 05:52:26 +0000 Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching In-Reply-To: <4C60D9C5C8176C47874FFF36647AA19E9D608275@ALA-MBD.corp.ad.wrs.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB25D46B@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5F8E8F@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5FAA8C@ALA-MBD.corp.ad.wrs.com> <4C60D9C5C8176C47874FFF36647AA19E9D608275@ALA-MBD.corp.ad.wrs.com> Message-ID:
Hi Brent, Please give your suggestions.
Best wishes Lisa
From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Wednesday, December 5, 2018 3:42 PM To: Li, Xiaoyan; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Cc: Miller, Frank; Church, Robert Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi Li, Thanks for providing clarifications! So, for our use cases, the main problem is that glance's raw caching is more controllable than cinder's. If it's not enough we need to improve it; if we can live with it then, at a minimum, it needs to be enabled through sysinv configuration before the raw caching is removed from glance. See inline comments plus the summary and proposal below; we need Brent's input on this:
I see two main solutions to the problem:
A. Always enable cache, for any backend, but only cache glance images that have a certain attribute – this needs a cinder upstream change. The cache limit has to be removed (another cinder upstream change). We may also need a way to kick-start the caching in cinder & clean up the cache (periodically and/or user triggered should be enough).
B. Make enabling the cache storage-backend specific and configurable (through sysinv). Once cinder's cache is enabled for a backend, cache everything. The size of the cache should be configurable.
I would go for B. as it, most likely, doesn't need upstream changes. [Li, Xiaoyan] Agree with B. But it doesn't conflict with the requirement to set a property on an image, like disable_cache, so that Cinder won't cache that image. I am wondering what kind of scenario/image that would be suitable for?
Summary of problems, TBD if we can live with them:
· Images are not cached on creation – if we can't live with it we may need a trigger to cinder on image creation or a way to manually kick-start the caching process.
· Since first volume creation is slow for larger volumes, this may time out (keystone token expiration) – we had a customer using 200GB qcow2 Windows images that would time out on conversion. I don't see a workaround for it, other than asking them to manually do the conversion when importing very large images to glance.
Summary of TODOs (assuming B. is chosen) before removing raw-caching (open for discussions & dependent on resolution to above issues): · Enable caching per backend through sysinv system storage-backend-add/modify commands though a capabilities field (this seems the simplest solution) · Add sysinv configuration option per storage backend to set cache size. [Clean up images in cache when size is decreased] · When first enabling: create shadow tenant (no need to remove it when disabling cache) · Support disabling cache for a backend (clean up residual images) Regards, Ovidiu From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Tuesday, November 27, 2018 4:30 AM To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Cc: Miller, Frank Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Ovidiu, As far as I’m concerned, Cinder image cache is an cache mechanism. So overall, users don’t need to clean it manually. Currently when capacity for cache is full, it removes the cached image volumes with LRU policy. More detailed please see the following comments. Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Monday, November 26, 2018 11:15 PM To: Li, Xiaoyan >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Cc: Miller, Frank > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Lisa, Yeah, even if we refactor raw caching, it's most likely going to be rejected by upstream due to replicating existing functionality in cinder. Yet, imho, we should have an working replacement before retiring raw caching and we should have some agreed mitigations in place for cinder's disadvantages (if we can't live with them, Brent please help here). See my questions bellow & inline. Also, please correct text bellow if I made wrong assumptions as you know cinder's caching better than me. Short comparison of the two: Raw caching Uses --raw-cache cli option in Glance to trigger a background process that converts the image. Once cached, new volumes get created on Ceph instantly by levereging Ceph's copy-on-write. Cache is allocated from the "images" RBD pool. Advantages: - user can select the images it wants to cache - user can monitor the progress and can check used space for each image (cli + dashboard). - on image delete the cache is also cleared if there is no volume using it. Else it is cleared with the last volume keeping the cache data in-use. - no wasted space - complete control by user Disadvantages: - There is almost no way this is going to be accepted upstream. Maybe, yet with small hopes, if we refactor everything as a 3rd party glance feature, but we may need to push some hooks upstream to make it work. - Ceph only Cinder's caching Uses a "shadow" tenant to store shadow volumes. Cache is created with the first volume from that images. Next volume will be created instantly by leveraging copy-on-write if backend provides support for it (e.g. on Ceph). Space for cache is allocated on one of the cinder backends, has a configurable threshold. Advantages: - already upstream - works with all backends - all cached images are displayed for the "admin" if he changes to the shadow tenant and lists volumes. - admin (not user, only admin) can free cache by deleting volumes of the shadow tenant (need confirmation) Disadvantages: 1. it's either globally enabled or disabled => needs sysinv configuration option 2. it caches every image. 
No way to select what image to cache nor with what backend (question bellow) => space waste 3. cached images are not removed. It needs to hit a space provision to do that, and it will remove the oldest image, although that image cache may be important. 4. less control: Images are cached on first use and are removed when provisioned space hits threshold. This means that user does not have control over what images are converted and what images are in cache. So, sometimes volume creation works fast, other times it's slow. This can be a problem especially on parallel volume creation through helm charts as, if the image did not have a cache, then stack creation may timeout. Another problem may be if cache is small and images get rotated in the cache => we need alarms when threshold is hit. 5. needs the shadow tenant created before use => puppet / helm chart chart update (for --kubernates) Mitigations of disadvantages above - possible solutions and alternatives: #1: Customers may not want to enable it, we should allow customers to choose when to enable it (it can be added as a custom capabilities parameter to "system storage-backend-add/system storage-backend-modify") [Li, Xiaoyan] Currently image cache can be enabled/disabled per backend storage. https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html I think it is enough. [Ovi] Nice, we need a configuration option per backend in sysinv to enable it. (most likely in the capabilities fields of storage-backends table. See ‘system storage-backend-*’ commands). #2: No workaround comes to my mind - we can probably live with it #3: A simple solution would be to implement a cron job to clean cache periodically or a more elaborate solution would be remove the cache with the last volume that used that image (need a cinder upstream feature for it) [Li, Xiaoyan] From doc, currently Cinder removes cached images from least used to recently used. Every time Cinder uses a cached image volume, it updates its last_used field. This is the normal policy for data eviction. As it is cache and should be transparent for users, why do we need users to evict data? [Ovi] If we conclude than this is enough from data usage perspectives then we are ok with it. #4: Two options comes to mind: 1. To get some control we should not limit the cache size, given that we do propper cleanup in #3. [Li, Xiaoyan] Even we do cleanup, the limit can’t be removed. [Ovi] We may need to enhance this. 2. If we limit the cache, we have to make the limit configurable and raise an alarm once cache gets near full so that admin takes preventive measures and either increases provisioned space or #5: This is mandatory, otherwise cinder's caching won't work at all. [Li, Xiaoyan] It has to set cinder_internal_tenant_project_id and cinder_internal_tenant_user_id before enabling cache images. As this user can manage these cached image volumes. Why can’t it work with Kubernetes? [Ovi] I did not say it won’t work with kubernates ☺ What I said is that we need to provision the shadow tenant automatically when the feature is enabled. Questions, (maybe if you get time to play with cinder's caching to get a better understanding): 1. How does cinder's caching behaves when multiple volumes are created in parallel from a newly created image? Will it wait for the cache to be created before creating the volumes or just start all volume creations in parallel? [Li, Xiaoyan] Inside a volume service it is sequential to run volume creation tasks. But as we have HA. 
For image cache, it creates an entry in cinder db at first and then creates volumes. The primary key is not image_id+backend_storage. It is possible that several entries or volumes will be created in same backend storage. [Ovi] So, only the first volume creation is going to be slow? If that’s the case then parallel volume creation will work ok as only first volume creation will be slow. 2. What is the cinder backend that store the cache? If it is the one used by the volume, will this lead to multiple cached volumes of the same image? Can we chose the backend? [Li, Xiaoyan] We can set whether enabled cache per backend. If users create a volume in backend ceph from an image, an cached image volume will be created in Ceph if it is enabled. Next time if users create a volume in IBM storage from the same image, it will create another image cached volume in IBM storage if it is enabled. [Ovi] Then we need to enable it and configure cache size per backend, I guess. 3. How is cache space provisioned? Do we need to restart cinder-volume for changes to take effect? [Li, Xiaoyan] These config needs to be done in config file. So it needs to restart cinder volume services once config are changed. [Ovi] So after we make the changes, we re-apply the manifests and restart the services (reload the helm charts for k8s deployments) 4. Is admin able to clean up individual cached images in the shadow tenant? Maybe also user? [Li, Xiaoyan] Admin and shadow tenants can both do cleanup. Ovidiu ________________________________ From: Li, Xiaoyan [xiaoyan.li at intel.com] Sent: Thursday, November 22, 2018 2:41 AM To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Brent and Ovidiu, As this email has a long history, I re-summarize the raw cache in StarlingX and Cinder upstream image cache. Please vote whether we can abandon raw cache in StarlingX. StarlingX Create an image cache in ceph when Glance creates an image. And delete the cached image in ceph when deleting the original image in Glance. Cinder: When creating a volume from an image in a backend storage at the first time, Cinder creates a volume from this image, and uses it as the image cache. So next time if users create another volume from this image in the same backend storage, Cinder at first finds out the cached image volume and clones a new volume from it. Cinder allows capacity configuration for cached images. If the space is used up, Cinder will evict the cached image volumes. From my viewpoint, Cinder image cache can achieve same functionality as Raw cache in StarlingX with more enhancement. It is for all Cinder supported backend storage, not just for Ceph. Best wishes Lisa From: Li, Xiaoyan Sent: Monday, November 19, 2018 9:44 AM To: Poncea, Ovidiu >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Ovidiu, A cached image ( new volume from this image) is created on a storage backend when Cinder firstly creates a volume in the same backend storage from the image. All the information are stored in Cinder, including volume id, image id etc. https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1368 https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L82 A cached image is deleted when the configure space for cache is used up. 
So currently Cinder doesn’t delete the cached image volumes even if the image is deleted. But this can be an enhancement of current cinder image cache. https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L117 https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1351 Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Friday, November 16, 2018 4:57 PM To: Li, Xiaoyan >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Li, Quick question: Is cache going to be freed when an image is deleted from glance? It would be a waste to cache images that are no longer needed. Thanks, Ovidiu ________________________________ From: Li, Xiaoyan [xiaoyan.li at intel.com] Sent: Tuesday, November 13, 2018 9:19 AM To: Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi, About the raw cache function in StarlingX Cinder and Glance, I would like to remove it as Cinder has similar function. Please see following detail. And if I would like to remove the function in StarlingX, there are two methods: 1. Submit a patch to revert the changes in Glance and Cinder. 2. Ignore these patches during upgrading StarlingX/Cinder to new Cinder release. Which way do we prefer to? Best wishes Lisa From: Li, Xiaoyan Sent: Thursday, September 20, 2018 10:17 AM To: Rowsell, Brent >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi, Brent The following are mechanism of Cinder volume cache. Creation of cached volume: It creates a cached volume in the backend storage when creating from an image. 1. Create_from_image: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L890 2. Return image cache entry: If not existed, it creates a new entry. https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L746 3. Create a new image-volume and cache entry for it: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L872 Use a cached volume when creating a volume: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L723-L735 Delete the cache volume: When capacity and number of cache entries exceed specified limit, it deletes cache entries (cached volumes). https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L164 Best wishes Lisa From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Thursday, September 6, 2018 10:02 AM To: Li, Xiaoyan >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching We would need to review this feature to ensure it provides equivalent functionality first. If it does, great, we can look at reverting and enabling this cinder functionality. Brent From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Wednesday, September 5, 2018 9:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi all, This email is about Raw caching function in StarlingX. 
This feature is to cache an image in backend storage like Ceph when we first create a volume in this backend storage. In fact, Cinder upstream has already had a similar function in Pike release. https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html So I want to revert Raw caching function in StarlingX, and use Cinder generic image cache instead. The problem is that we need to update Cinder config in StarlingX. Any comments? Best wishes Lisa -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiaoyan.li at intel.com Thu Dec 6 06:05:42 2018 From: xiaoyan.li at intel.com (Li, Xiaoyan) Date: Thu, 6 Dec 2018 06:05:42 +0000 Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching References: <2588653EBDFFA34B982FAF00F1B4844EBB25D46B@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5F8E8F@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5FAA8C@ALA-MBD.corp.ad.wrs.com> <4C60D9C5C8176C47874FFF36647AA19E9D608275@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Brent, Please give your suggestions. And thank Ovidiu with the detailed summary! One correction here: With Cinder image cache, image_volume_cache_max_size_gb and image_volume_cache_max_count can be set 0, which means unlimited for both cache capacity and number of cached images. Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Wednesday, December 5, 2018 3:42 PM To: Li, Xiaoyan >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Cc: Miller, Frank >; Church, Robert > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Li, Thanks for providing clarifications! So, for our use cases, main problem is that glance’s raw caching is more controllable that cinder’s. If it’s not enough we need to improve it, if we can live with it then at a minimum it needs to be enabled though sysinv configuration and then remove the raw-caching from glance. See inline comments plus bellow summary and proposal, we need Brent’s input on this: I see two main solutions to the problem: A. Always enable cache, for any backend, but only cache glance images that have a certain attribute – this needs a cinder upstream change. Cache limit has to be removed (another cinder upstream change). We may also need a way to kick-start the caching in cinder & clean up cache (periodically and/or user triggered should be enough). B. Make enabling cache storage backend specific and configurable (through sysinv). Once cinder’s cache is enabled for a backend, cache everything. Size of the cache should be configurable. I would go for B. as it, most likely, doesn’t need upstream changes. [Li, Xiaoyan] Agree with B. But it doesn’t conflict with the requirements to set a property of an image like disable_cache, with this property Cinder won’t cache this image. I am concerned what kind of scenario/image it is suitable for? Summary of problems, TBD if we can live with them: · Images are not cached on creation – if we can’t live with it we may need a trigger to cinder on image creation or a way to manually kick-start the caching process. · Since first volume creation is slow for larger volumes this may timeout (keystone token expiration) – we had a customer using 200GB qcow2 windows images that would timeout on conversion. I don’t see a workaround for it, just ask him to manually do the conversion when importing very large images to glance. 
· We can't provide a 100% guarantee that, once converted, successive creations won't need to get converted again due to cache exhaustion. Can we live with it? Users may intermittently see slowdowns and wonder what's going on. [Li, Xiaoyan] How about adding a property to such an image/volume so that Cinder only evicts that cached image as a last resort when the cache is exhausted? This needs a cinder upstream change to respect the property.
· The cache will waste space: if the original images no longer exist there is no automated way to remove them from the cache – the admin can clean up the cache manually if he so desires. We can either:
1. Live with it – assume that the space allocated to the cache is for the cache only, or users can clean up the cache by themselves.
2. Clean up the cache through a cron job (although this is a cache, some caches are supposed to clean themselves up if cached data is no longer present).
3. Implement another mechanism to clean the cache when an image is deleted, not at a later time (this is way too complex to upstream).
· What happens with images that users don't want to cache? Should we add a filter (glance property)? [Li, Xiaoyan] Allow users to add a property on the image. This also needs cinder upstream to respect the property.
I vote for #2 as it does not seem too hard to implement. A once-a-day cron task can free up wasted space. [Li, Xiaoyan] This cron task probably can't be included in Cinder. Is that OK?
Summary of TODOs (assuming B. is chosen) before removing raw caching (open for discussion & dependent on resolution of the above issues):
· Enable caching per backend through the sysinv system storage-backend-add/modify commands through a capabilities field (this seems the simplest solution)
· Add a sysinv configuration option per storage backend to set the cache size. [Clean up images in the cache when the size is decreased]
· When first enabling: create the shadow tenant (no need to remove it when disabling the cache)
· Support disabling the cache for a backend (clean up residual images)
Regards, Ovidiu
From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Tuesday, November 27, 2018 4:30 AM To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Cc: Miller, Frank Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi Ovidiu, As far as I'm concerned, the Cinder image cache is a cache mechanism, so overall users don't need to clean it manually. Currently, when the capacity for the cache is full, it removes the cached image volumes with an LRU policy. More details in the following comments.
Best wishes Lisa
From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Monday, November 26, 2018 11:15 PM To: Li, Xiaoyan; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Cc: Miller, Frank Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi Lisa, Yeah, even if we refactor raw caching, it's most likely going to be rejected by upstream due to replicating existing functionality in cinder. Yet, imho, we should have a working replacement before retiring raw caching, and we should have some agreed mitigations in place for cinder's disadvantages (if we can't live with them, Brent please help here). See my questions below & inline. Also, please correct the text below if I made wrong assumptions, as you know cinder's caching better than me.
Short comparison of the two:
Raw caching
Uses the --raw-cache cli option in Glance to trigger a background process that converts the image.
Once cached, new volumes get created on Ceph instantly by leveraging Ceph's copy-on-write. Cache is allocated from the "images" RBD pool.
Advantages:
- user can select the images they want to cache
- user can monitor the progress and can check used space for each image (cli + dashboard).
- on image delete the cache is also cleared if there is no volume using it. Else it is cleared with the last volume keeping the cache data in-use.
- no wasted space
- complete control by user
Disadvantages:
- There is almost no way this is going to be accepted upstream. Maybe, yet with small hopes, if we refactor everything as a 3rd party glance feature, but we may need to push some hooks upstream to make it work.
- Ceph only
Cinder's caching
Uses a "shadow" tenant to store shadow volumes. The cache entry is created with the first volume from that image. The next volume will be created instantly by leveraging copy-on-write if the backend provides support for it (e.g. on Ceph). Space for the cache is allocated on one of the cinder backends and has a configurable threshold.
Advantages:
- already upstream
- works with all backends
- all cached images are displayed for the "admin" if he changes to the shadow tenant and lists volumes.
- admin (not user, only admin) can free the cache by deleting volumes of the shadow tenant (need confirmation)
Disadvantages:
1. it's either globally enabled or disabled => needs a sysinv configuration option
2. it caches every image. No way to select which images to cache nor with what backend (question below) => space waste
3. cached images are not removed automatically. Eviction only happens when the provisioned space threshold is hit, and then it removes the oldest image, although that image's cache may be important.
4. less control: Images are cached on first use and are removed when provisioned space hits the threshold. This means that the user does not have control over which images are converted and which images are in the cache. So, sometimes volume creation works fast, other times it's slow. This can be a problem especially on parallel volume creation through helm charts as, if the image did not have a cache, then stack creation may time out. Another problem may be if the cache is small and images get rotated in the cache => we need alarms when the threshold is hit.
5. needs the shadow tenant created before use => puppet / helm chart update (for --kubernetes)
Mitigations of the disadvantages above - possible solutions and alternatives:
#1: Customers may not want to enable it; we should allow customers to choose when to enable it (it can be added as a custom capabilities parameter to "system storage-backend-add/system storage-backend-modify") [Li, Xiaoyan] Currently the image cache can be enabled/disabled per backend storage. https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html I think it is enough. [Ovi] Nice, we need a configuration option per backend in sysinv to enable it (most likely in the capabilities fields of the storage-backends table. See 'system storage-backend-*' commands).
#2: No workaround comes to my mind - we can probably live with it
#3: A simple solution would be to implement a cron job to clean the cache periodically; a more elaborate solution would be to remove the cache with the last volume that used that image (needs a cinder upstream feature). [Li, Xiaoyan] From the docs, currently Cinder removes cached images in least-recently-used order. Every time Cinder uses a cached image volume, it updates its last_used field. This is the normal policy for data eviction.
As it is a cache and should be transparent for users, why do we need users to evict data? [Ovi] If we conclude that this is enough from a data usage perspective then we are ok with it.
#4: Two options come to mind:
1. To get some control we should not limit the cache size, given that we do proper cleanup in #3. [Li, Xiaoyan] Even if we do cleanup, the limit can't be removed. [Ovi] We may need to enhance this.
2. If we limit the cache, we have to make the limit configurable and raise an alarm once the cache gets near full so that the admin takes preventive measures and either increases the provisioned space or
#5: This is mandatory, otherwise cinder's caching won't work at all. [Li, Xiaoyan] One has to set cinder_internal_tenant_project_id and cinder_internal_tenant_user_id before enabling image caching, as this user can manage the cached image volumes. Why can't it work with Kubernetes? [Ovi] I did not say it won't work with kubernetes ☺ What I said is that we need to provision the shadow tenant automatically when the feature is enabled.
Questions (maybe if you get time to play with cinder's caching, to get a better understanding):
1. How does cinder's caching behave when multiple volumes are created in parallel from a newly created image? Will it wait for the cache to be created before creating the volumes, or just start all volume creations in parallel? [Li, Xiaoyan] Inside a volume service it is sequential to run volume creation tasks. But we have HA, so there can be more than one volume service. For the image cache, it creates an entry in the cinder db first and then creates the volume. The primary key is not image_id+backend_storage, so it is possible that several entries or volumes will be created in the same backend storage. [Ovi] So, only the first volume creation is going to be slow? If that's the case then parallel volume creation will work ok, as only the first volume creation will be slow.
2. What is the cinder backend that stores the cache? If it is the one used by the volume, will this lead to multiple cached volumes of the same image? Can we choose the backend? [Li, Xiaoyan] We can set whether the cache is enabled per backend. If users create a volume in the Ceph backend from an image, a cached image volume will be created in Ceph if it is enabled. Next time, if users create a volume in IBM storage from the same image, it will create another cached image volume in IBM storage if it is enabled. [Ovi] Then we need to enable it and configure the cache size per backend, I guess.
3. How is cache space provisioned? Do we need to restart cinder-volume for changes to take effect? [Li, Xiaoyan] These settings are made in the config file, so the cinder volume services need to be restarted once the config is changed. [Ovi] So after we make the changes, we re-apply the manifests and restart the services (reload the helm charts for k8s deployments)
4. Is the admin able to clean up individual cached images in the shadow tenant? Maybe also the user? [Li, Xiaoyan] Admin and shadow tenants can both do cleanup.
Ovidiu
________________________________
From: Li, Xiaoyan [xiaoyan.li at intel.com] Sent: Thursday, November 22, 2018 2:41 AM To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi Brent and Ovidiu, As this email has a long history, I re-summarize the raw cache in StarlingX and the Cinder upstream image cache. Please vote on whether we can abandon the raw cache in StarlingX.
StarlingX: Creates an image cache in ceph when Glance creates an image,
________________________________
From: Li, Xiaoyan [xiaoyan.li at intel.com]
Sent: Thursday, November 22, 2018 2:41 AM
To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io'
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi Brent and Ovidiu,
As this email has a long history, I re-summarize the raw cache in StarlingX and the Cinder upstream image cache. Please vote on whether we can abandon the raw cache in StarlingX.

StarlingX: Creates an image cache in ceph when Glance creates an image, and deletes the cached image in ceph when deleting the original image in Glance.

Cinder: When creating a volume from an image in a backend storage for the first time, Cinder creates a volume from this image and uses it as the image cache. So next time, if users create another volume from this image in the same backend storage, Cinder first finds the cached image volume and clones a new volume from it. Cinder allows capacity configuration for cached images. If the space is used up, Cinder will evict the cached image volumes.

From my viewpoint, the Cinder image cache can achieve the same functionality as the raw cache in StarlingX with more enhancements. It works for all Cinder supported backend storage, not just for Ceph.

Best wishes
Lisa

From: Li, Xiaoyan
Sent: Monday, November 19, 2018 9:44 AM
To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io'
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi Ovidiu,
A cached image (a new volume from this image) is created on a storage backend when Cinder first creates a volume in the same backend storage from the image. All the information is stored in Cinder, including volume id, image id etc.
https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1368
https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L82
A cached image is deleted when the configured cache space is used up. So currently Cinder doesn't delete the cached image volumes even if the image is deleted. But this can be an enhancement of the current cinder image cache.
https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L117
https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1351
Best wishes
Lisa

From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com]
Sent: Friday, November 16, 2018 4:57 PM
To: Li, Xiaoyan; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io'
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi Li,
Quick question: Is the cache going to be freed when an image is deleted from glance? It would be a waste to cache images that are no longer needed.
Thanks,
Ovidiu
________________________________
From: Li, Xiaoyan [xiaoyan.li at intel.com]
Sent: Tuesday, November 13, 2018 9:19 AM
To: Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io'
Subject: Re: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi,
About the raw cache function in StarlingX Cinder and Glance, I would like to remove it as Cinder has a similar function. Please see the following details. If we remove the function in StarlingX, there are two methods:
1. Submit a patch to revert the changes in Glance and Cinder.
2. Ignore these patches when upgrading StarlingX/Cinder to a new Cinder release.
Which way do we prefer?
Best wishes
Lisa

From: Li, Xiaoyan
Sent: Thursday, September 20, 2018 10:17 AM
To: Rowsell, Brent; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi, Brent
The following is the mechanism of the Cinder volume cache.
Creation of a cached volume: It creates a cached volume in the backend storage when creating from an image.
1. Create_from_image: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L890
2. Return the image cache entry; if one does not exist, it creates a new entry: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L746
3. Create a new image-volume and a cache entry for it: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L872
Use a cached volume when creating a volume: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L723-L735
Delete cached volumes: When the capacity and number of cache entries exceed the specified limits, it deletes cache entries (cached volumes): https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L164
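[Editor's sketch] Put together, the flow just listed boils down to roughly the following. This is a minimal, self-contained Python sketch of the logic in the linked code, not the actual cinder implementation (all names are simplified):

image_volume_cache = {}  # (image_id, backend) -> name of the cached image-volume

def clone_volume(source, name):
    print(f"fast path: clone {source} -> {name} (COW on Ceph)")
    return name

def create_from_image(image_id, name):
    print(f"slow path: download + convert {image_id} -> {name}")
    return name

def create_volume_from_image(backend, image_id, name, cache_enabled=True):
    key = (image_id, backend)
    if key in image_volume_cache:                   # "Use a cached volume" step
        return clone_volume(image_volume_cache[key], name)
    volume = create_from_image(image_id, name)      # step 1, Create_from_image
    if cache_enabled:
        # Steps 2-3: first use seeds the cache; the real code also tracks
        # last_used so the eviction step can delete entries LRU-first.
        image_volume_cache[key] = volume
    return volume

create_volume_from_image("ceph", "img-1", "vol-1")  # slow, seeds the cache
create_volume_from_image("ceph", "img-1", "vol-2")  # fast clone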
Best wishes
Lisa

From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com]
Sent: Thursday, September 6, 2018 10:02 AM
To: Li, Xiaoyan; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
We would need to review this feature to ensure it provides equivalent functionality first. If it does, great, we can look at reverting and enabling this cinder functionality.
Brent

From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com]
Sent: Wednesday, September 5, 2018 9:59 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi all,
This email is about the Raw caching function in StarlingX. This feature caches an image in a backend storage like Ceph when we first create a volume in this backend storage. In fact, Cinder upstream has already had a similar function since the Pike release.
https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html
So I want to revert the Raw caching function in StarlingX and use the Cinder generic image cache instead. The problem is that we need to update the Cinder config in StarlingX. Any comments?
Best wishes
Lisa
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From huang.shuquan at 99cloud.net Thu Dec 6 07:59:31 2018
From: huang.shuquan at 99cloud.net (Shuquan Huang)
Date: Thu, 06 Dec 2018 15:59:31 +0800
Subject: [Starlingx-discuss] StarlingX public roadmap and latest test resutls
Message-ID: <34031AF3-52FB-41EE-B440-7F5A799BC18D@99cloud.net>
Hi Wei,
You can find the latest priorities here. https://ethercalc.openstack.org/fafyo2729fnr

From: on behalf of "Hu, Wei W"
Date: Thursday, December 6, 2018 at 10:00 AM
To: "'starlingx-discuss at lists.starlingx.io'"
Subject: [Starlingx-discuss] StarlingX public roadmap and latest test resutls
Hi,
We are helping one partner in China to evaluate StarlingX. They have asked if there is any public StarlingX roadmap and any latest test results. I found some test results at https://wiki.openstack.org/wiki/StarlingX/Test#Latest_ISO_image_Sanity_Summary and the project plan at https://wiki.openstack.org/wiki/StarlingX/Project_Priorities (Is there any place where we publish a roadmap?)
-Wei Hu
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From Bill.Zvonar at windriver.com Thu Dec 6 12:00:27 2018 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Thu, 6 Dec 2018 12:00:27 +0000 Subject: [Starlingx-discuss] "Brent's Patch Reduction Plan" - Networking items In-Reply-To: <9A85D2917C58154C960D95352B22818BB1EC2E98@fmsmsx117.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BB1EC2DAC@fmsmsx117.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4941BD@ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BB1EC2E98@fmsmsx117.amr.corp.intel.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0032730@ALA-MBD.corp.ad.wrs.com> Hi Bruce, This looks good - definitely lots of information. As we discussed, it'd also be good to have a view that rolls everything up into a single view that show us where everything is. I volunteer to take that on - not sure if I'll try to do it in EtherCalc or what, but I'll take a stab at it. Bill... From: Jones, Bruce E Sent: Wednesday, December 5, 2018 6:27 PM To: Khalil, Ghada ; Rowsell, Brent ; Zhao, Forrest ; Peters, Matt ; Guo, Ruijing ; Rowsell, Brent ; Legacy, Allain ; Webster, Steven ; Richard, Joseph ; Ho, Teresa ; Bonnell, Patrick ; Qin, Kailun ; Le, Huifeng ; Xu, Chenjie ; Zhao, Forrest ; Chilcote Bacco, Derek A ; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: RE: "Brent's Patch Reduction Plan" - Networking items > Question: I assume the "Current Date" field means forecast date. So to start, it would be the same as the Planned Date and then gets updated if there are changes to the plan. Is that correct? Yes, that is correct. I added those at Bill's suggestion so we can see if we're on track. brucej From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Wednesday, December 5, 2018 3:06 PM To: Jones, Bruce E >; Rowsell, Brent >; Zhao, Forrest >; Peters, Matt >; Guo, Ruijing >; Rowsell, Brent >; Legacy, Allain >; Webster, Steven >; Richard, Joseph >; Ho, Teresa >; Bonnell, Patrick >; Qin, Kailun >; Le, Huifeng >; Xu, Chenjie >; Zhao, Forrest >; Chilcote Bacco, Derek A >; Zvonar, Bill > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: "Brent's Patch Reduction Plan" - Networking items Hi Bruce, I added information in the etherpad for the following items: Title: Host State Management Title: DHCP agent rescheduling / rebalancing Title:L3 agent rescheduling / rebalancing Title: Modeling the provider networks in sysinv Title: Patching script rework I also moved the last two items to the re-factoring section since they involve STX development versus upstream neutron. Question: I assume the "Current Date" field means forecast date. So to start, it would be the same as the Planned Date and then gets updated if there are changes to the plan. Is that correct? Thanks, Ghada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Wednesday, December 05, 2018 5:08 PM To: Khalil, Ghada; Rowsell, Brent; Zhao, Forrest; Peters, Matt; Guo, Ruijing; Rowsell, Brent; Legacy, Allain; Webster, Steven; Richard, Joseph; Ho, Teresa; Bonnell, Patrick; Qin, Kailun; Le, Huifeng; Xu, Chenjie; Zhao, Forrest; Chilcote Bacco, Derek A; Zvonar, Bill Cc: starlingx-discuss at lists.starlingx.io Subject: "Brent's Patch Reduction Plan" - Networking items I have been working on turning Brent's plan for patch reduction into actionable work items. As part of this effort, I discussed with Brent, Bill and Derek ways we can improve our tracking. We agreed to push all tracking out into the open, and until a better tool becomes available, to use Etherpads. 
You can find the Etherpad for the Networking part of the Patch Reduction Plan here: https://etherpad.openstack.org/p/stx-openstack-patch-refactoring-neutron. Please review and update as needed. I have updated this with the latest status I have from Forrest. I do not have the latest status from the Networking team beyond that, so there may be additional updates needed. I don't know if we want to track this work within the Networking sub-project or within the distro.openstack sub-project. I'm setting up a global Etherpad [0] with links to the sub-project pads, so as long as we agree to use these pads I'm not sure it matters which sub-project owns the work. We will be able to track all of it across the sub-projects. We should decide, of course. Feedback graciously welcomed! Brucej [0] https://etherpad.openstack.org/p/stx-openstack-patch-refactoring -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Thu Dec 6 13:38:30 2018 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Thu, 6 Dec 2018 13:38:30 +0000 Subject: [Starlingx-discuss] API requests: stx-nfv In-Reply-To: References: Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA41AB8@ALA-MBD.corp.ad.wrs.com> Abraham, Good analysis - see my replies in your email below… Bart -----Original Message----- From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] Sent: December 4, 2018 4:25 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] API requests: stx-nfv stx-nfv team, As a result of time spent within stx-nfv and with the objective to align our REST API Documentation [0] with our REST APIs, we are kindly requesting your comments for questions "?" under each section [ Section ] [Sub Section] Please assume: - The require X-Auth-Token is in place to authenticate: $ curl -i -X POST http://10.10.10.2:5000/v2.0/tokens $ export TOKEN=... - StarlingX is configured as Standard Controller: 2 Controllers, 2 Computes. [ Project Information ] When we look at the name and description reported out by curl -i http://10.10.10.2:4545/ we have the same name and description between documentation [1] and information via API Query: Name: nfv-vim Description: NFV - Virtual Infrastructure Manager ? Anything to add / change to the name and / or description? [Bart] The current response seems OK to me. [ /api ] Here we are showing 3 different views of what we are seeing within stx-nfv project: - Our initial "Migration WADL to RST", see history here [2] - What we have documented in our "Current Official API Documentation" pages [0] - What the "API Query Output" is actually showing with curl -i http://10.10.10.2:4545/api/... [ /api ] [ Migration WADL to RST ] FYI Only. Migration from WADL to RST format requested us to move "NFV VIM API v1" (NFV VIM Service REST API) into stx-nfv repository, see [2] for the history. [ /api ] [ Current Official API Documentation ] Current Official API documentation [1] includes the following REST API methods under "API Versions" details: - / - /api - /api/orchestration - /api/orchestration/sw-patch - /api/orchestration/sw-upgrade And the only documented API REST methods documented are: - [3] Patch Strategy - [4] Upgrade Strategy ? Is "orchestration" not expected to be documented even if we have the GET method available? [Bart] The orchestration level is just a grouping for sw-patch and sw-upgrade. The GET method just returns the links to those and that is documented in [1] - what else would you want to add? 
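[Editor's sketch] A usage sketch of the documented orchestration endpoint (port 4545 and the paths as quoted above; the response content is only summarized, not reproduced):

$ curl -s -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:4545/api/orchestration
# returns the links to the sw-patch and sw-upgrade strategy resources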
[ /api ] [ API Query Output ]
API queries output shows these API REST methods:
- api/orchestration
- api/openstack
- api/openstack/heat
- api/virtualised-resources
- api/virtualised-resources/computes
- api/virtualised-resources/networks
- api/virtualised-resources/images
- api/virtualised-resources/volumes
? Our "Current Official API Documentation" does not have "openstack" and "virtualised-resources", should they be added?
[Bart] Good question. We have never officially supported the openstack or virtualised-resources APIs and we know that some of them don't work. I would be open to removing these from our API if that would be less confusing.
[ Project Repository ] [ Directory nfv-doc ]
We took a look at the project repository and we found the "nfv-doc" directory [5] with the following categories:
- Software Image Management
- Virtualised Network Resource
- Virtualised Storage Resource
- Virtualised Compute Resource
? Since we have our "Current Official API Documentation", should we put up a patch to remove this directory? Any reason to keep it?
[Bart] I think we should remove the directory.
[ Project Repository ] [ Directory nfv-tests ]
Looking at nfv-tests [6], it includes 3 categories:
- nfv_api_tests
- nfv_scenario_tests
- nfv_unit_tests
? Is there any restructuring required in this nfv-tests directory?
[Bart] No
? Is there any need to think about a general test strategy that covers all StarlingX projects and moves test execution to another place, e.g. Zuul?
[Bart] Once the basic devstack setup is working for stx-nfv, we can look at adding new testcases to be executed in that environment. This won't replace the existing testcases, but supplement them.
? Is this directory still valid? If not, should we put up a patch to remove it?
[Bart] The directory is still valid.
Thanks for your initial support.
[0] https://docs.starlingx.io/api-ref/stx-nfv
[1] https://docs.starlingx.io/api-ref/stx-nfv/api-ref-nfv-vim-v1.html?expanded=#api-versions
[2] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Analysis
[3] https://docs.starlingx.io/api-ref/stx-nfv/api-ref-nfv-vim-v1.html?expanded=#patch-strategy
[4] https://docs.starlingx.io/api-ref/stx-nfv/api-ref-nfv-vim-v1.html?expanded=#upgrade-strategy
[5] http://git.openstack.org/cgit/openstack/stx-nfv/tree/nfv/nfv-docs
[6] http://git.openstack.org/cgit/openstack/stx-nfv/tree/nfv/nfv-tests
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Matt.Peters at windriver.com Thu Dec 6 15:11:24 2018
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Thu, 6 Dec 2018 15:11:24 +0000
Subject: [Starlingx-discuss] Questions about patch 87a8c625 upstreaming
In-Reply-To: <76647BD697F40748B1FA4F56DA02AA0B4D548D70@SHSMSX104.ccr.corp.intel.com>
References: <76647BD697F40748B1FA4F56DA02AA0B4D548D70@SHSMSX104.ccr.corp.intel.com>
Message-ID: 
Hello Huifeng,
I believe the use-case was having the ability to restart the metadata proxy in order to provide the ability to patch the software without a reboot. Therefore, in order to restart and/or reconfigure the metadata proxies that are managed by the DHCP agent, they needed to be restarted when the agent was restarted, since there was no direct way to restart the individual proxies managed by that service.
The DHCP agent will automatically configure and launch the metadata proxy if there are no virtual routers currently serving that network. This is controlled by the parameter enable_isolated_metadata and is configured to be True by default for StarlingX. The force_metadata option can be used to always have the DHCP agent configure the metadata access but is not enabled by default (as you indicated).
-Matt
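[Editor's sketch] For reference, the two knobs Matt mentions live in /etc/neutron/dhcp_agent.ini; a minimal sketch follows (the values simply restate the defaults described above):

[DEFAULT]
# Serve metadata from the DHCP agent when no virtual router serves the network
# (True by default in StarlingX, per the note above).
enable_isolated_metadata = True
# Always have the DHCP agent configure metadata access, routers or not
# (not enabled by default).
force_metadata = False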
From: "Le, Huifeng"
Date: Wednesday, December 5, 2018 at 4:20 AM
To: "Peters, Matt"
Cc: "starlingx-discuss at lists.starlingx.io"
Subject: Questions about patch 87a8c625 upstreaming
Matt,
I am looking at patch #87a8c625 (US86444: patching scripts for neutron processes), which includes 2 parts; could you please help to clarify the questions below?
* Script to support neutron service restart: This is an STX-specific script and there is no need to upstream it.
* Metadata-proxy service lifecycle management: close the metadata proxy (e.g. haproxy) process (if it is managed by the dhcp agent) when neutron-dhcp-agent is stopped.
To my understanding, the metadata proxy (e.g. haproxy) supports VM instances querying their metadata information from Neutron, which is in the data-path plane; neutron-dhcp-agent is responsible for configuring the dhcp process (e.g. dnsmasq) or the metadata proxy process, which lies in the control-path plane. So it seems that, by design, whether the metadata-proxy process is alive is determined by network/port status changes instead of by whether the dhcp agent is alive. Are there any special use cases which require stopping the metadata proxy process when the dhcp agent is stopped?
In my test:
* if the metadata proxy is managed by the l3 agent (default setting), stopping(/restarting) the l3 agent will not stop(/restart) the haproxy process or clear the iptables rules; the router can still work (in the data path) and metadata information can still be queried inside the VM instance
* if the metadata proxy is managed by the dhcp agent (set "force_metadata = true" in /etc/neutron/dhcp_agent.ini), stopping(/restarting) the dhcp agent will not stop(/restart) the dnsmasq process and the haproxy process; the dhcp server can still work (in the data path) and metadata information can still be queried inside the VM instance. If applying the patch, stopping the dhcp agent will make the metadata query within the VM instance fail while the DHCP server can still work (or does it also need to be stopped?), and the metadata bridge information (e.g. 169.254.169.254/16) is still valid on the dhcp interface. Is this the expected behavior?
Thanks much!
Best Regards,
Le, Huifeng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Ken.Young at windriver.com Thu Dec 6 15:14:29 2018
From: Ken.Young at windriver.com (Young, Ken)
Date: Thu, 6 Dec 2018 15:14:29 +0000
Subject: [Starlingx-discuss] Centos Distro Direction
In-Reply-To: 
References: <9aed8be1-2aca-c537-2d48-25bac32354ed@linux.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAF79@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FE54214@SHSMSX101.ccr.corp.intel.com>
Message-ID: <19D3C745-77D3-4FC3-982C-353FF96738B0@windriver.com>
Erich,
Wouldn’t the installer build script just use the new items or are significant changes required?
/KenY
On 2018-12-05, 12:49 PM, "Cordoba Malibran, Erich" wrote:
BTW, I created this story to update the installer from CentOS 7.4 to CentOS 7.6, in case anyone wants to participate.
https://storyboard.openstack.org/#!/story/2004516
-Erich
On Tue, 2018-12-04 at 08:42 +0000, Lin, Shuicheng wrote:
> It seems just rpm package is released, but srpm is not released yet.
> I will keep check it recently.
> > Best Regards > Shuicheng > > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Tuesday, December 4, 2018 1:03 AM > To: Xie, Cindy ; starlingx-discuss at lists.starlin > gx.io > Subject: Re: [Starlingx-discuss] Centos Distro Direction > > > > On 12/3/18 7:43 AM, Xie, Cindy wrote: > > Seems like that CentOS 7 just announced 1810 release today (guess > > this is 7.6): > > > > > https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.1810?highlight=% > 28%28Manuals%7CReleaseNotes%7CCentOS7.1810%29%29 > > > > Yup, timing is! > > Cindy, can you please put this on the agenda for the next non- > openstack Distro meeting. > > We also have a topic for the TSC (thanks BruceJ) the following > morning, TSC members may want to start weighing in here regarding my > initial proposal below, which we can talk more about on Thursday. > > > Thanks > Sau! > > > > thx. - cindy > > > > -----Original Message----- > > From: Saul Wold [mailto:sgw at linux.intel.com] > > Sent: Saturday, December 1, 2018 7:49 AM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Centos Distro Direction > > > > > > Folks, > > > > As we move forward into the spring release (Stein based), we will > > also > > be dealing with another CentOS update. RHEL has already released > > the > > 7.6 Update on Oct 30th, typically we should expect the CentOS 7.6 > > update shortl, about 30 days after RHEL releases. > > > > We should do the 7.6 Update as we did the 7.5 Update on a feature > > branch, it took about 2 months last time (including initial setup, > > rebasing, and de-fuzzing), I expect it will be shorter this time > > based on our past learning. > > > > We should start out with creating the feature branches (I will work > > with Dean on this) for stx-integ, stx-root, stx-tools, and stx- > > upstream repos. When we start the work, we need to remember to > > rebase the feature branches regularly and check for patch fuzzing > > issues. > > > > Cindy, can you please put this on your agenda for the next Non- > > Openstack Distro meeting. > > > > While on the topic of Cento Distro updates, many of you may have > > heard > > that RHEL 8 Beta was announced on Nov 14 [0], while this is not a > > CentOS release we should start thinking about that upgrade as it > > will > > be a larger effort as it includes the 4.18 kernel (alas not the > > 4.19 > > LTS > > kernel) along with many other upgrades. We should start a feature > > branch for CentOS 8 as well to do the updates, This will help > > reduce > > some of the patch load from the backported patches. Since we > > don't > > know exactly when CentOS 8 will be available this should be a > > Train-based release target (Fall 2019) (at the earliest) > > > > [0] > > https://www.redhat.com/en/blog/powering-its-future-while-preserving > > -pr > > esent-introducing-red-hat-enterprise-linux-8-beta > > > > Thanks > > Sau! 
> > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus > > s > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ken.Young at windriver.com Thu Dec 6 15:16:50 2018 From: Ken.Young at windriver.com (Young, Ken) Date: Thu, 6 Dec 2018 15:16:50 +0000 Subject: [Starlingx-discuss] [build][meetings] Build team meeting Agenda 12/6/2018 In-Reply-To: <0B566C62EC792145B40E29EFEBF1AB4710578584@fmsmsx104.amr.corp.intel.com> References: <0B566C62EC792145B40E29EFEBF1AB4710578584@fmsmsx104.amr.corp.intel.com> Message-ID: 2 adds from me: * Public static analysis – what is the status * Bug triage – is there anything urgent? Are they all staffed? /KenY From: "Lara, Cesar" Date: Wednesday, December 5, 2018 at 5:53 PM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] [build][meetings] Build team meeting Agenda 12/6/2018 Build team meeting Agenda 12/6/2018 - Change logs and release notes - Follow up on ISO for releases and milestones - StarlingX Docker repository - opens Regards Cesar Lara Software Engineering Manager OpenSource Technology Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From erich.cordoba.malibran at intel.com Thu Dec 6 16:24:42 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Thu, 6 Dec 2018 16:24:42 +0000 Subject: [Starlingx-discuss] Centos Distro Direction In-Reply-To: <19D3C745-77D3-4FC3-982C-353FF96738B0@windriver.com> References: <9aed8be1-2aca-c537-2d48-25bac32354ed@linux.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAF79@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FE54214@SHSMSX101.ccr.corp.intel.com> <19D3C745-77D3-4FC3-982C-353FF96738B0@windriver.com> Message-ID: Hi Ken, In the best case, I think it would be just matter of point to the new files and everything should work. In the worst case the packages that are installed by us in the installer conflicts with some other packages already there. But this is just a guess. I can run the experiment to see how it goes. -Erich On 12/6/18, 9:14 AM, "Young, Ken" wrote: Erich, Wouldn’t the installer build script just use the new items or are significant changes required? /KenY On 2018-12-05, 12:49 PM, "Cordoba Malibran, Erich" wrote: BTW, I created this story to update the installer from CentOS 7.4 to CentOS 7.6, in case anyone wants to participate. https://storyboard.openstack.org/#!/story/2004516 -Erich On Tue, 2018-12-04 at 08:42 +0000, Lin, Shuicheng wrote: > It seems just rpm package is released, but srpm is not released yet. > I will keep check it recently. 
> > Best Regards > Shuicheng > > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Tuesday, December 4, 2018 1:03 AM > To: Xie, Cindy ; starlingx-discuss at lists.starlin > gx.io > Subject: Re: [Starlingx-discuss] Centos Distro Direction > > > > On 12/3/18 7:43 AM, Xie, Cindy wrote: > > Seems like that CentOS 7 just announced 1810 release today (guess > > this is 7.6): > > > > > https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.1810?highlight=% > 28%28Manuals%7CReleaseNotes%7CCentOS7.1810%29%29 > > > > Yup, timing is! > > Cindy, can you please put this on the agenda for the next non- > openstack Distro meeting. > > We also have a topic for the TSC (thanks BruceJ) the following > morning, TSC members may want to start weighing in here regarding my > initial proposal below, which we can talk more about on Thursday. > > > Thanks > Sau! > > > > thx. - cindy > > > > -----Original Message----- > > From: Saul Wold [mailto:sgw at linux.intel.com] > > Sent: Saturday, December 1, 2018 7:49 AM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Centos Distro Direction > > > > > > Folks, > > > > As we move forward into the spring release (Stein based), we will > > also > > be dealing with another CentOS update. RHEL has already released > > the > > 7.6 Update on Oct 30th, typically we should expect the CentOS 7.6 > > update shortl, about 30 days after RHEL releases. > > > > We should do the 7.6 Update as we did the 7.5 Update on a feature > > branch, it took about 2 months last time (including initial setup, > > rebasing, and de-fuzzing), I expect it will be shorter this time > > based on our past learning. > > > > We should start out with creating the feature branches (I will work > > with Dean on this) for stx-integ, stx-root, stx-tools, and stx- > > upstream repos. When we start the work, we need to remember to > > rebase the feature branches regularly and check for patch fuzzing > > issues. > > > > Cindy, can you please put this on your agenda for the next Non- > > Openstack Distro meeting. > > > > While on the topic of Cento Distro updates, many of you may have > > heard > > that RHEL 8 Beta was announced on Nov 14 [0], while this is not a > > CentOS release we should start thinking about that upgrade as it > > will > > be a larger effort as it includes the 4.18 kernel (alas not the > > 4.19 > > LTS > > kernel) along with many other upgrades. We should start a feature > > branch for CentOS 8 as well to do the updates, This will help > > reduce > > some of the patch load from the backported patches. Since we > > don't > > know exactly when CentOS 8 will be available this should be a > > Train-based release target (Fall 2019) (at the earliest) > > > > [0] > > https://www.redhat.com/en/blog/powering-its-future-while-preserving > > -pr > > esent-introducing-red-hat-enterprise-linux-8-beta > > > > Thanks > > Sau! 
> > > > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus
> > s
> >
> > _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From guillermo.a.ponce.castaneda at intel.com Thu Dec 6 16:56:04 2018
From: guillermo.a.ponce.castaneda at intel.com (Ponce Castaneda, Guillermo A)
Date: Thu, 6 Dec 2018 16:56:04 +0000
Subject: [Starlingx-discuss] Release notes/change log creation script
Message-ID: <64D76F71-F1B2-434F-9EA8-5F8066D26647@intel.com>
Hello everybody,
I want to share with you the following script that we use internally to create a Change Log every time we generate a new StarlingX ISO.
Internally we have a Jenkins server that creates (or tries to create) a new ISO every day, and with this ISO we also create a manifest.xml file. Another job, triggered as soon as the ISO job finishes, creates the Change Log using the following script:
https://gist.github.com/gaponcec/99f19e2bc972761e11ccba2260622d10
The script has the following requirements: argparse, gitpython, xmljson, dictdiffer and PTable.
It requires two parameters, the old manifest.xml and the new one, and it should be run like this:
$ python3 create_change_log.py -o old_manifest.xml -n new_manifest.xml
This will give you the change log on stdout. In our Jenkins job we save this to a file and e-mail it to the team afterwards.
Please let me know what you all think about it; feedback is really appreciated.
- Guillermo Ponce
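[Editor's sketch] For readers who cannot reach the gist, the core idea is small enough to sketch. The following is a hedged approximation of what such a manifest-diff script does (using plain ElementTree instead of the gist's xmljson/dictdiffer dependencies; it is not the gist itself):

import xml.etree.ElementTree as ET

def revisions(manifest_path):
    # A repo manifest lists <project name="..." revision="..."/> entries.
    root = ET.parse(manifest_path).getroot()
    return {p.get("name"): p.get("revision") for p in root.iter("project")}

def change_log(old_path, new_path):
    old, new = revisions(old_path), revisions(new_path)
    for name in sorted(old.keys() | new.keys()):
        if old.get(name) != new.get(name):
            print(f"{name}: {old.get(name)} -> {new.get(name)}")

change_log("old_manifest.xml", "new_manifest.xml")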
[0] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-October/001621.html [1] https://docs.starlingx.io/releasenotes/index.html [2] https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_Testplan [3] https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_TestingSummary From scott.little at windriver.com Thu Dec 6 17:58:19 2018 From: scott.little at windriver.com (Scott Little) Date: Thu, 6 Dec 2018 12:58:19 -0500 Subject: [Starlingx-discuss] [Container] Public docker registry In-Reply-To: References: <38d5e899-8741-18f9-88a7-47153c34f982@windriver.com> Message-ID: <6d9dd9ff-97ff-edcb-3107-160216bbaf35@windriver.com> Personally I don't like embedding a predicted release date in the milestone.  As we've seen they are subject to change.  If the milestone tag is to live on master branch, I'd prefer using the date the tag is made. So the 'b#' suffix would be unique to the milestone builds?  I could live with that.  My main ask is that there be a machine parsable way to distinguish a milestone tag from a release tag. The easiest to parse would be a prefix. Want to see all release tags ?    git tag | grep '^' Want latest milestone ?    git tag | grep '^' | sort --unique | tail -n 1 On 18-12-04 05:54 PM, Dean Troyer wrote: > On Tue, Dec 4, 2018 at 12:31 PM Scott Little wrote: >> However i think it's better to use the tag to allow for rebuilds of a release '2018.10.0'. My only concern here is that our current git tagging convention doesn't distinguish release from milestone. I would prefer a 'r' or 'm' prefix on our git tags. > For other reasons (mostly to do with the change to consume upstream > OpenStack from master) I am thinking we should adjust how we implement > milestones. The TSC has already talked about adjusting our release > schedule, and thus the milestone schedule, to align closer to the > OpenStack cadence (The release team is going to dive in to this in > more detail so final proposal TBD). If we do this the following are > the changes I am anticipating: > > * do not branch milestones, just tag master > * follow the OpenStack process of appending a suffix to the milestone > tag to identify which milestone (ie 'b1' for milestone 1, etc: NNNNb1) > > The major problem with this, and why I didn't adopt it from the start, > is that we are using date-based release tags rather than semantic > versioning (semver, the X.Y.Z we all know and love) so the value of > the next release tag can be anticipated but not certain. For example, > until a short time ago we had anticipated the next release to be > 2018.03, now it is more likely to be 2018.05. That makes it hard to > tag a milestone in January and have it all make sense. > > dt > From jim.somerville at windriver.com Thu Dec 6 18:34:00 2018 From: jim.somerville at windriver.com (Jim Somerville) Date: Thu, 6 Dec 2018 13:34:00 -0500 Subject: [Starlingx-discuss] Qemu 3.0.0 is ready for your consideration In-Reply-To: <4a538cb6-2d94-5799-70dd-9d0cca2e6682@windriver.com> References: <0add847a-267d-1706-e57f-5ee5a6bd34b5@windriver.com> <4a538cb6-2d94-5799-70dd-9d0cca2e6682@windriver.com> Message-ID: On 2018-12-05 11:17 a.m., Jim Somerville wrote: > > > On 2018-12-05 9:51 a.m., Dean Troyer wrote: >> On Tue, Dec 4, 2018 at 3:13 PM Jim Somerville >> wrote: >>> The update to 3.0.0 consists of two parts, a pull request for the >>> stx-qemu repo, and the piece in stx-integ which deals with the >>> compilation.  Like last time with libvirt, we have to commit these two >>> parts at the same time. 
>> >> Since the 3.0 work is going in to a new branch I think we are OK to go >> ahead and commit that to stx-qemu before the stx-integ change unless I >> am forgetting something?  It will be the change to stx-manifest >> switching to the new branch that will need to be coordinated with >> 622583 in stx-integ and that can be done with a Depends-On footer. > > That's correct, the branch creation is fine, it is the switching over to > it that has to be coordinated. > >> >>> The new qemu is here, and I will push a new branch and issue a pull >>> request to it once I'm done dealing with feedback. >>> https://github.com/jsomervi/stx-qemu/commits/working-3.0.0-noavp-12 >> >> I have created stx-qemu branch stx/v3.0.0 from upstream qemu at sha >> 38441756b70eec5807b5f60dad11a93a91199866 "Update version for v3.0.0 >> release", matching what you have in [0].  Target your PRs at that and >> we should be good to go. > > Thanks, will do. I have issued the pull request on github. -Jim > >> >>> The stx-integ part for review is here: >>> https://review.openstack.org/#/c/622583/ >> >> As I mentioned above, we should also queue up a review to stx-manifest >> that adds revision="stx/v3.0.0" to stx-qemu.  This should depend on >> 622583 and be +W before 622583, it will go through the gate test but >> be blocked from merging until 622583 merges so the window between them >> merging will be fairly small. > > OK, I'll go ahead and do that. > > -Jim > >> >> dt >> >> [0] >> https://review.openstack.org/#/c/622583/1/virt/qemu/centos/build_srpm.data >> >> From Eric.MacDonald at windriver.com Thu Dec 6 20:50:34 2018 From: Eric.MacDonald at windriver.com (MacDonald, Eric) Date: Thu, 6 Dec 2018 20:50:34 +0000 Subject: [Starlingx-discuss] starlingX mtce-common build failure Message-ID: <210898B96CA058408C55992CCAD98676B9F83034@ALA-MBD.corp.ad.wrs.com> The following update that merged earlier today removed code that now results in a compile error due to the declaration of an unused variable. Merged Update: https://review.openstack.org/#/c/623149/ This update has been reverted and +2/+1. Please repo sync once it gets merged. Cheers, Eric MacDonald, MTS, Engineering, Wind River direct 613.963.1387  fax: 613.492.7870 skype: eric.r.macdonald 350 Terry Fox Drive, Suite 200, Kanata, ON K2K 2W5 From abraham.arce.moreno at intel.com Thu Dec 6 21:36:36 2018 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Thu, 6 Dec 2018 21:36:36 +0000 Subject: [Starlingx-discuss] StarlingX @ Production, Update Basics Message-ID: Hi StarlingX Community, In today's "Build Team" meeting [0] one of the topics discussed was "ISO for release and milestones" and a couple of basic questions around StarlingX in a production environment came to my mind. I am looking for community guidance in any form how this phase will look like in near future, please consider I am a "early stage cloud knowledge in process" person trying to go beyond the "pip install --upgrade" [ Install ] I am looking for an Edge Computing solution, I found about StarlingX, get the latest official ISO and deployed in a configuration suitable for my needs. Cool! I have it up and running. [ Update ] Sometime later another release or milestone comes and some basic questions and own limited answers start popping up: - Should I do the upgrade or not? This depends entirely if the new release offers a specific feature I need. Any other reason to upgrade? 
- I have decided that the new release is needed. Grabbing a new ISO image and redeploying could make sense in an "All In One" deployment, but not in deployments with two-digit node counts. Is there a preferred way to update?
- Here is a list of possible methods to get the latest version of the OpenStack components and StarlingX services:
- What would be the typical scenario in which each of the methods below might be used?
- Do we have a priority for each of them?
- Are we targeting only one method? Two?
Methods:
- Software Upgrade through stx-update
- RPM Upgrade
- Docker Container
- Any other?
Thanks for your comments.
[0] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-December/002129.html

From Tao.Liu at windriver.com Thu Dec 6 23:09:06 2018
From: Tao.Liu at windriver.com (Liu, Tao)
Date: Thu, 6 Dec 2018 23:09:06 +0000
Subject: [Starlingx-discuss] API requests: stx-fault
Message-ID: <7242A3DC72E453498E3D783BBB134C3E9DDA2D7D@ALA-MBD.corp.ad.wrs.com>
Hi Abraham,
Thank you for your detailed analysis of the fault API documents; see my replies in your email below…
-----Original Message-----
From: "Arce Moreno, Abraham"
To: "starlingx-discuss at lists.starlingx.io"
Subject: [Starlingx-discuss] API requests: stx-fault
Message-ID: 
Content-Type: text/plain; charset="us-ascii"
stx-fault team,
Based on some time spent within stx-fault, and with the objective to align our REST API Documentation [0] with our REST APIs, we are kindly requesting your comments for the questions "?" under each section [ Section ] [Sub Section]
Please assume:
- The required X-Auth-Token is in place to authenticate; only URLs might be shown.
- StarlingX is configured as Standard Controller: 2 Controllers, 2 Computes.
[ Project Information ]
When we look at the name and description reported out by curl -i http://10.10.10.2:18002/ there is a mismatch between the documentation [1] and the information via API query:
API Documentation:
Name: stx-fault API
Description: StarlingX Fault API allows for the management of physical servers. This includes inventory collection and configuration of hosts, ports, interfaces, CPUs, disk, memory, and system configuration. The API also supports the configuration of the cloud's SNMP interface.
Source Code via API Query:
Name: Fault Management API
Description: Fault Management is an OpenStack project which provides REST API services for alarms and logs.
? Can you please let us know where the modifications are required? API Documentation or Source Code?
The API document should be modified to match the description returned from the API query.
[ v1/ ]
Here we are showing 3 different views of what we are seeing within the stx-fault project:
- Our initial "Migration WADL to RST", see history here [2]
- What we have documented in our "Current Official API Documentation" pages [0]
- What the "API Query Output" is actually showing with curl -i http://10.10.10.2:18002/v1/...
[ v1/ ] [ Migration WADL to RST ]
Migration analysis from WADL to RST format gave us the REST methods below to include; we are adding in the second column what seems to be the match for the valid API endpoint name:
Alarms > alarms
Event Log > event_log
Event Suppression > event_suppression
? Are all the names and API nodes correctly matched?
Yes.
From this same output, "links" has a reference to: "http://www.windriver.com/developer/fm/dev/api-spec-v1.html"
? Does this reference need to change to: https://docs.starlingx.io/api-ref/stx-fault/index.html
Yes, you are correct.
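[Editor's sketch] A usage sketch of the three v1 endpoints above (endpoint names and port 18002 as quoted in this thread; the token export follows the conventions stated at the top of the original email):

$ export TOKEN=...   # X-Auth-Token obtained from keystone
$ curl -s -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:18002/v1/alarms
$ curl -s -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:18002/v1/event_log
$ curl -s -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:18002/v1/event_suppression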
[ v1/ ] [ Current Official API Documentation ] Current Official API documentation [3] includes the following 3 REST API methods under "API Versions" v1/ details: - Alarms: http://10.10.10.2:6385/v1/alarms - Event Log: http://10.10.10.2:6385/v1/event_log - Event Suppression: http://10.10.10.2:6385/v1/event_suppression The port number is wrong here, typo? [ v1/ ] [ API Query Output ] API queries output shows these API REST methods: - alarms - event_log - event_suppression ? Do we need another level of review? No. ? Is there anything we need to take care of? No. Thanks for your initial support. [0] https://docs.starlingx.io/api-ref/stx-fault/ [1] https://docs.starlingx.io/api-ref/stx-fault/api-ref-fm-v1-fault.html?expanded=lists-information-about-fault-management-api-versions-detail#api-versions [2] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Analysis [3] https://docs.starlingx.io/api-ref/stx-fault/api-ref-fm-v1-fault.html?expanded=lists-information-about-fault-management-api-versions-detail,shows-details-for-fault-management-api-v1-detail#api-versions _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Fri Dec 7 01:08:18 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 7 Dec 2018 01:08:18 +0000 Subject: [Starlingx-discuss] Centos Distro Direction In-Reply-To: References: <9aed8be1-2aca-c537-2d48-25bac32354ed@linux.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAF79@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FE54214@SHSMSX101.ccr.corp.intel.com> <19D3C745-77D3-4FC3-982C-353FF96738B0@windriver.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DF2D46@SHSMSX104.ccr.corp.intel.com> Ken, We did the installer update during the last kernel upgrade as well. I forgot if we have specific storyboard at that time, but this is the required step for kernel upgrade like Victor/Erich stated. Thx. - cindy -----Original Message----- From: Cordoba Malibran, Erich Sent: Friday, December 7, 2018 12:25 AM To: Young, Ken Cc: Xie, Cindy ; Lin, Shuicheng ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Centos Distro Direction Hi Ken, In the best case, I think it would be just matter of point to the new files and everything should work. In the worst case the packages that are installed by us in the installer conflicts with some other packages already there. But this is just a guess. I can run the experiment to see how it goes. -Erich On 12/6/18, 9:14 AM, "Young, Ken" wrote: Erich, Wouldn’t the installer build script just use the new items or are significant changes required? /KenY On 2018-12-05, 12:49 PM, "Cordoba Malibran, Erich" wrote: BTW, I created this story to update the installer from CentOS 7.4 to CentOS 7.6, in case anyone wants to participate. https://storyboard.openstack.org/#!/story/2004516 -Erich On Tue, 2018-12-04 at 08:42 +0000, Lin, Shuicheng wrote: > It seems just rpm package is released, but srpm is not released yet. > I will keep check it recently. 
> > Best Regards > Shuicheng > > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Tuesday, December 4, 2018 1:03 AM > To: Xie, Cindy ; starlingx-discuss at lists.starlin > gx.io > Subject: Re: [Starlingx-discuss] Centos Distro Direction > > > > On 12/3/18 7:43 AM, Xie, Cindy wrote: > > Seems like that CentOS 7 just announced 1810 release today (guess > > this is 7.6): > > > > > https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.1810?highlight=% > 28%28Manuals%7CReleaseNotes%7CCentOS7.1810%29%29 > > > > Yup, timing is! > > Cindy, can you please put this on the agenda for the next non- > openstack Distro meeting. > > We also have a topic for the TSC (thanks BruceJ) the following > morning, TSC members may want to start weighing in here regarding my > initial proposal below, which we can talk more about on Thursday. > > > Thanks > Sau! > > > > thx. - cindy > > > > -----Original Message----- > > From: Saul Wold [mailto:sgw at linux.intel.com] > > Sent: Saturday, December 1, 2018 7:49 AM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Centos Distro Direction > > > > > > Folks, > > > > As we move forward into the spring release (Stein based), we will > > also > > be dealing with another CentOS update. RHEL has already released > > the > > 7.6 Update on Oct 30th, typically we should expect the CentOS 7.6 > > update shortl, about 30 days after RHEL releases. > > > > We should do the 7.6 Update as we did the 7.5 Update on a feature > > branch, it took about 2 months last time (including initial setup, > > rebasing, and de-fuzzing), I expect it will be shorter this time > > based on our past learning. > > > > We should start out with creating the feature branches (I will work > > with Dean on this) for stx-integ, stx-root, stx-tools, and stx- > > upstream repos. When we start the work, we need to remember to > > rebase the feature branches regularly and check for patch fuzzing > > issues. > > > > Cindy, can you please put this on your agenda for the next Non- > > Openstack Distro meeting. > > > > While on the topic of Cento Distro updates, many of you may have > > heard > > that RHEL 8 Beta was announced on Nov 14 [0], while this is not a > > CentOS release we should start thinking about that upgrade as it > > will > > be a larger effort as it includes the 4.18 kernel (alas not the > > 4.19 > > LTS > > kernel) along with many other upgrades. We should start a feature > > branch for CentOS 8 as well to do the updates, This will help > > reduce > > some of the patch load from the backported patches. Since we > > don't > > know exactly when CentOS 8 will be available this should be a > > Train-based release target (Fall 2019) (at the earliest) > > > > [0] > > https://www.redhat.com/en/blog/powering-its-future-while-preserving > > -pr > > esent-introducing-red-hat-enterprise-linux-8-beta > > > > Thanks > > Sau! 
> > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus > > s > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From shuicheng.lin at intel.com Fri Dec 7 01:32:50 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Fri, 7 Dec 2018 01:32:50 +0000 Subject: [Starlingx-discuss] Centos Distro Direction In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DF2D46@SHSMSX104.ccr.corp.intel.com> References: <9aed8be1-2aca-c537-2d48-25bac32354ed@linux.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DEAF79@SHSMSX104.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FE54214@SHSMSX101.ccr.corp.intel.com> <19D3C745-77D3-4FC3-982C-353FF96738B0@windriver.com> <2FD5DDB5A04D264C80D42CA35194914F35DF2D46@SHSMSX104.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FE54B54@SHSMSX101.ccr.corp.intel.com> Hi all, kernel-3.10.0-957 srpm for both std and rt kernel are available now. We will begin the kernel upgrade task soon. Installer upgrade (anaconda/kickstart) will be covered in the CentOS 7.6 upgrade also. For the early boot stage image (initrd.img/squash/vmlinuz), they will be generated by script "update-pxe-network-installer ". And verified when related packages (kernel/kernel driver/system etc) are upgraded. Hi Penney, For the anaconda, you suggested to skip the package upgrade during CentOS7.5 upgrade. What's your suggestion for the anaconda package when do CentOS7.6 upgrade? Thanks. Best Regards Shuicheng -----Original Message----- From: Xie, Cindy Sent: Friday, December 7, 2018 9:08 AM To: Cordoba Malibran, Erich ; Young, Ken Cc: Lin, Shuicheng ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Centos Distro Direction Ken, We did the installer update during the last kernel upgrade as well. I forgot if we have specific storyboard at that time, but this is the required step for kernel upgrade like Victor/Erich stated. Thx. - cindy -----Original Message----- From: Cordoba Malibran, Erich Sent: Friday, December 7, 2018 12:25 AM To: Young, Ken Cc: Xie, Cindy ; Lin, Shuicheng ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Centos Distro Direction Hi Ken, In the best case, I think it would be just matter of point to the new files and everything should work. In the worst case the packages that are installed by us in the installer conflicts with some other packages already there. But this is just a guess. I can run the experiment to see how it goes. -Erich On 12/6/18, 9:14 AM, "Young, Ken" wrote: Erich, Wouldn’t the installer build script just use the new items or are significant changes required? /KenY On 2018-12-05, 12:49 PM, "Cordoba Malibran, Erich" wrote: BTW, I created this story to update the installer from CentOS 7.4 to CentOS 7.6, in case anyone wants to participate. 
https://storyboard.openstack.org/#!/story/2004516 -Erich On Tue, 2018-12-04 at 08:42 +0000, Lin, Shuicheng wrote: > It seems just rpm package is released, but srpm is not released yet. > I will keep check it recently. > > Best Regards > Shuicheng > > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: Tuesday, December 4, 2018 1:03 AM > To: Xie, Cindy ; starlingx-discuss at lists.starlin > gx.io > Subject: Re: [Starlingx-discuss] Centos Distro Direction > > > > On 12/3/18 7:43 AM, Xie, Cindy wrote: > > Seems like that CentOS 7 just announced 1810 release today (guess > > this is 7.6): > > > > > https://wiki.centos.org/Manuals/ReleaseNotes/CentOS7.1810?highlight=% > 28%28Manuals%7CReleaseNotes%7CCentOS7.1810%29%29 > > > > Yup, timing is! > > Cindy, can you please put this on the agenda for the next non- > openstack Distro meeting. > > We also have a topic for the TSC (thanks BruceJ) the following > morning, TSC members may want to start weighing in here regarding my > initial proposal below, which we can talk more about on Thursday. > > > Thanks > Sau! > > > > thx. - cindy > > > > -----Original Message----- > > From: Saul Wold [mailto:sgw at linux.intel.com] > > Sent: Saturday, December 1, 2018 7:49 AM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Centos Distro Direction > > > > > > Folks, > > > > As we move forward into the spring release (Stein based), we will > > also > > be dealing with another CentOS update. RHEL has already released > > the > > 7.6 Update on Oct 30th, typically we should expect the CentOS 7.6 > > update shortl, about 30 days after RHEL releases. > > > > We should do the 7.6 Update as we did the 7.5 Update on a feature > > branch, it took about 2 months last time (including initial setup, > > rebasing, and de-fuzzing), I expect it will be shorter this time > > based on our past learning. > > > > We should start out with creating the feature branches (I will work > > with Dean on this) for stx-integ, stx-root, stx-tools, and stx- > > upstream repos. When we start the work, we need to remember to > > rebase the feature branches regularly and check for patch fuzzing > > issues. > > > > Cindy, can you please put this on your agenda for the next Non- > > Openstack Distro meeting. > > > > While on the topic of Cento Distro updates, many of you may have > > heard > > that RHEL 8 Beta was announced on Nov 14 [0], while this is not a > > CentOS release we should start thinking about that upgrade as it > > will > > be a larger effort as it includes the 4.18 kernel (alas not the > > 4.19 > > LTS > > kernel) along with many other upgrades. We should start a feature > > branch for CentOS 8 as well to do the updates, This will help > > reduce > > some of the patch load from the backported patches. Since we > > don't > > know exactly when CentOS 8 will be available this should be a > > Train-based release target (Fall 2019) (at the earliest) > > > > [0] > > https://www.redhat.com/en/blog/powering-its-future-while-preserving > > -pr > > esent-introducing-red-hat-enterprise-linux-8-beta > > > > Thanks > > Sau! 
From huifeng.le at intel.com Fri Dec 7 09:11:09 2018
From: huifeng.le at intel.com (Le, Huifeng)
Date: Fri, 7 Dec 2018 09:11:09 +0000
Subject: [Starlingx-discuss] Questions about patch 87a8c625 upstreaming
In-Reply-To:
References: <76647BD697F40748B1FA4F56DA02AA0B4D548D70@SHSMSX104.ccr.corp.intel.com>
Message-ID: <76647BD697F40748B1FA4F56DA02AA0B4D54945A@SHSMSX104.ccr.corp.intel.com>

Matt,

Thanks much for the clarification! Sorry for more questions:

· The current metadata proxy is managed by the l3 agent, the dhcp agent, or both (e.g. there may be 3 haproxy processes co-existing in an environment with 1 router and 2 networks). The STX patch is applied to the dhcp agent now. Does it also need the capability for the l3 agent to stop the haproxy processes it manages when the agent is stopped?
· The patch has a side effect of disabling the metadata query capability (from the vm instance) when the dhcp agent stops; I suppose this is expected behavior, right?

Best Regards,
Le, Huifeng

From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Thursday, December 6, 2018 11:11 PM
To: Le, Huifeng
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: Questions about patch 87a8c625 upstreaming

Hello Huifeng,

I believe the use-case was having the ability to restart the metadata proxy in order to provide the ability to patch the software without a reboot. Therefore, in order to restart and/or reconfigure the metadata proxies that are managed by the DHCP agent, they needed to be restarted when the agent was restarted, since there was no direct way to restart the individual proxies managed by that service.

The DHCP agent will automatically configure and launch the metadata proxy if there are no virtual routers currently serving that network. This is controlled by the parameter enable_isolated_metadata and is configured to be True by default for StarlingX. The force_metadata option can be used to always have the DHCP agent configure the metadata access but is not enabled by default (as you indicated).

-Matt

From: "Le, Huifeng" >
Date: Wednesday, December 5, 2018 at 4:20 AM
To: "Peters, Matt" >
Cc: "starlingx-discuss at lists.starlingx.io" >
Subject: Questions about patch 87a8c625 upstreaming

Matt,

I am looking at patch #87a8c625 (US86444: patching scripts for neutron processes), which includes 2 parts. Could you please help to clarify the questions below?

· Script to support neutron service restart: this is an STX-specific script and does not need to go upstream.
· Metadata-proxy service lifecycle management: close the metadata proxy (e.g. haproxy) process (if it is managed by the dhcp agent) when neutron-dhcp-agent is stopped.

To my understanding, the metadata proxy (e.g. haproxy) provides support for a VM instance to query its metadata information from Neutron, which is in the data-path plane; neutron-dhcp-agent is responsible for configuring the dhcp process (e.g. dnsmasq) or the metadata proxy process, which lies in the control-path plane. So it seems that, by design, whether the metadata-proxy process is alive is determined by network/port status changes rather than by whether the dhcp agent is alive. Are there any special use cases that require stopping the metadata proxy process when the dhcp agent is stopped? In my test:

· If the metadata proxy is managed by the l3 agent (default setting): stopping (/restarting) the l3 agent will not stop (/restart) the haproxy process or clear the iptables rules; the router can still work (in the data path) and metadata information can still be queried inside the VM instance.
· If the metadata proxy is managed by the dhcp agent (set "force_metadata = true" in /etc/neutron/dhcp_agent.ini): stopping (/restarting) the dhcp agent will not stop (/restart) the dnsmasq process and the haproxy process; the dhcp server can still work (in the data path) and metadata information can still be queried inside the VM instance. If applying the patch, stopping the dhcp agent will make the metadata query within the VM instance fail while the DHCP server can still work (or does it also need to be stopped?), and the metadata bridge information (e.g. 169.254.169.254/16) is also still valid on the dhcp interface. Is this the expected behavior?

Thanks much!

Best Regards,
Le, Huifeng
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chenjie.xu at intel.com Fri Dec 7 08:41:33 2018
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Fri, 7 Dec 2018 08:41:33 +0000
Subject: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
In-Reply-To: <331FE402-1858-451D-8506-92E3E1033612@windriver.com>
References: <51F8F06E-D06E-4DDA-AABF-D69B622EFD56@windriver.com> <331FE402-1858-451D-8506-92E3E1033612@windriver.com>
Message-ID:

Hi Matt,

Ryan Tidwell commented on this patch and he thinks that the AFTER_DELETE notification can be used to trigger l2pop:
https://review.openstack.org/#/c/611261/
https://review.openstack.org/#/c/611261/4/neutron/db/l3_db.py

From the comment in the following line:
https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/services/l3_router/service_providers/l2pop.py#L276
it seems that the router_id and port_id in the AFTER_DELETE notification are None. As a result, the last_known_router_id and last_fixed_port_id should be used to construct the FDB entries which are used to remove FDBs on each host. However, I printed the notification in the following 2 cases:

Case-1:
1) Allocate floating ip fip-1
2) Associate fip-1 with vm-1
3) Delete fip-1

Case-2:
1) Allocate floating ip fip-1
2) Associate fip-1 with vm-1
3) Disassociate fip-1 with vm-1
4) Delete fip-1

The notifications for case 1 and case 2 are attached. router_id and port_id are not None in case-1 and are None in case-2. Thus in case-1 the AFTER_DELETE notification can be used. In case-2 the FDB will be removed by step 3, so there is no need to remove it again. Based on the above analysis, I think we can use the AFTER_DELETE notification. Could you please comment and review?

Best Regards,
Xu, Chenjie

From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Monday, November 12, 2018 11:19 PM
To: Xu, Chenjie
Cc: starlingx-discuss at lists.starlingx.io; Legacy, Allain
Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming

Hi Chenjie,
The latest RFE looks good to me.
Regards, Matt

From: "Xu, Chenjie" >
Date: Monday, November 12, 2018 at 1:23 AM
To: "Peters, Matt" >
Cc: "starlingx-discuss at lists.starlingx.io" >, Allain Legacy >
Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming

Hi Matt,
The RFE has been updated and is attached. Could you please help review and comment?

Best Regards,
Xu, Chenjie

From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Friday, November 9, 2018 9:22 PM
To: Xu, Chenjie >; Legacy, Allain >
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming

Hi Chenjie,
The RFE looks good. The use cases are clear and detailed. I only have a few minor review comments (see attached).

Regards, Matt

From: "Xu, Chenjie" >
Date: Thursday, November 1, 2018 at 4:28 AM
To: "Peters, Matt" >, Allain Legacy >
Cc: "starlingx-discuss at lists.starlingx.io" >
Subject: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming

Hi Matt/Allain,
We analyzed patch 9f926a5 related to l2pop. An RFE "Add l2pop support for floating ip resources" has been written and is attached. The test case is provided by Allain. Could you please help to review and comment? Thanks very much!

Best Regards,
Xu, Chenjie
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: case1.PNG
Type: image/png
Size: 200299 bytes
Desc: case1.PNG
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: case2.PNG
Type: image/png
Size: 140408 bytes
Desc: case2.PNG
URL:

From Matt.Peters at windriver.com Fri Dec 7 11:53:41 2018
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Fri, 7 Dec 2018 11:53:41 +0000
Subject: [Starlingx-discuss] Questions about patch 87a8c625 upstreaming
In-Reply-To: <76647BD697F40748B1FA4F56DA02AA0B4D54945A@SHSMSX104.ccr.corp.intel.com>
References: <76647BD697F40748B1FA4F56DA02AA0B4D548D70@SHSMSX104.ccr.corp.intel.com> <76647BD697F40748B1FA4F56DA02AA0B4D54945A@SHSMSX104.ccr.corp.intel.com>
Message-ID: <6FD6E6FA-0AA2-476B-80B3-0DB8766702F8@windriver.com>

See inline

From: "Le, Huifeng"
Date: Friday, December 7, 2018 at 4:11 AM
To: "Peters, Matt"
Cc: "starlingx-discuss at lists.starlingx.io"
Subject: RE: Questions about patch 87a8c625 upstreaming

Matt,

Thanks much for the clarification! Sorry for more questions:

* The current metadata proxy is managed by the l3 agent, the dhcp agent, or both (e.g. there may be 3 haproxy processes co-existing in an environment with 1 router and 2 networks). The STX patch is applied to the dhcp agent now. Does it also need the capability for the l3 agent to stop the haproxy processes it manages when the agent is stopped?

MP> Yes. This was not included in the original patch because the original seed code did not use the default L3 agent.

* The patch has a side effect of disabling the metadata query capability (from the vm instance) when the dhcp agent stops; I suppose this is expected behavior, right?

MP> Yes. It needs to be temporarily stopped while it is patched.

Best Regards,
Le, Huifeng
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Allain.Legacy at windriver.com Fri Dec 7 14:19:46 2018
From: Allain.Legacy at windriver.com (Legacy, Allain)
Date: Fri, 7 Dec 2018 14:19:46 +0000
Subject: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
In-Reply-To:
References: <51F8F06E-D06E-4DDA-AABF-D69B622EFD56@windriver.com> <331FE402-1858-451D-8506-92E3E1033612@windriver.com>
Message-ID: <70A7408C6E1BFB41B192A929744D8523BAC4D60F@ALA-MBD.corp.ad.wrs.com>

The change that is being reviewed here was originally a part of a larger commit (9f926a5d253). They should be implemented together or at least tested together.

I seem to remember that there was information missing in case 1 that prevented a proper FDB notification from being generated.
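One hypothetical way to capture the inputs to add_fdb_entries(), remove_fdb_entries(), and update_fdb_entries() during the retest requested below is to wrap the notify API named there and log each call. This is a sketch only — a debugging aid assuming the usual (self, context, ...) calling convention, not part of the patch under review:

# Hypothetical debugging aid: log every call into the l2pop notify API so
# the fdb_entries passed for each floating IP scenario appear in the logs.
import functools

from oslo_log import log as logging
from neutron.plugins.ml2.drivers.l2pop import rpc as l2pop_rpc

LOG = logging.getLogger(__name__)

def _trace(method):
    @functools.wraps(method)
    def wrapper(self, context, *args, **kwargs):
        LOG.info("l2pop %s: args=%s kwargs=%s", method.__name__, args, kwargs)
        return method(self, context, *args, **kwargs)
    return wrapper

for _name in ("add_fdb_entries", "remove_fdb_entries", "update_fdb_entries"):
    setattr(l2pop_rpc.L2populationAgentNotifyAPI, _name,
            _trace(getattr(l2pop_rpc.L2populationAgentNotifyAPI, _name)))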
Please retest your scenarios and capture the input parameters to add_fdb_entries(), remove_fdb_entries(), and update_fdb_entries() in neutron/plugins/ml2/drivers/l2pop/rpc.py:L2populationAgentNotifyAPI to be sure that expected notifications are published.

Regards,
Allain

Allain Legacy, Software Developer, Wind River
direct 613.270.2279 fax 613.492.7870 skype allain.legacy
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
[WIND]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 1807 bytes
Desc: image001.png
URL:

From chenjie.xu at intel.com Fri Dec 7 14:30:44 2018
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Fri, 7 Dec 2018 14:30:44 +0000
Subject: [Starlingx-discuss] Analysis of patch 4ae5a58 for StartlingX upstreaming
Message-ID:

Hi Matt,

Will the change in BGPVPN be upstreamed? The code uses the RFE "Enable other subprojects to extend l2pop fdbs".

Best Regards,
Xu, Chenjie
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chenjie.xu at intel.com Fri Dec 7 15:49:53 2018
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Fri, 7 Dec 2018 15:49:53 +0000
Subject: [Starlingx-discuss] Community's feedback for patch 4ae5a58 for StartlingX Upstreaming
Message-ID:

Hi Matt,

The RFE "Enable other subprojects to extend l2pop fdb information" was discussed in the OpenStack Neutron drivers meeting. The main concern of the community is that this RFE will not be used in the future. For now this RFE has 2 use cases: one is BGPVPN and the other is the RFE "Add l2pop support for floating ip resources". The community proposes to do the following things before deciding whether or not to approve the RFE "Enable other subprojects to extend l2pop fdb information":
1) Review the RFE "Add l2pop support for floating ip resources".
2) Miguel volunteers to facilitate a conversation with the networking-bgpvpn team. The community needs to know whether the networking-bgpvpn team will accept using this RFE to extend l2pop FDBs or not.

The link for the RFE "Enable other subprojects to extend l2pop fdb information":
https://bugs.launchpad.net/neutron/+bug/1793653
The link for the RFE "Add l2pop support for floating ip resources":
https://bugs.launchpad.net/neutron/+bug/1803494
The link for the logs of the OpenStack Neutron drivers meeting:
http://eavesdrop.openstack.org/meetings/neutron_drivers/2018/neutron_drivers.2018-12-07-14.00.log.html
The link for the changes in stx-networking-bgpvpn to extend l2pop FDBs:
https://github.com/starlingx-staging/stx-networking-bgpvpn/blob/master/networking_bgpvpn/neutron/services/service_drivers/neutron_dynamic_routing/dr.py

Best Regards,
Xu, Chenjie
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From John.Kung at windriver.com Fri Dec 7 15:51:29 2018
From: John.Kung at windriver.com (Kung, John)
Date: Fri, 7 Dec 2018 15:51:29 +0000
Subject: [Starlingx-discuss] API requests: stx-config
Message-ID:

Abraham,

Please find enclosed my feedback, preceded by [JK]. Given the level of detail and input required, could you please open a Gerrit review in order to get feedback and comments?

Thanks,
John

-----Original Message-----
From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com]
Sent: Tuesday, December 04, 2018 11:48 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] API requests: stx-config

stx-config team,

Based on some time spent within stx-config, and with the objective to align our REST API documentation with our REST APIs, we are kindly requesting your comments for the questions "?" under each section [ Section ] [Sub Section].

Please assume:
- The required X-Auth-Token is in place to authenticate; only URLs might be shown.
- StarlingX is configured as Standard Controller: 2 Controllers, 2 Computes.
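A minimal sketch of how such an authenticated query is formed (urllib in place of curl; the token value is a placeholder to be obtained from keystone):

# Same request as "curl -i http://10.10.10.2:6385/v1/" with the token header.
import urllib.request

TOKEN = "<X-Auth-Token>"
req = urllib.request.Request("http://10.10.10.2:6385/v1/",
                             headers={"X-Auth-Token": TOKEN})
with urllib.request.urlopen(req) as resp:
    print(resp.status)
    print(resp.read().decode())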
[ Project Information ]

The mismatch between documentation [0] and information via API query is addressed under [3] and [4].

? Heads Up! The description includes the word "interfaces"; however, as you will find below, "interfaces" is also listed in the documentation but not intuitively found as a REST method under the API query output. More information below.

[ v1/ ]

Here we are showing 3 different views of what we are seeing within the stx-config project:
- Our initial "Migration WADL to RST", see history here [1]
- What we have documented in our "Current Official API Documentation" pages [0]
- What the "API Query Output" is actually showing with curl -i http://10.10.10.2:6385/v1/...

[ v1/ ] [ Migration WADL to RST ]

Migration analysis from WADL to RST format gave us the REST METHODs below to include; we are adding in the second column what seems to be the match for the valid API REST methods:

System > isystems
Clusters > clusters
Interfaces > ? [JK] v1/iinterfaces
Partitions > ! /v1/ihosts/{host_id}/partitions || /v1/partitions/{partition_id}
Volume Groups > ! /v1/ihosts/{host_id}/ilvgs || /v1/ilvgs/{volumegroup_id}
Physical Volumes > ! /v1/ihosts/{host_id}/ipvs || /v1/ipvs/{physicalvolume_id}
Ceph Storage Functions > ! /v1/ihosts/{host_id}/istors || /v1/istors/{stor_id}
Profiles > iprofile
DNS > idns
NTP > intp
External OAM > iextoam
Infrastructure Subnet > iinfra
DRBD Configuration > drbdconfig
SNMP Communities > icommunity
SNMP Trap Destinations > itrapdest
Devices > ! /v1/devices/{device_id} || /v1/ihosts/{host_id}/pci_devices [JK] This should be v1/pci_devices/{pci_device_id} || /v1/ihosts/{ihost_id}/pci_devices
Service Parameter > service_parameter
SDN Controllers > sdn_controller
Remote Logging > remotelogging
Networks > networks
Address Pools > addrpools
Addresses > addresses
Routes > ! /v1/ihosts/{host_id}/routes
Storage Backends > storage_backend
Storage Tiers > ! storage_tiers
Controller Filesystem > controller_fs
Ceph Monitors > ceph_mon
System Certificate Configuration > ! certificate
Custom Firewall Rules > firewallrules

? Are all the names and API REST methods correctly matched? [JK] Added /v1/iinterfaces above for Interfaces. No, devices should be pci_devices. [JK] Missing ethernet_ports.
? Are all the valid API REST method names correct? [JK] See my comment above.
? "Interfaces" is listed under the v1/ API version output [2] as an expected service, but a REST METHOD match was not found. Are we talking about "Interface" as one of the following? [JK] This is referring to v1/iinterfaces.
1) Is it the "interface_networks" REST method? Interface_networks is a separate REST method and allows association of networks to iinterfaces.
2) Or is it found under "Profiles", as described in its description: "...This includes interface profiles..."?
3) Or is it found under "SDN Controllers", as described in its description: "...SDN manager interface..."?
4) Or is it as simple as "Networks" interfaces?

[ v1/ ] [ Current Official API Documentation ]

The following API REST methods documented under [0] give valid API output:
- System
- Clusters
- DNS
- NTP
- External OAM
- Infrastructure Subnet
- DRBD Configuration
- SNMP Communities
- SNMP Trap Destinations
- Service Parameter
- SDN Controllers
- Remote Logging
- Networks
- Address Pools
- Storage Backends
- Storage Tiers
- Controller Filesystem
- Ceph Monitors
- System Certificate Configuration
- Custom Firewall Rules
- Partitions
- Volume Groups
- Physical Volumes
- Ceph Storage Functions
- Devices
- Addresses
- Routes

The following API REST method documented under [0] has an invalid name:
- Profiles

Documentation points to /v1/iprofiles, while the valid v1/ endpoint is http://10.10.10.2:6385/v1/iprofile

? Is this a valid documentation change from "iprofiles" to "iprofile"? [JKUNG] Yes, this should be updated to 'iprofile' from 'iprofiles'.

[ v1/ ] [ API Query Output ]

Based on our "[Starlingx-discuss] API requests: stx-ha" [3], we learned the following API REST methods from "System Inventory API v1" are assigned to stx-ha:
- services
- servicenodes
- service_groups [JK] this is servicegroup, not service_groups

And in our "[Starlingx-discuss] API requests: stx-metal" [4], the following are assigned to stx-metal. [JK] This is part of a pending user story 2002950.
- lldp_neighbours
- ihosts
- icpu
- lldp_agents

And now these API REST methods are assigned to stx-config:
- isystems
- clusters
- ! /v1/ihosts/{host_id}/partitions || /v1/partitions/{partition_id}
- ! /v1/ihosts/{host_id}/ilvgs || /v1/ilvgs/{volumegroup_id}
- ! /v1/ihosts/{host_id}/ipvs || /v1/ipvs/{physicalvolume_id}
- ! /v1/ihosts/{host_id}/istors || /v1/istors/{stor_id}
- iprofile
- idns
- intp
- iextoam
- iinfra
- drbdconfig
- icommunity
- itrapdest
- ! /v1/devices/{device_id} || /v1/ihosts/{host_id}/pci_devices
- service_parameter
- sdn_controller
- remotelogging
- networks
- addrpools
- addresses
- ! /v1/ihosts/{host_id}/routes
- storage_backend
- ! storage_tiers
- controller_fs
- ceph_mon
- ! certificate
- firewallrules

Leaving the following assigned to other StarlingX components; more to come once we review the remaining StarlingX projects:
- links [JKUNG] These are still currently in stx-config, except, I believe, license; upgrade does not apply to StarlingX. Needs confirmation from storage which ones will continue to be supported (i.e. storage_ceph_external).
- storage_file
- storage_lvm
- interface_networks
- id
- ptp
- media_types
- upgrade
- imemory
- storage_ceph_external
- health
- license
- storage_ceph
- storage_external
- iuser
- helm_charts
- inode

? Do we need another level of review? YES, needs update and re-review.
? Should we target an update to the documentation in terms of the number of services we are documenting, comparing the 3 perspectives?
? Is there anything we need to take care of?

Thanks for your initial support.
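One hypothetical way to automate the comparison across the perspectives above is to scrape resource names from the hrefs advertised under /v1/ and diff them against a documented list. A sketch — the DOCUMENTED set below is a small illustrative subset, and no particular response shape is assumed:

# Cross-check documented method names against what /v1/ actually advertises.
import re
import urllib.request

DOCUMENTED = {"isystems", "clusters", "iprofile", "idns", "intp", "iextoam"}

req = urllib.request.Request("http://10.10.10.2:6385/v1/",
                             headers={"X-Auth-Token": "<token>"})
body = urllib.request.urlopen(req).read().decode()
advertised = set(re.findall(r"/v1/([A-Za-z_]+)", body))

print("advertised but not documented:", sorted(advertised - DOCUMENTED))
print("documented but not advertised:", sorted(DOCUMENTED - advertised))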
[0] https://docs.starlingx.io/api-ref/stx-config/index.html
[1] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Analysis
[2] https://docs.starlingx.io/api-ref/stx-config/api-ref-sysinv-v1-config.html?expanded=shows-details-for-configuration-api-v1-detail#api-versions
[3] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-November/001868.html
[4] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-November/002032.html

From bruce.e.jones at intel.com Fri Dec 7 16:49:02 2018
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Fri, 7 Dec 2018 16:49:02 +0000
Subject: [Starlingx-discuss] API requests: stx-config
In-Reply-To:
References:
Message-ID: <9A85D2917C58154C960D95352B22818BB1EC52CA@fmsmsx117.amr.corp.intel.com>

John, isn't that backwards? Shouldn't the dev team take the feedback from Abraham and the Docs team and implement the changes needed to fix the document?

brucej
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From erich.cordoba.malibran at intel.com Fri Dec 7 17:15:13 2018
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Fri, 7 Dec 2018 17:15:13 +0000
Subject: [Starlingx-discuss] Removing version from installer files
Message-ID:

Hi all,

I'm sending this review to remove the installer version "stx-0.2" from the filenames required to create the installer. This could be a breaking change for some people, as the tis-installer folder and the files inside it are populated manually. The changes needed to avoid a broken build are:

mv tis-installer stx-installer
mv stx-installer/vmlinuz-stx-0.2 stx-installer/vmlinuz
mv stx-installer/squashfs.img-stx-0.2 stx-installer/squashfs.img
mv stx-installer/initrd.img-stx-0.2 stx-installer/initrd.img

Thanks

-Erich

From jose.perez.carranza at intel.com Fri Dec 7 17:18:54 2018
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Fri, 7 Dec 2018 17:18:54 +0000
Subject: [Starlingx-discuss] [ Test ] Discussion of the Testing strategy
Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A9179AB@fmsmsx101.amr.corp.intel.com>

Below are the key points reviewed at the meeting:

- A strategy document was presented by Numan:
  o Distribute the test responsibilities between partners.
  o Select an automation system; get feedback from the community.
  o Split on different levels (Unit, Component and Functional).
  o Define how and where to contribute new Test Cases.
- Ada also presented a test strategy:
  o The functional test strategy was reviewed.
  o A cadence of testing was proposed.
  o The performance test strategy will need further analysis.
  o The current Sanity Test was presented and some improvements will be made.
Regards, Matt From: "Xu, Chenjie" > Date: Monday, December 3, 2018 at 3:53 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Matt, The RFE for patch 71c07d7 has been drafted and is attached. Could you please help review and comment? Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Fri Dec 7 18:25:58 2018 From: claire at openstack.org (Claire Massey) Date: Fri, 7 Dec 2018 12:25:58 -0600 Subject: [Starlingx-discuss] December 17 Call for 2019 Planning - Community Building, Marketing, etc Message-ID: <6F08F7DC-1BFA-4E02-BDAD-26B24A539221@openstack.org> Hi everyone, Looking ahead to 2019 we’ll have an open StarlingX community meeting to brainstorm and discuss plans for educational activities, engagement, marketing, advocacy, etc. *The call will be on Monday, December 17, at 7:00am PST (15:00 UTC).* Call in info is posted below. We will use this etherpad for notes: https://etherpad.openstack.org/p/2019_StarlingX_Marketing_Plans In the meantime, please take a look at the running list of 2019 events and make note of the upcoming CFP deadlines tracked here: https://docs.google.com/spreadsheets/d/1A9HiMjnqVGxSCd9No7theW8oNu3V1rmK6R1xJOzfUEU/edit?usp=sharing Thanks, Claire Zoom Meeting: https://zoom.us/j/952154828 One tap mobile +16468769923,,952154828# US (New York) +16699006833,,952154828# US (San Jose) Dial by your location +1 646 876 9923 US (New York) +1 669 900 6833 US (San Jose) Meeting ID: 952 154 828 Find your local number: https://zoom.us/u/abqUlOnSr -------------- next part -------------- An HTML attachment was scrubbed... URL: From John.Kung at windriver.com Fri Dec 7 18:28:51 2018 From: John.Kung at windriver.com (Kung, John) Date: Fri, 7 Dec 2018 18:28:51 +0000 Subject: [Starlingx-discuss] API requests: stx-config In-Reply-To: <9A85D2917C58154C960D95352B22818BB1EC52CA@fmsmsx117.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BB1EC52CA@fmsmsx117.amr.corp.intel.com> Message-ID: I was looking for a better method to provide feedback on Abraham’s objective to align REST API documentation with the REST APIs. We could start with email, however, I feel the details and feedback from the cores could be better represented in another tool. John From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, December 7, 2018 11:49 AM To: Kung, John; Arce Moreno, Abraham Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Re: [Starlingx-discuss] API requests: stx-config John, isn’t that backwards? Shouldn’t the dev team take the feedback from Abraham and the Docs team and implement the changes needed to fix the document? brucej From: Kung, John [mailto:John.Kung at windriver.com] Sent: Friday, December 7, 2018 7:51 AM To: Arce Moreno, Abraham Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] API requests: stx-config Abraham, Please find enclosed my feedback preceded by [JK] Given the level of detail and input required, could you please open a Gerrit review in order to get feedback and comments? 
Thanks, John -----Original Message----- From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] Sent: Tuesday, December 04, 2018 11:48 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] API requests: stx-config stx-config team, Based in some time spent now within stx-config and with the objective to align our REST API Documentation with our REST APIs, we are kindly requesting your comments for questions "?" under each section [ Section ] [Sub Section] Please assume: - The require X-Auth-Token is in place to authenticate, only URLs might be shown. - StarlingX is configured as Standard Controller: 2 Controllers, 2 Computes. [ Project Information ] The mismatch between documentation [0] and information via API Query is addressed under [3] and [4]. ? Heads Up! The description includes the word "interfaces" however as you will find below, "interfaces is also listed in the documentation but not intuitively found as a REST method under the API query output. More information below. [ v1/ ] Here we are showing 3 different views of what we are seeing within stx-config project: - Our initial "Migration WADL to RST", see history here [1] - What we have documented in our "Current Official API Documentation" pages [0] - What the "API Query Output" is actually showing with curl -i http://10.10.10.2:6385/v1/... [ v1/ ] [ Migration WADL to RST ] Migration analysis from WADL to RST format gave us the REST METHODs below to include, we are adding in the second column what it seems to be the match for the valid API REST methods: System > isystems Clusters > clusters Interfaces > ? [JK] v1/iinterfaces Partitions > ! /v1/ihosts/​{host_id}​/partitions || /v1/partitions/​{partition_id}​ Volume Groups > ! /v1/ihosts/​{host_id}​/ilvgs || /v1/ilvgs/​{volumegroup_id}​ Physical Volumes > ! /v1/ihosts/​{host_id}​/ipvs || /v1/ipvs/​{physicalvolume_id}​ Ceph Storage Functions > ! /v1/ihosts/​{host_id}​/istors || /v1/istors/​{stor_id}​ Profiles > iprofile DNS > idns NTP > intp External OAM > iextoam Infrastructure Subnet > iinfra DRBD Configuration > drbdconfig SNMP Communities > icommunity SNMP Trap Destinations > itrapdest Devices > ! /v1/devices/​{device_id}​ || /v1/ihosts/​{host_id}​/pci_devices [JK] This should be v1/pci_devices/{pci_device_id} || /v1/ihosts/{ihost_id}/pci_devices Service Parameter > service_parameter SDN Controllers > sdn_controller Remote Logging > remotelogging Networks > networks Address Pools > addrpools Addresses > addresses Routes > ! /v1/ihosts/​{host_id}​/routes Storage Backends > storage_backend Storage Tiers > ! storage_tiers Controller Filesystem > controller_fs Ceph Monitors > ceph_mon System Certificate Configuration > ! certificate Custom Firewall Rules > firewallrules ? Are all the names and API REST methods correctly matched? [JK] Added /v1/iinterfaces above for Interfaces. No, devices should be pci_devices [JK] missing ethernet_ports ? Are all the valid API REST method names correct? [JK] See my comment above ? "Interfaces" is listed under v1/ API Version output [2] as an expected service but a REST METHOD match was not found, are we talking about "Interface" one of the following ones: [JK] This is referring to v1/iinterfaces 1) Is it the "interface_networks" REST method? Interface_networks is a separate REST method and allows association of a networks to an iinterfaces 2) Or found under "Profiles" as described under its description: "...This includes interface profiles..." 
3) Or found under "SDN Controllers" as described under its description: "...SDN manager interface..." 4) Or as simple as "Networks" interfaces? [ v1/ ] [ Current Official API Documentation ] The following API REST methods documented under [0] give valid API output: - System - Clusters - DNS - NTP - External OAM - Infrastructure Subnet - DRBD Configuration - SNMP Communities - SNMP Trap Destinations - Service Parameter - SDN Controllers - Remote Logging - Networks - Address Pools - Storage Backends - Storage Tiers - Controller Filesystem - Ceph Monitors - System Certificate Configuration - Custom Firewall Rules - Partitions - Volume Groups - Physical Volumes - Ceph Storage Functions - Devices - Addresses - Routes The following API REST method documented under [0] has an invalid name: - Profiles Documentation pointing to: /v1/iprofiles and a valid v1/ endpoint: http://10.10.10.2:6385/v1/iprofile ? Is this a valid Documentation change from "iprofiles" to "iprofile"? [JKUNG] Yes, this should be updated to 'iprofile' from 'iprofiles' [ v1/ ] [ API Query Output ] Based in our "[Starlingx-discuss] API requests: stx-ha" [3] we learned the following API REST methods from "System Inventory API v1" are assigned to stx-ha: - services - servicenodes - service_groups [JK] this is servicegroup, not service_groups And in our "[Starlingx-discuss] API requests: stx-metal" [4] the following are assigned to stx-metal. [JK] This is part of a pending user story 2002950. - lldp_neighbours - ihosts - icpu - lldp_agents And now these API REST methods are assigned to stx-config: - isystems - clusters - - ! /v1/ihosts/​{host_id}​/partitions || /v1/partitions/​{partition_id}​ - ! /v1/ihosts/​{host_id}​/ilvgs || /v1/ilvgs/​{volumegroup_id}​ - ! /v1/ihosts/​{host_id}​/ipvs || /v1/ipvs/​{physicalvolume_id}​ - ! /v1/ihosts/​{host_id}​/istors || /v1/istors/​{stor_id}​ - iprofile - idns - intp - iextoam - iinfra - drbdconfig - icommunity - itrapdest - ! /v1/devices/​{device_id}​ || /v1/ihosts/​{host_id}​/pci_devices - service_parameter - sdn_controller - remotelogging - networks - addrpools - addresses - ! /v1/ihosts/​{host_id}​/routes - storage_backend - ! storage_tiers - controller_fs - ceph_mon - ! certificate - firewallrules Leaving the following assigned to other StarlingX components, more to come once we review the remaining StarlingX projects: - links [JKUNG] These are still currently in stx-config, except, I believe license, upgrade does not apply to StarlingX. Needs confirmation from storage which ones will continue to be supported (i.e. storage_ceph_external). - storage_file - storage_lvm - interface_networks - id - ptp - media_types - upgrade - imemory - storage_ceph_external - health - license - storage_ceph - storage_external - iuser - helm_charts - inode ? Do we need another level of review? YES, needs update and re-review. ? Should we target an update to the documentation in terms of number of services we are documenting comparing the 3 perspectives? ? Is there anything we need to take care of? Thanks for your initial support. 
[0] https://docs.starlingx.io/api-ref/stx-config/index.html [1] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Analysis [2] https://docs.starlingx.io/api-ref/stx-config/api-ref-sysinv-v1-config.html?expanded=shows-details-for-configuration-api-v1-detail#api-versions [3]http://lists.starlingx.io/pipermail/starlingx-discuss/2018-November/001868.html [4] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-November/002032.html _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Fri Dec 7 18:43:21 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Fri, 7 Dec 2018 18:43:21 +0000 Subject: [Starlingx-discuss] Weekly StarlingX Test meeting - 9:00 PDT Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD45AC3@FMSMSX114.amr.corp.intel.com> Changing the frequency to weekly. Weekly meetings on Tuesdays at 9am PDT / 1600 UTC * Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes: https://etherpad.openstack.org/p/stx-test -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 4237 bytes Desc: not available URL: From juan.carlos.alonso at intel.com Fri Dec 7 21:34:09 2018 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Fri, 7 Dec 2018 21:34:09 +0000 Subject: [Starlingx-discuss] Sanity Test for CENGN ISO Message-ID: <8557B550001AFB46A43A0CCC314BF85153C7195F@FMSMSX108.amr.corp.intel.com> FYI.. Results of daily Sanity Test for CENGN ISOs will be published on this mailing list. Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Fri Dec 7 21:40:02 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 7 Dec 2018 21:40:02 +0000 Subject: [Starlingx-discuss] API requests: stx-config In-Reply-To: References: <9A85D2917C58154C960D95352B22818BB1EC52CA@fmsmsx117.amr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BB1EC5568@fmsmsx117.amr.corp.intel.com> I’m just trying to understand which sub-project is responsible for updating the API documents – the Docs sub-project or the sub-project that owns the APIs? I would hope the answer would be the later, as the sub-project that owns the service are the subject matter experts on the APIs. I’ve been assuming that the Docs team owns the infrastructure and the other teams own the documentation for their software. But the line is fuzzy and AFAIK we’ve never clarified it as a community. Now would be a good time. brucej From: Kung, John [mailto:John.Kung at windriver.com] Sent: Friday, December 7, 2018 10:29 AM To: Jones, Bruce E ; Arce Moreno, Abraham Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Re: [Starlingx-discuss] API requests: stx-config I was looking for a better method to provide feedback on Abraham’s objective to align REST API documentation with the REST APIs. 
We could start with email, however, I feel the details and feedback from the cores could be better represented in another tool. John From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Friday, December 7, 2018 11:49 AM To: Kung, John; Arce Moreno, Abraham Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Re: [Starlingx-discuss] API requests: stx-config John, isn’t that backwards? Shouldn’t the dev team take the feedback from Abraham and the Docs team and implement the changes needed to fix the document? brucej From: Kung, John [mailto:John.Kung at windriver.com] Sent: Friday, December 7, 2018 7:51 AM To: Arce Moreno, Abraham > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] API requests: stx-config Abraham, Please find enclosed my feedback preceded by [JK] Given the level of detail and input required, could you please open a Gerrit review in order to get feedback and comments? Thanks, John -----Original Message----- From: Arce Moreno, Abraham [mailto:abraham.arce.moreno at intel.com] Sent: Tuesday, December 04, 2018 11:48 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] API requests: stx-config stx-config team, Based in some time spent now within stx-config and with the objective to align our REST API Documentation with our REST APIs, we are kindly requesting your comments for questions "?" under each section [ Section ] [Sub Section] Please assume: - The require X-Auth-Token is in place to authenticate, only URLs might be shown. - StarlingX is configured as Standard Controller: 2 Controllers, 2 Computes. [ Project Information ] The mismatch between documentation [0] and information via API Query is addressed under [3] and [4]. ? Heads Up! The description includes the word "interfaces" however as you will find below, "interfaces is also listed in the documentation but not intuitively found as a REST method under the API query output. More information below. [ v1/ ] Here we are showing 3 different views of what we are seeing within stx-config project: - Our initial "Migration WADL to RST", see history here [1] - What we have documented in our "Current Official API Documentation" pages [0] - What the "API Query Output" is actually showing with curl -i http://10.10.10.2:6385/v1/... [ v1/ ] [ Migration WADL to RST ] Migration analysis from WADL to RST format gave us the REST METHODs below to include, we are adding in the second column what it seems to be the match for the valid API REST methods: System > isystems Clusters > clusters Interfaces > ? [JK] v1/iinterfaces Partitions > ! /v1/ihosts/​{host_id}​/partitions || /v1/partitions/​{partition_id}​ Volume Groups > ! /v1/ihosts/​{host_id}​/ilvgs || /v1/ilvgs/​{volumegroup_id}​ Physical Volumes > ! /v1/ihosts/​{host_id}​/ipvs || /v1/ipvs/​{physicalvolume_id}​ Ceph Storage Functions > ! /v1/ihosts/​{host_id}​/istors || /v1/istors/​{stor_id}​ Profiles > iprofile DNS > idns NTP > intp External OAM > iextoam Infrastructure Subnet > iinfra DRBD Configuration > drbdconfig SNMP Communities > icommunity SNMP Trap Destinations > itrapdest Devices > ! /v1/devices/​{device_id}​ || /v1/ihosts/​{host_id}​/pci_devices [JK] This should be v1/pci_devices/{pci_device_id} || /v1/ihosts/{ihost_id}/pci_devices Service Parameter > service_parameter SDN Controllers > sdn_controller Remote Logging > remotelogging Networks > networks Address Pools > addrpools Addresses > addresses Routes > ! /v1/ihosts/​{host_id}​/routes Storage Backends > storage_backend Storage Tiers > ! 
storage_tiers Controller Filesystem > controller_fs Ceph Monitors > ceph_mon System Certificate Configuration > ! certificate Custom Firewall Rules > firewallrules ? Are all the names and API REST methods correctly matched? [JK] Added /v1/iinterfaces above for Interfaces. No, devices should be pci_devices [JK] missing ethernet_ports ? Are all the valid API REST method names correct? [JK] See my comment above ? "Interfaces" is listed under v1/ API Version output [2] as an expected service but a REST METHOD match was not found, are we talking about "Interface" one of the following ones: [JK] This is referring to v1/iinterfaces 1) Is it the "interface_networks" REST method? Interface_networks is a separate REST method and allows association of a networks to an iinterfaces 2) Or found under "Profiles" as described under its description: "...This includes interface profiles..." 3) Or found under "SDN Controllers" as described under its description: "...SDN manager interface..." 4) Or as simple as "Networks" interfaces? [ v1/ ] [ Current Official API Documentation ] The following API REST methods documented under [0] give valid API output: - System - Clusters - DNS - NTP - External OAM - Infrastructure Subnet - DRBD Configuration - SNMP Communities - SNMP Trap Destinations - Service Parameter - SDN Controllers - Remote Logging - Networks - Address Pools - Storage Backends - Storage Tiers - Controller Filesystem - Ceph Monitors - System Certificate Configuration - Custom Firewall Rules - Partitions - Volume Groups - Physical Volumes - Ceph Storage Functions - Devices - Addresses - Routes The following API REST method documented under [0] has an invalid name: - Profiles Documentation pointing to: /v1/iprofiles and a valid v1/ endpoint: http://10.10.10.2:6385/v1/iprofile ? Is this a valid Documentation change from "iprofiles" to "iprofile"? [JKUNG] Yes, this should be updated to 'iprofile' from 'iprofiles' [ v1/ ] [ API Query Output ] Based in our "[Starlingx-discuss] API requests: stx-ha" [3] we learned the following API REST methods from "System Inventory API v1" are assigned to stx-ha: - services - servicenodes - service_groups [JK] this is servicegroup, not service_groups And in our "[Starlingx-discuss] API requests: stx-metal" [4] the following are assigned to stx-metal. [JK] This is part of a pending user story 2002950. - lldp_neighbours - ihosts - icpu - lldp_agents And now these API REST methods are assigned to stx-config: - isystems - clusters - - ! /v1/ihosts/​{host_id}​/partitions || /v1/partitions/​{partition_id}​ - ! /v1/ihosts/​{host_id}​/ilvgs || /v1/ilvgs/​{volumegroup_id}​ - ! /v1/ihosts/​{host_id}​/ipvs || /v1/ipvs/​{physicalvolume_id}​ - ! /v1/ihosts/​{host_id}​/istors || /v1/istors/​{stor_id}​ - iprofile - idns - intp - iextoam - iinfra - drbdconfig - icommunity - itrapdest - ! /v1/devices/​{device_id}​ || /v1/ihosts/​{host_id}​/pci_devices - service_parameter - sdn_controller - remotelogging - networks - addrpools - addresses - ! /v1/ihosts/​{host_id}​/routes - storage_backend - ! storage_tiers - controller_fs - ceph_mon - ! certificate - firewallrules Leaving the following assigned to other StarlingX components, more to come once we review the remaining StarlingX projects: - links [JKUNG] These are still currently in stx-config, except, I believe license, upgrade does not apply to StarlingX. Needs confirmation from storage which ones will continue to be supported (i.e. storage_ceph_external). 
- storage_file - storage_lvm - interface_networks - id - ptp - media_types - upgrade - imemory - storage_ceph_external - health - license - storage_ceph - storage_external - iuser - helm_charts - inode
? Do we need another level of review? YES, needs update and re-review.
? Should we target a documentation update so that the set of services documented is consistent across the 3 perspectives?
? Is there anything else we need to take care of?
Thanks for your initial support. [0] https://docs.starlingx.io/api-ref/stx-config/index.html [1] https://wiki.openstack.org/wiki/StarlingX/Developer_Guide/API_Documentation#Analysis [2] https://docs.starlingx.io/api-ref/stx-config/api-ref-sysinv-v1-config.html?expanded=shows-details-for-configuration-api-v1-detail#api-versions [3] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-November/001868.html [4] http://lists.starlingx.io/pipermail/starlingx-discuss/2018-November/002032.html _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Fri Dec 7 21:43:26 2018 From: Frank.Miller at windriver.com (Miller, Frank) Date: Fri, 7 Dec 2018 21:43:26 +0000 Subject: [Starlingx-discuss] Meeting Agenda: StarlingX Infrastructure Containerization Message-ID: Just a reminder that our next meeting will be Monday Dec 10th. The agenda is posted here: https://etherpad.openstack.org/p/stx-containerization If anyone would like to add an agenda topic, please update the etherpad. Frank -----Original Appointment----- From: Miller, Frank Sent: Thursday, November 29, 2018 4:55 PM To: starlingx-discuss at lists.starlingx.io Subject: StarlingX Infrastructure Containerization When: Occurs every Monday effective 12/3/2018 until 3/25/2019 from 11:00 AM to 11:30 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236 For those contributing to or interested in the Containerization subproject a weekly meeting has been set up: Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes: Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL:
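The sysinv endpoints compared in the stx-config API thread above can be listed directly against a running system. A minimal sketch, assuming the standard-controller OAM floating IP 10.10.10.2 used in that thread and an openstack CLI that is already configured to authenticate:

  # Obtain a token, then list the top-level v1 resources exposed by sysinv (port 6385)
  TOKEN=$(openstack token issue -f value -c id)
  curl -s -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:6385/v1/ | python -m json.tool
  # Query one of the methods under discussion, e.g. iinterfaces
  curl -s -H "X-Auth-Token: ${TOKEN}" http://10.10.10.2:6385/v1/iinterfaces | python -m json.tool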
From juan.carlos.alonso at intel.com Sat Dec 8 22:17:25 2018 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Sat, 8 Dec 2018 22:17:25 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20181207 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C71B63@FMSMSX108.amr.corp.intel.com> This is the status of the Sanity Test for the last CENGN ISO: bootimage.iso from 2018-Dec-07 (link) Sanity Test is executed in a Virtual Environment Status: GREEN
Simplex: Setup 04 TCs [PASS], Provisioning 01 TCs [PASS], Sanity 18 TCs [PASS], TOTAL: [ 23 TCs PASS ]
Duplex: Setup 04 TCs [PASS], Provisioning 01 TCs [PASS], Sanity 19 TCs [PASS], TOTAL: [ 24 TCs PASS ]
Multinode Controller Storage: Setup 04 TCs [PASS], Provisioning 01 TCs [PASS], Sanity 19 TCs [PASS], TOTAL: [ 24 TCs PASS ]
Multinode Dedicated Storage: Setup 04 TCs [PASS], Provisioning 01 TCs [PASS], Sanity 19 TCs [PASS], TOTAL: [ 24 TCs PASS ]
Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Mon Dec 10 03:44:21 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Mon, 10 Dec 2018 03:44:21 +0000 Subject: [Starlingx-discuss] Analysis of patch 4ae5a58 for StartlingX upstreaming In-Reply-To: References: Message-ID: Hi Matt, Thank you for your information! Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Saturday, December 8, 2018 1:21 AM To: Xu, Chenjie Cc: starlingx-discuss at lists.starlingx.io Subject: Re: Analysis of patch 4ae5a58 for StartlingX upstreaming Hello Chenjie, The plan is to upstream this to the respective projects (networking-bgpvpn and neutron-dynamic-routing). However, in the short term this is not being prioritized, nor has any attempt been made to approach the individual project teams about getting this accepted. -Matt From: "Xu, Chenjie" > Date: Friday, December 7, 2018 at 9:30 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: Analysis of patch 4ae5a58 for StartlingX upstreaming Hi Matt, Will the change in BGPVPN be upstreamed? The code uses the RFE “Enable other subprojects to extend l2pop fdbs”. Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Dec 10 06:40:39 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 10 Dec 2018 06:40:39 +0000 Subject: [Starlingx-discuss] CentOS 7.6 upgrade Message-ID: <9700A18779F35F49AF027300A49E7C765FE5527C@SHSMSX101.ccr.corp.intel.com> Hi all, I attached the sRPM list we plan to upgrade for your reference. These packages are downloaded with the current repo .lst files plus the newly created CentOS 7.6 repo [0]. There are 86 sRPMs in total in our rpms list now, and 51 of them will be upgraded. The kernel upgrade will be done on master with story [1]. The other 49 sRPMs will be done on the feature branch, which is not created yet. I will create another story to track these sRPMs. Please help review the attached doc and share your thoughts. Thanks. [0]: https://review.openstack.org/623975 [1]: https://storyboard.openstack.org/#!/story/2004521 Best Regards Shuicheng -------------- next part -------------- A non-text attachment was scrubbed... Name: srpm.xlsx Type: application/vnd.openxmlformats-officedocument.spreadsheetml.sheet Size: 17929 bytes Desc: srpm.xlsx URL:
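Shuicheng's attached list above comes out of the mirror download tooling; the 7.6 packages appear once the new repo definition [0] is part of the download inputs. A minimal sketch of re-running the download with that repo in place, assuming the centos-mirror-tools layout in stx-tools (the exact file and script names are illustrative of that tooling):

  cd stx-tools/centos-mirror-tools
  grep -n "7.6" *.repo *.lst 2>/dev/null   # confirm the CentOS 7.6 repo/list entries from [0]
  bash download_mirror.sh                  # re-download the srpms/rpms named in the .lst files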
From chenjie.xu at intel.com Mon Dec 10 07:11:29 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Mon, 10 Dec 2018 07:11:29 +0000 Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming In-Reply-To: <2C44D9ED-9F61-4223-A1DD-70FEE88DFA30@windriver.com> References: <2C44D9ED-9F61-4223-A1DD-70FEE88DFA30@windriver.com> Message-ID: Hi Matt, I am sorry for misunderstanding your meaning. The StarlingX Distributed Cloud use-case has been included in the RFE. To make the requirement clear, I have updated the RFE. Could you please help review and comment? Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Saturday, December 8, 2018 1:22 AM To: Xu, Chenjie Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hello Chenjie, I wasn’t planning on providing additional use-case information. I just wanted to make sure the StarlingX Distributed Cloud use-case was included in the RFE. Regards, Matt From: "Xu, Chenjie" > Date: Tuesday, December 4, 2018 at 10:16 PM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: RE: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Matt, Thank you for your reply! Looking forward to the additional use-case information. Best Regards From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Tuesday, December 4, 2018 9:12 PM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Chenjie, I would add additional use-case information to help with the justification for adding this capability. The detailed quota information is used within the StarlingX distributed cloud solution. The quota information for a given project/user is aggregated across all sub-clouds; therefore, an efficient mechanism to retrieve the quota details of all resources is required. Regards, Matt From: "Xu, Chenjie" > Date: Monday, December 3, 2018 at 3:53 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Matt, The RFE for patch 71c07d7 has been drafted and is attached. Could you please help review and comment? Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: RFE_ADD_SUPPORT_FOR_QUERYING_QUOTAS_WITH_USAGE.docx Type: application/vnd.openxmlformats-officedocument.wordprocessingml.document Size: 16615 bytes Desc: RFE_ADD_SUPPORT_FOR_QUERYING_QUOTAS_WITH_USAGE.docx URL: From cindy.xie at intel.com Mon Dec 10 09:12:10 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Mon, 10 Dec 2018 09:12:10 +0000 Subject: [Starlingx-discuss] CentOS 7.6 upgrade In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE5527C@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FE5527C@SHSMSX101.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DF6743@SHSMSX104.ccr.corp.intel.com> Shuicheng, >>> I will create another story to track these srpm. There is already one storyboard for the sRPM upgrade to 7.6: https://storyboard.openstack.org/#!/story/2004522. What you need to do is add tasks for each sRPM that needs an upgrade. Thx.
- cindy -----Original Message----- From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Monday, December 10, 2018 2:41 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS 7.6 upgrade Hi all, I attached the srpm list we plan to upgrade for you reference. These packages are downloaded with current repo lst + new created CentOS 7.6 repo [0]. There are total 86 srpm in our rpms list now, and 51 of them will be upgraded. Kernel upgrade will be done at master with story [1]. Other 49 srpm will be done at feature branch, which is not created yet. I will create another story to track these srpm. Please help review attached doc and share me your thought. Thanks. [0]: https://review.openstack.org/623975 [1]: https://storyboard.openstack.org/#!/story/2004521 Best Regards Shuicheng From Jason.McKenna at windriver.com Mon Dec 10 13:27:59 2018 From: Jason.McKenna at windriver.com (McKenna, Jason) Date: Mon, 10 Dec 2018 13:27:59 +0000 Subject: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch Message-ID: Hi build team, particularly reviewers, I'm starting to see some code reviews come through which have to do with the rebase to CentOS 7.6. When these come through, please ensure they are against the feature branch and not the "master" branch. I do not see the feature branch yet created, but I'm assuming it will be called "f/centos76". There is an AR to creates the branch pending (meeting minutes "Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5"). -Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From Matt.Peters at windriver.com Mon Dec 10 14:37:09 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Mon, 10 Dec 2018 14:37:09 +0000 Subject: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming In-Reply-To: References: <2D8849FB-AB0B-4385-9DAC-9BF4BEF41960@windriver.com> <6A51E55B-3A78-4182-9605-FE97AF2A0A62@windriver.com> Message-ID: <36D4FE27-D370-4912-9D6C-E952ADC54D07@windriver.com> Attached is a copy with my revisions. There were no major changes, just some minor corrections and additions. Overall looks good. Thanks, Matt From: "Xu, Chenjie" Date: Thursday, November 29, 2018 at 10:29 AM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io" Subject: RE: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming Hi Matt, Thanks for your reply! Yes, we can fix this bug as part of the upstreaming. The code can be viewed by following link: https://review.openstack.org/#/c/620929/ Please help comment and fix the bug. The RFE for patch 88b7bc7 is attached. Could you please help review and comment? Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Thursday, November 29, 2018 8:59 PM To: Xu, Chenjie Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming Hi Chenjie, Since this isn’t causing a runtime issue at the moment, I think this can be fixed as part of the upstreaming where it can be reviewed in Gerrit. If you are looking for confirmation of the changes prior to that I would create a pull request so it can be reviewed/merged prior. -Matt From: "Xu, Chenjie" > Date: Wednesday, November 28, 2018 at 9:22 PM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: RE: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming Hi Matt, Thanks for your comments! An RFE has been written to upstream patch 88b7bc7. 
Before I submit the code, I think we should fix this bug. I think we can change the parameter "host" to "agent" in the following lines: https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L44 https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L57 https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L444 https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L434 Please let me know what you think of the proposal. Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Thursday, November 29, 2018 1:43 AM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming Hello Chenjie, I agree, this is a bug. I think the only reason it hasn’t been an issue is because the only user of the get_fdb_entries RPC is the BGP agent and it is not passing a host in the request (it is None), therefore it excludes the attempt to try and access the parameter as an agent based on the condition in _create_agent_fdb. Here is the BGP caller: https://github.com/starlingx-staging/stx-neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py#L1498 From: "Xu, Chenjie" > Date: Wednesday, November 28, 2018 at 10:11 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming Hi Matt, I think that there exists a bug in patch 88b7bc7. Function _get_fdb_entries will call function _create_agent_fdb by following line: https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L441 The variable "host" will be used to call _create_agent_fdb. However function _create_agent_fdb expects variable "agent": https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L264 I think this should be a bug. Could you please help review and comment? Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From cesar.lara at intel.com Mon Dec 10 16:37:00 2018 From: cesar.lara at intel.com (Lara, Cesar) Date: Mon, 10 Dec 2018 16:37:00 +0000 Subject: [Starlingx-discuss] [build] [meetings] Build team meeting minutes 12/6/2018 Message-ID: <0B566C62EC792145B40E29EFEBF1AB471058BE65@fmsmsx104.amr.corp.intel.com> Build team meeting 12/6/2018
Attendees: Jason, Scott, Ken, Saul, Victor, Abraham, Marcela, Memo, Chuy, Erich, Mario, Cesar, Felipe
Agenda
- Public static analysis - what is the status?
- Bug triage - is there anything urgent? Are they all staffed?
- Change logs and release notes
- Follow up on ISO for releases and milestones
- StarlingX Docker repository
Notes
Public static analysis - This is on hold. We figured out how to create new jobs and have the tool run them, but the effort is not currently being prioritized; we need this to happen. All critical and high issues are now cleared.
Bug triage - We are currently not triaging build-related issues; we will take 10 minutes of each meeting just to make sure that bugs have a person assigned to them and are being taken care of.
Change logs and release notes - We have release notes as part of the OpenStack release on the documentation space; what we need to cover is the change log for each ISO file being generated. Today we don't have something to track changes on the CENGN space. Intel will share the script used by the internal build to see if it can be applied to CENGN.
Follow up on ISO for releases and milestones - We need to flesh out ideas around the retention policies for ISO files created for special milestones and releases; today we don't have a clear understanding of what that looks like. AR - Ken will propose a timeline on this and will post the proposal to the Build team wiki to get feedback from the community. Follow up next meeting: support window for any given release.
StarlingX Docker repository - Scott sent a communication to the ML and has not received a lot of feedback. Do we need to explore more options? Not for the moment; Scott to follow up with Dean.
Opens
- How do we tag software in Git? AR - Dean, Scott and Saul to have a meeting about the tagging of Docker images.
- Dashboard? Still on the to-do list; we need this for the history of builds, to correlate those with sanity testing, and for some performance metrics.
- Signature of RPM files? We need to figure this out.
Regards Cesar Lara Software Engineering Manager OpenSource Technology Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From Numan.Waheed at windriver.com Mon Dec 10 18:02:57 2018 From: Numan.Waheed at windriver.com (Waheed, Numan) Date: Mon, 10 Dec 2018 18:02:57 +0000 Subject: [Starlingx-discuss] Agenda Item for StarlingX Test Meeting Message-ID: <3CAA827B7A79BA46B15B280EC82088FE482486F1@ALA-MBD.corp.ad.wrs.com> Hi Ada, I would like to request an agenda item in tomorrow's StarlingX Test Meeting to discuss the test case template. Thanks, Numan. -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose.perez.carranza at intel.com Mon Dec 10 18:30:47 2018 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Mon, 10 Dec 2018 18:30:47 +0000 Subject: Re: [Starlingx-discuss] Agenda Item for StarlingX Test Meeting In-Reply-To: <3CAA827B7A79BA46B15B280EC82088FE482486F1@ALA-MBD.corp.ad.wrs.com> References: <3CAA827B7A79BA46B15B280EC82088FE482486F1@ALA-MBD.corp.ad.wrs.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A91CFEE@fmsmsx101.amr.corp.intel.com> Hi Ada, I also want to share a proposal for a test suite; could you add this item to the agenda as well? Regards, José From: Waheed, Numan [mailto:Numan.Waheed at windriver.com] Sent: Monday, December 10, 2018 12:03 PM To: Cabrales, Ada ; starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Agenda Item for StarlingX Test Meeting Hi Ada, I would like to request an agenda item in tomorrow's StarlingX Test Meeting to discuss the test case template. Thanks, Numan. -------------- next part -------------- An HTML attachment was scrubbed...
URL: From dtroyer at gmail.com Mon Dec 10 19:01:52 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Mon, 10 Dec 2018 13:01:52 -0600 Subject: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch In-Reply-To: References: Message-ID: On Mon, Dec 10, 2018 at 7:28 AM McKenna, Jason wrote: > I’m starting to see some code reviews come through which have to do with the rebase to CentOS 7.6. When these come through, please ensure they are against the feature branch and not the “master” branch. I do not see the feature branch yet created, but I’m assuming it will be called “f/centos76”. There is an AR to creates the branch pending (meeting minutes “Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5”). The f/centos76 branch has been created in stx-integ stx-root stx-tools stx-upstream. dt -- Dean Troyer dtroyer at gmail.com From Ken.Young at windriver.com Mon Dec 10 19:41:56 2018 From: Ken.Young at windriver.com (Young, Ken) Date: Mon, 10 Dec 2018 19:41:56 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20181207 Message-ID: <0EB01F37-007E-4C6F-B483-7FFB6ECB15DB@windriver.com> JC, This is a great start. What frequency are you planning to test? /KenY From: "Alonso, Juan Carlos" Date: Saturday, December 8, 2018 at 5:18 PM To: starlingx Subject: [Starlingx-discuss] Sanity Test - ISO 20181207 This is the status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-07 (link) Sanity Test is executed in a Virtual Environment Status: GREEN Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 18 TCs [PASS] TOTAL: [ 23 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Mon Dec 10 20:08:15 2018 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Mon, 10 Dec 2018 20:08:15 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20181207 In-Reply-To: <0EB01F37-007E-4C6F-B483-7FFB6ECB15DB@windriver.com> References: <0EB01F37-007E-4C6F-B483-7FFB6ECB15DB@windriver.com> Message-ID: <8557B550001AFB46A43A0CCC314BF85153C72038@FMSMSX108.amr.corp.intel.com> Hi Ken, The Sanity test is executed and reported daily, if there is a new CENGN ISO generated. Regards. Juan Carlos Alonso From: Young, Ken [mailto:Ken.Young at windriver.com] Sent: Monday, December 10, 2018 1:42 PM To: Alonso, Juan Carlos Cc: starlingx Subject: Re: [Starlingx-discuss] Sanity Test - ISO 20181207 JC, This is a great start. What frequency are you planning to test? 
/KenY From: "Alonso, Juan Carlos" > Date: Saturday, December 8, 2018 at 5:18 PM To: starlingx > Subject: [Starlingx-discuss] Sanity Test - ISO 20181207 This is the status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-07 (link) Sanity Test is executed in a Virtual Environment Status: GREEN Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 18 TCs [PASS] TOTAL: [ 23 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Mon Dec 10 22:27:04 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 10 Dec 2018 22:27:04 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 12/11/2018 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4710D@FMSMSX114.amr.corp.intel.com> Testing team agenda for 12/11/2018 * Test case template - Numan * proposal for an unified test suite - Jose * Sanity testing: coverage improvements - JC * Opens Regards Ada From shuicheng.lin at intel.com Tue Dec 11 00:30:24 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Tue, 11 Dec 2018 00:30:24 +0000 Subject: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch In-Reply-To: References: Message-ID: <9700A18779F35F49AF027300A49E7C765FE5548E@SHSMSX101.ccr.corp.intel.com> Thanks Dean for help create the feature branch. Hi Jason, All CentOS 7.6 related code will be submitted to feature branch. Best Regards Shuicheng -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Tuesday, December 11, 2018 3:02 AM To: McKenna, Jason Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch On Mon, Dec 10, 2018 at 7:28 AM McKenna, Jason wrote: > I’m starting to see some code reviews come through which have to do with the rebase to CentOS 7.6. When these come through, please ensure they are against the feature branch and not the “master” branch. I do not see the feature branch yet created, but I’m assuming it will be called “f/centos76”. There is an AR to creates the branch pending (meeting minutes “Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5”). The f/centos76 branch has been created in stx-integ stx-root stx-tools stx-upstream. dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From shuicheng.lin at intel.com Tue Dec 11 00:54:37 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Tue, 11 Dec 2018 00:54:37 +0000 Subject: [Starlingx-discuss] CentOS 7.6 upgrade In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DF6743@SHSMSX104.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FE5527C@SHSMSX101.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F35DF6743@SHSMSX104.ccr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FE555C0@SHSMSX101.ccr.corp.intel.com> Thanks Cindy. I have added the detail task list, and some more description in the story. 
https://storyboard.openstack.org/#!/story/2004522 Best Regards Shuicheng -----Original Message----- From: Xie, Cindy Sent: Monday, December 10, 2018 5:12 PM To: Lin, Shuicheng ; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS 7.6 upgrade Shuicheng, >>> I will create another story to track these srpm. There is already one storyboard for the sRPM upgrade to 7.6: https://storyboard.openstack.org/#!/story/2004522. What you need to do is add tasks for each sRPM that needs an upgrade. Thx. - cindy -----Original Message----- From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Monday, December 10, 2018 2:41 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS 7.6 upgrade Hi all, I attached the srpm list we plan to upgrade for you reference. These packages are downloaded with current repo lst + new created CentOS 7.6 repo [0]. There are total 86 srpm in our rpms list now, and 51 of them will be upgraded. Kernel upgrade will be done at master with story [1]. Other 49 srpm will be done at feature branch, which is not created yet. I will create another story to track these srpm. Please help review attached doc and share me your thought. Thanks. [0]: https://review.openstack.org/623975 [1]: https://storyboard.openstack.org/#!/story/2004521 Best Regards Shuicheng From chenzz at certusnet.com.cn Tue Dec 11 01:31:33 2018 From: chenzz at certusnet.com.cn (chenzz) Date: Tue, 11 Dec 2018 09:31:33 +0800 Subject: [Starlingx-discuss] Problems building StarlingX mirror. Asking for help. Message-ID: <201812110931334306841@certusnet.com.cn> Hi, I’m an engineer from CertusNet. At present, we are developing edge computing terminal products for which we need to use StarlingX. I have some problems deploying StarlingX: while making the StarlingX ISO, I downloaded the source code library and ran ‘repo init -u https://git.starlingx.io/stx-manifest -m default.xml’, and the system reported: Then I configured the HTTP proxy, but the error was still reported. Please help me check this problem. Thank you. By the way, do you have StarlingX discussion groups, such as a WeChat group? Bill chen | CertusNet Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing, 210042, P.R.China Tel: +86 25 6642 3768 -XXXX Mobile: +86 15295592415 Web: www.certusnet.com.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: InsertPic_(12-11-09-30-05).png Type: image/png Size: 325 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: InsertPic_BDE4(12-11-09-30-05).png Type: image/png Size: 3902 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: InsertPic_25EC(12-11-09-30-05).png Type: image/png Size: 3354 bytes Desc: not available URL: From erich.cm.lists at yandex.com Tue Dec 11 01:39:28 2018 From: erich.cm.lists at yandex.com (Erich Cordoba) Date: Mon, 10 Dec 2018 17:39:28 -0800 Subject: Re: [Starlingx-discuss] Problems building StarlingX mirror. Asking for help. In-Reply-To: <201812110931334306841@certusnet.com.cn> References: <201812110931334306841@certusnet.com.cn> Message-ID: <1722711544492368@sas1-02732547ccc0.qloud-c.yandex.net> An HTML attachment was scrubbed... URL:
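The failure in the screenshots above is typically the repo launcher trying to fetch its own source from gerrit.googlesource.com, so the proxy has to cover HTTPS/git traffic as well as plain HTTP. A minimal sketch of one way to set that up; proxy.example.com:8080 is a placeholder for a proxy that can reach Google:

  export http_proxy=http://proxy.example.com:8080
  export https_proxy=http://proxy.example.com:8080
  git config --global http.proxy http://proxy.example.com:8080
  repo init -u https://git.starlingx.io/stx-manifest -m default.xml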
From haochuan.z.chen at intel.com Tue Dec 11 01:43:42 2018 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Tue, 11 Dec 2018 01:43:42 +0000 Subject: [Starlingx-discuss] Starlingx-discuss Digest, Vol 7, Issue 45 In-Reply-To: References: Message-ID: <56829C2A36C2E542B0CCB9854828E4D8508C42FF@CDSMSX102.ccr.corp.intel.com> Hi chenzz, You can get a mainland-China customized repo tool, for example from the Baidu or Tsinghua (清华) mirrors; there you will get the expected tool. BR! Martin, Chen SSG OTC, Software Engineer 021-61164330
From shuicheng.lin at intel.com Tue Dec 11 02:15:12 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Tue, 11 Dec 2018 02:15:12 +0000 Subject: Re: [Starlingx-discuss] Problems building StarlingX mirror. Asking for help. In-Reply-To: <201812110931334306841@certusnet.com.cn> References: <201812110931334306841@certusnet.com.cn> Message-ID: <9700A18779F35F49AF027300A49E7C765FE55630@SHSMSX101.ccr.corp.intel.com> Hi chenzz, You need a proxy that can access Google in order to download the clone.bundle file. Then you can try the command below: repo init -u <uri of manifest on mirror> --repo-url ~/clone.bundle Best Regards Shuicheng From: chenzz [mailto:chenzz at certusnet.com.cn] Sent: Tuesday, December 11, 2018 9:32 AM To: starlingx-discuss Subject: Re: [Starlingx-discuss] Problems building StarlingX mirror. Asking for help. Hi, I’m an engineer from CertusNet. At present, we are developing edge computing terminal products for which we need to use StarlingX. I have some problems deploying StarlingX: while making the StarlingX ISO, I downloaded the source code library and ran ‘[cid:image001.png at 01D4913A.6677E090] https://git.starlingx.io/stx-manifest -m default.xml’, and the system reported: [cid:image002.png at 01D4913A.6677E090] Then I configured the HTTP proxy, but the error was still reported. [cid:image003.png at 01D4913A.6677E090] Please help me check this problem. Thank you. By the way, do you have StarlingX discussion groups, such as a WeChat group? ________________________________ Bill chen | CertusNet Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing, 210042, P.R.China Tel: +86 25 6642 3768 -XXXX Mobile: +86 15295592415 Web: www.certusnet.com.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 325 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 3902 bytes Desc: image002.png URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image003.png Type: image/png Size: 3354 bytes Desc: image003.png URL: From juan.carlos.alonso at intel.com Tue Dec 11 02:45:44 2018 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Tue, 11 Dec 2018 02:45:44 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181210 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C7214A@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-10 (link) Sanity Test is executed in a Virtual Environment Status: GREEN Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 18 TCs [PASS] TOTAL: [ 23 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Tue Dec 11 03:52:07 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Tue, 11 Dec 2018 03:52:07 +0000 Subject: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming In-Reply-To: <36D4FE27-D370-4912-9D6C-E952ADC54D07@windriver.com> References: <2D8849FB-AB0B-4385-9DAC-9BF4BEF41960@windriver.com> <6A51E55B-3A78-4182-9605-FE97AF2A0A62@windriver.com> <36D4FE27-D370-4912-9D6C-E952ADC54D07@windriver.com> Message-ID: Hi Matt, Thank you so much for your revisions! The RFE has been submitted to the Launchpad. The link is below: https://bugs.launchpad.net/neutron/+bug/1806316 I will try to discuss this RFE on OpenStack Neutron Driver Meeting. Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Monday, December 10, 2018 10:37 PM To: Xu, Chenjie Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming Attached is a copy with my revisions. There were no major changes, just some minor corrections and additions. Overall looks good. Thanks, Matt From: "Xu, Chenjie" > Date: Thursday, November 29, 2018 at 10:29 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: RE: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming Hi Matt, Thanks for your reply! Yes, we can fix this bug as part of the upstreaming. The code can be viewed by following link: https://review.openstack.org/#/c/620929/ Please help comment and fix the bug. The RFE for patch 88b7bc7 is attached. Could you please help review and comment? Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Thursday, November 29, 2018 8:59 PM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming Hi Chenjie, Since this isn’t causing a runtime issue at the moment, I think this can be fixed as part of the upstreaming where it can be reviewed in Gerrit. If you are looking for confirmation of the changes prior to that I would create a pull request so it can be reviewed/merged prior. -Matt From: "Xu, Chenjie" > Date: Wednesday, November 28, 2018 at 9:22 PM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: RE: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming Hi Matt, Thanks for your comments! 
An RFE has been written to upstream patch 88b7bc7. Before I submit the code, I think we should fix this bug. I think we can change the parameter “host” to “agent” in the following lines: https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L44 https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L57 https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L444 https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L434 Please let me know what you think of the proposal. Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Thursday, November 29, 2018 1:43 AM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming Hello Chenjie, I agree, this is a bug. I think the only reason it hasn’t been an issue is because the only user of the get_fdb_entries RPC is the BGP agent and it is not passing a host in the request (it is None), therefore it excludes the attempt to try and access the parameter as an agent based on the condition in _create_agent_fdb. Here is the BGP caller: https://github.com/starlingx-staging/stx-neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py#L1498 From: "Xu, Chenjie" > Date: Wednesday, November 28, 2018 at 10:11 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Analysis of patch 88b7bc7 for StartlingX upstreaming Hi Matt, I think that there exists a bug in patch 88b7bc7. Function _get_fdb_entries will call function _create­_agent_fdb by following line: https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L441 The variable “host” will be used to call _create_agent_fdb. However function _create_agent_fdb expects variable “agent”: https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/mech_driver.py#L264 I think this should be a bug. Could you please help review and comment? Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Dec 11 05:27:49 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 11 Dec 2018 05:27:49 +0000 Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack Distro meeting In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DF8006@SHSMSX104.ccr.corp.intel.com> Agenda for 12/12 meeting: 1. minor kernel version upgrade to 3.10.0.957 (Shuicheng/Martin) 2. preperation for CentOS 7.6 upgrade status (Shuicheng) 3. Ceph upgrade status (Vivian/Dehao/Changcheng) 4. Python2to3 status, flocks and OS packages (Austin) 5. Qemu 3.0 branch switch (Ghada/Jim) 6. 
Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Wold, Saul; Jones, Bruce E; Troyer, Dean; Lin, Shuicheng; Zhu, Vivian; Shang, Dehao; Liu, ZhipengS; Hu, Yong; Sun, Austin; 'Rowsell, Brent'; 'Khalil, Ghada'; Waheed, Numan; Somerville, Jim; starlingx-discuss at lists.starlingx.io Cc: Perez Carranza, Jose; Armstrong, Robert H; Perez Rodriguez, Humberto I; Martinez Landa, Hayde; Martinez Monroy, Elio; Hu, Wei W; Gomez, Juan P; Lara, Cesar; Arce Moreno, Abraham; Cobbley, David A; Hernandez Gonzalez, Fernando; 'Hellmann, Gil'; 'Waines, Greg'; 'Chen, Jacky'; 'Seiler, Glenn'; 'Eslimi, Dariush'; 'Young, Ken' Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, December 12, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236
. Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other
From cindy.xie at intel.com Tue Dec 11 06:49:28 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 11 Dec 2018 06:49:28 +0000 Subject: [Starlingx-discuss] Python2to3 status Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DF81C4@SHSMSX104.ccr.corp.intel.com> Ian, Saul, Per TSC request, here are the Python2to3 work items we are tracking, with status:
1. Repos for flocks with Python code (11 projects):
name                  Storyboard                                          Status   Notes
Stx-config            https://storyboard.openstack.org/#!/story/2003433   WIP
Stx-distcloud         https://storyboard.openstack.org/#!/story/2004585   Todo     New
Stx-distcloud-client  https://storyboard.openstack.org/#!/story/2004586   Todo     New
Stx-fault             https://storyboard.openstack.org/#!/story/2003310   Merged
Stx-gui               https://storyboard.openstack.org/#!/story/2003432   Merged
Stx-ha                https://storyboard.openstack.org/#!/story/2003430   Merged
Stx-integ             https://storyboard.openstack.org/#!/story/2002909   WIP
Stx-metal             https://storyboard.openstack.org/#!/story/2003426   Merged
Stx-nfv               https://storyboard.openstack.org/#!/story/2003427   WIP
Stx-update            https://storyboard.openstack.org/#!/story/2003429   Merged
Stx-upstream          https://storyboard.openstack.org/#!/story/2003428   Merged
2. For OpenStack repos: as we are moving to Stein, the previously planned activities to backport Python2to3 patches from master are no longer required. Checked with the storage team; I assume other teams are in the same situation.
3. For other OS packages with Python code, a detailed readiness analysis is posted here: https://drive.google.com/open?id=1RT3oJJ5umHXZ_A--sJRpA23qB3LFY-lY. In short, we found ~73 packages (out of 436) lacking Python3 transition activity from the upstream community. This needs further work to finalize the strategy. We will discuss in Wednesday's non-OpenStack distro weekly call if people still want details. @Austin, feel free to add anything. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Tue Dec 11 15:38:46 2018 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 11 Dec 2018 15:38:46 +0000 Subject: [Starlingx-discuss] Testing for CEPH upversion to v13.2.0?
Message-ID: Dehao/Cindy: We see the multiple gerrit reviews out for the ceph upversion work and that these are getting closer to being merged. Ovidiu and I plan to attend the Distro non-Openstack call tomorrow but in case you are trying to merge these today can you answer these questions: 1) What testing has been done with the v13.2.0 to validate that this version of CEPH works with StarlingX? 2) The containerization subproject relies heavily on CEPH, not only for the dedicated storage config but also for the other configs. Can we get access to one of your loads before this is merged to determine if CEPH on the other configs works with v13.2.0 or if there are issues that need to be addressed? Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From Marvin.Huang at windriver.com Tue Dec 11 15:52:04 2018 From: Marvin.Huang at windriver.com (Huang, Marvin) Date: Tue, 11 Dec 2018 15:52:04 +0000 Subject: [Starlingx-discuss] latest issue regarding devstack/stx support Message-ID: <74D9C1EDDC44EF468303629CF9A2832C9CE1630B@ALA-MBD.corp.ad.wrs.com> Hi all, I tried to bring up Devstack/STX this morning, but got the following error, which broke the execution of ./stack.sh. g++ -o fmManager fm_main.o -lfmcommon -lrt -lpthread -luuid ++/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:288 sudo make BIN_DIR=/bin LIB_DIR=/lib INC_DIR=/include MAJOR=1 MINOR=0 install_non_bb make: *** No rule to make target 'install_non_bb'. Stop. +/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:1 exit_trap +./stack.sh:exit_trap:522 local r=2 ++./stack.sh:exit_trap:523 jobs -p +./stack.sh:exit_trap:523 jobs= +./stack.sh:exit_trap:526 [[ -n '' ]] +./stack.sh:exit_trap:532 '[' -f '' ']' +./stack.sh:exit_trap:537 kill_spinner +./stack.sh:kill_spinner:432 '[' '!' -z '' ']' +./stack.sh:exit_trap:539 [[ 2 -ne 0 ]] +./stack.sh:exit_trap:540 echo 'Error on exit' Error on exit +./stack.sh:exit_trap:542 type -p generate-subunit +./stack.sh:exit_trap:543 generate-subunit 1544541794 890 fail +./stack.sh:exit_trap:545 [[ -z /opt/stack/logs ]] +./stack.sh:exit_trap:548 /opt/stack/devstack/tools/worlddump.py -d /opt/stack/logs +./stack.sh:exit_trap:557 exit 2 stack at ubuntu16045server1:~/devstack$ I'm using the contents of https://wiki.openstack.org/wiki/StarlingX/Devstack/stx-config/localrc and created a local.conf. System: a VirtualBox VM: Ubuntu VERSION="16.04.5 LTS (Xenial Xerus)" Can anybody know if this is a known issue? Any more information regarding which version is working? Thanks! Marvin -------------- next part -------------- An HTML attachment was scrubbed... URL: From Al.Bailey at windriver.com Tue Dec 11 16:03:03 2018 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Tue, 11 Dec 2018 16:03:03 +0000 Subject: [Starlingx-discuss] latest issue regarding devstack/stx support In-Reply-To: <74D9C1EDDC44EF468303629CF9A2832C9CE1630B@ALA-MBD.corp.ad.wrs.com> References: <74D9C1EDDC44EF468303629CF9A2832C9CE1630B@ALA-MBD.corp.ad.wrs.com> Message-ID: Marvin, A change was merged on Dec 10 which changed the install_non_bb target in the Makefile for fm-mgr There is an open review for adding a devstack job to zuul for stx-fault, so in order for that review to pass zuul, it will need to include the fix. 
Al From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Tuesday, December 11, 2018 10:52 AM To: starlingx Subject: [Starlingx-discuss] latest issue regarding devstack/stx support Hi all, I tried to bring up Devstack/STX this morning, but got the following error, which broke the execution of ./stack.sh. g++ -o fmManager fm_main.o -lfmcommon -lrt -lpthread -luuid ++/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:288 sudo make BIN_DIR=/bin LIB_DIR=/lib INC_DIR=/include MAJOR=1 MINOR=0 install_non_bb make: *** No rule to make target 'install_non_bb'. Stop. +/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:1 exit_trap +./stack.sh:exit_trap:522 local r=2 ++./stack.sh:exit_trap:523 jobs -p +./stack.sh:exit_trap:523 jobs= +./stack.sh:exit_trap:526 [[ -n '' ]] +./stack.sh:exit_trap:532 '[' -f '' ']' +./stack.sh:exit_trap:537 kill_spinner +./stack.sh:kill_spinner:432 '[' '!' -z '' ']' +./stack.sh:exit_trap:539 [[ 2 -ne 0 ]] +./stack.sh:exit_trap:540 echo 'Error on exit' Error on exit +./stack.sh:exit_trap:542 type -p generate-subunit +./stack.sh:exit_trap:543 generate-subunit 1544541794 890 fail +./stack.sh:exit_trap:545 [[ -z /opt/stack/logs ]] +./stack.sh:exit_trap:548 /opt/stack/devstack/tools/worlddump.py -d /opt/stack/logs +./stack.sh:exit_trap:557 exit 2 stack at ubuntu16045server1:~/devstack$ I'm using the contents of https://wiki.openstack.org/wiki/StarlingX/Devstack/stx-config/localrc and created a local.conf. System: a VirtualBox VM: Ubuntu VERSION="16.04.5 LTS (Xenial Xerus)" Can anybody know if this is a known issue? Any more information regarding which version is working? Thanks! Marvin -------------- next part -------------- An HTML attachment was scrubbed... URL: From erich.cordoba.malibran at intel.com Tue Dec 11 16:31:14 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Tue, 11 Dec 2018 16:31:14 +0000 Subject: [Starlingx-discuss] latest issue regarding devstack/stx support Message-ID: <9E7365F4-4B68-4DAB-AF76-057C7D2241D3@intel.com> Hi My bad, I wasn’t of this required changes on devstack. I’ll send the patch to solve it. -Erich From: "Bailey, Henry Albert (Al)" Date: Tuesday, December 11, 2018 at 10:03 AM To: "Huang, Marvin" , starlingx Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support Marvin, A change was merged on Dec 10 which changed the install_non_bb target in the Makefile for fm-mgr There is an open review for adding a devstack job to zuul for stx-fault, so in order for that review to pass zuul, it will need to include the fix. Al From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Tuesday, December 11, 2018 10:52 AM To: starlingx Subject: [Starlingx-discuss] latest issue regarding devstack/stx support Hi all, I tried to bring up Devstack/STX this morning, but got the following error, which broke the execution of ./stack.sh. g++ -o fmManager fm_main.o -lfmcommon -lrt -lpthread -luuid ++/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:288 sudo make BIN_DIR=/bin LIB_DIR=/lib INC_DIR=/include MAJOR=1 MINOR=0 install_non_bb make: *** No rule to make target 'install_non_bb'. Stop. +/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:1 exit_trap +./stack.sh:exit_trap:522 local r=2 ++./stack.sh:exit_trap:523 jobs -p +./stack.sh:exit_trap:523 jobs= +./stack.sh:exit_trap:526 [[ -n '' ]] +./stack.sh:exit_trap:532 '[' -f '' ']' +./stack.sh:exit_trap:537 kill_spinner +./stack.sh:kill_spinner:432 '[' '!' 
-z '' ']' +./stack.sh:exit_trap:539 [[ 2 -ne 0 ]] +./stack.sh:exit_trap:540 echo 'Error on exit' Error on exit +./stack.sh:exit_trap:542 type -p generate-subunit +./stack.sh:exit_trap:543 generate-subunit 1544541794 890 fail +./stack.sh:exit_trap:545 [[ -z /opt/stack/logs ]] +./stack.sh:exit_trap:548 /opt/stack/devstack/tools/worlddump.py -d /opt/stack/logs +./stack.sh:exit_trap:557 exit 2 stack at ubuntu16045server1:~/devstack$ I’m using the contents of https://wiki.openstack.org/wiki/StarlingX/Devstack/stx-config/localrc and created a local.conf. System: a VirtualBox VM: Ubuntu VERSION="16.04.5 LTS (Xenial Xerus)" Can anybody know if this is a known issue? Any more information regarding which version is working? Thanks! Marvin -------------- next part -------------- An HTML attachment was scrubbed... URL: From Marvin.Huang at windriver.com Tue Dec 11 17:03:01 2018 From: Marvin.Huang at windriver.com (Huang, Marvin) Date: Tue, 11 Dec 2018 17:03:01 +0000 Subject: [Starlingx-discuss] latest issue regarding devstack/stx support In-Reply-To: <9E7365F4-4B68-4DAB-AF76-057C7D2241D3@intel.com> References: <9E7365F4-4B68-4DAB-AF76-057C7D2241D3@intel.com> Message-ID: <74D9C1EDDC44EF468303629CF9A2832C9CE1633E@ALA-MBD.corp.ad.wrs.com> Thanks all! The good news is that it looks the current codes fixed some old issues I hit before (or this time it broke before the previous failing point). Marvin From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Tuesday, December 11, 2018 11:31 AM To: Bailey, Henry Albert (Al); Huang, Marvin; starlingx Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support Hi My bad, I wasn’t of this required changes on devstack. I’ll send the patch to solve it. -Erich From: "Bailey, Henry Albert (Al)" Date: Tuesday, December 11, 2018 at 10:03 AM To: "Huang, Marvin" , starlingx Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support Marvin, A change was merged on Dec 10 which changed the install_non_bb target in the Makefile for fm-mgr There is an open review for adding a devstack job to zuul for stx-fault, so in order for that review to pass zuul, it will need to include the fix. Al From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Tuesday, December 11, 2018 10:52 AM To: starlingx Subject: [Starlingx-discuss] latest issue regarding devstack/stx support Hi all, I tried to bring up Devstack/STX this morning, but got the following error, which broke the execution of ./stack.sh. g++ -o fmManager fm_main.o -lfmcommon -lrt -lpthread -luuid ++/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:288 sudo make BIN_DIR=/bin LIB_DIR=/lib INC_DIR=/include MAJOR=1 MINOR=0 install_non_bb make: *** No rule to make target 'install_non_bb'. Stop. +/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:1 exit_trap +./stack.sh:exit_trap:522 local r=2 ++./stack.sh:exit_trap:523 jobs -p +./stack.sh:exit_trap:523 jobs= +./stack.sh:exit_trap:526 [[ -n '' ]] +./stack.sh:exit_trap:532 '[' -f '' ']' +./stack.sh:exit_trap:537 kill_spinner +./stack.sh:kill_spinner:432 '[' '!' 
-z '' ']'
+./stack.sh:exit_trap:539 [[ 2 -ne 0 ]]
+./stack.sh:exit_trap:540 echo 'Error on exit'
Error on exit
+./stack.sh:exit_trap:542 type -p generate-subunit
+./stack.sh:exit_trap:543 generate-subunit 1544541794 890 fail
+./stack.sh:exit_trap:545 [[ -z /opt/stack/logs ]]
+./stack.sh:exit_trap:548 /opt/stack/devstack/tools/worlddump.py -d /opt/stack/logs
+./stack.sh:exit_trap:557 exit 2
stack at ubuntu16045server1:~/devstack$

I'm using the contents of https://wiki.openstack.org/wiki/StarlingX/Devstack/stx-config/localrc and created a local.conf. System: a VirtualBox VM: Ubuntu VERSION="16.04.5 LTS (Xenial Xerus)"

Does anybody know if this is a known issue? Is there any more information about which version works?

Thanks! Marvin

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From juan.carlos.alonso at intel.com Tue Dec 11 21:51:24 2018 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Tue, 11 Dec 2018 21:51:24 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C723EB@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-11 (link)
Sanity Test is executed in a Virtual Environment
Status: RED

Simplex
Setup          03 TCs [PASS] | 01 TCs [FAIL]
Provisioning   00 TCs [PASS] | 01 TCs [FAIL]
Sanity         00 TCs [PASS] | 18 TCs [FAIL]
TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ]

Duplex
Setup          03 TCs [PASS] | 01 TCs [FAIL]
Provisioning   00 TCs [PASS] | 01 TCs [FAIL]
Sanity         00 TCs [PASS] | 19 TCs [FAIL]
TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]

Multinode Controller Storage
Setup          03 TCs [PASS] | 01 TCs [FAIL]
Provisioning   00 TCs [PASS] | 01 TCs [FAIL]
Sanity         00 TCs [PASS] | 19 TCs [FAIL]
TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]

Multinode Dedicated Storage
Setup          03 TCs [PASS] | 01 TCs [FAIL]
Provisioning   00 TCs [PASS] | 01 TCs [FAIL]
Sanity         00 TCs [PASS] | 19 TCs [FAIL]
TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]

------------------------------------------------------------------
This issue was found by our Robot test framework. During config_controller, the suite copies a config file from the host to the StarlingX system. Robot uses an SSHLibrary keyword for this; that keyword uses the sftp service to transfer the file. The transfer failed because the sftp service is not working on the system.
Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1808054

Regards.
Juan Carlos Alonso

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From jose.perez.carranza at intel.com Tue Dec 11 20:52:42 2018 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Tue, 11 Dec 2018 20:52:42 +0000 Subject: [Starlingx-discuss] [Testing] Test Framework In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A8C37D6@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A8C37D6@fmsmsx101.amr.corp.intel.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A91D340@fmsmsx101.amr.corp.intel.com>

Hi, after some work we have developed a framework that covers Deployment + Testing + Reporting for StarlingX (on a virtual environment for now). Please see the attached file for a high-level overview, and let us know your comments.
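For anyone who has not used Robot before, here is a flavour of how a test plan is assembled from tags at invocation time - the directory and tag names below are only illustrative, they are not taken from the attached proposal:

  # run only the suites tagged 'sanity', skipping work-in-progress tests;
  # Robot writes report.html and log.html into the output directory by itself
  robot --include sanity --exclude wip --outputdir results/ suites/

  # a regression plan is just a different tag selection over the same suites
  robot --include regression --outputdir results/ suites/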
Regards,
José

> -----Original Message-----
> From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
> Sent: Friday, August 10, 2018 3:10 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] [Testing] Test Framework
>
> Hello,
>
> We are currently working on automated tests for StarlingX Deployment, as
> the base of the automation we are using Robot Framework [1]. If any of you
> have experience or have read about this framework we would like to hear
> your feedback of this approach.
>
> 1- http://robotframework.org/
>
> Regards,
> José
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

-------------- next part -------------- A non-text attachment was scrubbed... Name: Test_Framework_Proposal_pdf.pdf Type: application/pdf Size: 301737 bytes Desc: Test_Framework_Proposal_pdf.pdf URL:

From bruce.e.jones at intel.com Tue Dec 11 23:17:25 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 11 Dec 2018 23:17:25 +0000 Subject: Re: [Starlingx-discuss] [Testing] Test Framework In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A91D340@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A8C37D6@fmsmsx101.amr.corp.intel.com> <0A5D9A624DF90343892F8F3FE7DE525A2A91D340@fmsmsx101.amr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BB1ED1785@fmsmsx117.amr.corp.intel.com>

Looks good, nice work!

-----Original Message----- From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] Sent: Tuesday, December 11, 2018 12:53 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Testing] Test Framework

Hi, after some work we have developed a framework that covers Deployment + Testing + Reporting for StarlingX (on a virtual environment for now). Please see the attached file for a high-level overview, and let us know your comments.

Regards,
José

> -----Original Message-----
> From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
> Sent: Friday, August 10, 2018 3:10 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] [Testing] Test Framework
>
> Hello,
>
> We are currently working on automated tests for StarlingX Deployment, as
> the base of the automation we are using Robot Framework [1]. If any of you
> have experience or have read about this framework we would like to hear
> your feedback of this approach.
>
> 1- http://robotframework.org/
>
> Regards,
> José
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From ada.cabrales at intel.com Wed Dec 12 00:17:31 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Wed, 12 Dec 2018 00:17:31 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting notes - 12/11/2018 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD482B3@FMSMSX114.amr.corp.intel.com>

Meeting minutes - 12/11
Attendees: Bruce, Elio, Jose, Maria, Abraham, JC, Numan, JP, Ada, David, Cristopher

* Test case template - Numan
- Numan presented a proposal for a document for writing test cases
- Priority - who will define the priority of the test case? The one writing the test case? Suggestion is to define a set of rules to help define the right priority
- Question: who will be responsible for maintaining the test management tool?
We need to define a role for taking ownership of this task
- Feature field: use the storyboard number
- Steps: use descriptive steps
- Expected results: include check points
- Could be written between developer and testing teams
- The template is for unit test and functional test
- Working on developing tests for new features is along done
- Ada to upload the documents to the wiki

* Proposal for a unified test suite - Jose
- Jose presented Robot Framework:
  Open source project; supports CLI and GUI tests
  No extra packages required on the entity being tested
  Can execute native (StarlingX-specific) tests or external suites (like tempest); bash scripting can also be run from Robot
  Robot uses tags for building test plans: regression, feature, etc.
  It generates reports automatically
- Jose to send the presentation to the mailing list to get feedback from the community
- Numan to contribute the disadvantages they found when evaluating Robot

* Pushed for next week:
- Sanity testing: coverage improvement - JC

* Opens
- Testing strategy document to be sent EOD or early tomorrow; talk about this in the community meeting. - Ada
- For the community meeting in January - where to keep all the testing documentation? (not the test cases) Initially at the wiki, but define the final place (StarlingX docs?)

From austin.sun at intel.com Wed Dec 12 00:50:32 2018 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 12 Dec 2018 00:50:32 +0000 Subject: Re: [Starlingx-discuss] Python2to3 status In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DF81C4@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35DF81C4@SHSMSX104.ccr.corp.intel.com> Message-ID:

Hi All: for convenience, the sheet of packages is attached: https://bugs.launchpad.net/starlingx/+bug/1808073. Thanks.
BR Austin Sun.

From: Xie, Cindy Sent: Tuesday, December 11, 2018 2:49 PM To: 'Jolliffe, Ian' ; Rowsell, Brent ; Wold, Saul ; Sun, Austin ; starlingx-discuss at lists.starlingx.io Subject: Python2to3 status

Ian, Saul, per TSC request, here are the Python2to3 work items we are tracking and their status:

1. Repos for flocks with Python code (11 projects):

name                  Storyboard                                          Status  Notes
Stx-config            https://storyboard.openstack.org/#!/story/2003433   WIP
Stx-distcloud         https://storyboard.openstack.org/#!/story/2004585   Todo    New
Stx-distcloud-client  https://storyboard.openstack.org/#!/story/2004586   Todo    New
Stx-fault             https://storyboard.openstack.org/#!/story/2003310   Merged
Stx-gui               https://storyboard.openstack.org/#!/story/2003432   Merged
Stx-ha                https://storyboard.openstack.org/#!/story/2003430   Merged
Stx-integ             https://storyboard.openstack.org/#!/story/2002909   WIP
Stx-metal             https://storyboard.openstack.org/#!/story/2003426   Merged
Stx-nfv               https://storyboard.openstack.org/#!/story/2003427   WIP
Stx-update            https://storyboard.openstack.org/#!/story/2003429   Merged
Stx-upstream          https://storyboard.openstack.org/#!/story/2003428   Merged

2. For OpenStack repos: as we are moving to Stein, the previously planned activities to backport those Python2to3 patches from master are no longer required. I checked with the storage team, and I am assuming the other teams are in the same situation.

3. For other OS packages with Python code, a detailed readiness analysis is posted here: https://drive.google.com/open?id=1RT3oJJ5umHXZ_A--sJRpA23qB3LFY-lY. In short, we still found ~73 packages (out of 436) lacking Python3 transition activities from the upstream community. This needs further work to finalize the strategy.
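As a rough first-pass screen for those remaining packages - illustrative only, since byte-compiling catches Python 3 syntax errors (e.g. print statements) but not behavioural differences - each package's sources can be compiled under Python 3:

  # exits non-zero if any file in the tree fails to compile under Python 3
  python3 -m compileall -q ./path/to/package/sources && echo "py3 syntax OK"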
We will discuss in Wed's non-openstack distro weekly call if people still want details. @Austin, feel free to add anything. Thx. - cindy

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From cindy.xie at intel.com Wed Dec 12 00:52:28 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 12 Dec 2018 00:52:28 +0000 Subject: [Starlingx-discuss] Testing for CPEH upversion to v13.2.0? In-Reply-To: References: Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DF9486@SHSMSX104.ccr.corp.intel.com>

Frank, the Ceph upgrade is on the staging feature branch and it will not switch over before the full testing is done. @Vivian, can you please share your plan for what happens before Ceph v13.2.2 can be switched over? Thx. - cindy

From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Tuesday, December 11, 2018 11:39 PM To: Xie, Cindy ; Shang, Dehao Cc: Church, Robert ; Poncea, Ovidiu ; 'starlingx-discuss at lists.starlingx.io' Subject: Testing for CPEH upversion to v13.2.0?

Dehao/Cindy: We see multiple gerrit reviews out for the ceph upversion work, and these are getting closer to being merged. Ovidiu and I plan to attend the Distro non-Openstack call tomorrow, but in case you are trying to merge these today, can you answer these questions:
1) What testing has been done with v13.2.0 to validate that this version of CEPH works with StarlingX?
2) The containerization subproject relies heavily on CEPH, not only for the dedicated storage config but also for the other configs. Can we get access to one of your loads before this is merged, to determine whether CEPH on the other configs works with v13.2.0 or whether there are issues that need to be addressed?
Frank

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From vivian.zhu at intel.com Wed Dec 12 01:51:59 2018 From: vivian.zhu at intel.com (Zhu, Vivian) Date: Wed, 12 Dec 2018 01:51:59 +0000 Subject: Re: [Starlingx-discuss] Testing for CPEH upversion to v13.2.0? In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DF9486@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35DF9486@SHSMSX104.ccr.corp.intel.com> Message-ID: <371DF9A763E9F44F924F4A821FC070264C596F10@SHSMSX104.ccr.corp.intel.com>

Hi Cindy, I have added you to another email thread about the latest issue we hit on the Ceph upgrade.

What we have finished: 11 commits have been merged to starlingx-staging/stx-ceph, mainly addressing the ceph upgrade build process.

What is in progress: based on the above code base, rebasing 30+ patches to generate the build. We hit an issue and are consulting with WR about the patch we suspect of breaking the build.

The coming steps are (a sketch of the health check for step 2 follows below):
1. Solve the build issue and generate an ISO.
2. Test the ceph status on the dedicated storage config.
3. The WR team may need to check the other configs if there is doubt about the update's impact on the containerization project (but I would prefer to accept the upgrade first, since the containerization project will still run for a long time and too many deltas will stall the ceph upgrade again and again).
4. Merge the rebased patches.
5. The Mexican team tests the build and signs off on the upgrade.
6. The build team switches the build script to v13.2.2; the ceph upgrade can then be claimed done.

Since the CentOS upgrade is also in progress, I would prefer to finish the ceph upgrade as soon as possible, to avoid too many deltas being impacted in parallel.
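For step 2, the kind of smoke check intended is sketched below; these are standard Ceph CLI commands, and the exact acceptance criteria are still to be agreed:

  ceph -s              # overall cluster state; expect HEALTH_OK
  ceph osd tree        # every OSD on the storage hosts should be up/in
  ceph health detail   # explains any warning or error state

Thanks!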
- Vivian

SSG OTC NST Storage Tel: (8621)61167437

From: Xie, Cindy Sent: Wednesday, December 12, 2018 8:52 AM To: Miller, Frank ; Shang, Dehao ; Zhu, Vivian ; Jones, Bruce E Cc: Church, Robert ; Poncea, Ovidiu ; 'starlingx-discuss at lists.starlingx.io' Subject: RE: Testing for CPEH upversion to v13.2.0?

Frank, the Ceph upgrade is on the staging feature branch and it will not switch over before the full testing is done. @Vivian, can you please share your plan for what happens before Ceph v13.2.2 can be switched over? Thx. - cindy

From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Tuesday, December 11, 2018 11:39 PM To: Xie, Cindy >; Shang, Dehao > Cc: Church, Robert >; Poncea, Ovidiu >; 'starlingx-discuss at lists.starlingx.io' > Subject: Testing for CPEH upversion to v13.2.0?

Dehao/Cindy: We see multiple gerrit reviews out for the ceph upversion work, and these are getting closer to being merged. Ovidiu and I plan to attend the Distro non-Openstack call tomorrow, but in case you are trying to merge these today, can you answer these questions:
1) What testing has been done with v13.2.0 to validate that this version of CEPH works with StarlingX?
2) The containerization subproject relies heavily on CEPH, not only for the dedicated storage config but also for the other configs. Can we get access to one of your loads before this is merged, to determine whether CEPH on the other configs works with v13.2.0 or whether there are issues that need to be addressed?
Frank

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ran1.an at intel.com Wed Dec 12 08:56:33 2018 From: ran1.an at intel.com (An, Ran1) Date: Wed, 12 Dec 2018 08:56:33 +0000 Subject: [Starlingx-discuss] removing link capacity configuration during controller config Message-ID: <9BAB5B7CAF57C3459E4636391F1071CE052844B7@shsmsx102.ccr.corp.intel.com>

Hi All: I am working on bug [1], removing the "link capacity of interface" configuration from config_controller, config_validator and config_gui.
Reason: the link capacity is set to 10G only when doing traffic control [2], so it is unnecessary to input 'link capacity' during controller configuration.
Impact:
1) all 'link capacity' related configuration input options are removed.
2) compatibility with old *.ini configuration files: when "INTERFACE_LINK_CAPACITY=10000" the file remains compatible. For other values you will see the following error: "Invalid link-capacity value for XXX". Relax - just edit "INTERFACE_LINK_CAPACITY" to 10000 and retry "config_controller".
Feel free to reply if you have any ideas about this. Any response is welcome.
[1] https://bugs.launchpad.net/starlingx/+bug/1805320
[2] https://storyboard.openstack.org/#!/story/2003087
Thanks, Ran

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From Matt.Peters at windriver.com Wed Dec 12 12:31:23 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Wed, 12 Dec 2018 12:31:23 +0000 Subject: Re: [Starlingx-discuss] removing link capacity configuration during controller config In-Reply-To: <9BAB5B7CAF57C3459E4636391F1071CE052844B7@shsmsx102.ccr.corp.intel.com> References: <9BAB5B7CAF57C3459E4636391F1071CE052844B7@shsmsx102.ccr.corp.intel.com> Message-ID: <5C1DF956-1A3E-44C3-9E8F-09008C6A6A2B@windriver.com>

The link speed checks/warnings should also be removed when the input parameter is removed, since it will no longer be provided during config_controller and therefore not available for validation.

Regards, Matt

From: "An, Ran1" > Date: Wednesday, December 12, 2018 at 3:57 AM To: "starlingx-discuss at lists.starlingx.io" >, "Peters, Matt" > Subject: [Starlingx-discuss] removing link capacity configuration during controller config

Hi All: I am working on bug [1], removing the "link capacity of interface" configuration from config_controller, config_validator and config_gui.
Reason: the link capacity is set to 10G only when doing traffic control [2], so it is unnecessary to input 'link capacity' during controller configuration.
Impact:
1) all 'link capacity' related configuration input options are removed.
2) compatibility with old *.ini configuration files: when "INTERFACE_LINK_CAPACITY=10000" the file remains compatible. For other values you will see the following error: "Invalid link-capacity value for XXX". Relax - just edit "INTERFACE_LINK_CAPACITY" to 10000 and retry "config_controller".
Feel free to reply if you have any ideas about this. Any response is welcome.
[1] https://bugs.launchpad.net/starlingx/+bug/1805320
[2] https://storyboard.openstack.org/#!/story/2003087
Thanks, Ran

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From Martin.Banszel at tieto.com Wed Dec 12 13:18:55 2018 From: Martin.Banszel at tieto.com (Banszel Martin) Date: Wed, 12 Dec 2018 13:18:55 +0000 Subject: [Starlingx-discuss] distributed cloud deployment Message-ID:

Hi all, I am interested in the distributed StarlingX deployment. Are there any guidelines on how to deploy StarlingX in a distributed cloud? I have found the installation guide [0], which seems to cover just a single-DC installation - control, storage and compute nodes. Is there any support for zero-touch installation of StarlingX on remote nodes?
Thank you, Best regards, Martin
[0] https://docs.starlingx.io/installation_guide/index.html#

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From quickconvey at gmail.com Wed Dec 12 14:37:29 2018 From: quickconvey at gmail.com (Quick Convey) Date: Wed, 12 Dec 2018 20:07:29 +0530 Subject: [Starlingx-discuss] Software Management API Documentation - Software Patching and Upgrade Message-ID:

Hi, could you please share the StarlingX Software Management API documentation? I could not find it in https://github.com/openstack/stx-update. Please also share an example of software patching and upgrade with service restart. Thanks,

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From cindy.xie at intel.com Wed Dec 12 14:47:05 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 12 Dec 2018 14:47:05 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/12 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DFB2B8@SHSMSX104.ccr.corp.intel.com>

Agenda & Notes for 12/12 meeting:

1.
minor kernel version upgrade to 3.10.0.957 (Shuicheng/Martin)
- storyboard: https://storyboard.openstack.org/#!/story/2004521
- installer update: https://storyboard.openstack.org/#!/story/2004516 (this depends on 2004521)
- Martin is currently working on the kernel upgrade. Patch rebase for both std and rt kernels will be finished by this week. Will do testing on bare metal deployment. Will ask GDC for testing before merge.

2. preparation for CentOS 7.6 upgrade status (Shuicheng)
- storyboard: https://storyboard.openstack.org/#!/story/2004522
- 49 sRPMs to be upgraded (+ 2 kernels); hundreds of RPMs need to be upgraded. Out-of-tree kernel drivers are not included yet - need to check and see how many drivers need to be upgraded.
- Saul comments: Dean mentioned yesterday general OpenStack enablement in CentOS 7.6 with RPMs; there is no validated CentOS with OpenStack yet. We may need to verify whether CentOS 7.6 + vanilla OpenStack works before we port any additional patch.

3. Ceph upgrade status (Vivian/Dehao/Changcheng)
- Dean merged all the build process patches (on staging stx-ceph) and Changcheng finished rebasing all patches according to the latest stx-ceph (stx/v13.2.2): 17 PRs in total. New PR: https://github.com/starlingx-staging/stx-ceph/pull/18 pending for review.
- We can build an ISO with the patch list below:
  1. https://github.com/starlingx-staging/stx-ceph/pull/18
  2. https://review.openstack.org/#/c/619460/11
  3. https://review.openstack.org/#/c/619463/8
  4. https://review.openstack.org/#/c/619465/7
  5. https://review.openstack.org/#/c/620449/
  6. https://review.openstack.org/#/c/624085/
  5 & 6 set the base version and make the ceph patches take effect in the build (please use ceph_13_2_2.xml as the manifest file).
- Submitted 2 PRs to staging. Dean reviewed them and merged them to the stx/v13.2.2 branch. Those PRs have no impact on other stx modules. The current base build is still using the master branch. The other 4 patches are in OpenStack gerrit review.
- An image has been built based on the current patch porting; it still cannot work as expected. Still working w/ WR to debug the problem on a StarlingX system. Ovidiu is working w/ Changcheng to debug the issue. @Changcheng, please continue to work with Ovidiu to debug the issues.
- Changcheng/Dehao: the 1st priority is to make dedicated storage work w/ the new Ceph. Once Ceph is working w/ dedicated storage on controller-config, Frank would like to try the containers. All-in-one simplex is already working; dedicated storage is working but not stable. @Mingyuan will help to try Simplex first and then dedicated storage w/ old Ceph.

4. Python2to3 status, flocks and OS packages (Austin)
- Focus on flock service upgrade; still working on stx-distcloud, stx-distcloud-client, stx-nfv, stx-integ and stx-config.
- For other OS packages: out of 436 packages, we still find 73 packages that do not have Python3 transition activities.

5. Qemu 3.0 branch switch (Ghada/Jim)
- Code merged as of Dec 10: https://review.openstack.org/#/c/623045/ https://review.openstack.org/#/c/622583/
- No pending patches. Everything is in! Congratulations!!!

6.
Opens (all)

-----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Wold, Saul; Jones, Bruce E; Troyer, Dean; Lin, Shuicheng; Zhu, Vivian; Shang, Dehao; Liu, ZhipengS; Hu, Yong; Sun, Austin; 'Rowsell, Brent'; 'Khalil, Ghada'; Waheed, Numan; Somerville, Jim; starlingx-discuss at lists.starlingx.io Cc: Perez Carranza, Jose; Armstrong, Robert H; Perez Rodriguez, Humberto I; Martinez Landa, Hayde; Martinez Monroy, Elio; Hu, Wei W; Gomez, Juan P; Lara, Cesar; Arce Moreno, Abraham; Cobbley, David A; Hernandez Gonzalez, Fernando; 'Hellmann, Gil'; 'Waines, Greg'; 'Chen, Jacky'; 'Seiler, Glenn'; 'Eslimi, Dariush'; 'Young, Ken' Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, December 12, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

From ran1.an at intel.com Wed Dec 12 16:07:11 2018 From: ran1.an at intel.com (An, Ran1) Date: Wed, 12 Dec 2018 16:07:11 +0000 Subject: Re: [Starlingx-discuss] removing link capacity configuration during controller config In-Reply-To: <5C1DF956-1A3E-44C3-9E8F-09008C6A6A2B@windriver.com> References: <9BAB5B7CAF57C3459E4636391F1071CE052844B7@shsmsx102.ccr.corp.intel.com> <5C1DF956-1A3E-44C3-9E8F-09008C6A6A2B@windriver.com> Message-ID: <9BAB5B7CAF57C3459E4636391F1071CE052845B9@shsmsx102.ccr.corp.intel.com>

Hi Matt, for compatibility with old *.ini configuration files, I kept the link speed checks/warnings part in config_validator. It can be removed if nobody needs the old-format configuration files.

Hi all, if anyone needs to keep the link speed option in *.ini configuration files, please let me know.

Thanks, Ran

From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Wednesday, December 12, 2018 8:31 PM To: An, Ran1 ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] removing link capacity configuration during controller config

The link speed checks/warnings should also be removed when the input parameter is removed, since it will no longer be provided during config_controller and therefore not available for validation.

Regards, Matt

From: "An, Ran1" > Date: Wednesday, December 12, 2018 at 3:57 AM To: "starlingx-discuss at lists.starlingx.io" >, "Peters, Matt" > Subject: [Starlingx-discuss] removing link capacity configuration during controller config

Hi All: I am working on bug [1], removing the "link capacity of interface" configuration from config_controller, config_validator and config_gui.
Reason: the link capacity is set to 10G only when doing traffic control [2], so it is unnecessary to input 'link capacity' during controller configuration.
Impact:
1) all 'link capacity' related configuration input options are removed.
2) compatibility with old *.ini configuration files: when "INTERFACE_LINK_CAPACITY=10000" the file remains compatible. For other values you will see the following error: "Invalid link-capacity value for XXX". Relax - just edit "INTERFACE_LINK_CAPACITY" to 10000 and retry "config_controller".
Feel free to reply if you have any ideas about this. Any response is welcome.
[1] https://bugs.launchpad.net/starlingx/+bug/1805320 [2] https://storyboard.openstack.org/#!/story/2003087 Thanks Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From Matt.Peters at windriver.com Wed Dec 12 16:17:22 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Wed, 12 Dec 2018 16:17:22 +0000 Subject: [Starlingx-discuss] removing link capacity configuration during controller config In-Reply-To: <9BAB5B7CAF57C3459E4636391F1071CE052845B9@shsmsx102.ccr.corp.intel.com> References: <9BAB5B7CAF57C3459E4636391F1071CE052844B7@shsmsx102.ccr.corp.intel.com> <5C1DF956-1A3E-44C3-9E8F-09008C6A6A2B@windriver.com> <9BAB5B7CAF57C3459E4636391F1071CE052845B9@shsmsx102.ccr.corp.intel.com> Message-ID: Hello Ran, We don’t normally maintain special handling for old parameters if they can be safely ignored. I recommend removing the checks and warnings related to this parameter. Regards, Matt From: "An, Ran1" Date: Wednesday, December 12, 2018 at 11:07 AM To: "Peters, Matt" , "starlingx-discuss at lists.starlingx.io" Subject: RE: [Starlingx-discuss] removing link capacity configuration during controller config Hi Matt For considering of compatibility with old *.ini configure files, I kept link speed checks/warnings part in config_validator. It could be removed if nobody need the old format configure file. Hi all If anyone need to keep link speed option in *.ini configure file, please let me know. Thanks Ran From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Wednesday, December 12, 2018 8:31 PM To: An, Ran1 ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] removing link capacity configuration during controller config The link speed checks/warnings should also be removed when the input parameter is removed since it will no longer be provided during config_controller, and therefore not available for validation. Regards, Matt From: "An, Ran1" > Date: Wednesday, December 12, 2018 at 3:57 AM To: "starlingx-discuss at lists.starlingx.io" >, "Peters, Matt" > Subject: [Starlingx-discuss] removing link capacity configuration during controller config Hi All: I am working on bug [1] removing “link capacity of interface” configuration in config_controller, config_validator and config_gui. Reason: the link capacity is set to 10G only when doing traffic control[2], it is unnecessary to input ‘link capacity’ during controller configure. Impact: 1) all ‘link capacity’ related configure input option is removed. 2) it is compatibility with old *.ini configure file: when “INTERFACE_LINK_CAPACITY=10000”, it is compatible. For other values, you will see following errors: “Invalid link-capacity value for XXX”. Relax. Just edit “INTERFACE_LINK_CAPACITY” to 10000 and retry “config_controller”. Feel free to reply if you any idea about this. Any response is welcome. [1] https://bugs.launchpad.net/starlingx/+bug/1805320 [2] https://storyboard.openstack.org/#!/story/2003087 Thanks Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Wed Dec 12 16:19:42 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 12 Dec 2018 10:19:42 -0600 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/12 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DFB2B8@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35DFB2B8@SHSMSX104.ccr.corp.intel.com> Message-ID: On Wed, Dec 12, 2018 at 8:47 AM Xie, Cindy wrote: > 3. 
Ceph upgrade status (Vivian/Dehao/Changcheng) > Dean merged all the build process patche (on staging stx-ceph and > Changcheng finish rebasing all patches according to latest stx-ceph (stx/v13.2.2). totally 17 PRs > New PR: https://github.com/starlingx-staging/stx-ceph/pull/18 pending for review. I had asked for the relevant information to be included in the individual commit messages and I still do not see that being done. We are losing valuable information and traceability for why we are making these changes to upstream. Let's look at an example: In [0] we have the following commit message: ---------- Port: RevertMe: Use user root to run ceph services Avoid debugging file permission issues when upgrading to Jewel. This is done to provide the same setup as Hammer in StarlingX. This commit should be reverted when we decide to enable the ceph user. Port From: Ceph Rebase: Disable ceph user/group for Hammer equivalence.patch 0001____src_ceph-disk_ceph_disk_main.py.patch 0002____src_init-ceph.in.patch 0003____wrs_ceph.conf.patch Signed-off-by: Robert Church Signed-off-by: Daniel Badea Signed-off-by: Changcheng Liu Signed-off-by: Dehao Shang ---------- This appears to correspond to the original R5 commit c87de31f that has the following commit message: ---------- Ceph Rebase: Disable ceph user/group for Hammer equivalence Use default (root) user to run ceph services instead of dedicated (ceph) user and group to avoid debugging file permission issues while upgrading to Jewel. This is done to provide the same setup as Hammer in TiS. This commit should be reverted when we decide to enable the ceph user. ---------- Notice how the second paragraph of the original message is missing from the new commit. Also, the references to the original commit are not available externally, I have no idea what "0001____src_ceph-disk_ceph_disk_main.py.patch" refers to. So even for someone with access to the original commit I have to do text string searches to attempt to locate it in the R5 repo. It also seems like it would be easier to review and merge these in smaller batches. One big PR with 35 commits takes time to review, and when a single change needs to be made we have to re-review looking for the changes. There is also no reference in either the commit messages or the PR description to a Storyboard story or task or any further documentation to why this work is being done. Think of what you have available while doing this rebase/upgrade and imagine what the next person doing the next rebase/upgrade will want to see and make sure all of that is present in the commit messages. The GitHub PR may or may not be available at that time, only the git commit messages are guaranteed to stay with the code changes. dt [0] https://github.com/starlingx-staging/stx-ceph/pull/18/commits/552736f77f39897922e562a8477d19ab4e47a39f -- Dean Troyer dtroyer at gmail.com From juan.carlos.alonso at intel.com Wed Dec 12 16:57:39 2018 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Wed, 12 Dec 2018 16:57:39 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C723EB@FMSMSX108.amr.corp.intel.com> Message-ID: <8557B550001AFB46A43A0CCC314BF85153C7A83C@FMSMSX108.amr.corp.intel.com> I think you need to get a new ISO updated and deploy the system again. Until now there is not a way to update the installation while it is running. Regards. 
Juan Carlos Alonso

From: volker.von.hoesslin at gmx.de [mailto:volker.von.hoesslin at gmx.de] Sent: Wednesday, December 12, 2018 6:32 AM To: starlingx Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181211

Regardless of the fact that this build failed: if I already have an installation running, how do I get it up to date? Are there patch files somewhere that I don't know anything about?

Volker

Sent: Tuesday, December 11, 2018 at 22:51 From: "Alonso, Juan Carlos" > To: starlingx > Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181211

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-11 (link)
Sanity Test is executed in a Virtual Environment
Status: RED

Simplex
Setup          03 TCs [PASS] | 01 TCs [FAIL]
Provisioning   00 TCs [PASS] | 01 TCs [FAIL]
Sanity         00 TCs [PASS] | 18 TCs [FAIL]
TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ]

Duplex
Setup          03 TCs [PASS] | 01 TCs [FAIL]
Provisioning   00 TCs [PASS] | 01 TCs [FAIL]
Sanity         00 TCs [PASS] | 19 TCs [FAIL]
TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]

Multinode Controller Storage
Setup          03 TCs [PASS] | 01 TCs [FAIL]
Provisioning   00 TCs [PASS] | 01 TCs [FAIL]
Sanity         00 TCs [PASS] | 19 TCs [FAIL]
TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]

Multinode Dedicated Storage
Setup          03 TCs [PASS] | 01 TCs [FAIL]
Provisioning   00 TCs [PASS] | 01 TCs [FAIL]
Sanity         00 TCs [PASS] | 19 TCs [FAIL]
TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]

------------------------------------------------------------------
This issue was found by our Robot test framework. During config_controller, the suite copies a config file from the host to the StarlingX system. Robot uses an SSHLibrary keyword for this; that keyword uses the sftp service to transfer the file. The transfer failed because the sftp service is not working on the system.
Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1808054

Regards.
Juan Carlos Alonso

_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From bruce.e.jones at intel.com Wed Dec 12 17:55:17 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 12 Dec 2018 17:55:17 +0000 Subject: [Starlingx-discuss] Dec 12th 2018 Community meeting minutes Message-ID: <9A85D2917C58154C960D95352B22818BB1ED1ECD@fmsmsx117.amr.corp.intel.com>

Agenda and notes - Dec 12 call

* Next Release Update -- Ghada & Bruce
  o Release Plan documented at: https://wiki.openstack.org/wiki/StarlingX/Release_Plan
  o Discussion Details/Options documented at: https://etherpad.openstack.org/p/stx-releases
* Please register for the January community meeting so we can get a count for logistics (meals, etc...): https://starlingx_jan2019meetup.eventbrite.com
  o (ildikov) As of today we have 14 people registered
* Looking for volunteers for the distro.openstack team to work on the rebase to OSF master and on patch resolution.
  o PL: Bruce
  o TL per area
    o Neutron - Matt Peters
    o Nova - Jim Gauld
    o Horizon - TBD
  o Neutron work will be handled in the Networking team
  o Need a meeting timeslot for the Nova and Horizon work. Tuesday AM PDT? 6am PDT / 1400 UTC
  o Discuss high level plan for this work e.g. branching, etc...
* Holiday meeting schedule
  o No project meetings will be held for the weeks of Dec 24-28 and Dec 31-Jan 4. All StarlingX meetings are canceled for the holiday. We will resume meetings the week of Jan 7th.
* Ada is updating the wiki with our proposed Test Strategy.
She will post the link today. Please review and provide feedback. Thank you for reviewing the Test Strategy with the team!

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From ada.cabrales at intel.com Wed Dec 12 19:33:41 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Wed, 12 Dec 2018 19:33:41 +0000 Subject: [Starlingx-discuss] [ Test ] Strategy posted Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD486B9@FMSMSX114.amr.corp.intel.com>

Hello, please take a look at the strategy proposal uploaded to the wiki [0]; your comments are welcome.
Thanks! Ada
[0] https://wiki.openstack.org/wiki/StarlingX/TestStrategy

From scott.little at windriver.com Wed Dec 12 19:40:50 2018 From: scott.little at windriver.com (Scott Little) Date: Wed, 12 Dec 2018 14:40:50 -0500 Subject: [Starlingx-discuss] changing PLATFORM_RELEASE Message-ID:

Our master branch builds are still reporting PLATFORM_RELEASE=18.10. This value is also known as the SW_VERSION.

To avoid confusion, I propose the following. After we cut a release branch, we should always advance the master branch value by at least +1 month beyond that release, but never greater than -1 month to the next projected release date.

I propose changing the current master branch to PLATFORM_RELEASE=19.01.

Scott

From bruce.e.jones at intel.com Wed Dec 12 20:10:36 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 12 Dec 2018 20:10:36 +0000 Subject: Re: [Starlingx-discuss] changing PLATFORM_RELEASE In-Reply-To: References: Message-ID: <9A85D2917C58154C960D95352B22818BB1ED3052@fmsmsx117.amr.corp.intel.com>

This makes sense to me. Should we set the value to the next planned release date?
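To make Scott's bounds concrete, they can be computed with GNU date; the dates below are examples only, not proposed dates:

  release=2018-10-01   # date the last release branch was cut (example)
  next=2019-05-01      # next projected release date (example)
  # master's PLATFORM_RELEASE must sit at least one month past the release
  # and at least one month before the next projected release
  echo "lower bound: $(date -d "$release +1 month" +%y.%m)"   # -> 18.11
  echo "upper bound: $(date -d "$next -1 month" +%y.%m)"      # -> 19.04

19.01 sits inside that window, which is consistent with Scott's proposal.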
brucej -----Original Message----- From: Scott Little [mailto:scott.little at windriver.com] Sent: Wednesday, December 12, 2018 11:41 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] changing PLATFORM_RELEASE Or master branch builds are still reporting PLATFORM_RELEASE=18.10. This value is also known as the SW_VERSION. To avoid confusion, I propose the following.  After we cut a release branch, should always advance the master branch value by at least +1 month beyond that release, but never greater than -1 month to the next projected release date. I propose changing the current master branch to PLATFORM_RELEASE=19.01 Scott _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Wed Dec 12 20:29:17 2018 From: scott.little at windriver.com (Scott Little) Date: Wed, 12 Dec 2018 15:29:17 -0500 Subject: [Starlingx-discuss] changing PLATFORM_RELEASE In-Reply-To: <151EE31B9FCCA54397A757BC674650F0BA495AF1@ALA-MBD.corp.ad.wrs.com> References: <9A85D2917C58154C960D95352B22818BB1ED3052@fmsmsx117.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA495AF1@ALA-MBD.corp.ad.wrs.com> Message-ID: <22e2fdad-2faa-d64d-4f81-657ac9628313@windriver.com> Until we are at least RC quality, we shouldn't set PLATFORM_RELEASE to the final date. Also keep in mind that this value should never go backwards.  So I want to leave some wiggle room, just in case (however unlikely) we move the release date forward (perhaps 19.03). Scott On 18-12-12 03:17 PM, Khalil, Ghada wrote: > Scott was going to put the final release version (19.05) a little closer to milestone-3 / RC1. Do you prefer to have it set from now? > > Ghada > > -----Original Message----- > From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] > Sent: Wednesday, December 12, 2018 3:11 PM > To: Little, Scott; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] changing PLATFORM_RELEASE > > This makes sense to me. Should we set the value to the next planned release date? > > brucej > > -----Original Message----- > From: Scott Little [mailto:scott.little at windriver.com] > Sent: Wednesday, December 12, 2018 11:41 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] changing PLATFORM_RELEASE > > Or master branch builds are still reporting PLATFORM_RELEASE=18.10. This value is also known as the SW_VERSION. > > To avoid confusion, I propose the following.  After we cut a release branch, should always advance the master branch value by at least +1 month beyond that release, but never greater than -1 month to the next projected release date. 
> > I propose changing the current master branch to PLATFORM_RELEASE=19.01 > > Scott > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Wed Dec 12 21:03:32 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 12 Dec 2018 15:03:32 -0600 Subject: [Starlingx-discuss] changing PLATFORM_RELEASE In-Reply-To: <22e2fdad-2faa-d64d-4f81-657ac9628313@windriver.com> References: <9A85D2917C58154C960D95352B22818BB1ED3052@fmsmsx117.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA495AF1@ALA-MBD.corp.ad.wrs.com> <22e2fdad-2faa-d64d-4f81-657ac9628313@windriver.com> Message-ID: On Wed, Dec 12, 2018 at 2:31 PM Scott Little wrote: > Until we are at least RC quality, we shouldn't set PLATFORM_RELEASE to > the final date. > > Also keep in mind that this value should never go backwards. So I want > to leave some wiggle room, just in case (however unlikely) we move the > release date forward (perhaps 19.03). This is essentially the same problem Scott and I have been talking about re naming the milestones. The downside to using date-based release names is when the value of $NEXT_RELEASE is unknown it is hard to use it in places like milestone names. Would it be too confusing to just use values as of the date we need the value, ie milestone 1 is based on 2018.01 no matter when the release actually occurs? Would this be too confusing to users/deployers/developers? So we would have this: milestone 1 in Jan 2019: tag=2019.01.b1 PLATFORM_RELEASE=19.01 or 19.01.b1 or similar milestone 2 in Mar 2019: tag=2019.01.b2 or 2019.03.b1 PLATFORM_RELEASE=19.01 or 19.01.b2 or similar dt -- Dean Troyer dtroyer at gmail.com From scott.little at windriver.com Wed Dec 12 21:25:38 2018 From: scott.little at windriver.com (Scott Little) Date: Wed, 12 Dec 2018 16:25:38 -0500 Subject: [Starlingx-discuss] changing PLATFORM_RELEASE In-Reply-To: References: <9A85D2917C58154C960D95352B22818BB1ED3052@fmsmsx117.amr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA495AF1@ALA-MBD.corp.ad.wrs.com> <22e2fdad-2faa-d64d-4f81-657ac9628313@windriver.com> Message-ID: I like the idea of the milestones using a date that match the interim PLATFORM_RELEASE value. I'd like to avoid deviating from the YY.MM format of PLATFORM_RELEASE anytime soon.  I think there is a fair bit of infrastructure, including our internal test automation, that would be sensitive to such a change. On 18-12-12 04:03 PM, Dean Troyer wrote: > On Wed, Dec 12, 2018 at 2:31 PM Scott Little wrote: >> Until we are at least RC quality, we shouldn't set PLATFORM_RELEASE to >> the final date. >> >> Also keep in mind that this value should never go backwards. So I want >> to leave some wiggle room, just in case (however unlikely) we move the >> release date forward (perhaps 19.03). > This is essentially the same problem Scott and I have been talking > about re naming the milestones. > > The downside to using date-based release names is when the value of > $NEXT_RELEASE is unknown it is hard to use it in places like milestone > names. > > Would it be too confusing to just use values as of the date we need > the value, ie milestone 1 is based on 2018.01 no matter when the > release actually occurs? 
Would this be too confusing to > users/deployers/developers? > > So we would have this: > milestone 1 in Jan 2019: > tag=2019.01.b1 > PLATFORM_RELEASE=19.01 or 19.01.b1 or similar > > milestone 2 in Mar 2019: > tag=2019.01.b2 or 2019.03.b1 > PLATFORM_RELEASE=19.01 or 19.01.b2 or similar > > dt >

From bruce.e.jones at intel.com Wed Dec 12 22:26:38 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 12 Dec 2018 22:26:38 +0000 Subject: [Starlingx-discuss] Issue from internal Intel user Message-ID: <9A85D2917C58154C960D95352B22818BB1ED31AF@fmsmsx117.amr.corp.intel.com>

Yatindra is trying to deploy StarlingX in Simplex mode. I got the following email today. Can we please give him some guidance as to how to get the system up and running? Thank you!

brucej

Simplex: Controller-0 configuration was successful.
- I was able to provision the controller-0 host as per the StarlingX Simplex installation guide. But after unlocking controller-0 with the command "[wrsroot at controller-0 ~(keystone_admin)]$ system host-unlock controller-0", the system rebooted and was unable to get to the login screen, giving an error like the one in the attached image (stx_reboot_connect_issue). It reads "State change failed: device is opened by someone".
- The reboot was attempted multiple times; only after 5 tries did it reach the login screen. When I then checked the status of controller-0 with "[wrsroot at controller-0 ~(keystone_admin)]$ system host-list", the availability column says degraded. See the last command in the link.
- I tried re-installing, but faced another issue: the LVM storage backend remained in the configuring state for a long time, hence failing to provision controller-0. Please look at the attached image (stxissue_simplex_lvm_configuring), which shows the output of the command "[wrsroot at controller-0 ~(keystone_admin)]$ system storage-backend-list", for a better understanding. When I ran the command "[wrsroot at controller-0 ~(keystone_admin)]$ system storage-backend-add lvm -s cinder --confirmed" there was some issue related to connecting to the controller OAM interface. As the LVM storage backend remained in the configuring state for a long time, I tried to delete it, which was not possible, and it did not allow me to create another one either.

-------------- next part -------------- An HTML attachment was scrubbed... URL:

From changcheng.liu at intel.com Thu Dec 13 02:04:10 2018 From: changcheng.liu at intel.com (Liu, Changcheng) Date: Thu, 13 Dec 2018 02:04:10 +0000 Subject: Re: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/12 In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F35DFB2B8@SHSMSX104.ccr.corp.intel.com> Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F15B3A@SHSMSX104.ccr.corp.intel.com>

Hi Dean,

1. [Dean] Notice how the second paragraph of the original message is missing from the new commit.
[Changcheng] The second paragraph isn't missing from the new patch. You can find both "RevertMe:" and "This commit should be reverted when we decide to enable the ceph user."

2. [Dean] The references to the original commit are not available externally.
[Changcheng] Yes. The references should be removed from the commit messages in the end. Originally, I wanted both Intel and WindRiver engineers to be able to find where each patch was ported from during the initial porting stage.

3. [Dean] It also seems like it would be easier to review and merge these in smaller batches.
[Changcheng] Yes. I'm syncing with WindRiver engineers to check whether we could merge some patches first, to avoid repeated rebases and reviews.

4.
[Dean] There is also no reference in either the commit messages or the PR description to a Storyboard story or task or any further documentation to why this work is being done.
[Changcheng] I'll add the related information to the PR message if we agree to merge part of the patches first.

5. [Dean] Only the git commit messages are guaranteed to stay with the code changes.
[Changcheng] We'll provide a document about the stx-ceph upgrade once it has been upgraded successfully.

6. [Dean] I had asked for the relevant information to be included in the individual commit messages and I still do not see that being done. We are losing valuable information and traceability for why we are making these changes to upstream.
[Changcheng] Personally, I think I've kept most of the original commit message in the newly ported patches. Some huge patches were divided into small ones (if you look at the original patch, it was merged from several patches, which makes it hard to maintain). For the PR info, we can give more detail according to your requirements.

B.R. Changcheng

-----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Thursday, December 13, 2018 12:20 AM To: Xie, Cindy Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/12

On Wed, Dec 12, 2018 at 8:47 AM Xie, Cindy > wrote: > 3. Ceph upgrade status (Vivian/Dehao/Changcheng) Dean merged all > the build process patches (on staging stx-ceph) and Changcheng finished > rebasing all patches according to the latest stx-ceph (stx/v13.2.2). > totally 17 PRs New PR: https://github.com/starlingx-staging/stx-ceph/pull/18 pending for review.

I had asked for the relevant information to be included in the individual commit messages and I still do not see that being done. We are losing valuable information and traceability for why we are making these changes to upstream. Let's look at an example: In [0] we have the following commit message:

----------
Port: RevertMe: Use user root to run ceph services

Avoid debugging file permission issues when upgrading to Jewel. This is done to provide the same setup as Hammer in StarlingX.

This commit should be reverted when we decide to enable the ceph user.

Port From: Ceph Rebase: Disable ceph user/group for Hammer equivalence.patch
0001____src_ceph-disk_ceph_disk_main.py.patch
0002____src_init-ceph.in.patch
0003____wrs_ceph.conf.patch

Signed-off-by: Robert Church >
Signed-off-by: Daniel Badea >
Signed-off-by: Changcheng Liu >
Signed-off-by: Dehao Shang >
----------

This appears to correspond to the original R5 commit c87de31f that has the following commit message:

----------
Ceph Rebase: Disable ceph user/group for Hammer equivalence

Use default (root) user to run ceph services instead of dedicated (ceph) user and group to avoid debugging file permission issues while upgrading to Jewel. This is done to provide the same setup as Hammer in TiS.

This commit should be reverted when we decide to enable the ceph user.
----------

Notice how the second paragraph of the original message is missing from the new commit. Also, the references to the original commit are not available externally, I have no idea what "0001____src_ceph-disk_ceph_disk_main.py.patch" refers to. So even for someone with access to the original commit I have to do text string searches to attempt to locate it in the R5 repo.

It also seems like it would be easier to review and merge these in smaller batches.
One big PR with 35 commits takes time to review, and when a single change needs to be made we have to re-review looking for the changes.

There is also no reference in either the commit messages or the PR description to a Storyboard story or task or any further documentation to why this work is being done.

Think of what you have available while doing this rebase/upgrade and imagine what the next person doing the next rebase/upgrade will want to see and make sure all of that is present in the commit messages. The GitHub PR may or may not be available at that time, only the git commit messages are guaranteed to stay with the code changes.

dt

[0] https://github.com/starlingx-staging/stx-ceph/pull/18/commits/552736f77f39897922e562a8477d19ab4e47a39f

-- Dean Troyer dtroyer at gmail.com

From shuicheng.lin at intel.com Thu Dec 13 05:00:23 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 13 Dec 2018 05:00:23 +0000 Subject: Re: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch In-Reply-To: References: Message-ID: <9700A18779F35F49AF027300A49E7C765FE55E2F@SHSMSX101.ccr.corp.intel.com>

Hi Dean, the centos76 branch fails to build because the qemu patch in master has not been merged to the feature branch. I tried to use "git rebase master" to do the rebase, but it seems I don't have the permission for it. Could you help merge the stx-integ master changes into the branch? Thanks.

" To ssh://slin14 at review.openstack.org:29418/openstack/stx-integ.git
! [remote rejected] HEAD -> refs/publish/f/centos76/centos76 (you are not allowed to upload merges) "

Best Regards Shuicheng

-----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Tuesday, December 11, 2018 3:02 AM To: McKenna, Jason Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch

On Mon, Dec 10, 2018 at 7:28 AM McKenna, Jason wrote: > I'm starting to see some code reviews come through which have to do with the rebase to CentOS 7.6. When these come through, please ensure they are against the feature branch and not the "master" branch. I do not see the feature branch yet created, but I'm assuming it will be called "f/centos76". There is an AR to create the branch pending (meeting minutes "Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5").

The f/centos76 branch has been created in stx-integ stx-root stx-tools stx-upstream.

dt -- Dean Troyer dtroyer at gmail.com

_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From cesar.lara at intel.com Thu Dec 13 05:15:44 2018 From: cesar.lara at intel.com (Lara, Cesar) Date: Thu, 13 Dec 2018 05:15:44 +0000 Subject: [Starlingx-discuss] [build] [meetings] Build team meeting Agenda 12/13/2018 Message-ID:

Build team meeting Agenda 12/13/2018
- bug triage
- ISO files retention follow up
- overall picture for next gen build system
- holiday schedule
- Opens

Regards
Cesar Lara
Software Engineering Manager
Open Source Technology Center

Sent from my mobile phone

-------------- next part -------------- An HTML attachment was scrubbed...
URL:

From cindy.xie at intel.com Thu Dec 13 06:35:46 2018
From: cindy.xie at intel.com (Xie, Cindy)
Date: Thu, 13 Dec 2018 06:35:46 +0000
Subject: Re: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch
In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE55E2F@SHSMSX101.ccr.corp.intel.com>
References: <9700A18779F35F49AF027300A49E7C765FE55E2F@SHSMSX101.ccr.corp.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35DFC47D@SHSMSX104.ccr.corp.intel.com>

+ Saul who has the right to do so as well.

-----Original Message-----
From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com]
Sent: Thursday, December 13, 2018 1:00 PM
To: Dean Troyer
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch

Hi Dean,
Centos76 branch is failed to build due to qemu patch in master is not merged to feature branch.
I try to use "git rebase master" to do the rebase. But it seems I don't have the grant for it.
Could you help rebase stx-integ master change to branch? Thanks.
"
To ssh://slin14 at review.openstack.org:29418/openstack/stx-integ.git
! [remote rejected] HEAD -> refs/publish/f/centos76/centos76 (you are not allowed to upload merges)
"
Best Regards
Shuicheng

-----Original Message-----
From: Dean Troyer [mailto:dtroyer at gmail.com]
Sent: Tuesday, December 11, 2018 3:02 AM
To: McKenna, Jason
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch

On Mon, Dec 10, 2018 at 7:28 AM McKenna, Jason wrote:
> I'm starting to see some code reviews come through which have to do with the rebase to CentOS 7.6. When these come through, please ensure they are against the feature branch and not the "master" branch. I do not see the feature branch yet created, but I'm assuming it will be called "f/centos76". There is an AR to creates the branch pending (meeting minutes "Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/5").

The f/centos76 branch has been created in stx-integ stx-root stx-tools stx-upstream.

dt
--
Dean Troyer
dtroyer at gmail.com

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From quickconvey at gmail.com Thu Dec 13 07:27:09 2018
From: quickconvey at gmail.com (Quick Convey)
Date: Thu, 13 Dec 2018 12:57:09 +0530
Subject: [Starlingx-discuss] starlingx-staging projects and OpenStack projects future plan
Message-ID:

Hi,
Is there any plan to merge "stx-neutron" and "openstack/neutron"? I would like to know the future plan for the projects under https://github.com/starlingx-staging
As per my understanding, StarlingX will work only with the OpenStack projects under https://github.com/starlingx-staging, is that right?
Where can I find the additional commits made in "stx-neutron" for the edge use-case? Is there any document?
Thanks,

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yong.hu at intel.com Thu Dec 13 07:40:24 2018
From: yong.hu at intel.com (Hu, Yong)
Date: Thu, 13 Dec 2018 07:40:24 +0000
Subject: Re: [Starlingx-discuss] starlingx-staging projects and OpenStack projects future plan
Message-ID: <64AC788C-7B84-416B-BCDD-C863574F7C0E@intel.com>

AFAIK, the OpenStack projects on starlingx-staging *will* reach end of life someday, *AFTER* StarlingX moves to the upstream versions (https://github.com/openstack), starting from the "Stein" release. The rough timeline would be May 2019 or so.
For the diffs between projects such as neutron and their upstream counterparts, StarlingX (and the related OpenStack teams) is proposing patches upstream progressively, and presumably some of the patches will be accepted eventually.

Regards,
Yong

From: Quick Convey
Date: Thursday, 13 December 2018 at 3:28 PM
To: "starlingx-discuss at lists.starlingx.io"
Subject: [Starlingx-discuss] starlingx-staging projects and OpenStack projects future plan

Hi,
Is there any plan to merge "stx-neutron" and "openstack/neutron"? I would like to know the future plan for the projects under https://github.com/starlingx-staging
As per my understanding, StarlingX will work only with the OpenStack projects under https://github.com/starlingx-staging, is that right?
Where can I find the additional commits made in "stx-neutron" for the edge use-case? Is there any document?
Thanks,

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From xuyun at jxresearch.com Thu Dec 13 10:45:16 2018
From: xuyun at jxresearch.com (Xu Yun)
Date: Thu, 13 Dec 2018 18:45:16 +0800
Subject: [Starlingx-discuss] ovs-vswitchd consuming 100% CPU
Message-ID: <9992DAA7-F48C-4A83-B62A-83887FC015E5@jxresearch.com>

Hi,
I've managed to deploy an all-in-one node using the ISO downloaded from http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/, thank you for your help. I noticed that ovs-vswitchd is consuming 100% CPU all the time; is that normal for my configuration? My machine has two CPUs, 64G of memory, and 4 Intel I350 NICs.

Br,
Xu Yun

From volker.von.hoesslin at gmx.de Thu Dec 13 13:31:06 2018
From: volker.von.hoesslin at gmx.de (volker.von.hoesslin at gmx.de)
Date: Thu, 13 Dec 2018 14:31:06 +0100
Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181211
In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C7A83C@FMSMSX108.amr.corp.intel.com>
References: <8557B550001AFB46A43A0CCC314BF85153C723EB@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C7A83C@FMSMSX108.amr.corp.intel.com>
Message-ID:

An HTML attachment was scrubbed...
URL:

From cesarlarag at gmail.com Thu Dec 13 14:01:59 2018
From: cesarlarag at gmail.com (Cesar Lara)
Date: Thu, 13 Dec 2018 08:01:59 -0600
Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181211
In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C723EB@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C7A83C@FMSMSX108.amr.corp.intel.com>
Message-ID:

Not entirely true, StarlingX has an update service so you can upgrade your deployment at any given time; I suggest you follow the stx-update project to get an idea of how that works. If you have a deployment based on the latest stable release you are good to go. These daily builds are getting patches and code for features that are not ready for production nor fully tested until the next release cycle. We are basically fixing bugs being introduced as part of the normal development process.

CL
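As background on the update service Cesar mentions: applying an update is driven from the active controller with the sw-patch CLI from the stx-update project. A rough sketch of the flow follows; the patch file name and ID are hypothetical, and the exact commands may differ between releases:

  sudo sw-patch upload /home/wrsroot/patches/EXAMPLE_0001.patch
  sudo sw-patch apply EXAMPLE_0001
  sudo sw-patch query                       # confirm the patch state
  sudo sw-patch host-install controller-0   # repeat for each node

Note that the patch files come from a dedicated patch build rather than from a later ISO, which is the gap discussed further down in this thread.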
On Thu, Dec 13, 2018, 7:31 AM volker.von.hoesslin at gmx.de wrote:

> ouch, that's not so good :( what are the plans to offer updates via
> patches here?
>
> volker...
>
> *Sent:* Wednesday, December 12, 2018 at 17:57
> *From:* "Alonso, Juan Carlos"
> *To:* "volker.von.hoesslin at gmx.de", starlingx
> *Subject:* RE: [Starlingx-discuss] FW: Sanity Test - ISO 20181211
>
> I think you need to get an updated ISO and deploy the system again.
> Until now there is no way to update the installation while it is running.
>
> Regards.
> Juan Carlos Alonso
>
> *From:* volker.von.hoesslin at gmx.de [mailto:volker.von.hoesslin at gmx.de]
> *Sent:* Wednesday, December 12, 2018 6:32 AM
> *To:* starlingx
> *Subject:* Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181211
>
> Regardless of the fact that this build failed: if I already have an
> installation running, how do I get it up to date? Are there patch files
> somewhere I don't know anything about?
>
> volker
>
> *Sent:* Tuesday, December 11, 2018 at 22:51
> *From:* "Alonso, Juan Carlos"
> *To:* starlingx
> *Subject:* [Starlingx-discuss] FW: Sanity Test - ISO 20181211
>
> Status of the Sanity Test for last CENGN ISO: *bootimage.iso from 2018-Dec-11* (link)
>
> Sanity Test is executed in a *Virtual Environment*
>
> Status: *RED*
>
> *Simplex*
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 18 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ]
>
> *Duplex*
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 19 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]
>
> *Multinode Controller Storage*
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 19 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]
>
> *Multinode Dedicated Storage*
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 19 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]
>
> ------------------------------------------------------------------
>
> This issue was found by our Robot test framework. During
> config_controller, the suite copies a config file from the host to the
> StarlingX system. Robot uses an SSHLibrary keyword that relies on the
> sftp service to transfer the file. The transfer failed because the sftp
> service is not working on the system.
>
> Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054
>
> Regards.
> Juan Carlos Alonso
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From juan.carlos.alonso at intel.com Thu Dec 13 15:32:38 2018 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Thu, 13 Dec 2018 15:32:38 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C7CA4F@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-12 (link) Sanity Test is executed in a Virtual Environment Status: RED Simplex Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 18 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ] Duplex Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 19 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] Multinode Controller Storage Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 19 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] Multinode Dedicated Storage Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 19 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] ------------------------------------------------------------------ SFTP service still not working. Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From ran1.an at intel.com Thu Dec 13 16:06:20 2018 From: ran1.an at intel.com (An, Ran1) Date: Thu, 13 Dec 2018 16:06:20 +0000 Subject: [Starlingx-discuss] removing link capacity configuration during controller config In-Reply-To: References: <9BAB5B7CAF57C3459E4636391F1071CE052844B7@shsmsx102.ccr.corp.intel.com> <5C1DF956-1A3E-44C3-9E8F-09008C6A6A2B@windriver.com> <9BAB5B7CAF57C3459E4636391F1071CE052845B9@shsmsx102.ccr.corp.intel.com> Message-ID: <9BAB5B7CAF57C3459E4636391F1071CE05284785@shsmsx102.ccr.corp.intel.com> Hi Matt Thanks for your advises and I removed the related checks and warnings. After tests the latest version is totally compatible with old *.ini configure file. My patch is https://review.openstack.org/#/c/625018/, any further comments are welcome~ Thanks Ran From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Thursday, December 13, 2018 12:17 AM To: An, Ran1 ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] removing link capacity configuration during controller config Hello Ran, We don’t normally maintain special handling for old parameters if they can be safely ignored. I recommend removing the checks and warnings related to this parameter. Regards, Matt From: "An, Ran1" > Date: Wednesday, December 12, 2018 at 11:07 AM To: "Peters, Matt" >, "starlingx-discuss at lists.starlingx.io" > Subject: RE: [Starlingx-discuss] removing link capacity configuration during controller config Hi Matt For considering of compatibility with old *.ini configure files, I kept link speed checks/warnings part in config_validator. It could be removed if nobody need the old format configure file. Hi all If anyone need to keep link speed option in *.ini configure file, please let me know. 
Thanks Ran From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Wednesday, December 12, 2018 8:31 PM To: An, Ran1 >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] removing link capacity configuration during controller config The link speed checks/warnings should also be removed when the input parameter is removed since it will no longer be provided during config_controller, and therefore not available for validation. Regards, Matt From: "An, Ran1" > Date: Wednesday, December 12, 2018 at 3:57 AM To: "starlingx-discuss at lists.starlingx.io" >, "Peters, Matt" > Subject: [Starlingx-discuss] removing link capacity configuration during controller config Hi All: I am working on bug [1] removing “link capacity of interface” configuration in config_controller, config_validator and config_gui. Reason: the link capacity is set to 10G only when doing traffic control[2], it is unnecessary to input ‘link capacity’ during controller configure. Impact: 1) all ‘link capacity’ related configure input option is removed. 2) it is compatibility with old *.ini configure file: when “INTERFACE_LINK_CAPACITY=10000”, it is compatible. For other values, you will see following errors: “Invalid link-capacity value for XXX”. Relax. Just edit “INTERFACE_LINK_CAPACITY” to 10000 and retry “config_controller”. Feel free to reply if you any idea about this. Any response is welcome. [1] https://bugs.launchpad.net/starlingx/+bug/1805320 [2] https://storyboard.openstack.org/#!/story/2003087 Thanks Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Thu Dec 13 16:36:11 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 13 Dec 2018 10:36:11 -0600 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/12 In-Reply-To: <0D7994A90DD70040A9F5E77C4D23C57D50F15B3A@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35DFB2B8@SHSMSX104.ccr.corp.intel.com> <0D7994A90DD70040A9F5E77C4D23C57D50F15B3A@SHSMSX104.ccr.corp.intel.com> Message-ID: On Wed, Dec 12, 2018 at 8:06 PM Liu, Changcheng wrote: > 2. *[Dean] **the references to the original commit are not available > externally* > [Changcheng] Yes. The reference should be removed from commit > message at last. > Originally, I want both Intel & WindRiver engineers could find > where the patches are ported from which place at the initial porting stage. > [dt] The reference should NOT be removed. If a patch is being rebased the reference to the original patch must be preserved. > 3. *[Dean] **It also seems like it would be easier to review and merge > these in smaller batches.* > [Changcheng] Yes. I’m syncing with WindRiver engineers to check > whether we could merge some patches firstly to avoid times of rebase and > review. > [dt] I am not talking about merging commits, I am talking about splitting the existing commits into multiple Github PRs. Please do not merge commits that are not directly related to each other.. > 4. *[Dean] **There is also no reference in either the commit messages > or the PR description to a Storyboard story or task or any further > documentation to why this work is being done.* > [Changcheng] I’ll add related information in PR message if we > agree with merge part of patches firstly. > Please put it into the commit messages. The PR text is not part of the git repo and is lost github is unavailable. > 5. 
*[Dean] **only the git commit messages are guaranteed to stay with > the code changes.* > [Changcheng] We’ll give document about stx-ceph upgrade once it’s > been upgraded successfully. > That does not address the need to put good information into the individual commit messages. > 6. *[**Dean**]* *I had asked for the relevant information to be included > in the individual commit messages and I still do not see that being done. > We are losing valuable information and traceability for why we are making > these changes to upstream.* > [Changcheng] Personally, I think I’ve kept most part of original > commit message in the new ported patches. Some huge patch is divided into > small patches(If you look the original patch, it’s merged by several > patches. It’s hard to be maintained). For PR info, we could give more > detail info according to your requirement. > Thank you for splitting up previously squashed patches. Please do not confuse PR information (the text that is part of a Github PR) with a commit message (the text that is part of a git commit). Github PRs are not the place of record for us. Information that does not fit into a git commit message should be in Storyboard or Launchpad, the two places we keep track of those things. But more importantly, things that the next team that looks at this code will want to have without access to Github or Storyboard or Launchpad needs to be in the commit message. This is exactly the problem we have with the existing patches agains upstream code where we have commitmessages sometimes with only a link to a ticketing system that we do not have access to. I do not want that to continue. dt -- Dean Troyer dtroyer at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Thu Dec 13 17:06:35 2018 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 13 Dec 2018 09:06:35 -0800 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C7CA4F@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C7CA4F@FMSMSX108.amr.corp.intel.com> Message-ID: I am glad to see that our testing is actually finding failures! We need to be a little more proactive on resolving issues such as these, this is the second day that this failure is still occurring. **Zhipeng** Please take a look at this issue and if needed revert the change. Sau! 
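To make the commit-message guidance in this thread concrete, here is a sketch of a ported commit message that would keep the traceability Dean is asking for. It is only an illustration: the trailer names are one possible convention rather than an agreed standard, and the Story number is a placeholder.

----------
Ceph Rebase: Disable ceph user/group for Hammer equivalence

Use default (root) user to run ceph services instead of dedicated
(ceph) user and group to avoid debugging file permission issues
while upgrading to Jewel. This is done to provide the same setup
as Hammer in StarlingX.

This commit should be reverted when we decide to enable the ceph user.

Ported-from: R5 commit c87de31f ("Ceph Rebase: Disable ceph
user/group for Hammer equivalence")
Story: 20030XX (placeholder; use the real Storyboard story)
Signed-off-by: ...
----------

The point is that everything needed to understand and re-port the change survives in the git history even if the GitHub PR and the internal repositories disappear.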
On 12/13/18 7:32 AM, Alonso, Juan Carlos wrote:
> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-12 (link)
>
> Sanity Test is executed in a Virtual Environment
>
> Status: *RED*
>
> *Simplex*
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 18 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ]
>
> *Duplex*
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 19 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]
>
> *Multinode Controller Storage*
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 19 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]
>
> *Multinode Dedicated Storage*
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 19 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]
>
> ------------------------------------------------------------------
>
> SFTP service still not working.
>
> Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054
>
> Regards.
> Juan Carlos Alonso
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From juan.carlos.alonso at intel.com Thu Dec 13 17:12:21 2018
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Thu, 13 Dec 2018 17:12:21 +0000
Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181212
In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C7CA4F@FMSMSX108.amr.corp.intel.com>
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C7CAAC@FMSMSX108.amr.corp.intel.com>

config_controller can be applied manually; after that, Provisioning and Sanity can be launched without issues.
So I am working on a little fix to our test framework, replacing the "SSHLibrary.Put File" keyword with another way to transfer the ini files so that it does not use the SFTP service. This will be a temporary fix, since the sftp service must be available to use the test suite in the normal way.

Regards.
Juan Carlos Alonso

-----Original Message-----
From: Saul Wold [mailto:sgw at linux.intel.com]
Sent: Thursday, December 13, 2018 11:07 AM
To: Xie, Cindy ; starlingx ; Liu, ZhipengS
Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181212

I am glad to see that our testing is actually finding failures! We need to be a little more proactive on resolving issues such as these, this is the second day that this failure is still occurring.

**Zhipeng** Please take a look at this issue and if needed revert the change.

Sau!
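For what it is worth, one way to implement the workaround Juan Carlos describes is to stream the file over a plain ssh exec channel instead of sftp. A sketch, reusing the host and user from this thread (the ini file name is made up):

  cat stx-config.ini | ssh wrsroot@10.10.10.111 'cat > /home/wrsroot/stx-config.ini'

On the OpenSSH shipped with CentOS 7, scp also bypasses the sftp subsystem (it uses the older scp protocol over an exec channel), so either route should keep working while sftp is broken.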
On 12/13/18 7:32 AM, Alonso, Juan Carlos wrote: > Status of the Sanity Test for last CENGN ISO: /bootimage.iso from > 2018-Dec-12 /(link > 212T152535Z/outputs/iso/>) > > Sanity Test is executed in a /_Virtual Environment_/ > > Status: *RED* > > ** > > *Simplex* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 18 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ] > > *Duplex* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > *Multinode Controller Storage* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > *Multinode Dedicated Storage* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > ------------------------------------------------------------------ > > SFTP service still not working. > > Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 > > Regards. > > Juan Carlos Alonso > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From dtroyer at gmail.com Thu Dec 13 17:13:20 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 13 Dec 2018 11:13:20 -0600 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C723EB@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C7A83C@FMSMSX108.amr.corp.intel.com> Message-ID: On Thu, Dec 13, 2018 at 8:02 AM Cesar Lara wrote: > Not entirely true, StarlingX has an update service so you can upgrade your deployment at any given time, I suggest you follow the stx-update project to get an idea of how that works. I do not think that the process of creating update/upgrade packages is documented publicly yet. This service uses specifically built files for this purpose, it does not use a later ISO build. > If you have a deployment based on the latest stable release you are good to go. These daily builds are getting patches and code for features that are not ready for production nor fully tested until the next release cycle. We are basically fixing bugs being introduced as part of the normal development process. As I understand the process, updates to the internal package repo need to be loaded onto the controller, then those updates can be schedules out to the various nodes. It is the creation of those updates that I think we are missing here. I am not aware of a documented way to take a later ISO and apply that directly to achieve the same result. I would love to be wrong about that... 
dt -- Dean Troyer dtroyer at gmail.com From erich.cordoba.malibran at intel.com Thu Dec 13 17:35:04 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Thu, 13 Dec 2018 17:35:04 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C723EB@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C7A83C@FMSMSX108.amr.corp.intel.com> Message-ID: On Thu, 2018-12-13 at 11:13 -0600, Dean Troyer wrote: > On Thu, Dec 13, 2018 at 8:02 AM Cesar Lara > wrote: > > Not entirely true, StarlingX has an update service so you can > > upgrade your deployment at any given time, I suggest you follow the > > stx-update project to get an idea of how that works. > > I do not think that the process of creating update/upgrade packages > is > documented publicly yet. This service uses specifically built files > for this purpose, it does not use a later ISO build. > > > If you have a deployment based on the latest stable release you > > are good to go. These daily builds are getting patches and code for > > features that are not ready for production nor fully tested until > > the next release cycle. We are basically fixing bugs being > > introduced as part of the normal development process. > > As I understand the process, updates to the internal package repo > need > to be loaded onto the controller, then those updates can be schedules > out to the various nodes. It is the creation of those updates that I > think we are missing here. I am not aware of a documented way to > take > a later ISO and apply that directly to achieve the same result. > We don't have a way to provide updates for the people that already installed a StarlingX instance. I would recommend to use the r/2018.10 ISO, the master branch is on heavy development and prone to bugs. However, if we want to fix it, I think we need to : 1) Provide the packages with updates: we should pay special attention into versioning of every package. 2) Find a way to install the updates and download them to be served into the additional controllers and computes. (not sure if this is what stx-update does) > I would love to be wrong about that... > > dt > From dtroyer at gmail.com Thu Dec 13 17:36:34 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 13 Dec 2018 11:36:34 -0600 Subject: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE55E2F@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FE55E2F@SHSMSX101.ccr.corp.intel.com> Message-ID: On Wed, Dec 12, 2018 at 11:00 PM Lin, Shuicheng wrote: > Centos76 branch is failed to build due to qemu patch in master is not merged to feature branch. > I try to use "git rebase master" to do the rebase. But it seems I don't have the grant for it. This is limited to the starlingx-release group in Gerrit. > Could you help rebase stx-integ master change to branch? Done. https://review.openstack.org/#/c/625068/ dt -- Dean Troyer dtroyer at gmail.com From Matt.Peters at windriver.com Thu Dec 13 19:24:57 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Thu, 13 Dec 2018 19:24:57 +0000 Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming In-Reply-To: References: <2C44D9ED-9F61-4223-A1DD-70FEE88DFA30@windriver.com> Message-ID: <04FB2E62-D590-4365-B0AE-F8FE5ABDB21D@windriver.com> Hi Chenjie, This looks good. I think it is ready to be posted for review. 
Regards, Matt From: "Xu, Chenjie" Date: Monday, December 10, 2018 at 2:11 AM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io" Subject: RE: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Matt, I am sorry for misunderstanding your meaning. The StarlingX Distributed Cloud use-case has been included in the RFE. To make the requirement clear, I update the RFE. Could you please help review and comment? Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Saturday, December 8, 2018 1:22 AM To: Xu, Chenjie Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hello Chenjie, I wasn’t planning on providing additional use-case information. I just wanted to make sure the StarlingX Distributed Cloud use-case was included in the RFE. Regards, Matt From: "Xu, Chenjie" > Date: Tuesday, December 4, 2018 at 10:16 PM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: RE: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Matt, Thank you for your reply! Looking forward to the additional use-case information. Best Regards From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Tuesday, December 4, 2018 9:12 PM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Chenjie, I would add additional use-case information to help with the justification for adding this capability. The detailed quota information is used within the StarlingX distributed cloud solution. The quota information for a given project/user is aggregated across all sub-clouds, therefore having an efficient mechanism to retrieve the quota details of all resources is required. Regards, Matt From: "Xu, Chenjie" > Date: Monday, December 3, 2018 at 3:53 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Matt, The RFE for patch 71c07d7 has been drafted and is attached. Could you please help review and comment? Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From vm.rod25 at gmail.com Thu Dec 13 19:25:32 2018
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Thu, 13 Dec 2018 13:25:32 -0600
Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181212
In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C7CA4F@FMSMSX108.amr.corp.intel.com>
References: <8557B550001AFB46A43A0CCC314BF85153C7CA4F@FMSMSX108.amr.corp.intel.com>
Message-ID:

On Thu, Dec 13, 2018 at 9:33 AM Alonso, Juan Carlos wrote:
> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-12 (link)
>
> Sanity Test is executed in a Virtual Environment
>
> Status: RED
>
> Simplex
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 18 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ]
>
> Duplex
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 19 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]
>
> Multinode Controller Storage
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 19 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]
>
> Multinode Dedicated Storage
> Setup        03 TCs [PASS] | 01 TCs [FAIL]
> Provisioning 00 TCs [PASS] | 01 TCs [FAIL]
> Sanity       00 TCs [PASS] | 19 TCs [FAIL]
> TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ]
>
> ------------------------------------------------------------------
>
> SFTP service still not working.
>
> Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054

Having a Launchpad created for the issue is a great proactive step. Thanks for this great job.

> Regards.
> Juan Carlos Alonso
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From vm.rod25 at gmail.com Thu Dec 13 19:28:48 2018
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Thu, 13 Dec 2018 13:28:48 -0600
Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181211
In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C723EB@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C7A83C@FMSMSX108.amr.corp.intel.com>
Message-ID:

On Thu, Dec 13, 2018 at 11:35 AM Cordoba Malibran, Erich wrote:
> On Thu, 2018-12-13 at 11:13 -0600, Dean Troyer wrote:
> > On Thu, Dec 13, 2018 at 8:02 AM Cesar Lara wrote:
> > > Not entirely true, StarlingX has an update service so you can
> > > upgrade your deployment at any given time, I suggest you follow the
> > > stx-update project to get an idea of how that works.
> >
> > I do not think that the process of creating update/upgrade packages is
> > documented publicly yet. This service uses specifically built files
> > for this purpose, it does not use a later ISO build.
> >
> > > If you have a deployment based on the latest stable release you
> > > are good to go. These daily builds are getting patches and code for
> > > features that are not ready for production nor fully tested until
> > > the next release cycle. We are basically fixing bugs being
> > > introduced as part of the normal development process.
> >
> > As I understand the process, updates to the internal package repo need
> > to be loaded onto the controller, then those updates can be scheduled
> > out to the various nodes. It is the creation of those updates that I
> > think we are missing here. I am not aware of a documented way to take
> > a later ISO and apply that directly to achieve the same result.
>
> We don't have a way to provide updates for the people that already
> installed a StarlingX instance.

Then I had this wrong; I strongly believed we had stx-update for this reason. Can someone point me to external documentation for stx-update?

> I would recommend to use the r/2018.10
> ISO, the master branch is on heavy development and prone to bugs.
>
> However, if we want to fix it, I think we need to:
>
> 1) Provide the packages with updates: we should pay special attention
> to the versioning of every package.

Don't we provide a list of RPMs with every new build?

> 2) Find a way to install the updates and download them to be served
> to the additional controllers and computes. (not sure if this is what
> stx-update does)

Me neither; I think we might need a good educational mail about what stx-update does.

> > I would love to be wrong about that...
> > dt

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Matt.Peters at windriver.com Thu Dec 13 19:10:52 2018
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Thu, 13 Dec 2018 19:10:52 +0000
Subject: [Starlingx-discuss] Deployment Improvements Proposal
Message-ID:

Hello,
Attached are the slides I presented during the TSC call on Dec 13, 2018 for the proposed improvements to the StarlingX initial bootstrap and system inventory. As indicated on the call, a detailed stx-spec will follow, but I wanted to share the high-level changes being proposed before the arrival of the spec to get some early feedback.

Regards, Matt

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: StarlingX Deployment Improvements.pptx
Type: application/vnd.openxmlformats-officedocument.presentationml.presentation
Size: 771773 bytes
Desc: StarlingX Deployment Improvements.pptx
URL:

From Marvin.Huang at windriver.com Thu Dec 13 22:08:02 2018
From: Marvin.Huang at windriver.com (Huang, Marvin)
Date: Thu, 13 Dec 2018 22:08:02 +0000
Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support
In-Reply-To: <74D9C1EDDC44EF468303629CF9A2832C9CE1633E@ALA-MBD.corp.ad.wrs.com>
References: <9E7365F4-4B68-4DAB-AF76-057C7D2241D3@intel.com> <74D9C1EDDC44EF468303629CF9A2832C9CE1633E@ALA-MBD.corp.ad.wrs.com>
Message-ID: <74D9C1EDDC44EF468303629CF9A2832C9CE178E4@ALA-MBD.corp.ad.wrs.com>

Hi Eric/Al,
Can you point me to the review this issue involved, so that I can check its status and see if I can try it again?
Thanks!
Marvin

From: Huang, Marvin [mailto:Marvin.Huang at windriver.com]
Sent: Tuesday, December 11, 2018 12:03 PM
To: Cordoba Malibran, Erich; Bailey, Henry Albert (Al); starlingx
Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support

Thanks all! The good news is that it looks like the current code fixed some old issues I hit before (or this time it broke before the previous failing point).
Marvin

From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com]
Sent: Tuesday, December 11, 2018 11:31 AM
To: Bailey, Henry Albert (Al); Huang, Marvin; starlingx
Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support

Hi,
My bad, I wasn't aware of the required changes on devstack. I'll send the patch to solve it.
-Erich From: "Bailey, Henry Albert (Al)" Date: Tuesday, December 11, 2018 at 10:03 AM To: "Huang, Marvin" , starlingx Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support Marvin, A change was merged on Dec 10 which changed the install_non_bb target in the Makefile for fm-mgr There is an open review for adding a devstack job to zuul for stx-fault, so in order for that review to pass zuul, it will need to include the fix. Al From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Tuesday, December 11, 2018 10:52 AM To: starlingx Subject: [Starlingx-discuss] latest issue regarding devstack/stx support Hi all, I tried to bring up Devstack/STX this morning, but got the following error, which broke the execution of ./stack.sh. g++ -o fmManager fm_main.o -lfmcommon -lrt -lpthread -luuid ++/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:288 sudo make BIN_DIR=/bin LIB_DIR=/lib INC_DIR=/include MAJOR=1 MINOR=0 install_non_bb make: *** No rule to make target 'install_non_bb'. Stop. +/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:1 exit_trap +./stack.sh:exit_trap:522 local r=2 ++./stack.sh:exit_trap:523 jobs -p +./stack.sh:exit_trap:523 jobs= +./stack.sh:exit_trap:526 [[ -n '' ]] +./stack.sh:exit_trap:532 '[' -f '' ']' +./stack.sh:exit_trap:537 kill_spinner +./stack.sh:kill_spinner:432 '[' '!' -z '' ']' +./stack.sh:exit_trap:539 [[ 2 -ne 0 ]] +./stack.sh:exit_trap:540 echo 'Error on exit' Error on exit +./stack.sh:exit_trap:542 type -p generate-subunit +./stack.sh:exit_trap:543 generate-subunit 1544541794 890 fail +./stack.sh:exit_trap:545 [[ -z /opt/stack/logs ]] +./stack.sh:exit_trap:548 /opt/stack/devstack/tools/worlddump.py -d /opt/stack/logs +./stack.sh:exit_trap:557 exit 2 stack at ubuntu16045server1:~/devstack$ I’m using the contents of https://wiki.openstack.org/wiki/StarlingX/Devstack/stx-config/localrc and created a local.conf. System: a VirtualBox VM: Ubuntu VERSION="16.04.5 LTS (Xenial Xerus)" Can anybody know if this is a known issue? Any more information regarding which version is working? Thanks! Marvin -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Thu Dec 13 22:24:29 2018 From: claire at openstack.org (Claire Massey) Date: Thu, 13 Dec 2018 16:24:29 -0600 Subject: [Starlingx-discuss] Creation of OSF Project Confirmation Guidelines Message-ID: <96D477E2-791F-4C1D-A728-6D6DD1E66FCA@openstack.org> Hi everyone, As you know, StarlingX is a pilot project supported by the OpenStack Foundation (OSF). As such, we wanted to make sure you’re aware of an effort recently kicked off a sub group of the OSF Board of Directors to draft guidelines[1] for confirming pilot projects as full, top-level, open infrastructure project of the Foundation. The current pilot projects that will be eligible for confirmation review in 2019-2020 are Airship, Kata Containers, StarlingX and Zuul. Currently the OpenStack project is the only confirmed project. The discussion about drafting these guidelines is just getting started, and *we invite you to participate in this effort* by reviewing the working draft in this etherpad and adding in your feedback and thoughts starting on line 139: https://etherpad.openstack.org/p/BrainstormingOSFProjectConfirmationGuidelines There will most likely be an open call scheduled in early 2019 where additional feedback will also be welcomed. I will share that invitation with you once it’s schedule. 
The goal is to come up with a set of points that the Board can use when reviewing pilot projects for confirmation. This topic has previously been discussed at a very high-level at previous Board meetings, including the September 18th meeting[2] (starting around slide 38). 1. https://etherpad.openstack.org/p/ProjectConfirmationGuidelines 2. https://docs.google.com/presentation/d/10UyCpxkjPqC3kT-dYRpBxzNT39i2OhlggJvzGDosMz0/edit#slide=id.g4274351d5e_1_392 Thanks, Claire -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Fri Dec 14 00:06:09 2018 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Fri, 14 Dec 2018 00:06:09 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181213 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C7CC1A@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-13 (link) Sanity Test is executed in a Virtual Environment Status: GREEN Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 18 TCs [PASS] TOTAL: [ 23 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] ------------------------------------------------------------------ config_controller can be applied manually, after that the Provisioning and Sanity can be launched without issues. So, I made a little fix on our test framework by replacing "SSHLibrary.Put File" keyword with another way to transfer ini configuration files, in order to don't use SFTP service. With this fix tests executed in automated way. This is a temporary fix, since sftp service must be available to use the test suite in normal way. SFTP service still not working. Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Fri Dec 14 00:08:06 2018 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Fri, 14 Dec 2018 00:08:06 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181213 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C7CC31@FMSMSX108.amr.corp.intel.com> Sorry.. the status Of current the ISO should be YELLOW since there is a critical issue. From: Alonso, Juan Carlos Sent: Thursday, December 13, 2018 6:06 PM To: starlingx Subject: FW: Sanity Test - ISO 20181213 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-13 (link) Sanity Test is executed in a Virtual Environment Status: YELLOW Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 18 TCs [PASS] TOTAL: [ 23 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] ------------------------------------------------------------------ config_controller can be applied manually, after that the Provisioning and Sanity can be launched without issues. 
So, I made a little fix on our test framework, replacing the "SSHLibrary.Put File" keyword with another way to transfer the ini configuration files so that it does not use the SFTP service. With this fix the tests executed in an automated way. This is a temporary fix, since the sftp service must be available to use the test suite in the normal way.

SFTP service is still not working.

Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054

Regards.
Juan Carlos Alonso

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From sgw at linux.intel.com Fri Dec 14 02:01:38 2018 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 13 Dec 2018 18:01:38 -0800 Subject: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE55FFB@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FE55E2F@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FE55FFB@SHSMSX101.ccr.corp.intel.com> Message-ID: <384d85b3-c7e8-b882-5728-1520535abff6@linux.intel.com> On 12/13/18 4:42 PM, Lin, Shuicheng wrote: > Hi Dean/Saul, > Could you help rebase stx-tools master change to branch also? > Otherwise, build iso will be failed in branch. > Thanks. > Done Review here: https://review.openstack.org/#/c/625137/ Sau! > Best Regards > Shuicheng > > > -----Original Message----- > From: Dean Troyer [mailto:dtroyer at gmail.com] > Sent: Friday, December 14, 2018 1:37 AM > To: Lin, Shuicheng > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [build] CentOS 7.6 rebase into feature branch > > On Wed, Dec 12, 2018 at 11:00 PM Lin, Shuicheng wrote: >> Centos76 branch is failed to build due to qemu patch in master is not merged to feature branch. >> I try to use "git rebase master" to do the rebase. But it seems I don't have the grant for it. > > This is limited to the starlingx-release group in Gerrit. > >> Could you help rebase stx-integ master change to branch? > > Done. https://review.openstack.org/#/c/625068/ > > dt > From chenjie.xu at intel.com Fri Dec 14 02:22:48 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Fri, 14 Dec 2018 02:22:48 +0000 Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming In-Reply-To: <04FB2E62-D590-4365-B0AE-F8FE5ABDB21D@windriver.com> References: <2C44D9ED-9F61-4223-A1DD-70FEE88DFA30@windriver.com> <04FB2E62-D590-4365-B0AE-F8FE5ABDB21D@windriver.com> Message-ID: Hi Matt, Thank you for your response. The RFE has been posted and the link is below: https://bugs.launchpad.net/python-neutronclient/+bug/1808451 Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, December 14, 2018 3:25 AM To: Xu, Chenjie Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Chenjie, This looks good. I think it is ready to be posted for review. Regards, Matt From: "Xu, Chenjie" > Date: Monday, December 10, 2018 at 2:11 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: RE: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Matt, I am sorry for misunderstanding your meaning. The StarlingX Distributed Cloud use-case has been included in the RFE. To make the requirement clear, I update the RFE. Could you please help review and comment? Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Saturday, December 8, 2018 1:22 AM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hello Chenjie, I wasn’t planning on providing additional use-case information. I just wanted to make sure the StarlingX Distributed Cloud use-case was included in the RFE. 
Regards, Matt From: "Xu, Chenjie" > Date: Tuesday, December 4, 2018 at 10:16 PM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: RE: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Matt, Thank you for your reply! Looking forward to the additional use-case information. Best Regards From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Tuesday, December 4, 2018 9:12 PM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Chenjie, I would add additional use-case information to help with the justification for adding this capability. The detailed quota information is used within the StarlingX distributed cloud solution. The quota information for a given project/user is aggregated across all sub-clouds, therefore having an efficient mechanism to retrieve the quota details of all resources is required. Regards, Matt From: "Xu, Chenjie" > Date: Monday, December 3, 2018 at 3:53 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Analysis of patch 71c07d7 for StartlingX upstreaming Hi Matt, The RFE for patch 71c07d7 has been drafted and is attached. Could you please help review and comment? Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From changcheng.liu at intel.com Fri Dec 14 02:26:42 2018 From: changcheng.liu at intel.com (Liu, Changcheng) Date: Fri, 14 Dec 2018 02:26:42 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/12 In-Reply-To: References: <2FD5DDB5A04D264C80D42CA35194914F35DFB2B8@SHSMSX104.ccr.corp.intel.com> <0D7994A90DD70040A9F5E77C4D23C57D50F15B3A@SHSMSX104.ccr.corp.intel.com> Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F16549@SHSMSX104.ccr.corp.intel.com> Hi Dean, Please check my below interleaved reply. Thanks for your suggestion. B.R. Changcheng From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: Friday, December 14, 2018 12:36 AM To: Liu, Changcheng Cc: Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 12/12 On Wed, Dec 12, 2018 at 8:06 PM Liu, Changcheng > wrote: 2. [Dean] the references to the original commit are not available externally [Changcheng] Yes. The reference should be removed from commit message at last. Originally, I want both Intel & WindRiver engineers could find where the patches are ported from which place at the initial porting stage. [dt] The reference should NOT be removed. If a patch is being rebased the reference to the original patch must be preserved. [Changcheng]: I didn’t get your point. Commit message should be kept there. For the reference(how to find the original patch), I’ll keep them in the initial porting stage. Once we verified these ported patches works in the end, I’ll remove the reference since they can’t be accessed by external Intel/WindRiver engineers. Do you mean that we should keep “url link” in the commit message directly? Intel internal github can’t be accessed by external engineers. 3. [Dean] It also seems like it would be easier to review and merge these in smaller batches. [Changcheng] Yes. I’m syncing with WindRiver engineers to check whether we could merge some patches firstly to avoid times of rebase and review. 
[dt] I am not talking about merging commits, I am talking about splitting the existing commits into multiple Github PRs. Please do not merge commits that are not directly related to each other.. [Changcheng] Some patches are depended on previous patches. We could try to extract patches into new PR later. Currently, we need make sure ceph works locally. The PR could be accessed by WindRiver engineers who’re working with us to debug the problems. 4. [Dean] There is also no reference in either the commit messages or the PR description to a Storyboard story or task or any further documentation to why this work is being done. [Changcheng] I’ll add related information in PR message if we agree with merge part of patches firstly. [dt] Please put it into the commit messages. The PR text is not part of the git repo and is lost github is unavailable. [Changcheng] Yes. Right commit message should be kept there. If you find some commit message is lost, please tell give comment in the patch. Currently, I have kept all the commit message in the right place. For PR message, we could refine them to meet with your requirement. 5. [Dean] only the git commit messages are guaranteed to stay with the code changes. [Changcheng] We’ll give document about stx-ceph upgrade once it’s been upgraded successfully. [dt]That does not address the need to put good information into the individual commit messages. [Changcheng] As I’ve said previously “If you find some commit message is lost, please tell give comment in the patch. Currently, I have kept all the commit message in the right place.” 6. [Dean] I had asked for the relevant information to be included in the individual commit messages and I still do not see that being done. We are losing valuable information and traceability for why we are making these changes to upstream. [Changcheng] Personally, I think I’ve kept most part of original commit message in the new ported patches. Some huge patch is divided into small patches(If you look the original patch, it’s merged by several patches. It’s hard to be maintained). For PR info, we could give more detail info according to your requirement. [dt]Thank you for splitting up previously squashed patches. Please do not confuse PR information (the text that is part of a Github PR) with a commit message (the text that is part of a git commit). Github PRs are not the place of record for us. Information that does not fit into a git commit message should be in Storyboard or Launchpad, the two places we keep track of those things. But more importantly, things that the next team that looks at this code will want to have without access to Github or Storyboard or Launchpad needs to be in the commit message. This is exactly the problem we have with the existing patches agains upstream code where we have commitmessages sometimes with only a link to a ticketing system that we do not have access to. I do not want that to continue. [Changcheng] We could avoid much effort if we didn’t squash several patches into one big patch in stx-ceph/v10.2.2. Thanks for your reminding about the distinguish between PR info and patch commit message. For PR info, we’ll refine them with your requirement. For patch commit message, I haven’t found any serious problem. If you find something wrong, give comment in the patch directly. dt -- Dean Troyer dtroyer at gmail.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhipengs.liu at intel.com Fri Dec 14 06:27:22 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Fri, 14 Dec 2018 06:27:22 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C7CA4F@FMSMSX108.amr.corp.intel.com> Message-ID: <93814834B4855241994F290E959305C75300E3D9@SHSMSX104.ccr.corp.intel.com> Hi all, I have double checked my patch below https://review.openstack.org/#/c/623994 No obvious issue found. It just modify ssh_config and sshd_config configuration file in /etc/ssh/ folder through configuration package instead of using source patch to modify this 2 files during build time. I also tried to reproduce this issue in my deployment environment. Before I trigger config_controller, I set ip address 10.10.10.111 for controller Then I tried below 2 commands from my deploy host. Both are OK! It seems ssh can work normally. 1) ssh wrsroot at 10.10.10.111 2) scp file wrsroot at 10.10.10.111:~/ From the debug log in https://bugs.launchpad.net/starlingx/+bug/1808054 I can see that first ssh to 10.10.10.111 is successful. Then it ssh to 10.10.10.3 and could not connect to it. I'm confused here, why script trigger ssh to 10.10.10.3, which will be configured later during executing config_controller. @JC, could you double check your test script as well? ===========================log markers============================================= 20181211 05:33:57.632 - INFO - Logging into '10.10.10.111 prompt=$:22' as 'wrsroot'. 20181211 05:33:58.784 - INFO - Read output: Last login: Tue Dec 11 11:31:44 2018 from 10.10.10.1  WARNING: Unauthorized access to this system is forbidden and will be prosecuted by law. By accessing this system, you agree that your actions may be monitored if unauthorized usage is suspected. localhost:~$ 20181211 05:33:58.806 - INFO - +------ START KW: SSHLibrary.Login [ ${user} | ${password} | delay=${delay} ] 20181211 05:33:58.806 - INFO - Logging into '10.10.10.3 prompt=$:22' as 'wrsroot'. 20181211 05:34:01.804 - FAIL - NoValidConnectionsError: [Errno None] Unable to connect to port 22 on 10.10.10.3 BTW, is there any related issue observed from WR side? Thx! -zhipeng -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: 2018年12月14日 1:07 To: Xie, Cindy ; starlingx ; Liu, ZhipengS Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 I am glad to see that our testing is actually finding failures! We need to be a little more proactive on resolving issues such as these, this is the second day that this failure is still occurring. **Zhipeng** Please take a look at this issue and if needed revert the change. Sau! 
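Worth noting: the failing operation in the test framework is the sftp subsystem, and classic scp does not go through that subsystem, so ssh and scp succeeding does not rule the bug out. A quick manual check, reusing the address from the thread:

  sftp wrsroot@10.10.10.111                 # exercises the sshd sftp subsystem directly
  sudo sshd -T | grep -i '^subsystem'       # print the effective Subsystem setting
  ls -l /usr/libexec/openssh/sftp-server    # confirm the binary exists at the configured path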
On 12/13/18 7:32 AM, Alonso, Juan Carlos wrote: > Status of the Sanity Test for last CENGN ISO: /bootimage.iso from > 2018-Dec-12 /(link > 212T152535Z/outputs/iso/>) > > Sanity Test is executed in a /_Virtual Environment_/ > > Status: *RED* > > ** > > *Simplex* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 18 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ] > > *Duplex* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > *Multinode Controller Storage* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > *Multinode Dedicated Storage* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > ------------------------------------------------------------------ > > SFTP service still not working. > > Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 > > Regards. > > Juan Carlos Alonso > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From yi.c.wang at intel.com Fri Dec 14 08:53:22 2018 From: yi.c.wang at intel.com (Wang, Yi C) Date: Fri, 14 Dec 2018 08:53:22 +0000 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: References: Message-ID: Hi Matt, I just went through your slides. And I have a few questions. I appreciate if you can share more information about your proposal. Many thanks! 1. We know config_controller will do many things, like bootstrap configuration and controller configuration together with required hieradata generation. All the jobs of config_controller will be taken over by Ansible, or just part of them? 2. Does WindRiver has plan to replace Puppet with Ansible for all configuration jobs in the future? 3. For the first controller, we still need local execution of Ansible playbook for initial bootstrap. Is my understanding correct? BR. Yi From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, December 14, 2018 3:11 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Deployment Improvements Proposal Hello, Attached are the slides I presented during the TSC call on Dec 13, 2018 for the proposed improvements to the StarlingX initial bootstrap and system inventory. As indicated on the call, a detailed stx-spec will follow, but wanted to share the high-level changes being proposed before the arrival of the spec to get some early feedback. Regards, Matt -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From zhipengs.liu at intel.com Fri Dec 14 09:08:44 2018 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Fri, 14 Dec 2018 09:08:44 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 In-Reply-To: <93814834B4855241994F290E959305C75300E3D9@SHSMSX104.ccr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C7CA4F@FMSMSX108.amr.corp.intel.com> <93814834B4855241994F290E959305C75300E3D9@SHSMSX104.ccr.corp.intel.com> Message-ID: <93814834B4855241994F290E959305C75300E702@SHSMSX104.ccr.corp.intel.com> Hi JC, Now I have found the root cause. The sftp path in sshd_config is not correct before executing config_controller! Please cherry pick below patch and try it again. Pls let me know your result, thanks! https://review.openstack.org/#/c/625184/ My analysis is below From original source patch, I can see sftp patch is set to /usr/libexec/sftp-server In harden-server-and-client-config.patch @@ -137,3 +138,11 @@ Subsystem sftp /usr/libexec/sftp-server # AllowTcpForwarding no # PermitTTY no That's why I set it the same in my refactor patch. After puppet applied, sshd_config will be overwritten by a puppet version--puppet-sshd/src/sshd/templates/sshd_config.erb In this version, sftp path is right -- /usr/libexec/openssh/sftp-server. So, I'm still confused why it can work with old version which should also use a wrong path. I will check further. Thanks! Zhipeng -----Original Message----- From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] Sent: 2018年12月14日 14:27 To: Saul Wold ; Xie, Cindy ; starlingx Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 Hi all, I have double checked my patch below https://review.openstack.org/#/c/623994 No obvious issue found. It just modify ssh_config and sshd_config configuration file in /etc/ssh/ folder through configuration package instead of using source patch to modify this 2 files during build time. I also tried to reproduce this issue in my deployment environment. Before I trigger config_controller, I set ip address 10.10.10.111 for controller Then I tried below 2 commands from my deploy host. Both are OK! It seems ssh can work normally. 1) ssh wrsroot at 10.10.10.111 2) scp file wrsroot at 10.10.10.111:~/ From the debug log in https://bugs.launchpad.net/starlingx/+bug/1808054 I can see that first ssh to 10.10.10.111 is successful. Then it ssh to 10.10.10.3 and could not connect to it. I'm confused here, why script trigger ssh to 10.10.10.3, which will be configured later during executing config_controller. @JC, could you double check your test script as well? ===========================log markers============================================= 20181211 05:33:57.632 - INFO - Logging into '10.10.10.111 prompt=$:22' as 'wrsroot'. 20181211 05:33:58.784 - INFO - Read output: Last login: Tue Dec 11 11:31:44 2018 from 10.10.10.1  WARNING: Unauthorized access to this system is forbidden and will be prosecuted by law. By accessing this system, you agree that your actions may be monitored if unauthorized usage is suspected. localhost:~$ 20181211 05:33:58.806 - INFO - +------ START KW: SSHLibrary.Login [ ${user} | ${password} | delay=${delay} ] 20181211 05:33:58.806 - INFO - Logging into '10.10.10.3 prompt=$:22' as 'wrsroot'. 20181211 05:34:01.804 - FAIL - NoValidConnectionsError: [Errno None] Unable to connect to port 22 on 10.10.10.3 BTW, is there any related issue observed from WR side? Thx! 
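In sshd_config terms the whole fix is one line; a sketch of the broken versus working setting on a CentOS-based host, per the analysis above:

  # pre-puppet value installed by the refactored config package (path not present on CentOS):
  #   Subsystem sftp /usr/libexec/sftp-server
  # value later written by the puppet-sshd template (correct):
  Subsystem sftp /usr/libexec/openssh/sftp-server

  # after correcting the file:
  sudo systemctl restart sshd
  sftp wrsroot@10.10.10.111   # should now open a session

That also explains the symptom window: the wrong path only matters between initial boot and the first puppet apply, which is exactly when the robot suite attempts the transfer.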
-zhipeng -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: 2018年12月14日 1:07 To: Xie, Cindy ; starlingx ; Liu, ZhipengS Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 I am glad to see that our testing is actually finding failures! We need to be a little more proactive on resolving issues such as these, this is the second day that this failure is still occurring. **Zhipeng** Please take a look at this issue and if needed revert the change. Sau! On 12/13/18 7:32 AM, Alonso, Juan Carlos wrote: > Status of the Sanity Test for last CENGN ISO: /bootimage.iso from > 2018-Dec-12 /(link > 212T152535Z/outputs/iso/>) > > Sanity Test is executed in a /_Virtual Environment_/ > > Status: *RED* > > ** > > *Simplex* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 18 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ] > > *Duplex* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > *Multinode Controller Storage* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > *Multinode Dedicated Storage* > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > ------------------------------------------------------------------ > > SFTP service still not working. > > Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 > > Regards. > > Juan Carlos Alonso > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Volker.Hoesslin at swsn.de Fri Dec 14 12:12:24 2018 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Fri, 14 Dec 2018 12:12:24 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 In-Reply-To: <3k03ta01c8bua06m@shdsegapp2> References: <8557B550001AFB46A43A0CCC314BF85153C723EB@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C7A83C@FMSMSX108.amr.corp.intel.com> <3k03ta01c8bua06m@shdsegapp2> Message-ID: The currently provided ISOs are all stable? or which version should I use? how this should work with the updates (stx-update-project) is not clear to me yet, but I am thinking about an OTA update solution with which I can keep my current stack up to date?! These update options are basically a war decision for me, as I would like to go into a productive environment with starlingX. volker Von: Cesar Lara [mailto:cesarlarag at gmail.com] Gesendet: Donnerstag, 13. Dezember 2018 15:02 An: volker.von.hoesslin at gmx.de Cc: starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 Not entirely true, StarlingX has an update service so you can upgrade your deployment at any given time, I suggest you follow the stx-update project to get an idea of how that works. If you have a deployment based on the latest stable release you are good to go. 
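For reference, the update workflow referred to here is driven from the controller by the sw-patch CLI in stx-update; a rough sketch, with a hypothetical patch file and ID (subcommands should be checked against your release):

  sudo sw-patch query                                 # list patches and their states
  sudo sw-patch upload /home/wrsroot/FIX-0001.patch   # load a patch into the repo
  sudo sw-patch apply FIX-0001                        # mark it for installation
  sudo sw-patch host-install controller-0             # install on a given host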
These daily builds are getting patches and code for features that are not ready for production nor fully tested until the next release cycle. We are basically fixing bugs being introduced as part of the normal development process. CL On Thu, Dec 13, 2018, 7:31 AM wrote: outch, that's not so good :( how are the plans to offer updates about patches here? volker... Gesendet: Mittwoch, 12. Dezember 2018 um 17:57 Uhr Von: "Alonso, Juan Carlos" > An: "volker.von.hoesslin at gmx.de" >, starlingx > Betreff: RE: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 I think you need to get a new ISO updated and deploy the system again. Until now there is not a way to update the installation while it is running. Regards. Juan Carlos Alonso From: volker.von.hoesslin at gmx.de [mailto:volker.von.hoesslin at gmx.de] Sent: Wednesday, December 12, 2018 6:32 AM To: starlingx > Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 regardless of the fact that this build failed, if i already have an installation running, how do i get it up to date, are there patch files somewhere i don't know anything about? volker Gesendet: Dienstag, 11. Dezember 2018 um 22:51 Uhr Von: "Alonso, Juan Carlos" > An: starlingx > Betreff: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-11 (link) Sanity Test is executed in a Virtual Environment Status: RED Simplex Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 18 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ] Duplex Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 19 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] Multinode Controller Storage Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 19 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] Multinode Dedicated Storage Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 19 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] ------------------------------------------------------------------ This issue was found by our Robot test framework. During config_controller, the suite copy a Config file from host to StarlingX system. Robot uses SSHLibrary keyword, such keyword uses sftp service to perform the transfer file. Such transfer failed due to sftp service is not working on the system. Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 Regards. Juan Carlos Alonso _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From jose.perez.carranza at intel.com Fri Dec 14 13:42:44 2018 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Fri, 14 Dec 2018 13:42:44 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 In-Reply-To: <93814834B4855241994F290E959305C75300E3D9@SHSMSX104.ccr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C7CA4F@FMSMSX108.amr.corp.intel.com> <93814834B4855241994F290E959305C75300E3D9@SHSMSX104.ccr.corp.intel.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A91DCD0@fmsmsx101.amr.corp.intel.com> > -----Original Message----- > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > Sent: Friday, December 14, 2018 12:27 AM > To: Saul Wold ; Xie, Cindy ; > starlingx > Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 > > Hi all, > > I have double checked my patch below > https://review.openstack.org/#/c/623994 > No obvious issue found. It just modify ssh_config and sshd_config > configuration file in /etc/ssh/ folder through configuration package instead > of using source patch to modify this 2 files during build time. > > I also tried to reproduce this issue in my deployment environment. > Before I trigger config_controller, I set ip address 10.10.10.111 for controller > Then I tried below 2 commands from my deploy host. Both are OK! It seems > ssh can work normally. > 1) ssh wrsroot at 10.10.10.111 > 2) scp file wrsroot at 10.10.10.111:~/ The issue appears when you use $ sftp wrsroot at 10.10.10.111 , above 2 commands were working correctly all the time > > From the debug log in https://bugs.launchpad.net/starlingx/+bug/1808054 > I can see that first ssh to 10.10.10.111 is successful. Connection is successful stablished, as I mentioned before SSH connections were working all the time, what was failing was SFTP connections. > Then it ssh to 10.10.10.3 and could not connect to it. > I'm confused here, why script trigger ssh to 10.10.10.3, which will be > configured later during executing config_controller. > @JC, could you double check your test script as well? The second error is because a Tear Down is executed after the test is finished, and the logic is pointing to the final IP not for the temporal one. This will be fixed. > ===========================log > markers============================================= > 20181211 05:33:57.632 - INFO - Logging into '10.10.10.111 prompt=$:22' as > 'wrsroot'. > 20181211 05:33:58.784 - INFO - Read output: Last login: Tue Dec 11 11:31:44 > 2018 from 10.10.10.1  > WARNING: Unauthorized access to this system is forbidden and will be > prosecuted by law. By accessing this system, you agree that your actions may > be monitored if unauthorized usage is suspected. > > localhost:~$ > 20181211 05:33:58.806 - INFO - +------ START KW: SSHLibrary.Login [ ${user} | > ${password} | delay=${delay} ] > 20181211 05:33:58.806 - INFO - Logging into '10.10.10.3 prompt=$:22' as > 'wrsroot'. > 20181211 05:34:01.804 - FAIL - NoValidConnectionsError: [Errno None] > Unable to connect to port 22 on 10.10.10.3 > > BTW, is there any related issue observed from WR side? > > Thx! -zhipeng > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: 2018年12月14日 1:07 > To: Xie, Cindy ; starlingx discuss at lists.starlingx.io>; Liu, ZhipengS > Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 > > > I am glad to see that our testing is actually finding failures! 
> > We need to be a little more proactive on resolving issues such as these, this is > the second day that this failure is still occurring. > > **Zhipeng** > Please take a look at this issue and if needed revert the change. > > Sau! > > > On 12/13/18 7:32 AM, Alonso, Juan Carlos wrote: > > Status of the Sanity Test for last CENGN ISO: /bootimage.iso from > > 2018-Dec-12 /(link > > > 212T152535Z/outputs/iso/>) > > > > Sanity Test is executed in a /_Virtual Environment_/ > > > > Status: *RED* > > > > ** > > > > *Simplex* > > > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > > > Sanity       00 TCs [PASS] | 18 TCs [FAIL] > > > > TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ] > > > > *Duplex* > > > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > > > *Multinode Controller Storage* > > > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > > > *Multinode Dedicated Storage* > > > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > > > ------------------------------------------------------------------ > > > > SFTP service still not working. > > > > Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 > > > > Regards. > > > > Juan Carlos Alonso > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Matt.Peters at windriver.com Fri Dec 14 13:43:00 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Fri, 14 Dec 2018 13:43:00 +0000 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: References: Message-ID: <1D02BEF6-5D98-484B-A436-B5867A59A380@windriver.com> See inline. From: "Hu, Yong" Date: Thursday, December 13, 2018 at 7:49 PM To: "Peters, Matt" , "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] Deployment Improvements Proposal Hi Matt, How will this improvement proposal impact those configurations done by puppet? MP> There will be some changes impacting the puppet manifests performed during the initial bootstrap phase (initial controller host), but there are no specific impacts to the use of Puppet for the rest of the configuration management. As well, as to the proposal of system inventory improvement, you mentioned other BM attributes (Type, IP address, Credentials). MP> These attributes are existing. The change in behavior is to support identifying the host hardware based on inventory gathered from the BMC rather than needing the operator to specify the boot interface MAC address explicitelty. Do we expect something more than PXEboot setting done in BIOS on the host in advance? MP> Nothing is planned at this time. 
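For illustration, the board-management attributes in question are already settable through the sysinv CLI; a hypothetical example (host ID and values made up):

  system host-update 3 bm_type=bmc bm_ip=192.168.100.13 bm_username=admin

So the proposal is about using the inventory reachable through those credentials, rather than adding new fields.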
Regards, Yong From: "Peters, Matt" Date: Friday, 14 December 2018 at 3:41 AM To: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Deployment Improvements Proposal Hello, Attached are the slides I presented during the TSC call on Dec 13, 2018 for the proposed improvements to the StarlingX initial bootstrap and system inventory. As indicated on the call, a detailed stx-spec will follow, but wanted to share the high-level changes being proposed before the arrival of the spec to get some early feedback. Regards, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose.perez.carranza at intel.com Fri Dec 14 14:26:13 2018 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Fri, 14 Dec 2018 14:26:13 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 In-Reply-To: <93814834B4855241994F290E959305C75300E702@SHSMSX104.ccr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C7CA4F@FMSMSX108.amr.corp.intel.com> <93814834B4855241994F290E959305C75300E3D9@SHSMSX104.ccr.corp.intel.com> <93814834B4855241994F290E959305C75300E702@SHSMSX104.ccr.corp.intel.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A91DD11@fmsmsx101.amr.corp.intel.com> Hi Zhipeng We executed manually a test with "Subsystem sftp /usr/libexec/openssh/sftp-server" and SFTP connection is started correctly, I set +1 to the patch and will verify on our suite automatically when this gets integrated on an CENGN ISO. Regards, José > -----Original Message----- > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > Sent: Friday, December 14, 2018 3:09 AM > To: Alonso, Juan Carlos > Cc: starlingx > Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 > > Hi JC, > > Now I have found the root cause. > The sftp path in sshd_config is not correct before executing > config_controller! > Please cherry pick below patch and try it again. Pls let me know your result, > thanks! > https://review.openstack.org/#/c/625184/ > > My analysis is below > > From original source patch, I can see sftp patch is set to /usr/libexec/sftp- > server In harden-server-and-client-config.patch > @@ -137,3 +138,11 @@ Subsystem sftp /usr/libexec/sftp-server # > AllowTcpForwarding no # PermitTTY no > > That's why I set it the same in my refactor patch. > After puppet applied, sshd_config will be overwritten by a puppet version-- > puppet-sshd/src/sshd/templates/sshd_config.erb > In this version, sftp path is right -- /usr/libexec/openssh/sftp-server. > > So, I'm still confused why it can work with old version which should also use > a wrong path. > I will check further. > > Thanks! > Zhipeng > > -----Original Message----- > From: Liu, ZhipengS [mailto:zhipengs.liu at intel.com] > Sent: 2018年12月14日 14:27 > To: Saul Wold ; Xie, Cindy ; > starlingx > Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 > > Hi all, > > I have double checked my patch below > https://review.openstack.org/#/c/623994 > No obvious issue found. It just modify ssh_config and sshd_config > configuration file in /etc/ssh/ folder through configuration package instead > of using source patch to modify this 2 files during build time. > > I also tried to reproduce this issue in my deployment environment. > Before I trigger config_controller, I set ip address 10.10.10.111 for controller > Then I tried below 2 commands from my deploy host. Both are OK! It seems > ssh can work normally. 
> 1) ssh wrsroot at 10.10.10.111 > 2) scp file wrsroot at 10.10.10.111:~/ > > From the debug log in https://bugs.launchpad.net/starlingx/+bug/1808054 > I can see that first ssh to 10.10.10.111 is successful. > Then it ssh to 10.10.10.3 and could not connect to it. > I'm confused here, why script trigger ssh to 10.10.10.3, which will be > configured later during executing config_controller. > @JC, could you double check your test script as well? > ===========================log > markers============================================= > 20181211 05:33:57.632 - INFO - Logging into '10.10.10.111 prompt=$:22' as > 'wrsroot'. > 20181211 05:33:58.784 - INFO - Read output: Last login: Tue Dec 11 11:31:44 > 2018 from 10.10.10.1  > WARNING: Unauthorized access to this system is forbidden and will be > prosecuted by law. By accessing this system, you agree that your actions may > be monitored if unauthorized usage is suspected. > > localhost:~$ > 20181211 05:33:58.806 - INFO - +------ START KW: SSHLibrary.Login [ ${user} | > ${password} | delay=${delay} ] > 20181211 05:33:58.806 - INFO - Logging into '10.10.10.3 prompt=$:22' as > 'wrsroot'. > 20181211 05:34:01.804 - FAIL - NoValidConnectionsError: [Errno None] > Unable to connect to port 22 on 10.10.10.3 > > BTW, is there any related issue observed from WR side? > > Thx! -zhipeng > > -----Original Message----- > From: Saul Wold [mailto:sgw at linux.intel.com] > Sent: 2018年12月14日 1:07 > To: Xie, Cindy ; starlingx discuss at lists.starlingx.io>; Liu, ZhipengS > Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181212 > > > I am glad to see that our testing is actually finding failures! > > We need to be a little more proactive on resolving issues such as these, this is > the second day that this failure is still occurring. > > **Zhipeng** > Please take a look at this issue and if needed revert the change. > > Sau! > > > On 12/13/18 7:32 AM, Alonso, Juan Carlos wrote: > > Status of the Sanity Test for last CENGN ISO: /bootimage.iso from > > 2018-Dec-12 /(link > > > 212T152535Z/outputs/iso/>) > > > > Sanity Test is executed in a /_Virtual Environment_/ > > > > Status: *RED* > > > > ** > > > > *Simplex* > > > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > > > Sanity       00 TCs [PASS] | 18 TCs [FAIL] > > > > TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ] > > > > *Duplex* > > > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > > > *Multinode Controller Storage* > > > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > > > *Multinode Dedicated Storage* > > > > Setup        03 TCs [PASS] | 01 TCs [FAIL] > > > > Provisioning 00 TCs [PASS] | 01 TCs [FAIL] > > > > Sanity       00 TCs [PASS] | 19 TCs [FAIL] > > > > TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] > > > > ------------------------------------------------------------------ > > > > SFTP service still not working. > > > > Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 > > > > Regards. 
> > > > Juan Carlos Alonso > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Matt.Peters at windriver.com Fri Dec 14 14:43:26 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Fri, 14 Dec 2018 14:43:26 +0000 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: References: Message-ID: <3E8825BD-709B-45BD-8890-717740295A16@windriver.com> See inline. From: "Wang, Yi C" Date: Friday, December 14, 2018 at 3:53 AM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io" Subject: RE: Deployment Improvements Proposal Hi Matt, I just went through your slides. And I have a few questions. I appreciate if you can share more information about your proposal. Many thanks! 1. We know config_controller will do many things, like bootstrap configuration and controller configuration together with required hieradata generation. All the jobs of config_controller will be taken over by Ansible, or just part of them? MP> Yes most of these tasks will be handled by the Ansible playbook. However, much of the existing capabilities may be leveraged in the implementation to avoid re-writing everything. The details will be outlined in the forthcoming spec. 2. Does WindRiver has plan to replace Puppet with Ansible for all configuration jobs in the future? MP> There are no specific plans to replace Puppet for all configuration management. However, there are several features being actively developed in StarlingX that will be changing the existing Puppet manifests (e.g. OpenStack Containerization). 3. For the first controller, we still need local execution of Ansible playbook for initial bootstrap. Is my understanding correct? MP> This is one of the main drivers for changing some of the existing config_controller and Puppet manifest handling. The operator will have the ability to run the Ansible playbook locally or remotely. BR. Yi From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, December 14, 2018 3:11 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Deployment Improvements Proposal Hello, Attached are the slides I presented during the TSC call on Dec 13, 2018 for the proposed improvements to the StarlingX initial bootstrap and system inventory. As indicated on the call, a detailed stx-spec will follow, but wanted to share the high-level changes being proposed before the arrival of the spec to get some early feedback. Regards, Matt -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bruce.e.jones at intel.com Fri Dec 14 16:56:43 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 14 Dec 2018 16:56:43 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C723EB@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C7A83C@FMSMSX108.amr.corp.intel.com> <3k03ta01c8bua06m@shdsegapp2> Message-ID: <9A85D2917C58154C960D95352B22818BB1ED4101@fmsmsx117.amr.corp.intel.com> We recommend that all users of StarlingX use the October release ISO image [0]. The daily ISO images we are producing are test images that may or may not run. They have not been through a release test cycle. We do not guarantee that they are stable. Providing OTA updates is an issue that has not been discussed in our community recently. The software is capable of it but relies on special builds to create the updates. We need to decide if our next release (ETA May’19) will be upgradable from the October’18 release and if so, plan the work. I’m putting this topic onto the agenda for our January 15-16th community meet-up in Chandler AZ. [1] brucej [0] http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ [1] https://etherpad.openstack.org/p/stx-chandler-meetup From: von Hoesslin, Volker [mailto:Volker.Hoesslin at swsn.de] Sent: Friday, December 14, 2018 4:12 AM To: 'Cesar Lara' Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 The currently provided ISOs are all stable? or which version should I use? how this should work with the updates (stx-update-project) is not clear to me yet, but I am thinking about an OTA update solution with which I can keep my current stack up to date?! These update options are basically a war decision for me, as I would like to go into a productive environment with starlingX. volker Von: Cesar Lara [mailto:cesarlarag at gmail.com] Gesendet: Donnerstag, 13. Dezember 2018 15:02 An: volker.von.hoesslin at gmx.de Cc: starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 Not entirely true, StarlingX has an update service so you can upgrade your deployment at any given time, I suggest you follow the stx-update project to get an idea of how that works. If you have a deployment based on the latest stable release you are good to go. These daily builds are getting patches and code for features that are not ready for production nor fully tested until the next release cycle. We are basically fixing bugs being introduced as part of the normal development process. CL On Thu, Dec 13, 2018, 7:31 AM wrote: outch, that's not so good :( how are the plans to offer updates about patches here? volker... Gesendet: Mittwoch, 12. Dezember 2018 um 17:57 Uhr Von: "Alonso, Juan Carlos" > An: "volker.von.hoesslin at gmx.de" >, starlingx > Betreff: RE: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 I think you need to get a new ISO updated and deploy the system again. Until now there is not a way to update the installation while it is running. Regards. Juan Carlos Alonso From: volker.von.hoesslin at gmx.de [mailto:volker.von.hoesslin at gmx.de] Sent: Wednesday, December 12, 2018 6:32 AM To: starlingx > Subject: Re: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 regardless of the fact that this build failed, if i already have an installation running, how do i get it up to date, are there patch files somewhere i don't know anything about? 
volker Gesendet: Dienstag, 11. Dezember 2018 um 22:51 Uhr Von: "Alonso, Juan Carlos" > An: starlingx > Betreff: [Starlingx-discuss] FW: Sanity Test - ISO 20181211 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-11 (link) Sanity Test is executed in a Virtual Environment Status: RED Simplex Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 18 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 20 TCs FAIL ] Duplex Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 19 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] Multinode Controller Storage Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 19 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] Multinode Dedicated Storage Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 19 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] ------------------------------------------------------------------ This issue was found by our Robot test framework. During config_controller, the suite copy a Config file from host to StarlingX system. Robot uses SSHLibrary keyword, such keyword uses sftp service to perform the transfer file. Such transfer failed due to sftp service is not working on the system. Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 Regards. Juan Carlos Alonso _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Fri Dec 14 08:23:30 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Fri, 14 Dec 2018 08:23:30 +0000 Subject: [Starlingx-discuss] FW: Analysis of patch 9f926a5 for StartlingX upstreaming References: <51F8F06E-D06E-4DDA-AABF-D69B622EFD56@windriver.com> <331FE402-1858-451D-8506-92E3E1033612@windriver.com> <70A7408C6E1BFB41B192A929744D8523BAC4D60F@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Allain, As you suggested, I’m testing the 2 changes together. However I find that the fdb for floatingip can’t be installed on the br-tun of the compute node. Because the network-id in fdb is not in the LocalVlanManager. The details are below and could you please help review and comment? The environment: latest DevStack with 1 controller node and 1 compute node The network topology is below: [cid:image004.jpg at 01D493C9.581333B0] Steps: 1. Create an external network external-net neutron net-create external-net --router:external True --provider:network_type vxlan 2. Create a subnet on external-net neutron subnet-create external-net 192.168.25.0/24 --name external-subnet --allocation-pool start=192.168.25.200,end=192.168.25.250 3. Create an internal network, create a subnet on net4 neutron net-create net4 neutron subnet-create net4 192.168.2.0/24 --name subnet4 4. Create a router, set router gateway as external-net, add subnet4 to router neutron router-create router neutron router-gateway-set $router-id $external-net-id neutron router-interface-add $router-id $subnet4-id 5. Create a VM vm-1 on net4 (vm-1 runs on compute node) 6. Allocate floating IP FIP-1 on external-net through horizon 7. 
Associate FIP-1 with vm-1 through horizon The fdb is below: [cid:image007.jpg at 01D493C9.581333B0] The OVS agent on the compute node receives the FDB: [cid:image008.jpg at 01D493C9.581333B0] In OVS agent, LocalVlanManager is used to map tunnel ids or vlan ids to internal vlans: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/vlanmanager.py After OVS agent receiving the FDB, it will try to get LocalVlanMapping from LocalVlanMannager: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L558 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L237 However the network-id f2ebf82a-e788-4456-8516-c95b12f91d49 is not in the LocalVlanManager. Thus the fdb can’t be installed on br-tun. The mapping in the LocalVlanManager in the OVS agent which is on the compute node is below: [cid:image009.jpg at 01D493C9.581333B0] Analysis: The vm-1 is created on internal network net4 . When creating vm-1, a port in the net4 will be bound to vm-1. Thus the network-id can be added to the LocalVlanManager. The network id for floating IP is used in FDB but the corresponding network is an external network. And this external network’s network id is not in the LocalVlanManager in the OVS agent which is on the compute node. Best Regard, Xu, Chenjie From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Friday, December 7, 2018 10:20 PM To: Xu, Chenjie >; Peters, Matt > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming The change that is being reviewed here was originally a part of a larger commit (9f926a5d253). They should be implemented together or at least tested together. I seem to remember that there was information missing in case 1 that prevented a proper FDB notification from being generated. Please retest your scenarios and capture the input parameters to add_fdb_entries(), remove_fdb_entries(), and update_fdb_entries() in neutron/plugins/ml2/drivers/l2pop/rpc.py:L2populationAgentNotifyAPI to be sure that expected notifications are published. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Friday, December 07, 2018 3:42 AM To: Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming Hi Matt, Ryan Tidwell comments on this patch and he thinks that AFTER_DELETE notification can be used to trigger l2pop. https://review.openstack.org/#/c/611261/ https://review.openstack.org/#/c/611261/4/neutron/db/l3_db.py From the comment in the following line: https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/services/l3_router/service_providers/l2pop.py#L276 It seems that the router_id and port_id in AFTER_DELETE notification are None. As a result of that, the last_known_router_id and last_fixed_port_id should be used to construct FDB entries which are used to remove FDBs on each host. 
However, I print the notification in the following 2 cases: Case-1: 1) Allocate floating ip fip-1 2) Associate fip-1 with vm-1 3) Delete fip-1 Case-2: 1) Allocate floating ip fip-1 2) Associate fip-1 with vm-1 3) Disassociate fip-1 with vm-1 4) Delete fip-1 The notification for case1 and case 2 are attached. router_id and port_id are not None in case-1 and are None in case-2. Thus in case-1, AFTER_DELETE notification can be used. In case-2, FDB will be removed by step 3, thus no need to remove again. Based on the above analysis, I think we can use AFTER_DELETE notification. Could you please comment and review? Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Monday, November 12, 2018 11:19 PM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io; Legacy, Allain > Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming Hi Chenjie, The latest RFE looks good to me. Regards, Matt From: "Xu, Chenjie" > Date: Monday, November 12, 2018 at 1:23 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" >, Allain Legacy > Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming Hi Matt, The RFE has been updated and is attached. Could you please help review and comment? Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, November 9, 2018 9:22 PM To: Xu, Chenjie >; Legacy, Allain > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming Hi Chenjie, The RFE looks good. The use cases are clear and detailed. I only have a few minor review comments (see attached). Regards, Matt From: "Xu, Chenjie" > Date: Thursday, November 1, 2018 at 4:28 AM To: "Peters, Matt" >, Allain Legacy > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming Hi Matt/Allain, We analyze the patch 9f926a5 related to l2pop. An RFE “Add l2pop support for floating ip resources” has been written and is attached. The test case is provided by Allain. Could you please help to review and comment? Thanks very much! Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 6983 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image007.jpg Type: image/jpeg Size: 17543 bytes Desc: image007.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image008.jpg Type: image/jpeg Size: 16474 bytes Desc: image008.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image009.jpg Type: image/jpeg Size: 15792 bytes Desc: image009.jpg URL: From Allain.Legacy at windriver.com Fri Dec 14 12:55:09 2018 From: Allain.Legacy at windriver.com (Legacy, Allain) Date: Fri, 14 Dec 2018 12:55:09 +0000 Subject: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming In-Reply-To: References: <51F8F06E-D06E-4DDA-AABF-D69B622EFD56@windriver.com> <331FE402-1858-451D-8506-92E3E1033612@windriver.com> <70A7408C6E1BFB41B192A929744D8523BAC4D60F@ALA-MBD.corp.ad.wrs.com> Message-ID: <70A7408C6E1BFB41B192A929744D8523BAC5211C@ALA-MBD.corp.ad.wrs.com> An FDB entry for a Floating IP resource would only be required on nodes with ports attached to the related external network. Normally that means only the network nodes that are hosting virtual routers attached to those external networks rather than compute nodes. However, there are less frequently used scenarios that involve customers launching VM instances directly on external networks and in those scenarios the external network will be instantiated in some compute nodes as well as the network nodes. In your test scenario, the external network will only be present on the network node and therefore only the agent running on the network node will be capable of processing any related FDB entries. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Friday, December 14, 2018 3:24 AM To: Legacy, Allain Cc: Peters, Matt; starlingx-discuss at lists.starlingx.io Subject: FW: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming Hi Allain, As you suggested, I’m testing the 2 changes together. However I find that the fdb for floatingip can’t be installed on the br-tun of the compute node. Because the network-id in fdb is not in the LocalVlanManager. The details are below and could you please help review and comment? The environment: latest DevStack with 1 controller node and 1 compute node The network topology is below: [cid:image010.jpg at 01D49382.52015E80] Steps: 1. Create an external network external-net neutron net-create external-net --router:external True --provider:network_type vxlan 2. Create a subnet on external-net neutron subnet-create external-net 192.168.25.0/24 --name external-subnet --allocation-pool start=192.168.25.200,end=192.168.25.250 3. Create an internal network, create a subnet on net4 neutron net-create net4 neutron subnet-create net4 192.168.2.0/24 --name subnet4 4. Create a router, set router gateway as external-net, add subnet4 to router neutron router-create router neutron router-gateway-set $router-id $external-net-id neutron router-interface-add $router-id $subnet4-id 5. Create a VM vm-1 on net4 (vm-1 runs on compute node) 6. Allocate floating IP FIP-1 on external-net through horizon 7. 
Associate FIP-1 with vm-1 through horizon The fdb is below: [cid:image011.jpg at 01D49382.52015E80] The OVS agent on the compute node receives the FDB: [cid:image012.jpg at 01D49382.52015E80] In OVS agent, LocalVlanManager is used to map tunnel ids or vlan ids to internal vlans: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/vlanmanager.py After OVS agent receiving the FDB, it will try to get LocalVlanMapping from LocalVlanMannager: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L558 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L237 However the network-id f2ebf82a-e788-4456-8516-c95b12f91d49 is not in the LocalVlanManager. Thus the fdb can’t be installed on br-tun. The mapping in the LocalVlanManager in the OVS agent which is on the compute node is below: [cid:image013.jpg at 01D49382.52015E80] Analysis: The vm-1 is created on internal network net4 . When creating vm-1, a port in the net4 will be bound to vm-1. Thus the network-id can be added to the LocalVlanManager. The network id for floating IP is used in FDB but the corresponding network is an external network. And this external network’s network id is not in the LocalVlanManager in the OVS agent which is on the compute node. Best Regard, Xu, Chenjie From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Friday, December 7, 2018 10:20 PM To: Xu, Chenjie >; Peters, Matt > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming The change that is being reviewed here was originally a part of a larger commit (9f926a5d253). They should be implemented together or at least tested together. I seem to remember that there was information missing in case 1 that prevented a proper FDB notification from being generated. Please retest your scenarios and capture the input parameters to add_fdb_entries(), remove_fdb_entries(), and update_fdb_entries() in neutron/plugins/ml2/drivers/l2pop/rpc.py:L2populationAgentNotifyAPI to be sure that expected notifications are published. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Friday, December 07, 2018 3:42 AM To: Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming Hi Matt, Ryan Tidwell comments on this patch and he thinks that AFTER_DELETE notification can be used to trigger l2pop. https://review.openstack.org/#/c/611261/ https://review.openstack.org/#/c/611261/4/neutron/db/l3_db.py From the comment in the following line: https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/services/l3_router/service_providers/l2pop.py#L276 It seems that the router_id and port_id in AFTER_DELETE notification are None. As a result of that, the last_known_router_id and last_fixed_port_id should be used to construct FDB entries which are used to remove FDBs on each host. 
However, I print the notification in the following 2 cases: Case-1: 1) Allocate floating ip fip-1 2) Associate fip-1 with vm-1 3) Delete fip-1 Case-2: 1) Allocate floating ip fip-1 2) Associate fip-1 with vm-1 3) Disassociate fip-1 with vm-1 4) Delete fip-1 The notification for case1 and case 2 are attached. router_id and port_id are not None in case-1 and are None in case-2. Thus in case-1, AFTER_DELETE notification can be used. In case-2, FDB will be removed by step 3, thus no need to remove again. Based on the above analysis, I think we can use AFTER_DELETE notification. Could you please comment and review? Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Monday, November 12, 2018 11:19 PM To: Xu, Chenjie > Cc: starlingx-discuss at lists.starlingx.io; Legacy, Allain > Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming Hi Chenjie, The latest RFE looks good to me. Regards, Matt From: "Xu, Chenjie" > Date: Monday, November 12, 2018 at 1:23 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" >, Allain Legacy > Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming Hi Matt, The RFE has been updated and is attached. Could you please help review and comment? Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, November 9, 2018 9:22 PM To: Xu, Chenjie >; Legacy, Allain > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming Hi Chenjie, The RFE looks good. The use cases are clear and detailed. I only have a few minor review comments (see attached). Regards, Matt From: "Xu, Chenjie" > Date: Thursday, November 1, 2018 at 4:28 AM To: "Peters, Matt" >, Allain Legacy > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming Hi Matt/Allain, We analyze the patch 9f926a5 related to l2pop. An RFE “Add l2pop support for floating ip resources” has been written and is attached. The test case is provided by Allain. Could you please help to review and comment? Thanks very much! Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image010.jpg Type: image/jpeg Size: 5143 bytes Desc: image010.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image011.jpg Type: image/jpeg Size: 11643 bytes Desc: image011.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image012.jpg Type: image/jpeg Size: 11184 bytes Desc: image012.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image013.jpg Type: image/jpeg Size: 10148 bytes Desc: image013.jpg URL: From sgw at linux.intel.com Fri Dec 14 18:40:39 2018 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 14 Dec 2018 10:40:39 -0800 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: <3E8825BD-709B-45BD-8890-717740295A16@windriver.com> References: <3E8825BD-709B-45BD-8890-717740295A16@windriver.com> Message-ID: <7a4dd372-c5a0-1a44-f8c6-49543e6f9977@linux.intel.com> See more inline On 12/14/18 6:43 AM, Peters, Matt wrote: > See inline. 
> > *From: *"Wang, Yi C" > *Date: *Friday, December 14, 2018 at 3:53 AM > *To: *"Peters, Matt" > *Cc: *"starlingx-discuss at lists.starlingx.io" > > *Subject: *RE: Deployment Improvements Proposal > > Hi Matt, > > I just went through your slides. And I have a few questions. I > appreciate if you can share more information about your proposal. Many > thanks! > > 1. We know config_controller will do many things, like bootstrap > configuration and controller configuration together with required > hieradata generation. All the jobs of config_controller will be  taken > over by Ansible, or just part of them? > > /MP> Yes most of these tasks will be handled by the Ansible playbook. > However, much of the existing capabilities may be leveraged in the > implementation to avoid re-writing everything.  The details will be > outlined in the forthcoming spec./ > We will look forward to the coming spec(s). Will you be addressing how to handle different OS setup? Ie will this move some of the existing kickstart related configuration into the Ansible playbook? I am just starting to look at Anisble, so I am not sure how much early system configuration it can take over from kickstart type of scripting. This is one of the challenges with supporting multiple os distributions, not just the build side, but the installation and configuration. > 2. Does WindRiver has plan to replace Puppet with Ansible for all > configuration jobs in the future? > > /MP> There are no specific plans to replace Puppet for all configuration > management.  However, there are several features being actively > developed in StarlingX that will be changing the existing Puppet > manifests (e.g. OpenStack Containerization)./ > I think this has been mentioned already, a concern is that containerization won't solve all problems, it just moves where and how the configuration work happens. I think we may still need to address how containers are handled as we need to address different OSes inside of the containers. > 3. For the first controller, we still need local execution of Ansible > playbook for initial bootstrap. Is my understanding correct? > > /MP> This is one of the main drivers for changing some of the existing > config_controller and Puppet manifest handling.  The operator will have > the ability to run the Ansible playbook locally or remotely. / > Another question is will this work further reduce the need for the configuration related packages (again multi-os related)? Can we move the system utility configuration into this Deployment work? Thanks Sau! > BR. > > Yi > > *From:*Peters, Matt [mailto:Matt.Peters at windriver.com] > *Sent:* Friday, December 14, 2018 3:11 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] Deployment Improvements Proposal > > Hello, > > Attached are the slides I presented during the TSC call on Dec 13, 2018 > for the proposed improvements to the StarlingX initial bootstrap and > system inventory.  As indicated on the call, a detailed stx-spec will > follow, but wanted to share the high-level changes being proposed before > the arrival of the spec to get some early feedback. 
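For illustration, "locally or remotely" maps onto ordinary ansible-playbook invocations; a sketch with hypothetical playbook, inventory and override-file names, not the actual StarlingX playbook:

  # on the controller itself:
  ansible-playbook bootstrap.yml -i localhost, -c local -e "@bootstrap-overrides.yml"
  # from a separate deployment host, against the new controller:
  ansible-playbook bootstrap.yml -i 10.10.10.3, -u wrsroot -k -e "@bootstrap-overrides.yml"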
> > Regards, Matt > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Al.Bailey at windriver.com Fri Dec 14 18:57:04 2018 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Fri, 14 Dec 2018 18:57:04 +0000 Subject: [Starlingx-discuss] latest issue regarding devstack/stx support In-Reply-To: <74D9C1EDDC44EF468303629CF9A2832C9CE178E4@ALA-MBD.corp.ad.wrs.com> References: <9E7365F4-4B68-4DAB-AF76-057C7D2241D3@intel.com> <74D9C1EDDC44EF468303629CF9A2832C9CE1633E@ALA-MBD.corp.ad.wrs.com> <74D9C1EDDC44EF468303629CF9A2832C9CE178E4@ALA-MBD.corp.ad.wrs.com> Message-ID: I'm not entirely sure I understand the question. The devstack plugin was impacted by a Makefile change here https://github.com/openstack/stx-fault/commit/93f316da167a5dbb99a234e022a134d16baa5449 There is currently a devstack review for stx-fault here https://review.openstack.org/#/c/623590/ The code in that review is adding devstack as a zuul job, so it will have to include fixes related to the Makefile in order for devstack to pass and to be able to merge. Al From: Huang, Marvin Sent: Thursday, December 13, 2018 5:08 PM To: Bailey, Henry Albert (Al); starlingx Cc: Huang, Marvin Subject: RE: [Starlingx-discuss] latest issue regarding devstack/stx support Hi Eric/Al, Can you point me to the review this issue was involved in, so that I can check its status and see whether I can try it again? Thanks! Marvin From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Tuesday, December 11, 2018 12:03 PM To: Cordoba Malibran, Erich; Bailey, Henry Albert (Al); starlingx Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support Thanks all! The good news is that it looks like the current code fixes some old issues I hit before (or this time it broke before the previous failing point). Marvin From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Tuesday, December 11, 2018 11:31 AM To: Bailey, Henry Albert (Al); Huang, Marvin; starlingx Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support Hi My bad, I wasn't aware of this required change on devstack. I'll send the patch to solve it. -Erich From: "Bailey, Henry Albert (Al)" Date: Tuesday, December 11, 2018 at 10:03 AM To: "Huang, Marvin" , starlingx Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support Marvin, A change was merged on Dec 10 which changed the install_non_bb target in the Makefile for fm-mgr. There is an open review for adding a devstack job to zuul for stx-fault, so in order for that review to pass zuul, it will need to include the fix. Al From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Tuesday, December 11, 2018 10:52 AM To: starlingx Subject: [Starlingx-discuss] latest issue regarding devstack/stx support Hi all, I tried to bring up Devstack/STX this morning, but got the following error, which broke the execution of ./stack.sh. g++ -o fmManager fm_main.o -lfmcommon -lrt -lpthread -luuid ++/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:288 sudo make BIN_DIR=/bin LIB_DIR=/lib INC_DIR=/include MAJOR=1 MINOR=0 install_non_bb make: *** No rule to make target 'install_non_bb'. Stop.
+/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:1 exit_trap
+./stack.sh:exit_trap:522 local r=2
++./stack.sh:exit_trap:523 jobs -p
+./stack.sh:exit_trap:523 jobs=
+./stack.sh:exit_trap:526 [[ -n '' ]]
+./stack.sh:exit_trap:532 '[' -f '' ']'
+./stack.sh:exit_trap:537 kill_spinner
+./stack.sh:kill_spinner:432 '[' '!' -z '' ']'
+./stack.sh:exit_trap:539 [[ 2 -ne 0 ]]
+./stack.sh:exit_trap:540 echo 'Error on exit'
Error on exit
+./stack.sh:exit_trap:542 type -p generate-subunit
+./stack.sh:exit_trap:543 generate-subunit 1544541794 890 fail
+./stack.sh:exit_trap:545 [[ -z /opt/stack/logs ]]
+./stack.sh:exit_trap:548 /opt/stack/devstack/tools/worlddump.py -d /opt/stack/logs
+./stack.sh:exit_trap:557 exit 2
stack at ubuntu16045server1:~/devstack$
I'm using the contents of https://wiki.openstack.org/wiki/StarlingX/Devstack/stx-config/localrc and created a local.conf. System: a VirtualBox VM: Ubuntu VERSION="16.04.5 LTS (Xenial Xerus)" Does anybody know if this is a known issue? Is there any more information regarding which version is working? Thanks! Marvin -------------- next part -------------- An HTML attachment was scrubbed... URL: From claire at openstack.org Fri Dec 14 21:53:40 2018 From: claire at openstack.org (Claire Massey) Date: Fri, 14 Dec 2018 15:53:40 -0600 Subject: [Starlingx-discuss] December 17 Call for 2019 Planning - Community Building, Marketing, etc In-Reply-To: <6F08F7DC-1BFA-4E02-BDAD-26B24A539221@openstack.org> References: <6F08F7DC-1BFA-4E02-BDAD-26B24A539221@openstack.org> Message-ID: <9EFB5CF2-6F6C-414F-8D9F-801D9A36B199@openstack.org> Friendly reminder for Monday's meeting to discuss plans for 2019. Please join us! > On Dec 7, 2018, at 12:25 PM, Claire Massey wrote: > > Hi everyone, > > Looking ahead to 2019 we'll have an open StarlingX community meeting to brainstorm and discuss plans for educational activities, engagement, marketing, advocacy, etc. > > *The call will be on Monday, December 17, at 7:00am PST (15:00 UTC).* Call in info is posted below. > > We will use this etherpad for notes: https://etherpad.openstack.org/p/2019_StarlingX_Marketing_Plans > > In the meantime, please take a look at the running list of 2019 events and make note of the upcoming CFP deadlines tracked here: https://docs.google.com/spreadsheets/d/1A9HiMjnqVGxSCd9No7theW8oNu3V1rmK6R1xJOzfUEU/edit?usp=sharing > > Thanks, > Claire > > > Zoom Meeting: https://zoom.us/j/952154828 > > One tap mobile > +16468769923,,952154828# US (New York) > +16699006833,,952154828# US (San Jose) > > Dial by your location > +1 646 876 9923 US (New York) > +1 669 900 6833 US (San Jose) > Meeting ID: 952 154 828 > Find your local number: https://zoom.us/u/abqUlOnSr > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Tao.Liu at windriver.com Fri Dec 14 22:42:03 2018 From: Tao.Liu at windriver.com (Liu, Tao) Date: Fri, 14 Dec 2018 22:42:03 +0000 Subject: [Starlingx-discuss] Compute personality & subfunction have been changed to worker Message-ID: <7242A3DC72E453498E3D783BBB134C3E9DDA5B6F@ALA-MBD.corp.ad.wrs.com> Hello everyone, Effective immediately, per the subject line (story 2004022), the personality of the compute node has been changed as shown below. The hostname is user configurable and can be worker-x, compute-x, or any other name.
system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
| 3  | compute-0    | worker      | unlocked       | enabled     | available    |
| 4  | compute-1    | worker      | unlocked       | enabled     | available    |
| 5  | compute-2    | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
When provisioning a worker node using the CLI, specify the personality as worker: system host-update hostname=<hostname> personality=worker or system host-update hostname=<hostname> personality=worker subfunctions=lowlatency On an All-in-one system, the compute subfunction has been changed to worker as shown below: system host-show controller-0
+---------------------+--------------------------------------------+
| Property            | Value                                      |
+---------------------+--------------------------------------------+
| action              | none                                       |
| administrative      | unlocked                                   |
| availability        | available                                  |
| bm_ip               | 128.224.64.61                              |
| bm_type             | bmc                                        |
| bm_username         | root                                       |
| boot_device         | /dev/disk/by-path/pci-0000:00:1f.2-ata-1.0 |
| capabilities        | {u'Personality': u'Controller-Standby'}    |
| config_applied      | 0d128779-8690-46d6-a8e7-053bbfbc6383       |
| config_status       | None                                       |
| config_target       | 0d128779-8690-46d6-a8e7-053bbfbc6383       |
| console             | ttyS0,115200n8                             |
| created_at          | 2018-12-13T19:01:26.471712+00:00           |
| hostname            | controller-0                               |
| id                  | 1                                          |
| install_output      | text                                       |
| install_state       | None                                       |
| install_state_info  | None                                       |
| invprovision        | provisioned                                |
| location            | {}                                         |
| mgmt_ip             | 192.168.204.3                              |
| mgmt_mac            | 24:8a:07:58:d0:a4                          |
| operational         | enabled                                    |
| personality         | controller                                 |
| reserved            | False                                      |
| rootfs_device       | /dev/disk/by-path/pci-0000:00:1f.2-ata-1.0 |
| serialid            | None                                       |
| software_load       | 18.10                                      |
| subfunction_avail   | available                                  |
| subfunction_oper    | enabled                                    |
| subfunctions        | controller,worker,lowlatency               |
| task                |                                            |
| tboot               | false                                      |
| ttys_dcd            | None                                       |
| updated_at          | 2018-12-13T21:17:23.117350+00:00           |
| uptime              | 6409                                       |
| uuid                | a2605c1e-ff16-4313-9733-e3ffaf2a2004       |
| vim_progress_status | services-enabled                           |
+---------------------+--------------------------------------------+
In addition, the CPU function 'VM' was renamed to 'Application' and 'VM' pages were renamed to 'Application' pages as illustrated below: system host-cpu-list controller-0
+--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+-------------------+
| uuid                                 | log_core | processor | phy_core | thread | processor_model                           | assigned_function |
+--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+-------------------+
| 6a4afaf7-0622-4201-a84c-eba368de0ccb | 0        | 0         | 0        | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | Platform          |
| a9e00da5-f1fd-4c3e-9345-8e0aeba11678 | 1        | 0         | 1        | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | Platform          |
| dea13a70-ebaa-4626-913b-e78b66abbba7 | 2        | 0         | 2        | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | vSwitch           |
| bde8fef8-5a3f-4496-84c6-51c8975bf5b8 | 3        | 0         | 3        | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | vSwitch           |
| 88d57f8d-2ee1-459b-bce6-9a7d52147c5e | 4        | 0         | 4        | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | Shared            |
| d855f3ae-e9d6-4bef-bd36-be9fce790c9f | 5        | 0         | 8        | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | Applications      |
| 840eee9f-6032-4c1d-8c6a-19a64e93135a | 6        | 0         | 9        | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | Applications      |
| 996a4606-dc8c-470e-9209-f2420b2aa209 | 7        | 0         | 10       | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | Applications      |
| 4ad42341-dbbf-4387-9754-7aa86dcffa3a | 8        | 0         | 11       | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | Applications      |
| 3af498ee-0a2a-434a-8903-a82e7cb5bc87 | 9        | 0         | 16       | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | Applications      |
| 90d7f0e1-721d-4a01-99e0-ffe399ad98c0 | 10       | 0         | 17       | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | Applications      |
| ca6d6464-5d68-407e-bc1b-3fe15978b258 | 11       | 0         | 18       | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | Applications      |
| 819e8588-f756-47ad-a964-0a79ea86e251 | 12       | 0         | 19       | 0      | Intel(R) Xeon(R) CPU E5-2699 v3 @ 2.30GHz | Applications      |
+--------------------------------------+----------+-----------+----------+--------+-------------------------------------------+-------------------+
system host-memory-show controller-0 0
+-------------------------------------+--------------------------------------+
| Property                            | Value                                |
+-------------------------------------+--------------------------------------+
| Memory: Usable Total (MiB)          | 15294                                |
| Platform (MiB)                      | 14500                                |
| Available (MiB)                     | 14270                                |
| Huge Pages Configured               | True                                 |
| vSwitch Huge Pages: Size (MiB)      | 1024                                 |
| Total                               | 1                                    |
| Available                           | 0                                    |
| Application Pages (4K): Total       | 365056                               |
| Application Huge Pages (2M): Total  | 6422                                 |
| Available                           | 6422                                 |
| Application Huge Pages (1G): Total  | 0                                    |
| Available                           | 0                                    |
| uuid                                | 42b751a5-efc0-40a1-bfd9-9c6a4b3f968e |
| ihost_uuid                          | 6cbdb22e-95d8-4c52-99b8-633ad5b63b5e |
| inode_uuid                          | a280f909-02c0-458f-aa80-40b4f65b8d20 |
| created_at                          | 2018-12-14T04:51:23.099030+00:00     |
| updated_at                          | 2018-12-14T16:03:39.428400+00:00     |
+-------------------------------------+--------------------------------------+
Tao Liu, Member of Technical Staff, Engineering, Wind River direct 613.963.1413 fax: 613.492.7870 skype: tao_at_home 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Fri Dec 14 23:00:23 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 14 Dec 2018 23:00:23 +0000 Subject: [Starlingx-discuss] Tracking process for the Big OpenStack Refactoring Message-ID: <9A85D2917C58154C960D95352B22818BB1ED44B3@fmsmsx117.amr.corp.intel.com> Bill, Derek and I have been working on tracking for the big patch refactoring effort. As part of that, we've moved the tracking data into the overall Master Patch Tracking Spreadsheet [0] that many of you already have access to. If you do not have access, please send an email to me to get access. We would like all tracking of progress on the OpenStack patch resolution effort to be tracked in this spreadsheet, which includes work being done in the distro.openstack sub-project and the networking sub-project. Thank you! Brucej [0] https://docs.google.com/spreadsheets/d/1nKnkweuxcqvVOoRcpnTYMVUUv1RoAugOWXMjB7VIrfc/edit?usp=sharing -------------- next part -------------- An HTML attachment was scrubbed...
URL: From juan.carlos.alonso at intel.com Fri Dec 14 23:11:07 2018 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Fri, 14 Dec 2018 23:11:07 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181214 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C7CEAA@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-14 (link) Sanity Test is executed in a Virtual Environment Status: YELLOW
Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 18 TCs [PASS]
TOTAL: [ 23 TCs PASS ]
Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]
Multinode Controller Storage
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]
Multinode Dedicated Storage
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]
------------------------------------------------------------------ The issue was fixed and merged today and should not be present in tomorrow's ISO. Today's ISO is still facing the issue; a workaround was used to avoid using the SFTP service, which is still not working. Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From yi.c.wang at intel.com Sat Dec 15 03:21:12 2018 From: yi.c.wang at intel.com (Wang, Yi C) Date: Sat, 15 Dec 2018 03:21:12 +0000 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: <3E8825BD-709B-45BD-8890-717740295A16@windriver.com> References: <3E8825BD-709B-45BD-8890-717740295A16@windriver.com> Message-ID: Hi Matt, Thanks for your answers! Here is one further question about my #3 question. As you said, the operator will have the capability to run the Ansible playbook remotely. For now, the first controller network is configured during config_controller, so to support running the playbook remotely for bootstrap, at which stage will the network be configured? At the installation stage, by anaconda? Thanks. Yi From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, December 14, 2018 10:43 PM To: Wang, Yi C Cc: starlingx-discuss at lists.starlingx.io Subject: Re: Deployment Improvements Proposal See inline. From: "Wang, Yi C" > Date: Friday, December 14, 2018 at 3:53 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: RE: Deployment Improvements Proposal Hi Matt, I just went through your slides, and I have a few questions. I would appreciate it if you could share more information about your proposal. Many thanks! 1. We know config_controller will do many things, like bootstrap configuration and controller configuration together with required hieradata generation. Will all the jobs of config_controller be taken over by Ansible, or just part of them? MP> Yes, most of these tasks will be handled by the Ansible playbook. However, much of the existing capabilities may be leveraged in the implementation to avoid re-writing everything. The details will be outlined in the forthcoming spec. 2. Does WindRiver have a plan to replace Puppet with Ansible for all configuration jobs in the future? MP> There are no specific plans to replace Puppet for all configuration management. However, there are several features being actively developed in StarlingX that will be changing the existing Puppet manifests (e.g. OpenStack Containerization). 3. For the first controller, we still need local execution of the Ansible playbook for initial bootstrap.
Is my understanding correct? MP> This is one of the main drivers for changing some of the existing config_controller and Puppet manifest handling. The operator will have the ability to run the Ansible playbook locally or remotely. BR. Yi From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, December 14, 2018 3:11 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Deployment Improvements Proposal Hello, Attached are the slides I presented during the TSC call on Dec 13, 2018 for the proposed improvements to the StarlingX initial bootstrap and system inventory. As indicated on the call, a detailed stx-spec will follow, but wanted to share the high-level changes being proposed before the arrival of the spec to get some early feedback. Regards, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Sat Dec 15 12:52:02 2018 From: serverascode at gmail.com (Curtis) Date: Sat, 15 Dec 2018 07:52:02 -0500 Subject: [Starlingx-discuss] [Testing] Test Framework In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A91D340@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A8C37D6@fmsmsx101.amr.corp.intel.com> <0A5D9A624DF90343892F8F3FE7DE525A2A91D340@fmsmsx101.amr.corp.intel.com> Message-ID: On Tue, Dec 11, 2018 at 5:48 PM Perez Carranza, Jose < jose.perez.carranza at intel.com> wrote: > Hi > > After some work we developed a framework to cover Deployment + Testing + > Reporting for StarlingX (on a virtual environment for now); please see the > attached file with a high-level overview and please let us know your > comments. > Would it be possible to forward this or another similar message to the OpenStack discuss list and see what the OpenStack community's thoughts are? I don't actually know how OpenStack tests command line applications. Could spark a good conversation. :) Thanks, Curtis > > Regards, > José > > > > -----Original Message----- > > From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] > > Sent: Friday, August 10, 2018 3:10 PM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] [Testing] Test Framework > > > > Hello, > > > > We are currently working on automated tests for StarlingX Deployment; as > > the base of the automation we are using Robot Framework [1]. If any of > you > > have experience or have read about this framework we would like to hear > > your feedback on this approach. > > > > 1- http://robotframework.org/ > > > > Regards, > > José > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Sat Dec 15 13:28:53 2018 From: serverascode at gmail.com (Curtis) Date: Sat, 15 Dec 2018 08:28:53 -0500 Subject: [Starlingx-discuss] distributed cloud deployment In-Reply-To: References: Message-ID: On Wed, Dec 12, 2018 at 8:20 AM Banszel Martin wrote: > Hi all, > > I am interested in the distributed StarlingX deployment. Are there any > guidelines on how to deploy StarlingX in a distributed cloud?
> > I have found the installation guide [0] which seems to support just a > single DC installation – control, storage and compute nodes. > > Is there any support for zero-touch installation of StarlingX on remote > nodes? > Hi Martin, First, with regards to "zero touch", there is some work ongoing around that; see this recent thread [1]. Feel free to chime in with your particular use case or other comments. :) As far as the distributed cloud piece, I'm not quite sure where the project is with that. Perhaps someone else on the list can answer? I will try to check on this for you and update you when I have more details. Thanks, Curtis [1]: http://lists.starlingx.io/pipermail/starlingx-discuss/2018-December/002246.html > > Thank you, > > Best regards, > > Martin > > > > [0] https://docs.starlingx.io/installation_guide/index.html# > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Dec 17 00:28:34 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 17 Dec 2018 00:28:34 +0000 Subject: [Starlingx-discuss] ovs-vswitchd consuming 100% CPU In-Reply-To: <9992DAA7-F48C-4A83-B62A-83887FC015E5@jxresearch.com> References: <9992DAA7-F48C-4A83-B62A-83887FC015E5@jxresearch.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FE5673F@SHSMSX101.ccr.corp.intel.com> Hi Yun, It is by design. " Yes, 100% CPU utilization for DPDK cores. All the drivers are in polling mode, and we bind the thread to certain CPU cores, the core will always polling/working at 100% utilization. " http://mails.dpdk.org/archives/users/2016-August/000812.html Best Regards Shuicheng -----Original Message----- From: 徐蕴 [mailto:xuyun at jxresearch.com] Sent: Thursday, December 13, 2018 6:45 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] ovs-vswitchd consuming 100% CPU Hi, I've managed to deploy an all-in-one node using the ISO downloaded from http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/, thank you for your help. I noticed that ovs-vswitchd is consuming 100% CPU all the time; is it normal for my configuration? My machine has two CPUs, 64G memory and 4 Intel I350 NICs. Br, Xu Yun _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Brent.Rowsell at windriver.com Mon Dec 17 00:47:00 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Mon, 17 Dec 2018 00:47:00 +0000 Subject: [Starlingx-discuss] StarlingX Feature Prioritization for the next release In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB357840@ALA-MBD.corp.ad.wrs.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB347EE9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB357840@ALA-MBD.corp.ad.wrs.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB3587A9@ALA-MBD.corp.ad.wrs.com> Folks, The TSC met last week to prioritize the feature content for the upcoming release. The results have been captured in the following ethercalc: https://ethercalc.openstack.org/tmv05jth0a5y This will be used for the planning meeting coming up on Jan 15-16th. Please direct any questions in the meantime to the mailing list.
Thanks, Brent -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Dec 17 00:55:04 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 17 Dec 2018 00:55:04 +0000 Subject: [Starlingx-discuss] Kernel upgrade status & DPDK need be upgraded Message-ID: <9700A18779F35F49AF027300A49E7C765FE56853@SHSMSX101.ccr.corp.intel.com> Hi all, We have been working on the kernel upgrade task recently [0]. After upgrading the kernel, we found that several modules cannot pass the build, due to data structure/function API changes in the kernel. Here is the list of modules that cannot pass the build with the new kernel: Mlnx-ofa_kernel Intel-i40e Intel-i40evf Tpmdd Intel-ixgbe drbd openvswitch To fix the build failures, I plan to upgrade these packages to newer versions that support CentOS 7.6. This upgrade may require other packages that depend on them to be upgraded as well. Take Mlnx-ofa as an example: it is bound with DPDK. Per [1], MLNX_OFED 4.5-1.0.1.0 supports CentOS 7.6. Per [2], DPDK should be upgraded to 18.11, while our current DPDK is 17.11 and is bound with OVS. And an OVS upgrade may affect Neutron. I need the network team to help decide the upgrade strategy for DPDK/OVS. Thanks. [0]: https://storyboard.openstack.org/#!/story/2004521 [1]: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers [2]: https://doc.dpdk.org/guides-18.11/rel_notes/release_18_11.html Best Regards Shuicheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From forrest.zhao at intel.com Mon Dec 17 02:30:23 2018 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Mon, 17 Dec 2018 02:30:23 +0000 Subject: [Starlingx-discuss] StarlingX Feature Prioritization for the next release In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB3587A9@ALA-MBD.corp.ad.wrs.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB347EE9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB357840@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB3587A9@ALA-MBD.corp.ad.wrs.com> Message-ID: <6345119E91D5C843A93D64F498ACFA13744972F2@SHSMSX101.ccr.corp.intel.com> Hi Brent, I have several open questions and inputs. Do you know where we could collect the use cases of TSN (Time Sensitive Networking)? Are there requirements or gaps to support it in OpenStack upstream? I ask this because we'd like to understand more about the TSN use cases and potential gaps in the context of 5G and edge computing. I propose to add SmartNIC to the "New Functionality" section. This is because SmartNIC is becoming hot and is requested by CSP and Telco customers. The first step is to enable it in the OpenStack upstream projects (e.g. Nova, Neutron), then to integrate it in StarlingX. Does it make sense to you? Thanks, Forrest From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Monday, December 17, 2018 8:47 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Feature Prioritization for the next release Folks, The TSC met last week to prioritize the feature content for the upcoming release. The results have been captured in the following ethercalc: https://ethercalc.openstack.org/tmv05jth0a5y This will be used for the planning meeting coming up on Jan 15-16th. Please direct any questions in the meantime to the mailing list. Thanks, Brent -------------- next part -------------- An HTML attachment was scrubbed...
URL: From xuyun at jxresearch.com Mon Dec 17 02:31:25 2018 From: xuyun at jxresearch.com (徐蕴) Date: Mon, 17 Dec 2018 10:31:25 +0800 Subject: Re: [Starlingx-discuss] ovs-vswitchd consuming 100% CPU In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE5673F@SHSMSX101.ccr.corp.intel.com> References: <9992DAA7-F48C-4A83-B62A-83887FC015E5@jxresearch.com> <9700A18779F35F49AF027300A49E7C765FE5673F@SHSMSX101.ccr.corp.intel.com> Message-ID: <41F745AE-7B7F-46A4-8CE8-7EB9D9B81DC0@jxresearch.com> Hi Shuicheng, Thank you for your kind response. BR, Xu Yun > On Dec 17, 2018, at 8:28 AM, Lin, Shuicheng wrote: > > Hi Yun, > It is by design. > " > Yes, 100% CPU utilization for DPDK cores. All the drivers are in polling mode, and we bind the thread to certain CPU cores, the core will always polling/working at 100% utilization. > " > http://mails.dpdk.org/archives/users/2016-August/000812.html > > > Best Regards > Shuicheng > > > -----Original Message----- > From: 徐蕴 [mailto:xuyun at jxresearch.com] > Sent: Thursday, December 13, 2018 6:45 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] ovs-vswitchd consuming 100% CPU > > Hi, > > I've managed to deploy an all-in-one node using the ISO downloaded from http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/, thank you for your help. > I noticed that ovs-vswitchd is consuming 100% CPU all the time; is it normal for my configuration? My machine has two CPUs, 64G memory and 4 Intel I350 NICs. > > Br, > Xu Yun > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From chenjie.xu at intel.com Mon Dec 17 05:51:18 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Mon, 17 Dec 2018 05:51:18 +0000 Subject: [Starlingx-discuss] RFE Discussion for patch 9f926a5 for StartlingX upstreaming Message-ID: Hi Matt/Allain, For RFE "Add l2pop support for floating IP resources", Miguel left a comment as below: Maybe I am misunderstanding, but I am having difficulty understanding the need for this RFE: 1) In both use cases described above, you state that "When VM-1 pings FloatingIP-2 for the first time, it needs to know the MAC address for FloatingIP-2. Thus ARP request is sent out". Since FloatingIP-2 is not in the same subnet as Port1, wouldn't the VM just send the ICMP echo request directly to its default gateway and let L3 level protocols route the request to the correct destination? 2) The summary of your proposal is "The idea is that advertising the FDBs for floating IP when the FIP status changes to "ACTIVE" and withdraw the FDBs for floating IP whenever the status is set to "DOWN" or the resource is deleted or disassociated." Aren't we mixing L2 and L3 concepts here unnecessarily? The FDB entries in L2pop are meant to optimize the communication at L2. Floating IPs should be handled by L3. Am I missing something? Could you please help review and comment? The link for the RFE is below: https://bugs.launchpad.net/neutron/+bug/1803494 Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed...
URL: From Greg.Waines at windriver.com Mon Dec 17 11:59:40 2018 From: Greg.Waines at windriver.com (Waines, Greg) Date: Mon, 17 Dec 2018 11:59:40 +0000 Subject: Re: [Starlingx-discuss] distributed cloud deployment In-Reply-To: References: Message-ID: <6CDAD434-B0DE-4E46-97A4-9EF2BA9D17A5@windriver.com> The StarlingX Docs Team is in the process of identifying and prioritizing gaps in the docs.starlingx.io doc suite, which it will address for the following StarlingX release. Distributed Cloud is definitely one of the higher priority items. I suspect the Docs Team will agree on a plan for additional StarlingX User Docs soon; Mike Tullis is the prime. Greg. From: Curtis Date: Saturday, December 15, 2018 at 8:29 AM To: "Martin.Banszel at tieto.com" Cc: "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] distributed cloud deployment On Wed, Dec 12, 2018 at 8:20 AM Banszel Martin > wrote: Hi all, I am interested in the distributed StarlingX deployment. Are there any guidelines on how to deploy StarlingX in a distributed cloud? I have found the installation guide [0] which seems to support just a single DC installation – control, storage and compute nodes. Is there any support for zero-touch installation of StarlingX on remote nodes? Hi Martin, First, with regards to "zero touch", there is some work ongoing around that; see this recent thread [1]. Feel free to chime in with your particular use case or other comments. :) As far as the distributed cloud piece, I'm not quite sure where the project is with that. Perhaps someone else on the list can answer? I will try to check on this for you and update you when I have more details. Thanks, Curtis [1]: http://lists.starlingx.io/pipermail/starlingx-discuss/2018-December/002246.html Thank you, Best regards, Martin [0] https://docs.starlingx.io/installation_guide/index.html# _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Allain.Legacy at windriver.com Mon Dec 17 13:12:51 2018 From: Allain.Legacy at windriver.com (Legacy, Allain) Date: Mon, 17 Dec 2018 13:12:51 +0000 Subject: [Starlingx-discuss] RFE Discussion for patch 9f926a5 for StartlingX upstreaming In-Reply-To: References: Message-ID: <70A7408C6E1BFB41B192A929744D8523BAC534EC@ALA-MBD.corp.ad.wrs.com> I have responded to the comment in Launchpad. Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Monday, December 17, 2018 12:51 AM To: Peters, Matt; Legacy, Allain Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] RFE Discussion for patch 9f926a5 for StartlingX upstreaming Hi Matt/Allain, For RFE "Add l2pop support for floating IP resources", Miguel left a comment as below: Maybe I am misunderstanding, but I am having difficulty understanding the need for this RFE: 1) In both use cases described above, you state that "When VM-1 pings FloatingIP-2 for the first time, it needs to know the MAC address for FloatingIP-2. Thus ARP request is sent out".
Since FloatingIP-2 is not in the same subnet as Port1, wouldn't the VM just send the ICMP echo request directly to its default gateway and let L3 level protocols route the request to the correct destination? 2) The summary of your proposal is "The idea is that advertising the FDBs for floating IP when the FIP status changes to "ACTIVE" and withdraw the FDBs for floating IP whenever the status is set to "DOWN" or the resource is deleted or disassociated." Aren't we mixing L2 and L3 concepts here unnecessarily? The FDB entries in L2pop are meant to optimize the communication at L2. Floating IPs should be handled by L3. Am I missing something? Could you please help review and comment? The link for the RFE is below: https://bugs.launchpad.net/neutron/+bug/1803494 Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Mon Dec 17 13:55:44 2018 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 17 Dec 2018 13:55:44 +0000 Subject: [Starlingx-discuss] Meeting Agenda: StarlingX Infrastructure Containerization Message-ID: We will hold our final 2018 meeting on the containerization subproject today at the usual time. One of the topics will be a discussion on the public docker registry and how we will manage docker images. The full agenda is posted here: https://etherpad.openstack.org/p/stx-containerization If anyone would like to add an agenda topic please update the etherpad. Frank -----Original Appointment----- From: Miller, Frank Sent: Thursday, November 29, 2018 4:55 PM To: starlingx-discuss at lists.starlingx.io Subject: StarlingX Infrastructure Containerization When: Occurs every Monday effective 12/3/2018 until 3/25/2019 from 11:00 AM to 11:30 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236 For those contributing to or interested in the Containerization subproject a weekly meeting has been set up: Timeslot: 11am EST / 8am PST / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: From Matt.Peters at windriver.com Mon Dec 17 14:17:24 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Mon, 17 Dec 2018 14:17:24 +0000 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: References: <3E8825BD-709B-45BD-8890-717740295A16@windriver.com> Message-ID: Hello Yi, The initial (temporary) configuration for external access will be via kickstart/DHCP. The remote install will set the default interface configuration to use DHCP, and the current interface configuration that is performed during config_controller will be performed during the host unlock. This will ensure the network connection is not disrupted while the Ansible playbook is executing and ensure the initial host is configured in the same way as other hosts in the cluster.
-Matt From: "Wang, Yi C" Date: Friday, December 14, 2018 at 10:21 PM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io" Subject: RE: Deployment Improvements Proposal Hi Matt, Thanks for your answers! Here is one further question about my #3 question. As you said, the operator will have the capability to run the Ansible playbook remotely. For now, the first controller network is configured during config_controller, so to support running the playbook remotely for bootstrap, at which stage will the network be configured? At the installation stage, by anaconda? Thanks. Yi From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, December 14, 2018 10:43 PM To: Wang, Yi C Cc: starlingx-discuss at lists.starlingx.io Subject: Re: Deployment Improvements Proposal See inline. From: "Wang, Yi C" > Date: Friday, December 14, 2018 at 3:53 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: RE: Deployment Improvements Proposal Hi Matt, I just went through your slides, and I have a few questions. I would appreciate it if you could share more information about your proposal. Many thanks! 1. We know config_controller will do many things, like bootstrap configuration and controller configuration together with required hieradata generation. Will all the jobs of config_controller be taken over by Ansible, or just part of them? MP> Yes, most of these tasks will be handled by the Ansible playbook. However, much of the existing capabilities may be leveraged in the implementation to avoid re-writing everything. The details will be outlined in the forthcoming spec. 2. Does WindRiver have a plan to replace Puppet with Ansible for all configuration jobs in the future? MP> There are no specific plans to replace Puppet for all configuration management. However, there are several features being actively developed in StarlingX that will be changing the existing Puppet manifests (e.g. OpenStack Containerization). 3. For the first controller, we still need local execution of the Ansible playbook for initial bootstrap. Is my understanding correct? MP> This is one of the main drivers for changing some of the existing config_controller and Puppet manifest handling. The operator will have the ability to run the Ansible playbook locally or remotely. BR. Yi From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, December 14, 2018 3:11 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Deployment Improvements Proposal Hello, Attached are the slides I presented during the TSC call on Dec 13, 2018 for the proposed improvements to the StarlingX initial bootstrap and system inventory. As indicated on the call, a detailed stx-spec will follow, but wanted to share the high-level changes being proposed before the arrival of the spec to get some early feedback. Regards, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From kailun.qin at intel.com Mon Dec 17 14:28:55 2018 From: kailun.qin at intel.com (Qin, Kailun) Date: Mon, 17 Dec 2018 14:28:55 +0000 Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming In-Reply-To: References: <70A7408C6E1BFB41B192A929744D8523BAC380CE@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Allain, Matt, I followed up this RFE (https://bugs.launchpad.net/neutron/+bug/1795212) and received one piece of feedback from a Neutron core reviewer. He thinks that the delay isn't a good approach because how much time the agent will need to do a full sync after restart is unknown.
He prefers something based on timestamps and discarding messages that came before the agent was started. I believe you've also considered the timestamp/sequence/lifetime number based approach so that stale messages can be discarded w/ more certainty. What's your opinion? Should we keep w/ the delay approach of the DHCP agent and discuss further in the driver meeting to get more feedback, in the spirit of avoiding compatibility changes and changes that would impact running against an unmodified server? Or should we change our investigation direction to the timestamp-based solution? Thanks! BR, Kailun From: Qin, Kailun Sent: Wednesday, November 21, 2018 11:01 AM To: Legacy, Allain ; Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Questions about patch fd6cfc upstreaming Allain, Great thanks for the information! The scenario is reasonable and detailed enough for me. Let's feed this back along with some other follow-up answers to the Neutron team and see how it goes. BR, Kailun From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Wednesday, November 21, 2018 2:13 AM To: Qin, Kailun >; Peters, Matt > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Questions about patch fd6cfc upstreaming We only observed this type of issue in a large office configuration where the neutron-server is overloaded during a DOR test (dead office recovery) where all nodes are powered off and back on. In such a scenario the system is overloaded for an extended period and there is a long delay between when events occur and when notifications are received by subscribers. It is difficult to reproduce this on small systems where the time between event and notification is short. I don't remember the exact details of the entire scenario, but the high level issue was that we wanted to avoid agents receiving and processing RPC messages that were sent to them before they started up. That happens more frequently in a DOR test because the server has a stale view of the system state and can send RPC messages to nodes that are not enabled yet. That is, its agent DB table may show that all agents are healthy depending on how long it took for the DOR to recover the controller node. What we found was that it was possible for the server to think that the agent was up when it was actually down. During the window where the server sees the agent as up it can send it RPC messages. Those messages get queued up and delivered to the agent once it is finally up. The problem is since the agent was not actually up in the first place those messages were never really valid. Therefore we wanted the agent to discard any RPC requests until after it was able to resync to the server. This allowed the system to avoid unnecessary transitions based on old data. One of the specific problems that this was addressing was something like this: 1. A subnet had no remaining IP addresses to allocate 2. A DHCP agent (agent-X) received a stale message to "create network" so it reserved a DHCP port with an IP address (this used the last available IP address) 3. Meanwhile, the DHCP agent (agent-Y) that actually was assigned the network came up and was not able to reserve a DHCP port because there were no IP addresses available 4. The first agent (agent-X) was taken down because its node was rebooted by system maintenance 5.
The second agent (agent-Y) never retries the DHCP port creation because the DHCP agent has no periodic audit, so there was no DHCP server servicing the network Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Tuesday, November 20, 2018 2:02 AM To: Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming Hi Matt, I'm working on the patch fd6cfc upstreaming, which tries to address the stale RPC message issue when the DHCP agent is restarting. The patch was in good shape https://review.openstack.org/609463/ whereas the neutron community was questioning the exact failure modes of this issue. The DHCP agent will have a full sync after the agent restarts (https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L195), so what kind of corner cases and negative behaviors could happen even w/ this full sync? Based on the commit message, I tried to reproduce this issue w/ the following steps: 1. schedule network1 to agent1. 2. turn down agent1 at almost the same time. 3. network1 is rescheduled to agent2 after finding that agent1 is dead. 4. turn up agent1, expecting stale RPC messages to be received by agent1 so that both agent1 and agent2 are servicing network1. However, I can only reproduce the described failure mode by sending another scheduling operation (network1->agent1) after step 2) is done. For the others, they seem to work as expected. Would you please kindly help provide more details about the failure pattern of this issue and/or the reproduction steps? Thanks a lot. BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: From Allain.Legacy at windriver.com Mon Dec 17 15:11:24 2018 From: Allain.Legacy at windriver.com (Legacy, Allain) Date: Mon, 17 Dec 2018 15:11:24 +0000 Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming In-Reply-To: References: <70A7408C6E1BFB41B192A929744D8523BAC380CE@ALA-MBD.corp.ad.wrs.com> Message-ID: <70A7408C6E1BFB41B192A929744D8523BAC5363A@ALA-MBD.corp.ad.wrs.com> In my opinion, it does not matter how long the full sync takes. Processing any RPC messages, even ones that are not stale, before the initial full sync completes is not guaranteed to provide consistent results. For example, if a port-update-end arrives before that port is received as part of the initial sync it will unnecessarily result in a full resync on that port's network. Similarly, if a port-delete-end arrives before that port is received as part of the initial sync then it will be added to the "deleted_ports" list but that list is not referenced during the full sync so the information for that port will remain in the DHCP configuration for that network even though the port no longer exists. That will cause issues later when a new port is created and uses the IP address of that deleted port. If the core reviewers prefer using timestamps embedded within the RPC payload then we can explore that option, but that will come with backward compatibility constraints and additional complexity.
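To make that tradeoff concrete, here is a minimal sketch of the kind of timestamp guard being discussed. The payload field name ('timestamp') and the payload shape are illustrative assumptions, not part of today's neutron RPC contract, which is exactly where the backward compatibility concern comes from:

import time

# Sketch only: assumes the server stamps each RPC payload with a
# 'timestamp' field. An unmodified (older) server sends no such field,
# so those messages cannot be filtered this way.

class StaleMessageGuard(object):
    def __init__(self):
        # Recorded once, when the agent process starts.
        self.started_at = time.time()

    def is_stale(self, payload):
        sent_at = payload.get('timestamp')
        if sent_at is None:
            # No timestamp to compare against: cannot filter.
            return False
        return sent_at < self.started_at

guard = StaleMessageGuard()

def network_create_end(payload):
    # Drop notifications that were queued up before the agent started.
    if guard.is_stale(payload):
        return
    # ... normal handling of the notification would follow here ...

Note that this only helps against messages sent before the agent started; it does not, by itself, address notifications that race with the initial full sync, which is the other failure mode described in this thread.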
Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Monday, December 17, 2018 9:29 AM To: Legacy, Allain; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming Hi Allain, Matt, I followed up this RFE (https://bugs.launchpad.net/neutron/+bug/1795212) and received one piece of feedback from a Neutron core reviewer. He thinks that the delay isn't a good approach because how much time the agent will need to do a full sync after restart is unknown. He prefers something based on timestamps and discarding messages that came before the agent was started. I believe you've also considered the timestamp/sequence/lifetime number based approach so that stale messages can be discarded w/ more certainty. What's your opinion? Should we keep w/ the delay approach of the DHCP agent and discuss further in the driver meeting to get more feedback, in the spirit of avoiding compatibility changes and changes that would impact running against an unmodified server? Or should we change our investigation direction to the timestamp-based solution? Thanks! BR, Kailun From: Qin, Kailun Sent: Wednesday, November 21, 2018 11:01 AM To: Legacy, Allain >; Peters, Matt > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Questions about patch fd6cfc upstreaming Allain, Great thanks for the information! The scenario is reasonable and detailed enough for me. Let's feed this back along with some other follow-up answers to the Neutron team and see how it goes. BR, Kailun From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Wednesday, November 21, 2018 2:13 AM To: Qin, Kailun >; Peters, Matt > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Questions about patch fd6cfc upstreaming We only observed this type of issue in a large office configuration where the neutron-server is overloaded during a DOR test (dead office recovery) where all nodes are powered off and back on. In such a scenario the system is overloaded for an extended period and there is a long delay between when events occur and when notifications are received by subscribers. It is difficult to reproduce this on small systems where the time between event and notification is short. I don't remember the exact details of the entire scenario, but the high level issue was that we wanted to avoid agents receiving and processing RPC messages that were sent to them before they started up. That happens more frequently in a DOR test because the server has a stale view of the system state and can send RPC messages to nodes that are not enabled yet. That is, its agent DB table may show that all agents are healthy depending on how long it took for the DOR to recover the controller node. What we found was that it was possible for the server to think that the agent was up when it was actually down. During the window where the server sees the agent as up it can send it RPC messages. Those messages get queued up and delivered to the agent once it is finally up. The problem is since the agent was not actually up in the first place those messages were never really valid. Therefore we wanted the agent to discard any RPC requests until after it was able to resync to the server. This allowed the system to avoid unnecessary transitions based on old data. One of the specific problems that this was addressing was something like this: 1.
A subnet had no remaining IP addresses to allocate 2. A DHCP agent (agent-X) received a stale message to "create network" so it reserved a DHCP port with an IP address (this used the last available IP address) 3. Meanwhile, the DHCP agent (agent-Y) that actually was assigned the network came up and was not able to reserve a DHCP port because there were no IP addresses available 4. The first agent (agent-X) was taken down because its node was rebooted by system maintenance 5. The second agent (agent-Y) never retries the DHCP port creation because the DHCP agent has no periodic audit, so there was no DHCP server servicing the network Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Tuesday, November 20, 2018 2:02 AM To: Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming Hi Matt, I'm working on the patch fd6cfc upstreaming, which tries to address the stale RPC message issue when the DHCP agent is restarting. The patch was in good shape https://review.openstack.org/609463/ whereas the neutron community was questioning the exact failure modes of this issue. The DHCP agent will have a full sync after the agent restarts (https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L195), so what kind of corner cases and negative behaviors could happen even w/ this full sync? Based on the commit message, I tried to reproduce this issue w/ the following steps: 1. schedule network1 to agent1. 2. turn down agent1 at almost the same time. 3. network1 is rescheduled to agent2 after finding that agent1 is dead. 4. turn up agent1, expecting stale RPC messages to be received by agent1 so that both agent1 and agent2 are servicing network1. However, I can only reproduce the described failure mode by sending another scheduling operation (network1->agent1) after step 2) is done. For the others, they seem to work as expected. Would you please kindly help provide more details about the failure pattern of this issue and/or the reproduction steps? Thanks a lot. BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: From Ken.Young at windriver.com Mon Dec 17 15:19:22 2018 From: Ken.Young at windriver.com (Young, Ken) Date: Mon, 17 Dec 2018 15:19:22 +0000 Subject: [Starlingx-discuss] Banned C-Functions Message-ID: <6AF88FC9-05EF-4878-834C-2A584C66BCEB@windriver.com> All, As was discussed on the community call, the StarlingX security team has been working on a banned C-function policy to help avoid the introduction of security vulnerabilities. Up to now, this policy has been a draft. We have resolved all outstanding issues with the policy and we are currently looking for community feedback on the policy before asking the cores to enact it. It can be found here: https://wiki.openstack.org/wiki/StarlingX/Security/Banned_C_Functions The goal is to gather and resolve any community issues by January 9th. These can be discussed either on the mailing list or in the community meetings on Wednesday, 10 AM EST. After this point, the ask would be for the cores to ensure that no new instances of banned functions are added to the code.
Note that a clean-up of the current instances is not currently planned. Once the policy has been in place and is well understood by the community, a low priority hardening project will be launched to remove the current instances of these functions. This has been added to the security backlog. A big thank you to Cindy Xie for pulling this together for us. Regards, Ken Y -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ken.Young at windriver.com Mon Dec 17 15:35:41 2018 From: Ken.Young at windriver.com (Young, Ken) Date: Mon, 17 Dec 2018 15:35:41 +0000 Subject: [Starlingx-discuss] Public Static Analysis Message-ID: Cesar, The security team is formally asking the build team to plan and continue the efforts to enable public static analysis of the StarlingX flock. We understand the ask here may be non-trivial so we are looking for a phased rollout. Can we: * Get a prototype operational based upon the work already completed by the build team? * Get a readout of the effort to complete the rollout of public static analysis? * Get some proposals on how we continue the effort? I would like to have this be a topic of discussion in Thursday's build meeting please. Regards, Ken Y -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Mon Dec 17 10:34:45 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Mon, 17 Dec 2018 10:34:45 +0000 Subject: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming In-Reply-To: <70A7408C6E1BFB41B192A929744D8523BAC5211C@ALA-MBD.corp.ad.wrs.com> References: <51F8F06E-D06E-4DDA-AABF-D69B622EFD56@windriver.com> <331FE402-1858-451D-8506-92E3E1033612@windriver.com> <70A7408C6E1BFB41B192A929744D8523BAC4D60F@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC5211C@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Allain, Thank you for your advice! By following your suggestion, the FDB for the floating IP can be installed on the compute node. However, there is a problem with the use case for RFE "Add l2pop support for floating IP resources". The use case can be summarized as two VM instances, residing on different networks, communicating via their respective FIP addresses. The flows for the floating IP on the compute node: (inline image scrubbed: OVS flow rules for the floating IP) To use the flows for the floating IP, the VM on the compute node needs to have the same dl_vlan (local VLAN ID), which is 1. According to your previous email, the VM must be launched on the external network. Thus for the use case, the two VMs reside on different external networks where the floating IPs for the two VMs are. Thus the other VM's dl_vlan won't be 1 and the flows for the floating IP can't be matched. Based on the above analysis, the two VM instances, residing on different networks, can't communicate via their respective FIP addresses. Could you please help review and comment? Best Regards, Xu, Chenjie From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Friday, December 14, 2018 8:55 PM To: Xu, Chenjie Cc: Peters, Matt ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming An FDB entry for a Floating IP resource would only be required on nodes with ports attached to the related external network. Normally that means only the network nodes that are hosting virtual routers attached to those external networks rather than compute nodes.
However, there are less frequently used scenarios that involve customers launching VM instances directly on external networks, and in those scenarios the external network will be instantiated on some compute nodes as well as the network nodes. In your test scenario, the external network will only be present on the network node and therefore only the agent running on the network node will be capable of processing any related FDB entries. Regards, Allain
Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Friday, December 14, 2018 3:24 AM To: Legacy, Allain Cc: Peters, Matt; starlingx-discuss at lists.starlingx.io Subject: FW: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Allain, As you suggested, I'm testing the 2 changes together. However, I find that the FDB for the floating IP can't be installed on the br-tun of the compute node, because the network-id in the FDB is not in the LocalVlanManager. The details are below; could you please help review and comment?
The environment: latest DevStack with 1 controller node and 1 compute node. The network topology is below: [inline image scrubbed]
Steps:
1. Create an external network external-net:
   neutron net-create external-net --router:external True --provider:network_type vxlan
2. Create a subnet on external-net:
   neutron subnet-create external-net 192.168.25.0/24 --name external-subnet --allocation-pool start=192.168.25.200,end=192.168.25.250
3. Create an internal network net4, and create a subnet on net4:
   neutron net-create net4
   neutron subnet-create net4 192.168.2.0/24 --name subnet4
4. Create a router, set the router gateway to external-net, and add subnet4 to the router:
   neutron router-create router
   neutron router-gateway-set $router-id $external-net-id
   neutron router-interface-add $router-id $subnet4-id
5. Create a VM vm-1 on net4 (vm-1 runs on the compute node)
6. Allocate floating IP FIP-1 on external-net through horizon
7. Associate FIP-1 with vm-1 through horizon
The FDB is below: [inline image scrubbed]
The OVS agent on the compute node receives the FDB: [inline image scrubbed]
In the OVS agent, the LocalVlanManager is used to map tunnel ids or vlan ids to internal vlans: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/vlanmanager.py
After the OVS agent receives the FDB, it will try to get the LocalVlanMapping from the LocalVlanManager: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L558 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L237
However, the network-id f2ebf82a-e788-4456-8516-c95b12f91d49 is not in the LocalVlanManager, so the FDB can't be installed on br-tun. The mapping in the LocalVlanManager in the OVS agent on the compute node is below: [inline image scrubbed]
Analysis: vm-1 is created on the internal network net4. When creating vm-1, a port in net4 is bound to vm-1, so that network-id is added to the LocalVlanManager. The network id used in the FDB for the floating IP, however, belongs to an external network, and this external network's network id is not in the LocalVlanManager in the OVS agent on the compute node.
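To make the failure mode concrete, here is a minimal, self-contained sketch of the lookup that fails. The class below is a simplified stand-in for neutron's LocalVlanManager, written just for illustration (names and behavior are reduced to the essentials and are not the real neutron code):

    # Simplified stand-in for neutron's LocalVlanManager (illustration only).
    class MappingNotFound(Exception):
        pass

    class LocalVlanManagerSketch(object):
        def __init__(self):
            self.mapping = {}  # network-id -> local VLAN id

        def add(self, net_id, vlan):
            self.mapping[net_id] = vlan

        def get(self, net_id):
            if net_id not in self.mapping:
                raise MappingNotFound()
            return self.mapping[net_id]

    vlan_manager = LocalVlanManagerSketch()
    # net4 is realized locally because vm-1 has a port on it; local VLAN 1 assumed.
    vlan_manager.add('net4-id', 1)

    try:
        # network id taken from the FDB entry for the floating IP above
        vlan_manager.get('f2ebf82a-e788-4456-8516-c95b12f91d49')
    except MappingNotFound:
        # This is the branch hit on the compute node: the external network has
        # no local VLAN mapping, so the FDB entry cannot be programmed.
        print('external network not realized on this node; FDB entry is dropped')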
Best Regards, Xu, Chenjie
From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Friday, December 7, 2018 10:20 PM To: Xu, Chenjie; Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
The change that is being reviewed here was originally a part of a larger commit (9f926a5d253). They should be implemented together or at least tested together. I seem to remember that there was information missing in case 1 that prevented a proper FDB notification from being generated. Please retest your scenarios and capture the input parameters to add_fdb_entries(), remove_fdb_entries(), and update_fdb_entries() in neutron/plugins/ml2/drivers/l2pop/rpc.py:L2populationAgentNotifyAPI to be sure that expected notifications are published. Regards, Allain
Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Friday, December 07, 2018 3:42 AM To: Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Matt, Ryan Tidwell commented on this patch, and he thinks that the AFTER_DELETE notification can be used to trigger l2pop. https://review.openstack.org/#/c/611261/ https://review.openstack.org/#/c/611261/4/neutron/db/l3_db.py
From the comment in the following line: https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/services/l3_router/service_providers/l2pop.py#L276 it seems that the router_id and port_id in the AFTER_DELETE notification are None. As a result, the last_known_router_id and last_fixed_port_id should be used to construct the FDB entries which are used to remove FDBs on each host. However, I printed the notification in the following 2 cases:
Case-1:
1) Allocate floating ip fip-1
2) Associate fip-1 with vm-1
3) Delete fip-1
Case-2:
1) Allocate floating ip fip-1
2) Associate fip-1 with vm-1
3) Disassociate fip-1 from vm-1
4) Delete fip-1
The notifications for case-1 and case-2 are attached. router_id and port_id are not None in case-1 and are None in case-2. Thus in case-1 the AFTER_DELETE notification can be used, and in case-2 the FDB will already have been removed by step 3, so there is no need to remove it again. Based on the above analysis, I think we can use the AFTER_DELETE notification. Could you please comment and review? Best Regards, Xu, Chenjie
From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Monday, November 12, 2018 11:19 PM To: Xu, Chenjie Cc: starlingx-discuss at lists.starlingx.io; Legacy, Allain Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Chenjie, The latest RFE looks good to me. Regards, Matt
From: "Xu, Chenjie" Date: Monday, November 12, 2018 at 1:23 AM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io", Allain Legacy Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Matt, The RFE has been updated and is attached. Could you please help review and comment? Best Regards, Xu, Chenjie
From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, November 9, 2018 9:22 PM To: Xu, Chenjie; Legacy, Allain Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Chenjie, The RFE looks good. The use cases are clear and detailed.
I only have a few minor review comments (see attached). Regards, Matt
From: "Xu, Chenjie" Date: Thursday, November 1, 2018 at 4:28 AM To: "Peters, Matt", Allain Legacy Cc: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Matt/Allain, We analyzed patch 9f926a5, which is related to l2pop. An RFE "Add l2pop support for floating ip resources" has been written and is attached. The test case was provided by Allain. Could you please help to review and comment? Thanks very much! Best Regards, Xu, Chenjie
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 5143 bytes Desc: image002.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image003.jpg Type: image/jpeg Size: 11643 bytes Desc: image003.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image004.jpg Type: image/jpeg Size: 11184 bytes Desc: image004.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image005.jpg Type: image/jpeg Size: 10148 bytes Desc: image005.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image006.jpg Type: image/jpeg Size: 54933 bytes Desc: image006.jpg URL:
From Allain.Legacy at windriver.com Mon Dec 17 14:32:49 2018 From: Allain.Legacy at windriver.com (Legacy, Allain) Date: Mon, 17 Dec 2018 14:32:49 +0000 Subject: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming In-Reply-To: References: <51F8F06E-D06E-4DDA-AABF-D69B622EFD56@windriver.com> <331FE402-1858-451D-8506-92E3E1033612@windriver.com> <70A7408C6E1BFB41B192A929744D8523BAC4D60F@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC5211C@ALA-MBD.corp.ad.wrs.com> Message-ID: <70A7408C6E1BFB41B192A929744D8523BAC535EE@ALA-MBD.corp.ad.wrs.com>
Sorry, I think I confused things by mentioning the scenario of launching VM instances directly on an external network. The original scenario that was being discussed was two different VM instances, on two different tenant networks, communicating via their respective FIP addresses over the same external tenant network. In that scenario, it is the respective virtual routers of both VM instances that will be sending out ARP requests for the destination FIP addresses. It is those broadcast ARP requests that need to be eliminated by using updated L2POP FDB information. Therefore, the FDB updates need to be generated properly at the neutron-server and received by the neutron-layer2-agent on the node which is hosting the virtual routers; not necessarily the nodes hosting the VM instances. In the following diagram, that means that the FDBs on network nodes 1 and 2 need to be updated with information for FIP1 and FIP2. The FDBs on compute nodes 1 and 2 do not need to be updated because there are no ports directly on the external network on those nodes.
                             FIP1          FIP2
                               +             +
                               |             |
+-------+             +------+ |             | +------+                 +-------+
|       |             |      | |             | |      |                 |       |
|  VM1  +---network1--+  R1  +-+-ext-network-+-+  R2  +----network2-----+  VM2  |
|       |             |      |                 |      |                 |       |
+---+---+             +---+--+                 +---+--+                 +---+---+
    |                     |                        |                        |
    |                     |                        |                        |
    v                     v                        v                        v
 compute               network                  network                  compute
 node 1                node 1                   node 2                   node 2
The other scenario that I mentioned, which was to launch VM instances directly on the external network, was only to illustrate that it is possible for a VM to communicate directly with another server's FIP rather than having to go through a virtual router. In this type of scenario, the FDB of the node hosting the VM instance that resides directly on the external network needs to be updated whenever the FIP changes state. In the following diagram, that means that the FDB on compute node 2 needs to be updated with information for FIP1, and the FDB on network node 1 needs to be updated with information for VM2's port on the external network. The FDB on compute node 1 does not need to be updated for VM2's port information because there are no ports on compute node 1 that are directly attached to the external network.
                             FIP1
                               +
                               |
+-------+             +------+ |               +-------+
|       |             |      | |               |       |
|  VM1  +---network1--+  R1  +-+-ext-network---+  VM2  |
|       |             |      |                 |       |
+---+---+             +---+--+                 +---+---+
    |                     |                        |
    |                     |                        |
    v                     v                        v
 compute               network                  compute
 node 1                node 1                   node 2
Allain
Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Monday, December 17, 2018 5:35 AM To: Legacy, Allain Cc: Peters, Matt; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Allain, Thank you for your advice! By following your suggestion, the FDB for the floating IP can be installed on the compute node. However, there is a problem with the use case for the RFE "Add l2pop support for floating IP resources". The use case can be summarized as two VM instances, residing on different networks, communicating via their respective FIP addresses. The flows for the floating IP on the compute node: [inline image scrubbed] To use the flows for the floating IP, the VM on the compute node needs to have the same dl_vlan (local VLAN ID), which is 1 here. According to your previous email, the VM must be launched on the external network. Thus, for the use case, the two VMs reside on different external networks, where the floating IPs for the two VMs are. Thus the other VM's dl_vlan won't be 1 and the flows for the floating IP can't be matched. Based on the above analysis, the two VM instances, residing on different networks, can't communicate via their respective FIP addresses. Could you please help review and comment? Best Regards, Xu, Chenjie
From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Friday, December 14, 2018 8:55 PM To: Xu, Chenjie Cc: Peters, Matt; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
An FDB entry for a Floating IP resource would only be required on nodes with ports attached to the related external network. Normally that means only the network nodes that are hosting virtual routers attached to those external networks rather than compute nodes.
However, there are less frequently used scenarios that involve customers launching VM instances directly on external networks, and in those scenarios the external network will be instantiated on some compute nodes as well as the network nodes. In your test scenario, the external network will only be present on the network node and therefore only the agent running on the network node will be capable of processing any related FDB entries. Regards, Allain
Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Friday, December 14, 2018 3:24 AM To: Legacy, Allain Cc: Peters, Matt; starlingx-discuss at lists.starlingx.io Subject: FW: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Allain, As you suggested, I'm testing the 2 changes together. However, I find that the FDB for the floating IP can't be installed on the br-tun of the compute node, because the network-id in the FDB is not in the LocalVlanManager. The details are below; could you please help review and comment?
The environment: latest DevStack with 1 controller node and 1 compute node. The network topology is below: [inline image scrubbed]
Steps:
1. Create an external network external-net:
   neutron net-create external-net --router:external True --provider:network_type vxlan
2. Create a subnet on external-net:
   neutron subnet-create external-net 192.168.25.0/24 --name external-subnet --allocation-pool start=192.168.25.200,end=192.168.25.250
3. Create an internal network net4, and create a subnet on net4:
   neutron net-create net4
   neutron subnet-create net4 192.168.2.0/24 --name subnet4
4. Create a router, set the router gateway to external-net, and add subnet4 to the router:
   neutron router-create router
   neutron router-gateway-set $router-id $external-net-id
   neutron router-interface-add $router-id $subnet4-id
5. Create a VM vm-1 on net4 (vm-1 runs on the compute node)
6. Allocate floating IP FIP-1 on external-net through horizon
7. Associate FIP-1 with vm-1 through horizon
The FDB is below: [inline image scrubbed]
The OVS agent on the compute node receives the FDB: [inline image scrubbed]
In the OVS agent, the LocalVlanManager is used to map tunnel ids or vlan ids to internal vlans: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/vlanmanager.py
After the OVS agent receives the FDB, it will try to get the LocalVlanMapping from the LocalVlanManager: https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/openvswitch/agent/ovs_neutron_agent.py#L558 https://github.com/openstack/neutron/blob/master/neutron/plugins/ml2/drivers/l2pop/rpc_manager/l2population_rpc.py#L237
However, the network-id f2ebf82a-e788-4456-8516-c95b12f91d49 is not in the LocalVlanManager, so the FDB can't be installed on br-tun. The mapping in the LocalVlanManager in the OVS agent on the compute node is below: [inline image scrubbed]
Analysis: vm-1 is created on the internal network net4. When creating vm-1, a port in net4 is bound to vm-1, so that network-id is added to the LocalVlanManager. The network id used in the FDB for the floating IP, however, belongs to an external network, and this external network's network id is not in the LocalVlanManager in the OVS agent on the compute node.
Best Regards, Xu, Chenjie
From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Friday, December 7, 2018 10:20 PM To: Xu, Chenjie; Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
The change that is being reviewed here was originally a part of a larger commit (9f926a5d253). They should be implemented together or at least tested together. I seem to remember that there was information missing in case 1 that prevented a proper FDB notification from being generated. Please retest your scenarios and capture the input parameters to add_fdb_entries(), remove_fdb_entries(), and update_fdb_entries() in neutron/plugins/ml2/drivers/l2pop/rpc.py:L2populationAgentNotifyAPI to be sure that expected notifications are published. Regards, Allain
Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Friday, December 07, 2018 3:42 AM To: Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Matt, Ryan Tidwell commented on this patch, and he thinks that the AFTER_DELETE notification can be used to trigger l2pop. https://review.openstack.org/#/c/611261/ https://review.openstack.org/#/c/611261/4/neutron/db/l3_db.py
From the comment in the following line: https://github.com/starlingx-staging/stx-neutron/blob/master/neutron/services/l3_router/service_providers/l2pop.py#L276 it seems that the router_id and port_id in the AFTER_DELETE notification are None. As a result, the last_known_router_id and last_fixed_port_id should be used to construct the FDB entries which are used to remove FDBs on each host. However, I printed the notification in the following 2 cases:
Case-1:
1) Allocate floating ip fip-1
2) Associate fip-1 with vm-1
3) Delete fip-1
Case-2:
1) Allocate floating ip fip-1
2) Associate fip-1 with vm-1
3) Disassociate fip-1 from vm-1
4) Delete fip-1
The notifications for case-1 and case-2 are attached. router_id and port_id are not None in case-1 and are None in case-2. Thus in case-1 the AFTER_DELETE notification can be used, and in case-2 the FDB will already have been removed by step 3, so there is no need to remove it again. Based on the above analysis, I think we can use the AFTER_DELETE notification. Could you please comment and review? Best Regards, Xu, Chenjie
From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Monday, November 12, 2018 11:19 PM To: Xu, Chenjie Cc: starlingx-discuss at lists.starlingx.io; Legacy, Allain Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Chenjie, The latest RFE looks good to me. Regards, Matt
From: "Xu, Chenjie" Date: Monday, November 12, 2018 at 1:23 AM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io", Allain Legacy Subject: RE: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Matt, The RFE has been updated and is attached. Could you please help review and comment? Best Regards, Xu, Chenjie
From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, November 9, 2018 9:22 PM To: Xu, Chenjie; Legacy, Allain Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Chenjie, The RFE looks good. The use cases are clear and detailed.
I only have a few minor review comments (see attached). Regards, Matt
From: "Xu, Chenjie" Date: Thursday, November 1, 2018 at 4:28 AM To: "Peters, Matt", Allain Legacy Cc: "starlingx-discuss at lists.starlingx.io" Subject: [Starlingx-discuss] Analysis of patch 9f926a5 for StartlingX upstreaming
Hi Matt/Allain, We analyzed patch 9f926a5, which is related to l2pop. An RFE "Add l2pop support for floating ip resources" has been written and is attached. The test case was provided by Allain. Could you please help to review and comment? Thanks very much! Best Regards, Xu, Chenjie
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image008.jpg Type: image/jpeg Size: 5143 bytes Desc: image008.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image009.jpg Type: image/jpeg Size: 11643 bytes Desc: image009.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image010.jpg Type: image/jpeg Size: 11184 bytes Desc: image010.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image011.jpg Type: image/jpeg Size: 10148 bytes Desc: image011.jpg URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.jpg Type: image/jpeg Size: 36643 bytes Desc: image002.jpg URL:
From Ghada.Khalil at windriver.com Mon Dec 17 16:37:58 2018 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Mon, 17 Dec 2018 16:37:58 +0000 Subject: [Starlingx-discuss] Kernel upgrade status & DPDK need be upgraded In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE56853@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FE56853@SHSMSX101.ccr.corp.intel.com> Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4971AD@ALA-MBD.corp.ad.wrs.com>
Hi Shuicheng, You are correct. The Mellanox drivers are tied to DPDK as well as the kernel. At a high level, I see no option but to upgrade DPDK/OVS to 18.11 to align with the newer kernel and Mellanox drivers. Is there a version available for ovs/ovs-dpdk that supports 18.11 yet? If not, is there information on when one would be available? I added this as an agenda item in the next networking team meeting on Dec 20 at 9:15am Eastern Time. https://etherpad.openstack.org/p/stx-networking We will discuss this in more detail then. Feel free to join us. Zoom details are on the wiki: https://wiki.openstack.org/wiki/Starlingx/Meetings#0615am_PDT_.2F_1415_UTC_-_Networking_Team_Call_.28Bi-weekly.29 Regards, Ghada
--------------
From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Sunday, December 16, 2018 7:55 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Kernel upgrade status & DPDK need be upgraded
Hi all, We are working on the kernel upgrade task [0]. After upgrading the kernel, we found several modules that cannot pass the build, due to data structure/function API changes in the kernel. Here is the list of modules that cannot be built against the new kernel:
Mlnx-ofa_kernel
Intel-i40e
Intel-i40evf
Tpmdd
Intel-ixgbe
drbd
openvswitch
To fix the build failures, I plan to upgrade these packages to newer versions that support CentOS 7.6. This upgrade may require other packages that depend on them to be upgraded as well.
Take Mlnx-ofa as an example: it is bound to DPDK. Per [1], MLNX_OFED 4.5-1.0.1.0 supports CentOS 7.6. Per [2], DPDK should be upgraded to 18.11, while our current DPDK is 17.11 and is bound to OVS. An OVS upgrade may in turn affect Neutron. I need the network team to help decide the upgrade strategy for DPDK/OVS. Thanks.
[0]: https://storyboard.openstack.org/#!/story/2004521
[1]: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
[2]: https://doc.dpdk.org/guides-18.11/rel_notes/release_18_11.html
Best Regards Shuicheng
From Brent.Rowsell at windriver.com Mon Dec 17 16:45:02 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Mon, 17 Dec 2018 16:45:02 +0000 Subject: [Starlingx-discuss] StarlingX Feature Prioritization for the next release In-Reply-To: <6345119E91D5C843A93D64F498ACFA13744972F2@SHSMSX101.ccr.corp.intel.com> References: <2588653EBDFFA34B982FAF00F1B4844EBB347EE9@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB357840@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB3587A9@ALA-MBD.corp.ad.wrs.com> <6345119E91D5C843A93D64F498ACFA13744972F2@SHSMSX101.ccr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB35A1B3@ALA-MBD.corp.ad.wrs.com>
Hi Forrest, Comments inline. Thanks, Brent
From: Zhao, Forrest [mailto:forrest.zhao at intel.com] Sent: Sunday, December 16, 2018 9:30 PM To: Rowsell, Brent Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] StarlingX Feature Prioritization for the next release
Hi Brent, I have several open questions and inputs. Do you know where we could collect the use cases of TSN (Time Sensitive Networking)? Are there requirements or gaps to support it in OpenStack upstream? I ask this because we'd like to understand more about the TSN use cases and potential gaps in the context of 5G and edge computing.
I propose to add SmartNIC to the "New Functionality" section. This is because SmartNIC is becoming hot and is requested by CSP and Telco customers. The first step is to enable it in the upstream OpenStack projects (e.g. Nova, Neutron), then to integrate it in StarlingX. Does it make sense to you?
[BR] I think the next step would be for you to write up a proposal to review with the TSC. This should include the smart NICs you are proposing to support, the integration model with vswitch, virtual machines etc., and dependencies on other projects.
Thanks, Forrest
From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Monday, December 17, 2018 8:47 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX Feature Prioritization for the next release
Folks, The TSC met last week to prioritize the feature content for the upcoming release. The results have been captured in the following ethercalc: https://ethercalc.openstack.org/tmv05jth0a5y This will be used for the planning meeting coming up on Jan 15-16th. Please direct any questions in the meantime to the mailing list. Thanks, Brent
-------------- next part -------------- An HTML attachment was scrubbed...
URL:
From chris.friesen at windriver.com Mon Dec 17 18:21:06 2018 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 17 Dec 2018 12:21:06 -0600 Subject: [Starlingx-discuss] Banned C-Functions In-Reply-To: <6AF88FC9-05EF-4878-834C-2A584C66BCEB@windriver.com> References: <6AF88FC9-05EF-4878-834C-2A584C66BCEB@windriver.com> Message-ID:
On 12/17/2018 9:19 AM, Young, Ken wrote:
> All,
>
> As was discussed on the community call, the StarlingX security team has
> been working on a banned C-function policy to help avoid the
> introduction of security vulnerabilities. Up to now, this policy has
> been a draft. We have resolved all outstanding issues with the policy
> and we are currently looking for community feedback on the policy before
> asking the cores to enact the policy. It can be found here:
>
> https://wiki.openstack.org/wiki/StarlingX/Security/Banned_C_Functions
>
> The goal is to gather and resolve any community issues by January 9th.
> These can be discussed either on the mailing list or in the community
> meetings on Wednesday, 10 AM EDT. After this point, the ask would be
> for the cores to ensure that no new instances of banned functions are
> added to the code.
The "sscanf" one doesn't suggest what to use instead. Also, "sscanf" is not necessarily unbounded; it allows the caller to specify field widths, but they're optional. So it might make sense to allow it with approval from core. The other problem with all of the "scanf" family is that the arithmetic conversions don't protect against arithmetic overflow, so the "strto*" type functions are more robust for use with unknown inputs.
What about scanf, fscanf, vscanf, vsscanf?
What about tmpfile() and mktemp() which are safe to use but can easily introduce security issues? (Should use mkstemp() instead.)
What about gethostbyaddr() and gethostbyname() which are non-reentrant and don't support IPv6 well? (Replaced by getaddrinfo() and freeaddrinfo().)
strncat() should also be inspected for overflow. A call to "strncat(s1, s2, n)" can end up writing strlen(s1)+n+1 characters to the buffer.
setjmp()/longjmp() should be reviewed *extremely* carefully, especially if combined with threaded code.
system() should be used very cautiously.
Chris
From ada.cabrales at intel.com Mon Dec 17 22:10:59 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 17 Dec 2018 22:10:59 +0000 Subject: [Starlingx-discuss] [ Test ] PyTest framework overview Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4B3BB@FMSMSX114.amr.corp.intel.com>
Hello, StarlingX community. Continuing with the topic of a unified testing framework, Numan will present PyTest and the good things they've found about it. Join us this Friday. Ada
Zoom link: https://zoom.us/j/342730236
* Dialing in from phone:
* Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
* Meeting ID: 342 730 236
* International numbers available: https://zoom.us/u/ed95sU7aQ
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: not available Type: text/calendar Size: 2067 bytes Desc: not available URL:
From ada.cabrales at intel.com Mon Dec 17 23:46:17 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 17 Dec 2018 23:46:17 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 12/17/2018 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4B994@FMSMSX114.amr.corp.intel.com>
Agenda for 12/18
* Sanity testing: coverage improvement - JC
* Update request to CENGN - storage of the sanity logs - Ken
* Reminder - Meeting this Friday for checking PyTest. Last meeting of the year, will resume on Jan 8
* Opens
Regards Ada
From ada.cabrales at intel.com Mon Dec 17 23:50:30 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 17 Dec 2018 23:50:30 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 12/18/2018 -> fixing date Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4B9AF@FMSMSX114.amr.corp.intel.com>
The meeting is tomorrow, Dec 18, not 17. Sorry about that :)
> -----Original Message-----
> From: Cabrales, Ada [mailto:ada.cabrales at intel.com]
> Sent: Monday, December 17, 2018 5:46 PM
> To: 'starlingx-discuss at lists.starlingx.io'
> Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 12/17/2018
>
> Agenda for 12/18
> * Sanity testing: coverage improvement - JC
> * Update request to CENGN - storage of the sanity logs - Ken
> * Reminder - Meeting this Friday for checking PyTest
> Last meeting of the year, will resume on Jan 8
> * Opens
>
> Regards
> Ada
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From kailun.qin at intel.com Tue Dec 18 00:36:16 2018 From: kailun.qin at intel.com (Qin, Kailun) Date: Tue, 18 Dec 2018 00:36:16 +0000 Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming In-Reply-To: <70A7408C6E1BFB41B192A929744D8523BAC5363A@ALA-MBD.corp.ad.wrs.com> References: <70A7408C6E1BFB41B192A929744D8523BAC380CE@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC5363A@ALA-MBD.corp.ad.wrs.com> Message-ID:
Allain, Thanks a lot for your comments. Makes sense to me. Let's keep w/ the proposed agent delay approach and see how it goes with the Neutron team. BR, Kailun
From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Monday, December 17, 2018 11:11 PM To: Qin, Kailun; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming
In my opinion, it does not matter how long the full sync takes. Processing any RPC messages, even ones that are not stale, before the initial full sync completes is not guaranteed to provide consistent results. For example, if a port-update-end arrives before that port is received as part of the initial sync it will unnecessarily result in a full resync on that port's network. Similarly, if a port-delete-end arrives before that port is received as part of the initial sync then it will be added to the "deleted_ports" list but that list is not referenced during the full sync so the information for that port will remain in the DHCP configuration for that network even though the port no longer exists. That will cause issues later when a new port is created and uses the IP address of that deleted port. If the core reviewers prefer using timestamps embedded within the RPC payload then we can explore that option, but that will come with backward compatibility constraints and additional complexity.
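As a rough illustration only, the timestamp idea would look something like the following. The sent_at field and function names are hypothetical, and a real change would also have to deal with RPC versioning (so that a new agent still works against an unmodified server) and with clock skew between hosts:

    import time

    AGENT_STARTED_AT = time.time()  # recorded once when the agent process starts

    def make_notification(port):
        # Server side: stamp the payload when the notification is published.
        return {'port': port, 'sent_at': time.time()}

    def port_update_end(context, payload):
        # Agent side: drop anything published before this agent instance
        # existed; an unstamped payload (older server) is processed as before.
        sent_at = payload.get('sent_at')
        if sent_at is not None and sent_at < AGENT_STARTED_AT:
            return
        # ... normal processing ...

The clock skew handling in particular is part of the additional complexity mentioned above.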
Regards, Allain
Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Monday, December 17, 2018 9:29 AM To: Legacy, Allain; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming
Hi Allain, Matt, I followed up on this RFE (https://bugs.launchpad.net/neutron/+bug/1795212) and received one piece of feedback from a Neutron core reviewer. He thinks that the delay isn't a good approach because how much time the agent will need to do a full sync after restart is unknown. He prefers something based on timestamps, discarding messages which came before the agent was started. I believe you've also considered the timestamp/sequence/lifetime number based approach so that stale messages can be discarded w/ more certainty. What's your opinion? Should we keep w/ the delay approach of the DHCP agent and discuss further in the driver meeting to get more feedback, in the spirit of avoiding compatibility changes and changes that would impact running against an unmodified server? Or should we change our investigation direction to the timestamp-based solution? Thanks! BR, Kailun
From: Qin, Kailun Sent: Wednesday, November 21, 2018 11:01 AM To: Legacy, Allain; Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Questions about patch fd6cfc upstreaming
Allain, Great thanks for the information! The scenario is reasonable and detailed enough for me. Let's feed this back along with some other follow-up answers to the Neutron team and see how it goes.
A subnet had no remaining IP addresses to allocated 2. A DCHP agent (agent-X) received a stale message to "create network" so it reserved a DHCP port with an IP address (this used the last available IP address) 3. Meanwhile, the DHCP agent (agent-Y) that actually was assigned the network came up and was not able to reserve a DHCP port because there were no IP addresses available 4. The first agent (agent-X) was taken down because its node was rebooted by system maintenance 5. The second agent (agent-Y) never retries the DHCP port creation because the DHCP agent has no periodic audit so there was no DHCP server servicing the network Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Tuesday, November 20, 2018 2:02 AM To: Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming Hi Matt, I'm working on the patch fd6cfc upstreaming, which tries to address the stale RPC message issue when DHCP agent restarting up. The patch was in good shape https://review.openstack.org/609463/ whereas the neutron community was questioning about the exact failure modes of this issue. The DHCP agent will have a full sync after the agent restarting up (https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L195), what kind of corner cases and negative behaviors could happen even w/ this full sync? Based on the commit message, I tried to reproduce this issue w/ the following steps: 1. schedule network1 to agent1. 2. turn down agent1 at almost the same time. 3. network1 is rescheduled to agent2 after finding that agent1 is dead. 4. turn up agent1, expecting stale RPC messages to be received by agent1 so that both agent1 and agent2 are servicing network1. However, I can only meet the described failure mode by sending another scheduling operation (network1->agent1) after step 2) is done. For the others, they seem to work as expected. Would you please kindly help provide more details about the failure pattern of this issue and/or the reproduction steps? Thanks a lot. BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: From juan.carlos.alonso at intel.com Tue Dec 18 00:38:29 2018 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Tue, 18 Dec 2018 00:38:29 +0000 Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181216 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C7D44F@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-16 (link) Sanity Test is executed in a Virtual Environment Status: YELLOW Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 18 TCs [PASS] TOTAL: [ 23 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] ------------------------------------------------------------------ Today's ISO still facing the issue. Workaround used to avoid use SFTP service. 
Issue fixed and merged. SFTP service still not working. Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 Regards. Juan Carlos Alonso
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From yi.c.wang at intel.com Tue Dec 18 00:38:49 2018 From: yi.c.wang at intel.com (Wang, Yi C) Date: Tue, 18 Dec 2018 00:38:49 +0000 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: References: <3E8825BD-709B-45BD-8890-717740295A16@windriver.com> Message-ID:
Thanks Matt! Now it is clear to me.
From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Monday, December 17, 2018 10:17 PM To: Wang, Yi C Cc: starlingx-discuss at lists.starlingx.io Subject: Re: Deployment Improvements Proposal
Hello Yi, The initial (temporary) configuration for external access will be via kickstart/DHCP. The remote install will set the default interface configuration to use DHCP, and the current interface configuration that is performed during config_controller will be performed during the host unlock. This will ensure the network connection is not disrupted while the Ansible playbook is executing and ensure the initial host is configured in the same way as other hosts in the cluster. -Matt
From: "Wang, Yi C" Date: Friday, December 14, 2018 at 10:21 PM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io" Subject: RE: Deployment Improvements Proposal
Hi Matt, Thanks for your answers! Here is one further question about my #3 question. As you said, the operator will have the capability to run the Ansible playbook remotely. For now, the first controller's network is configured during config_controller, so to support running the playbook remotely for bootstrap, at which stage will the network be configured - at the installation stage, by anaconda? Thanks. Yi
From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, December 14, 2018 10:43 PM To: Wang, Yi C Cc: starlingx-discuss at lists.starlingx.io Subject: Re: Deployment Improvements Proposal
See inline.
From: "Wang, Yi C" Date: Friday, December 14, 2018 at 3:53 AM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io" Subject: RE: Deployment Improvements Proposal
Hi Matt, I just went through your slides, and I have a few questions. I would appreciate it if you could share more information about your proposal. Many thanks!
1. We know config_controller will do many things, like bootstrap configuration and controller configuration together with the required hieradata generation. Will all the jobs of config_controller be taken over by Ansible, or just part of them?
MP> Yes, most of these tasks will be handled by the Ansible playbook. However, much of the existing capabilities may be leveraged in the implementation to avoid re-writing everything. The details will be outlined in the forthcoming spec.
2. Does Wind River have plans to replace Puppet with Ansible for all configuration jobs in the future?
MP> There are no specific plans to replace Puppet for all configuration management. However, there are several features being actively developed in StarlingX that will be changing the existing Puppet manifests (e.g. OpenStack Containerization).
3. For the first controller, we still need local execution of the Ansible playbook for the initial bootstrap. Is my understanding correct?
MP> This is one of the main drivers for changing some of the existing config_controller and Puppet manifest handling. The operator will have the ability to run the Ansible playbook locally or remotely.
BR.
Yi
From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Friday, December 14, 2018 3:11 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Deployment Improvements Proposal
Hello, Attached are the slides I presented during the TSC call on Dec 13, 2018 for the proposed improvements to the StarlingX initial bootstrap and system inventory. As indicated on the call, a detailed stx-spec will follow, but I wanted to share the high-level changes being proposed before the arrival of the spec to get some early feedback. Regards, Matt
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From bruce.e.jones at intel.com Tue Dec 18 01:17:11 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 18 Dec 2018 01:17:11 +0000 Subject: [Starlingx-discuss] multi-os call minutes 12/17/18 Message-ID: <9A85D2917C58154C960D95352B22818BB1ED5F37@fmsmsx117.amr.corp.intel.com>
Dec 17th Multi-OS meeting
* We really wish that our friends from the community would join this meeting
* Our job is to provide the infrastructure to support multi-OS. Intel has an interest in getting Clear Linux supported and will contribute to that. Other community members can use the same infra to support their OS of choice.
* In order to build the infrastructure we have to think about the requirements of the other OS's - e.g. .debs vs rpms...
* Victor presented a draft strategy proposal
  o https://docs.google.com/presentation/d/1oJ2MkPHWNaOYVNG2L04Y66FPTp2WzFuyJMsJ-eHVHUc/edit?usp=sharing
-------------- next part -------------- An HTML attachment was scrubbed... URL:
From shuicheng.lin at intel.com Tue Dec 18 02:03:18 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Tue, 18 Dec 2018 02:03:18 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20181216 In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C7D44F@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C7D44F@FMSMSX108.amr.corp.intel.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FE56C41@SHSMSX101.ccr.corp.intel.com>
Hi all, Zhipeng and I checked the openssh-config package in this ISO, and the sftp fix is not included yet. Maybe we should add a manifest file to snapshot the git revisions for the image. Best Regards Shuicheng
From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Tuesday, December 18, 2018 8:38 AM To: starlingx Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181216
Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-16 (link)
Sanity Test is executed in a Virtual Environment
Status: YELLOW
Simplex
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         18 TCs [PASS]
TOTAL: [ 23 TCs PASS ]
Duplex
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]
Multinode Controller Storage
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]
Multinode Dedicated Storage
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]
------------------------------------------------------------------
Today's ISO is still facing the issue. A workaround was used to avoid the SFTP service. Issue fixed and merged. SFTP service still not working. Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1808054 Regards. Juan Carlos Alonso
-------------- next part -------------- An HTML attachment was scrubbed...
URL:
From chenjie.xu at intel.com Tue Dec 18 07:30:51 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Tue, 18 Dec 2018 07:30:51 +0000 Subject: [Starlingx-discuss] RFE Discussion for patch 9f926a5 for StartlingX upstreaming In-Reply-To: <70A7408C6E1BFB41B192A929744D8523BAC534EC@ALA-MBD.corp.ad.wrs.com> References: <70A7408C6E1BFB41B192A929744D8523BAC534EC@ALA-MBD.corp.ad.wrs.com> Message-ID:
Hi Allain, Thank you for your comment! I will update the use cases in the RFE. Sorry for my misunderstanding! Best Regards, Xu, Chenjie
From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Monday, December 17, 2018 9:13 PM To: Xu, Chenjie; Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] RFE Discussion for patch 9f926a5 for StartlingX upstreaming
I have responded to the comment in Launchpad.
Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
From: Xu, Chenjie [mailto:chenjie.xu at intel.com] Sent: Monday, December 17, 2018 12:51 AM To: Peters, Matt; Legacy, Allain Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] RFE Discussion for patch 9f926a5 for StartlingX upstreaming
Hi Matt/Allain, For the RFE "Add l2pop support for floating IP resources", Miguel left a comment as below:
Maybe I am misunderstanding, but I am having difficulty understanding the need for this RFE:
1) In both use cases described above, you state that "When VM-1 pings FloatingIP-2 for the first time, it needs to know the MAC address for FloatingIP-2. Thus ARP request is sent out". Since FloatingIP-2 is not in the same subnet as Port1, wouldn't the VM just send the ICMP echo request directly to its default gateway and let L3 level protocols route the request to the correct destination?
2) The summary of your proposal is "The idea is that advertising the FDBs for floating IP when the FIP status changes to "ACTIVE" and withdraw the FDBs for floating IP whenever the status is set to "DOWN" or the resource is deleted or disassociated." Aren't we mixing L2 and L3 concepts here unnecessarily? The FDB entries in L2pop are meant to optimize the communication at L2. Floating IPs should be handled by L3.
Am I missing something?
Could you please help review and comment? The link for the RFE is below: https://bugs.launchpad.net/neutron/+bug/1803494 Best Regards, Xu, Chenjie
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL:
From kailun.qin at intel.com Tue Dec 18 11:41:52 2018 From: kailun.qin at intel.com (Qin, Kailun) Date: Tue, 18 Dec 2018 11:41:52 +0000 Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming References: <70A7408C6E1BFB41B192A929744D8523BAC380CE@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC5363A@ALA-MBD.corp.ad.wrs.com> Message-ID:
Hello Allain, The community responds w/ another question/case:
1. The agent starts and is doing a full sync - so it gets the list of ports and networks from the server and starts configuring them one by one, right?
2. During this time, processing of incoming RPC messages is blocked, right?
3. Now (still during the initial full sync) someone deletes ports, so a port-delete-end message is sent to the DHCP agent, but the agent refuses to process this message, right?
4. The full sync ends and the agent is still handling a port which was deleted in 3. - am I right? Or will it be cleaned somehow?
It sounds like a good question to me based on our current implementation. What do you think? BR, Kailun
From: Qin, Kailun Sent: Tuesday, December 18, 2018 8:36 AM To: 'Legacy, Allain'; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming
Allain, Thanks a lot for your comments. Makes sense to me. Let's keep w/ the proposed agent delay approach and see how it goes with the Neutron team. BR, Kailun
From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Monday, December 17, 2018 11:11 PM To: Qin, Kailun; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming
In my opinion, it does not matter how long the full sync takes. Processing any RPC messages, even ones that are not stale, before the initial full sync completes is not guaranteed to provide consistent results. For example, if a port-update-end arrives before that port is received as part of the initial sync it will unnecessarily result in a full resync on that port's network. Similarly, if a port-delete-end arrives before that port is received as part of the initial sync then it will be added to the "deleted_ports" list but that list is not referenced during the full sync so the information for that port will remain in the DHCP configuration for that network even though the port no longer exists. That will cause issues later when a new port is created and uses the IP address of that deleted port. If the core reviewers prefer using timestamps embedded within the RPC payload then we can explore that option, but that will come with backward compatibility constraints and additional complexity. Regards, Allain
Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Monday, December 17, 2018 9:29 AM To: Legacy, Allain; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming
Hi Allain, Matt, I followed up on this RFE (https://bugs.launchpad.net/neutron/+bug/1795212) and received one piece of feedback from a Neutron core reviewer. He thinks that the delay isn't a good approach because how much time the agent will need to do a full sync after restart is unknown. He prefers something based on timestamps, discarding messages which came before the agent was started. I believe you've also considered the timestamp/sequence/lifetime number based approach so that stale messages can be discarded w/ more certainty. What's your opinion? Should we keep w/ the delay approach of the DHCP agent and discuss further in the driver meeting to get more feedback, in the spirit of avoiding compatibility changes and changes that would impact running against an unmodified server? Or should we change our investigation direction to the timestamp-based solution? Thanks!
From: Qin, Kailun Sent: Wednesday, November 21, 2018 11:01 AM To: Legacy, Allain; Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Questions about patch fd6cfc upstreaming
Allain, Great thanks for the information! The scenario is reasonable and detailed enough for me. Let's feed this back along with some other follow-up answers to the Neutron team and see how it goes.
BR, Kailun From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Wednesday, November 21, 2018 2:13 AM To: Qin, Kailun >; Peters, Matt > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Questions about patch fd6cfc upstreaming We only observed this type of issue in a large office configuration where the neutron-server is overloaded during a DOR test (dead office recovery) where all nodes are powered off and back on. In such a scenario the system is overloaded for an extended period and there is a long delay between when events occur and when notifications are received by subscribers. It is difficult to reproduce this on small systems where the time between event and notification is short. I don't remember the exact details of the entire scenario, but the high level issue was that we wanted to avoid agents receiving and processing RPC messages that were sent to them before they started up. That happens more frequently in a DOR test because the server has a stale view of the system state and can send RPC messages to nodes that are not enabled yet. That is, its agent DB table may show that all agents are healthy depending on how long it took for the DOR to recover the controller node. What we found was that it was possible for the server to think that the agent was up when it was actually down. During the window where the server sees the agent as up it can send it RPC messages. Those messages get queued up and delivered to the agent once it is finally up. The problem is since the agent was not actually up in the first place those messages were never really valid. Therefore we wanted the agent to discard any RPC requests until after it was able to resync to the server. This allowed the system to avoid unnecessary transitions based on old data. One of the specific problems that this was addressing was something like this: 1. A subnet had no remaining IP addresses to allocated 2. A DCHP agent (agent-X) received a stale message to "create network" so it reserved a DHCP port with an IP address (this used the last available IP address) 3. Meanwhile, the DHCP agent (agent-Y) that actually was assigned the network came up and was not able to reserve a DHCP port because there were no IP addresses available 4. The first agent (agent-X) was taken down because its node was rebooted by system maintenance 5. The second agent (agent-Y) never retries the DHCP port creation because the DHCP agent has no periodic audit so there was no DHCP server servicing the network Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Tuesday, November 20, 2018 2:02 AM To: Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming Hi Matt, I'm working on the patch fd6cfc upstreaming, which tries to address the stale RPC message issue when DHCP agent restarting up. The patch was in good shape https://review.openstack.org/609463/ whereas the neutron community was questioning about the exact failure modes of this issue. The DHCP agent will have a full sync after the agent restarting up (https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L195), what kind of corner cases and negative behaviors could happen even w/ this full sync? Based on the commit message, I tried to reproduce this issue w/ the following steps: 1. 
schedule network1 to agent1. 2. turn down agent1 at almost the same time. 3. network1 is rescheduled to agent2 after finding that agent1 is dead. 4. turn up agent1, expecting stale RPC messages to be received by agent1 so that both agent1 and agent2 are servicing network1. However, I can only reproduce the described failure mode by sending another scheduling operation (network1->agent1) after step 2 is done. Otherwise, things seem to work as expected. Would you please kindly help provide more details about the failure pattern of this issue and/or the reproduction steps? Thanks a lot. BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: From Allain.Legacy at windriver.com Tue Dec 18 13:23:17 2018 From: Allain.Legacy at windriver.com (Legacy, Allain) Date: Tue, 18 Dec 2018 13:23:17 +0000 Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming In-Reply-To: References: <70A7408C6E1BFB41B192A929744D8523BAC380CE@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC5363A@ALA-MBD.corp.ad.wrs.com> Message-ID: <70A7408C6E1BFB41B192A929744D8523BAC543E0@ALA-MBD.corp.ad.wrs.com> The RPC handlers (e.g., port_update_end) are all wrapped with "_wait_if_syncing" so they don't actually start processing until after the sync has completed. We are only trying to prevent messages from being processed between the start of the process lifetime and the beginning of the initial sync. That window is what leads to the issues we have noted. To address yesterday's question about the initial delay being long, I don't think that it needs to be more than ~10 seconds. Any stale RPC messages would be consumed quickly since they are discarded without doing any real work in the agent. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Tuesday, December 18, 2018 6:42 AM To: Legacy, Allain; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming Hello Allain, The community responds w/ another question/case: 1. The agent starts and is doing a full sync - so it gets the list of ports and networks from the server and starts configuring them one by one, right? 2. During this time, processing of incoming RPC messages is blocked, right? 3. Now (still during the initial full sync) someone deletes a port, so a port-delete-end message is sent to the DHCP agent, but the agent refuses to process this message, right? 4. The full sync ends and the agent is still handling the port which was deleted in step 3 - am I right? Or will it be cleaned up somehow? It sounds like a good question to me based on our current implementation. What do you think? BR, Kailun From: Qin, Kailun Sent: Tuesday, December 18, 2018 8:36 AM To: 'Legacy, Allain'; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming Allain, Thanks a lot for your comments. Makes sense to me. Let's keep w/ the proposed agent delay approach and see how it goes with the Neutron team.
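As a rough illustration of the wrapper being discussed (heavily simplified, not the actual Neutron code; the Event-based gating is an assumption):

    import functools

    from eventlet import event

    _sync_done = event.Event()

    def _wait_if_syncing(f):
        """Block RPC handlers until the initial full sync has completed."""
        @functools.wraps(f)
        def wrapped(*args, **kwargs):
            _sync_done.wait()  # returns immediately once the event is sent
            return f(*args, **kwargs)
        return wrapped

    class DhcpAgentSketch(object):

        @_wait_if_syncing
        def port_update_end(self, context, payload):
            pass  # safe: runs only after the initial sync has finished

        def run(self):
            self.sync_state()  # initial full sync
            if not _sync_done.ready():
                _sync_done.send()  # release any handlers queued during the sync

Combined with a short startup delay that discards messages outright, gating of this kind closes the window between process start and the beginning of the initial sync.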
BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: From Ken.Young at windriver.com Tue Dec 18 15:02:29 2018 From: Ken.Young at windriver.com (Young, Ken) Date: Tue, 18 Dec 2018 15:02:29 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 12/18/2018 -> fixing date In-Reply-To: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4B9AF@FMSMSX114.amr.corp.intel.com> References: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4B9AF@FMSMSX114.amr.corp.intel.com> Message-ID: <0E948055-11BF-4E84-9563-596616B972BA@windriver.com> Ada, I have a conflict today and cannot make your meeting today. I am meeting with CENGN on Thursday and should have an update on log storage after that. Regards, Ken Y On 2018-12-17, 6:51 PM, "Cabrales, Ada" wrote: The meeting is tomorrow, Dec 18, not 17. Sorry about that :) > -----Original Message----- > From: Cabrales, Ada [mailto:ada.cabrales at intel.com] > Sent: Monday, December 17, 2018 5:46 PM > To: 'starlingx-discuss at lists.starlingx.io' > Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 12/17/2018 > > Agenda for 12/18 > * Sanity testing: coverage improvement - JC > * Update request to CENGN - storage of the sanity logs - Ken > * Reminder - Meeting this Friday for checking PyTest > Last meeting of the year, will resume on Jan 8 > * Opens > > > Regards > Ada > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From claire at openstack.org Tue Dec 18 16:13:46 2018 From: claire at openstack.org (Claire Massey) Date: Tue, 18 Dec 2018 10:13:46 -0600 Subject: [Starlingx-discuss] CFP Open Until January 23, Open Infrastructure Summit in Denver Message-ID: <5668ABF7-6492-4FB7-B8A8-38F282262CEE@openstack.org> Hi everyone, FYI - the CFP is now open for the first Open Infrastructure Summit (formerly the OpenStack Summit), which will be held in Denver, Colorado, April 29 - May 1, 2019. Wednesday, *January 23* is the deadline to submit presentations. The Open Infrastructure Summit is organized by OSF and designed to be a place where open source infrastructure communities can come together and collaborate in the open. StarlingX will have a large and prominent presence at the event, so please submit talks to the CFP and plan to attend! SUBMIT YOUR PRESENTATION Important info: Based on previous Program Committee and attendee feedback, we have added / updated three Tracks: Security, Getting Started, and Open Development (previously Open Source Community). You can find the Track descriptions here. All of the OSF pilot projects - including Airship, Kata Containers, StarlingX and Zuul - will be front and center alongside other open source communities like Ansible, Cloud Foundry, Docker, Kubernetes, and many more. The Open Infrastructure Summit (formerly the OpenStack Summit) has evolved to recognize our diverse audience, and to signal to the market that the event is relevant for all IT infrastructure decision makers.
If you’re interested in influencing the Summit content, apply to be a Programming Committee member*, where you can also find a full list of time requirements and expectations. Nominations will close on January 4, 2019. The content submission process for the Forum and Project Teams Gathering will be managed separately in the upcoming months. *OSF Staff will serve as the Programming Committee for the Getting Started Track. Denver Summit registration and sponsor sales are currently open. Learn more and email summit at openstack.org with any questions. Please email speakersupport at openstack.org with any questions or feedback. Thanks, Claire -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Tue Dec 18 16:17:51 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 18 Dec 2018 16:17:51 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 12/18/2018 -> fixing date In-Reply-To: <0E948055-11BF-4E84-9563-596616B972BA@windriver.com> References: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4B9AF@FMSMSX114.amr.corp.intel.com> <0E948055-11BF-4E84-9563-596616B972BA@windriver.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4BF7B@FMSMSX114.amr.corp.intel.com> Thanks, Ken! We'll wait for the update then. Ada > -----Original Message----- > From: Young, Ken [mailto:Ken.Young at windriver.com] > Sent: Tuesday, December 18, 2018 9:02 AM > To: Cabrales, Ada ; 'starlingx-discuss at lists.starlingx.io' > Subject: Re: [Starlingx-discuss] [ Test ] Meeting agenda - 12/18/2018 -> fixing date > > Ada, > > I have a conflict today and cannot make your meeting today. I am meeting > with CENGN on Thursday and should have an update on log storage after > that. > > Regards, > Ken Y > > On 2018-12-17, 6:51 PM, "Cabrales, Ada" wrote: > > The meeting is tomorrow, Dec 18, not 17. > > Sorry about that :) > > > -----Original Message----- > > From: Cabrales, Ada [mailto:ada.cabrales at intel.com] > > Sent: Monday, December 17, 2018 5:46 PM > > To: 'starlingx-discuss at lists.starlingx.io' > > Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 12/17/2018 > > > > Agenda for 12/18 > > * Sanity testing: coverage improvement - JC > > * Update request to CENGN - storage of the sanity logs - Ken > > * Reminder - Meeting this Friday for checking PyTest > > Last meeting of the year, will resume on Jan 8 > > * Opens > > > > > > Regards > > Ada > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Tue Dec 18 07:43:32 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 18 Dec 2018 07:43:32 +0000 Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack Distro meeting In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E04CC3@SHSMSX104.ccr.corp.intel.com> Agenda for 12/19 meetings: 1. CentOS7.6 upgrade (including kernel 3.10.0.957) status update (Shuicheng/Martin) 2. Ceph upgrade status (Vivian/Dehao/Changcheng) 3. DevStack for Flocks plug-in (Dean/Yi) 4.
Opens (all) Please let me know if you'd like to add more topics. Thx. - cindy -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Shang, Dehao; 'Rowsell, Brent'; Wold, Saul; Waheed, Numan; Sun, Austin; Jones, Bruce E; Liu, ZhipengS; starlingx-discuss at lists.starlingx.io; Troyer, Dean; Hu, Yong; 'Khalil, Ghada'; Zhu, Vivian; Lin, Shuicheng; Somerville, Jim Cc: 'Young, Ken'; Hu, Wei W; Armstrong, Robert H; Martinez Monroy, Elio; 'Hellmann, Gil'; 'Chen, Jacky'; 'Eslimi, Dariush'; Lara, Cesar; Cobbley, David A; 'Waines, Greg'; Gomez, Juan P; Martinez Landa, Hayde; Arce Moreno, Abraham; Perez Rodriguez, Humberto I; Perez Carranza, Jose; 'Seiler, Glenn' Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, December 19, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From bruce.e.jones at intel.com Tue Dec 18 15:08:23 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 18 Dec 2018 15:08:23 +0000 Subject: [Starlingx-discuss] Distro.openstack Dec 18th meeting minutes Message-ID: <9A85D2917C58154C960D95352B22818BB1ED6017@fmsmsx117.amr.corp.intel.com> Meeting agenda and notes for the Dec 18th meeting * Welcome! * Goals * Complete the work described in Brent's Patch Resolution proposal for the non-Neutron OpenStack content in StarlingX * Rebase StarlingX to OpenStack master and release in May with Stein. * Keep current with OpenStack master on an on-going basis * Branch handling * Need feature branches created immediately to start pulling from OSF master. For projects: stx-integ, others? - Hold off on branching for now, likely target mid-January. * Ensure all dependent services are updated if needed (e.g. rabbit, memcached, others?) * Build / test of the branch - Who will do the builds? Who will handle the inevitable build issues? * We need to merge with containerized services - turn on config flag - no support for non-containers * Container team to do initial work to get the builds working again - How do we handle carried patches? Ignore? Rebase? Selectively apply? No carried patches to be applied initially. Run against master and address issues if/when they arise. * The following critical refactoring needs to be accelerated as it is the bare minimum to move from track 2 - Neutron host state management moved to the STX-NFV https://storyboard.openstack.org/#!/story/2003857 - Nova service create moved to the STX-NFV https://storyboard.openstack.org/#!/story/2004583 - Modeling the provider networks in sysinv Story: https://storyboard.openstack.org/#!/story/2004455 - Need Neutron Spec implemented to cutover to track 2 (Which spec????) Bruce to find in chart deck. * Work item status and discussion * Intel response - Can cover Yongli's 3 items for Stein - Need the spec approved for vCPU model - how can we help? Risk very high for missing Stein. Jack to update to address latest comments, Yongli to review. AR Bruce to ask the Intel core to review the spec. - Need the patches unsquashed for the 3 squashed bugs -
Intel looking into internal/external funding for the Nova bug list. We will need help reproducing issues - Intel willing to discuss funding Horizon outsourcing to be PM'd by WR. Bruce to ask Denise - do the vendors we are considering have Horizon experience? UI experience? * Review the tracking spreadsheet: https://docs.google.com/spreadsheets/d/s/edit?usp=sharing - Column U is blank. Do we care? Maybe - Brent thinks it's not needed, Dean thinks it would be good to have the back-pointers while doing the work. Handle on-demand as needed. - Question - Move to a new sheet? Permissions issues with this version. * Let's move to a clean spreadsheet - Bruce to create. Should be R/O for anyone with the link and we should have control over who can edit it. * How do we want to track the Strategy part of this? In the past this has been conversations, emails or other informal (non-tracked) methods. Do we want to create short specs to submit for review? * Testing - for features we inherit from Upstream can the Test team provide regression test cases? Bruce to check with Ada & Numan. * Raw cache (lixiaoy1) Ovidiu and Lisa agreed that we can replace the StarlingX raw cache with the Cinder image cache, and listed what we need to do in starlingx config and what we can enhance in the Cinder image cache (add some tags in image properties to impact when the image is evicted from cache). We need Brent's confirmation. For more detailed info please see the email thread with subject "[Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching". * Long term strategy * Assuming STX r/2019.05 will rebase to Stein branches and STX master will continue on master -------------- next part -------------- An HTML attachment was scrubbed... URL: From erich.cordoba.malibran at intel.com Tue Dec 18 18:09:42 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Tue, 18 Dec 2018 18:09:42 +0000 Subject: [Starlingx-discuss] Missing repository in CENGN mirror Message-ID: <90d2b3af8faadcec886fa217cde3948ccdb3ec48.camel@intel.com> Hi, In this change[0], a new repository was added for some Go packages. This hits an issue in the mirror download process. The script tries to find this repository on the CENGN server, but as it is not present, the script fails and thus the entire mirror download fails. It seems that whenever a new repository is added we need to update the content of the CENGN server, so, what's the process to get this new repository into CENGN? Also, we can modify the script to create the cache from valid servers; this could be a complementary solution. The error reported by download_mirror.sh is this one: failure: repodata/repomd.xml from CENGN_Starlingx-C7.5.1804-paas-openshift: [Errno 256] No more mirrors to try. http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.5.1804/paas/x86_64/openshift-origin311/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found finish 2nd round of RPM downloading with missing files!
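For context, the failing repo id corresponds to a yum stanza of roughly this shape (reconstructed from the 404 in the log above; illustrative only, not the actual file generated by download_mirror.sh):

    [CENGN_Starlingx-C7.5.1804-paas-openshift]
    name=CENGN mirror of CentOS 7.5.1804 paas openshift-origin311 (illustrative)
    baseurl=http://mirror.starlingx.cengn.ca:80/mirror/centos/centos/mirror.centos.org/centos/7.5.1804/paas/x86_64/openshift-origin311/
    enabled=1
    gpgcheck=0

Until the new repository is mirrored on the CENGN side, any baseurl of this form will keep returning 404 for repodata/repomd.xml.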
[0] https://review.openstack.org/#/c/624864/1 From ada.cabrales at intel.com Tue Dec 18 19:26:17 2018 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 18 Dec 2018 19:26:17 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting notes - 12/18/2018 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4C203@FMSMSX114.amr.corp.intel.com> Agenda for 12/18 Attendees: JC, JP, Cristopher, Jose, Numan, Maria, Fernando, Bruce, Elio, Bill, Richo Sanity testing: coverage improvement - JC -> Current sanity has 24 tests, covering 4 virtual configs: simplex, duplex, multinode with local storage, multinode with external storage - Proposal: * Recovery after dead office * Recovery after Power Down / Power Up of controllers, computes and storage hosts * Lock / Unlock active controller reject and check alarms * Lock / Unlock standby controller, computes and storage hosts, check alarms and then SSH from controller to each host * Create a stack from a heat template * Controller swact with active instances * Controller swact triggered by host reboot with active instances * Launch instances from volume, from image and from snapshot, with different OS, and ping between them * Launch instances from volume, from image and from snapshot, with different OS, and perform suspend/resume, pause/unpause, stop/start, lock/unlock, reboot, rebuild and resize * Launch instances from volume, from image and from snapshot, with different OS, and set/unset instance properties * Live / Cold migration of instances launched from volume, from image and from snapshot, with different OS * Kill instances and check that they go to hard reboot and recover * Evacuation of instances by rebooting a controller host or compute host * Down the management network on the active controller, on the standby controller, on computes and on storage hosts, then check alarms and check that swact takes action * Change the MTU value of data interfaces on all hosts * Kill services managed by SM on the active controller, then check service recovery * Display HA system service group, service list and state of such services * Check there are no new alarms generated at the end of Sanity testing - Sanity should be fast, assuring the build is safe. Recovery after dead office and recovery after power down/power up of nodes -> this could be part of an extended sanity - Exercise different types of VMs, with different types (e.g. one with Windows) - There's no certainty on how long it would take to run; it could be all night long - If Sanity fails, a critical bug is created. If extended sanity fails, it might not be a critical one. -> Items from the distro.openstack team - Bruce - Rebasing to OpenStack master in mid-January - 3 features: SR-IOV/PT best effort scheduling policy with a Queens feature; replace vswitch affinity with a Rocky feature; DB purge - The request is to get help from the testing team to make sure the 3 features are working correctly - Bruce to create the storyboards for the rebasing and the test creation FH> Adding some helpful links https://review.openstack.org/#/c/555000/3..3/specs/queens/implemented/share-pci-between-numa-nodes.rst https://review.openstack.org/#/c/541290/18/specs/rocky/approved/numa-aware-vswitches.rst,unified https://blueprints.launchpad.net/nova/+spec/purge-db -> Update request to CENGN - storage of the sanity logs - Ken Meeting next Thursday - update to be provided -> Reminder - Meeting this Friday for checking PyTest Last meeting of the year, will resume on Jan 8 -> Opens: Fernando: email sent to Numan asking for help on some security stuff.
Numan to get back with the information. -- Ada From Frank.Miller at windriver.com Tue Dec 18 17:33:01 2018 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 18 Dec 2018 17:33:01 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Infrastructure Containerization Message-ID: No meeting will be held on Dec 24th - next meeting will be Jan 7th ----------------------> For those contributing to or interested in the Containerization subproject a weekly meeting has been set up: Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 5283 bytes Desc: not available URL: From Frank.Miller at windriver.com Tue Dec 18 17:33:03 2018 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 18 Dec 2018 17:33:03 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Infrastructure Containerization Message-ID: No meeting will be held on Dec 31st - next meeting will be Jan 7th ----------------------> For those contributing to or interested in the Containerization subproject a weekly meeting has been set up: Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 5284 bytes Desc: not available URL: From bruce.e.jones at intel.com Tue Dec 18 19:17:53 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 18 Dec 2018 19:17:53 +0000 Subject: [Starlingx-discuss] Distro.openstack Dec 18th meeting minutes In-Reply-To: <9A85D2917C58154C960D95352B22818BB1ED6017@fmsmsx117.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BB1ED6017@fmsmsx117.amr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BB1ED6624@fmsmsx117.amr.corp.intel.com> I have moved the tracking spreadsheet for the OpenStack patch resolution project to a new location [0]. This allowed us to separate it from the previous sheet and also address some permission issues. The new sheet should be visible to everyone. If you need write access to it please send me a note. This sheet should be used for the Nova, Neutron and Horizon rebasing & refactoring work. We will manage the Nova and Horizon work in the Distro.OpenStack meeting (led by me) at 6AM PST Tuesdays. @Ghada, please move management of the Neutron work to this new sheet. We will continue to track the non-OpenStack patches in the "Master Patch" spreadsheet.
I'm sorry for all the churn on this. Hopefully we won't need to change this again :) brucej [0] https://docs.google.com/spreadsheets/d/1udAtEpQljV2JZVs-525UhWyx-5ePOaSSkKD1CS27ohU/edit?usp=sharing
-------------- next part -------------- An HTML attachment was scrubbed... URL: From felipe_57 at live.com.mx Tue Dec 18 23:14:05 2018 From: felipe_57 at live.com.mx (Felipe de Jesus Ruiz Garcia) Date: Tue, 18 Dec 2018 23:14:05 +0000 Subject: [Starlingx-discuss] Change url Fedora repodata Message-ID: Hi there, I'm seeing an issue building the StarlingX Docker image with the make command. The issue says that the repodata is not found at: http://vault.centos.org/centos/7/cloud/Source/openstack-queens/repodata/ ... In fact, the right URL for the repodata changed to: http://vault.centos.org/centos/7/cloud/Source/openstack-pike/repodata/ I sent a change solving this: https://review.openstack.org/#/c/626029/ Regards - Felipe Ruiz / Pipo / tranzemc From kailun.qin at intel.com Wed Dec 19 02:12:17 2018 From: kailun.qin at intel.com (Qin, Kailun) Date: Wed, 19 Dec 2018 02:12:17 +0000 Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming In-Reply-To: <70A7408C6E1BFB41B192A929744D8523BAC543E0@ALA-MBD.corp.ad.wrs.com> References: <70A7408C6E1BFB41B192A929744D8523BAC380CE@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC5363A@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC543E0@ALA-MBD.corp.ad.wrs.com> Message-ID: Allain, Thanks a lot for the feedback! Excuse me, I missed the "_wait_if_syncing" decorator somehow. Exactly, w/ this wrapper we should not have any problem with the case cited by the community. BR, Kailun From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Tuesday, December 18, 2018 9:23 PM To: Qin, Kailun; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming The RPC handlers (e.g., port_update_end) are all wrapped with "_wait_if_syncing" so they don't actually start processing until after the sync has completed. We are only trying to prevent messages from being processed between the start of the process lifetime and the beginning of the initial sync. That window is what leads to the issues we have noted. To address yesterday's question about the initial delay being long, I don't think that it needs to be more than ~10 seconds. Any stale RPC messages would be consumed quickly since they are discarded without doing any real work in the agent. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND]
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: From chenjie.xu at intel.com Wed Dec 19 08:17:28 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Wed, 19 Dec 2018 08:17:28 +0000 Subject: [Starlingx-discuss] RFE Review Request for patch 88b7bc7 for StarlingX upstreaming Message-ID: Hi Ryan, Could you please help review the RFE for Neutron (https://bugs.launchpad.net/neutron/+bug/1806316) and leave a comment? This RFE proposes to add an RPC query API to l2pop. The API is used to allow an agent to query the full FDB for a list of network_id values. This is needed by agents which want to support l2pop but don't have ports bound to themselves (potentially the BGP Dynamic Routing agent). The project stx-neutron-dynamic-routing is a fork of neutron-dynamic-routing. The project stx-neutron-dynamic-routing implements l2pop for the BGP Dynamic Routing agent: https://github.com/starlingx-staging/stx-neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py#L1053 https://github.com/starlingx-staging/stx-neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py#L1180 The plan is to upstream these changes to neutron-dynamic-routing. However, in the short term this is not being prioritized, nor has any attempt been made to approach the individual project teams about getting this accepted. Could you leave a comment on the potential use? Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiaoyan.li at intel.com Wed Dec 19 06:27:35 2018 From: xiaoyan.li at intel.com (Li, Xiaoyan) Date: Wed, 19 Dec 2018 06:27:35 +0000 Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching In-Reply-To: References: <2588653EBDFFA34B982FAF00F1B4844EBB25D46B@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5F8E8F@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5FAA8C@ALA-MBD.corp.ad.wrs.com> <4C60D9C5C8176C47874FFF36647AA19E9D608275@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Ovidiu, Yesterday we discussed the raw cache in the Distro.openstack Dec 18th meeting.
And Brent agreed that we should replace the StarlingX raw cache with the Cinder image cache, and we need to make corresponding changes in stx config to enable the Cinder image cache. We at Intel will do the work and you will assist us. Could you show us where the Cinder service is configured and started in stx config? The following is the TODO list copied from former emails. Summary of TODOs (assuming B. is chosen) before removing raw-caching (open for discussions & dependent on resolution to above issues): · Enable caching per backend through sysinv system storage-backend-add/modify commands through a capabilities field (this seems the simplest solution) · Add a sysinv configuration option per storage backend to set the cache size. [Clean up images in cache when size is decreased] · When first enabling: create the shadow tenant (no need to remove it when disabling cache) · Support disabling cache for a backend (clean up residual images) Best wishes Lisa From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Thursday, December 6, 2018 2:06 PM To: Poncea, Ovidiu ; Rowsell, Brent ; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Brent, Please give your suggestions. And thanks to Ovidiu for the detailed summary! One correction here: With the Cinder image cache, image_volume_cache_max_size_gb and image_volume_cache_max_count can be set to 0, which means unlimited for both cache capacity and number of cached images. Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Wednesday, December 5, 2018 3:42 PM To: Li, Xiaoyan; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Cc: Miller, Frank; Church, Robert Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Li, Thanks for providing clarifications! So, for our use cases, the main problem is that glance’s raw caching is more controllable than cinder’s. If it’s not enough we need to improve it; if we can live with it then at a minimum it needs to be enabled through sysinv configuration and then we remove the raw-caching from glance. See inline comments plus the below summary and proposal, we need Brent’s input on this: I see two main solutions to the problem: A. Always enable cache, for any backend, but only cache glance images that have a certain attribute – this needs a cinder upstream change. The cache limit has to be removed (another cinder upstream change). We may also need a way to kick-start the caching in cinder & clean up the cache (periodically and/or user triggered should be enough). B. Make enabling cache storage backend specific and configurable (through sysinv). Once cinder’s cache is enabled for a backend, cache everything. The size of the cache should be configurable. I would go for B. as it, most likely, doesn’t need upstream changes. [Li, Xiaoyan] Agree with B. But it doesn’t conflict with the requirement to set a property of an image like disable_cache; with this property Cinder won’t cache this image. I am concerned what kind of scenario/image it is suitable for? Summary of problems, TBD if we can live with them: · Images are not cached on creation – if we can’t live with it we may need a trigger to cinder on image creation or a way to manually kick-start the caching process. · Since first volume creation is slow for larger volumes this may timeout (keystone token expiration) – we had a customer using 200GB qcow2 windows images that would timeout on conversion.
I don’t see a workaround for it, just ask them to manually do the conversion when importing very large images to glance. · We can’t provide a 100% guarantee that, once converted, successive creations won’t need to get converted again due to cache exhaustion. Can we live with it? Users may intermittently see slowdowns and wonder what’s going on. [Li, Xiaoyan] How about we add a property to this image/volume? Cinder would then evict that cached image last when the cache is exhausted. This needs a cinder upstream change to respect the property. · The cache will waste space; if original images no longer exist there is no automated way to remove them from the cache – the admin can clean up the cache manually if desired. We can either: 1. Live with it – assume that the space allocated to the cache is for the cache only, or users can clean up the cache by themselves. 2. Clean up the cache through a cron job (although this is a cache, some caches are supposed to clean themselves up if cached data is no longer present). 3. Implement another mechanism to clean the cache when an image is deleted, not at a later time (this is way too complex to upstream). · What happens with images that users don’t want to cache? Should we add a filter (glance property)? [Li, Xiaoyan] Allow users to add a property to the image. This also needs cinder upstream to respect the property. I vote for #2 as it does not seem too hard to implement. A once-a-day cron task can free up wasted space. [Li, Xiaoyan] This cron task probably can’t be included in Cinder. Is it OK? Summary of TODOs (assuming B. is chosen) before removing raw-caching (open for discussions & dependent on resolution to above issues): · Enable caching per backend through sysinv system storage-backend-add/modify commands through a capabilities field (this seems the simplest solution) · Add a sysinv configuration option per storage backend to set the cache size. [Clean up images in cache when size is decreased] · When first enabling: create the shadow tenant (no need to remove it when disabling cache) · Support disabling cache for a backend (clean up residual images) Regards, Ovidiu From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Tuesday, November 27, 2018 4:30 AM To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Cc: Miller, Frank Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Ovidiu, As far as I’m concerned, the Cinder image cache is a cache mechanism, so overall users don’t need to clean it manually. Currently, when the capacity for the cache is full, it removes the cached image volumes with an LRU policy. For more details please see the following comments. Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Monday, November 26, 2018 11:15 PM To: Li, Xiaoyan; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Cc: Miller, Frank Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Lisa, Yeah, even if we refactor raw caching, it's most likely going to be rejected by upstream due to replicating existing functionality in cinder. Yet, imho, we should have a working replacement before retiring raw caching and we should have some agreed mitigations in place for cinder's disadvantages (if we can't live with them, Brent please help here). See my questions below & inline. Also, please correct the text below if I made wrong assumptions, as you know cinder's caching better than me.
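For reference, and to make the comparison below concrete: upstream, the cache is enabled per backend in cinder.conf roughly as follows (the backend section name and the size/count values are placeholders; see the documentation link further down):

    [DEFAULT]
    # Shadow/internal tenant that owns the cached image volumes
    cinder_internal_tenant_project_id = <PROJECT_ID>
    cinder_internal_tenant_user_id = <USER_ID>

    [ceph]
    # Per-backend switch and limits; per the correction above, 0 means unlimited
    image_volume_cache_enabled = True
    image_volume_cache_max_size_gb = 200
    image_volume_cache_max_count = 50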
Short comparison of the two: Raw caching: Uses the --raw-cache cli option in Glance to trigger a background process that converts the image. Once cached, new volumes get created on Ceph instantly by leveraging Ceph's copy-on-write. The cache is allocated from the "images" RBD pool. Advantages: - user can select the images they want to cache - user can monitor the progress and can check used space for each image (cli + dashboard). - on image delete the cache is also cleared if there is no volume using it. Else it is cleared with the last volume keeping the cache data in-use. - no wasted space - complete control by user Disadvantages: - There is almost no way this is going to be accepted upstream. Maybe, yet with small hopes, if we refactor everything as a 3rd party glance feature, but we may need to push some hooks upstream to make it work. - Ceph only Cinder's caching: Uses a "shadow" tenant to store shadow volumes. The cache is created with the first volume from that image. The next volume will be created instantly by leveraging copy-on-write if the backend provides support for it (e.g. on Ceph). Space for the cache is allocated on one of the cinder backends and has a configurable threshold. Advantages: - already upstream - works with all backends - all cached images are displayed for the "admin" if he changes to the shadow tenant and lists volumes. - admin (not user, only admin) can free the cache by deleting volumes of the shadow tenant (need confirmation) Disadvantages: 1. it's either globally enabled or disabled => needs a sysinv configuration option 2. it caches every image. No way to select what image to cache nor with what backend (question below) => space waste 3. cached images are not removed. Eviction only happens when the provisioned space hits the threshold, and it will remove the oldest image, although that cached image may be important. 4. less control: Images are cached on first use and are removed when provisioned space hits the threshold. This means that the user does not have control over what images are converted and what images are in the cache. So, sometimes volume creation works fast, other times it's slow. This can be a problem especially on parallel volume creation through helm charts as, if the image did not have a cache, then stack creation may timeout. Another problem may be if the cache is small and images get rotated in the cache => we need alarms when the threshold is hit. 5. needs the shadow tenant created before use => puppet / helm chart update (for --kubernetes) Mitigations of the disadvantages above - possible solutions and alternatives: #1: Customers may not want to enable it; we should allow customers to choose when to enable it (it can be added as a custom capabilities parameter to "system storage-backend-add/system storage-backend-modify") [Li, Xiaoyan] Currently the image cache can be enabled/disabled per backend storage. https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html I think it is enough. [Ovi] Nice, we need a configuration option per backend in sysinv (most likely in the capabilities fields of the storage-backends table. See ‘system storage-backend-*’ commands). #2: No workaround comes to my mind - we can probably live with it #3: A simple solution would be to implement a cron job to clean the cache periodically, or a more elaborate solution would be to remove the cache with the last volume that used that image (needs a cinder upstream feature for it) [Li, Xiaoyan] From the doc, Cinder currently removes cached images in least-recently-used order.
Every time Cinder uses a cached image volume, it updates its last_used field. This is the normal policy for data eviction. As it is a cache and should be transparent for users, why do we need users to evict data? [Ovi] If we conclude that this is enough from a data usage perspective then we are ok with it. #4: Two options come to mind: 1. To get some control we should not limit the cache size, given that we do proper cleanup in #3. [Li, Xiaoyan] Even if we do cleanup, the limit can’t be removed. [Ovi] We may need to enhance this. 2. If we limit the cache, we have to make the limit configurable and raise an alarm once the cache gets near full, so that the admin takes preventive measures and either increases the provisioned space or cleans up the cache. #5: This is mandatory, otherwise cinder's caching won't work at all. [Li, Xiaoyan] cinder_internal_tenant_project_id and cinder_internal_tenant_user_id have to be set before enabling image cache, as this user can manage these cached image volumes. Why can’t it work with Kubernetes? [Ovi] I did not say it won’t work with Kubernetes ☺ What I said is that we need to provision the shadow tenant automatically when the feature is enabled. Questions (maybe if you get time to play with cinder's caching to get a better understanding): 1. How does cinder's caching behave when multiple volumes are created in parallel from a newly created image? Will it wait for the cache to be created before creating the volumes or just start all volume creations in parallel? [Li, Xiaoyan] Inside a volume service it is sequential to run volume creation tasks, but we have HA (multiple volume services). For the image cache, it creates an entry in the cinder db at first and then creates volumes. The primary key is not image_id+backend_storage, so it is possible that several entries or volumes will be created in the same backend storage. [Ovi] So, only the first volume creation is going to be slow? If that’s the case then parallel volume creation will work ok, as only the first volume creation will be slow. 2. What is the cinder backend that stores the cache? If it is the one used by the volume, will this lead to multiple cached volumes of the same image? Can we choose the backend? [Li, Xiaoyan] We can set whether cache is enabled per backend. If users create a volume in backend ceph from an image, a cached image volume will be created in Ceph if it is enabled. Next time, if users create a volume in IBM storage from the same image, it will create another cached image volume in IBM storage if it is enabled. [Ovi] Then we need to enable it and configure cache size per backend, I guess. 3. How is cache space provisioned? Do we need to restart cinder-volume for changes to take effect? [Li, Xiaoyan] This configuration needs to be done in the config file, so the cinder volume services need to be restarted once the config is changed. [Ovi] So after we make the changes, we re-apply the manifests and restart the services (reload the helm charts for k8s deployments) 4. Is the admin able to clean up individual cached images in the shadow tenant? Maybe also the user? [Li, Xiaoyan] Admin and shadow tenants can both do cleanup. Ovidiu ________________________________ From: Li, Xiaoyan [xiaoyan.li at intel.com] Sent: Thursday, November 22, 2018 2:41 AM To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Brent and Ovidiu, As this email has a long history, I re-summarize the raw cache in StarlingX and the Cinder upstream image cache.
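For background while reading the summary: the upstream cache is turned on per backend in cinder.conf, together with the internal (shadow) tenant settings mentioned in #5 above. A minimal example — all values here are placeholders, not taken from a real deployment:

[DEFAULT]
cinder_internal_tenant_project_id = <shadow project uuid>
cinder_internal_tenant_user_id = <shadow user uuid>

[ceph]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 200
image_volume_cache_max_count = 50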
Please vote on whether we can abandon the raw cache in StarlingX. StarlingX: creates an image cache in ceph when Glance creates an image, and deletes the cached image in ceph when deleting the original image in Glance. Cinder: when creating a volume from an image in a backend storage for the first time, Cinder creates a volume from this image and uses it as the image cache. So the next time users create another volume from this image in the same backend storage, Cinder first finds the cached image volume and clones a new volume from it. Cinder allows capacity configuration for cached images. If the space is used up, Cinder will evict the cached image volumes. From my viewpoint, Cinder image cache can achieve the same functionality as the raw cache in StarlingX with more enhancements. It works for all Cinder-supported backend storage, not just for Ceph. Best wishes Lisa From: Li, Xiaoyan Sent: Monday, November 19, 2018 9:44 AM To: Poncea, Ovidiu >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Ovidiu, A cached image (a new volume from this image) is created on a storage backend when Cinder first creates a volume in the same backend storage from the image. All the information is stored in Cinder, including volume id, image id etc. https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1368 https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L82 A cached image is deleted when the configured space for the cache is used up. So currently Cinder doesn’t delete the cached image volumes even if the image is deleted. But this can be an enhancement of the current cinder image cache. https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L117 https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1351 Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Friday, November 16, 2018 4:57 PM To: Li, Xiaoyan >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Li, Quick question: Is the cache going to be freed when an image is deleted from glance? It would be a waste to cache images that are no longer needed. Thanks, Ovidiu ________________________________ From: Li, Xiaoyan [xiaoyan.li at intel.com] Sent: Tuesday, November 13, 2018 9:19 AM To: Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi, About the raw cache function in StarlingX Cinder and Glance, I would like to remove it as Cinder has a similar function. Please see the following details. And if I would like to remove the function in StarlingX, there are two methods: 1. Submit a patch to revert the changes in Glance and Cinder. 2. Ignore these patches during upgrading StarlingX/Cinder to a new Cinder release. Which way do we prefer? Best wishes Lisa From: Li, Xiaoyan Sent: Thursday, September 20, 2018 10:17 AM To: Rowsell, Brent >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi, Brent The following is the mechanism of the Cinder volume cache. Creation of a cached volume: It creates a cached volume in the backend storage when creating from an image.
1. Create_from_image: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L890 2. Return the image cache entry: if one does not exist, it creates a new entry. https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L746 3. Create a new image-volume and a cache entry for it: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L872 Use a cached volume when creating a volume: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L723-L735 Delete the cached volume: when the capacity and number of cache entries exceed the specified limits, it deletes cache entries (cached volumes). https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L164 Best wishes Lisa From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Thursday, September 6, 2018 10:02 AM To: Li, Xiaoyan >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching We would need to review this feature to ensure it provides equivalent functionality first. If it does, great, we can look at reverting and enabling this cinder functionality. Brent From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Wednesday, September 5, 2018 9:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi all, This email is about the raw caching function in StarlingX. This feature caches an image in a backend storage like Ceph when we first create a volume in this backend storage. In fact, Cinder upstream has already had a similar function since the Pike release. https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html So I want to revert the raw caching function in StarlingX and use the Cinder generic image cache instead. The problem is that we need to update the Cinder config in StarlingX. Any comments? Best wishes Lisa -------------- next part -------------- An HTML attachment was scrubbed... URL: From salerio at gmail.com Wed Dec 19 10:09:03 2018 From: salerio at gmail.com (Peter Smith) Date: Wed, 19 Dec 2018 10:09:03 +0000 Subject: [Starlingx-discuss] Problem building CentOS mirror using supplied Dockerfile Message-ID: Hi, a fresh clone of the stx-tools master branch followed by make results in the following error while attempting to yum install golang. On investigation it seems that the URL does indeed no longer exist. http://vault.centos.org/centos/7/cloud/Source/openstack-queens/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. To address this issue please refer to the below knowledge base article https://access.redhat.com/articles/1320623 Peter Smith -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Wed Dec 19 13:21:12 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Wed, 19 Dec 2018 13:21:12 +0000 Subject: [Starlingx-discuss] Problem building CentOS mirror using supplied Dockerfile In-Reply-To: References: Message-ID: <9700A18779F35F49AF027300A49E7C765FE57252@SHSMSX101.ccr.corp.intel.com> Hi Smith, Please cherry-pick the patch below to solve it.
https://review.openstack.org/626029 Best Regards Shuicheng From: Peter Smith [mailto:salerio at gmail.com] Sent: Wednesday, December 19, 2018 6:09 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Problem building CentOS mirror using supplied Dockerfile Hi, a fresh clone of the stx-tools master branch followed by make results in the following error while attempting to yum install golang. On investigation it seems that the URL does indeed no longer exist. http://vault.centos.org/centos/7/cloud/Source/openstack-queens/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found Trying other mirror. To address this issue please refer to the below knowledge base article https://access.redhat.com/articles/1320623 Peter Smith -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Wed Dec 19 14:36:59 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 19 Dec 2018 14:36:59 +0000 Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack Distro meeting In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35DB65F5@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E067B1@SHSMSX104.ccr.corp.intel.com> Agenda & Notes for the 12/19 meeting: 1. CentOS 7.6 upgrade (including kernel 3.10.0.957) status update (Shuicheng/Martin) The kernel upgrade will be done in the feature branch instead of master (as originally planned). Martin finished the kernel upgrade (std/rt) with the patch pending for review. It cannot be merged before the out-of-tree kernel drivers are done; the out-of-tree modules follow the same story as the kernel. The kernel upgrade is part of the 7.6 upgrade now, so until the kernel is upgraded, the feature branch will not be closed. We will have 3 major tasks done on the feature branch: - srpm/rpm upgrade (https://storyboard.openstack.org/#!/story/2004522): started the task. Already have 2 sRPM patches ready for review. - Kernel upgrade (3.10.0.957, https://storyboard.openstack.org/#!/story/2004521) - Out-of-tree kernel module upgrade (use the same story: https://storyboard.openstack.org/#!/story/2004521) Need to review the OVS/DPDK upgrade strategy in the network sub-project tomorrow. May use 2004521 and add one task. 2. Ceph upgrade status (Vivian/Dehao/Changcheng) The Ceph rest-api source was removed in 13.2.2 and moved to Ceph-mgr; the original design is being refactored to implement the same functionality. Need to enable the Ceph-mgr daemon first, then enable the Ceph-rest-api plug-in. With the help from Ovidiu, made some progress. Need more help from the GDC team for the deployment. Frank: have we ever brought up a storage-dedicated system which can be used for comparison? Changcheng has deployed a virtual system. Need to have one up-and-running system to compare the logs; need a 1-on-1 tutorial. A virtual environment with a bigger system can do a storage-dedicated deployment. Abraham from GDC, who did the deployment for a storage-dedicated setup, will try again. The wiki has not been updated for a while - Abe is responsible for updating that. Frank can provide 1 hour of training on how to bring the stx services up for Ceph. 3. DevStack for Flocks plug-ins (Dean/Yi) 15 patches regarding the Devstack plug-ins merged: nfv, config, fm and update. Enabled 4 services of nfv, 3 services of config and 1 service of fault. Still have 5 patches under review: ha, metal and gui. Expect to enable 12 services for metal; gui has not been updated for a while (pending Hayde). Remaining services not enabled yet: nfv, metal and patch (update); more than 10 services to be enabled.
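A minimal local.conf sketch of the plug-in enablement pattern (the repo URL and service names here are illustrative; the real service names come from each plug-in's devstack/settings file):

[[local|localrc]]
enable_plugin stx-fault https://git.openstack.org/openstack/stx-fault
# service names are defined by the plug-in itself
enable_service fm-common fm-rest-api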
can we use Devstack for pre-merge test? Need to check the functionality which is coupled tightly. 4. Opens (all) we will cancel the meeting for next week and the week after the next. Will talk to the team after new Year. Happy Holiday! -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Shang, Dehao; 'Rowsell, Brent'; Wold, Saul; Waheed, Numan; Sun, Austin; Jones, Bruce E; Liu, ZhipengS; starlingx-discuss at lists.starlingx.io; Troyer, Dean; Hu, Yong; 'Khalil, Ghada'; Zhu, Vivian; Lin, Shuicheng; Somerville, Jim Cc: 'Young, Ken'; Hu, Wei W; Armstrong, Robert H; Martinez Monroy, Elio; 'Hellmann, Gil'; 'Chen, Jacky'; 'Eslimi, Dariush'; Lara, Cesar; Cobbley, David A; 'Waines, Greg'; Gomez, Juan P; Martinez Landa, Hayde; Arce Moreno, Abraham; Perez Rodriguez, Humberto I; Perez Carranza, Jose; 'Seiler, Glenn' Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, December 19, 2018 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From bruce.e.jones at intel.com Wed Dec 19 15:27:11 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 19 Dec 2018 15:27:11 +0000 Subject: [Starlingx-discuss] Meeting notes from Dec 19th community call Message-ID: <9A85D2917C58154C960D95352B22818BB1ED6BE9@fmsmsx117.amr.corp.intel.com> Agenda and notes - Dec 19th call * This is our last meeting for 2018. Thank you all very much! * Please register for the January community meeting so we can get a count for logistics (meals, etc...): https://starlingx_jan2019meetup.eventbrite.com * Can/should we move the Multi-OS meeting time (currently 4PM PDT)? * Did not discuss, Bruce to raise on the mailing list * Ceph upgrade - how to resolve the issues the team is hitting? * Work is being managed in the distro.other meeting, please help if you can! * We held our first distro.openstack meeting. Thanks to all who attended. * We will be rebasing the system to OpenStack master without patches. In a branch. Not until late Jan at the earliest. Will be exciting * Planning and lots of work for addressing the key features highlighted by Brent is in progress. * Re-tagging launchpads and story boards -- from stx.2019.03 to stx.2019.05. Any concerns? * Release team (Bruce, Ghada) to meet and put a naming/tagging proposal together * OpenStack Days UK is interested in StarlingX * Denver Summit CFP is out! * Next steps for Release Planning * TSC List of Priority Features Defined: https://ethercalc.openstack.org/tmv05jth0a5y * In early January (before Chandler meeting), define the PL for each feature item. Each PL updates the plan. ? Ghada to come up with format (ethercalc / google sheet?) ? Many of the features on the list span multiple team. Agreed to having a Single Contact Person per feature * Responsible for organizing the feature, managing the plan and bringing the involved teams together as needed -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sgw at linux.intel.com Wed Dec 19 17:50:19 2018 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 19 Dec 2018 09:50:19 -0800 Subject: [Starlingx-discuss] Linter for Go? Message-ID: Folks, I am curious, do we have a linter for Go enabled in Zuul? Does it make sense to add one now that we are seeing some Go code being added to the project? Sau! From erich.cordoba.malibran at intel.com Wed Dec 19 17:52:30 2018 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Wed, 19 Dec 2018 17:52:30 +0000 Subject: [Starlingx-discuss] Linter for Go? In-Reply-To: References: Message-ID: Some Go projects use gometalinter, which supports a broad set of linters https://github.com/alecthomas/gometalinter -Erich On 12/19/18, 11:50 AM, "Saul Wold" wrote: Folks, I am curious, do we have a linter for Go enabled in Zuul? Does it make sense to add one now that we are seeing some Go code being added to the project? Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Wed Dec 19 18:00:48 2018 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 19 Dec 2018 18:00:48 +0000 Subject: [Starlingx-discuss] Multi-OS meeting Message-ID: <9A85D2917C58154C960D95352B22818BB1ED6D73@fmsmsx117.amr.corp.intel.com> We have been running a weekly call for the Multi-OS work, but attendance has been low. If you would like to attend and help out, but have issues with the time of the call - please let me know what time would work for you. If there is enough interest, we can try to move the call. Thank you! brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Wed Dec 19 18:25:18 2018 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 19 Dec 2018 12:25:18 -0600 Subject: [Starlingx-discuss] Linter for Go? In-Reply-To: References: Message-ID: > On 12/19/18, 11:50 AM, "Saul Wold" wrote: > I am curious, do we have a linter for Go enabled in Zuul? Does it make > sense to add one now that we are seeing some Go code being added to the > project? The OpenStack Golang Project Testing Interface (PTI)[0] calls out using go fmt for this. I am not aware of any pre-existing upstream jobs we could crib from to do this. I did implement a reference/test for the PTI in the old golang-client project[1] with a make interface; using tox is easy enough. dt [0] https://governance.openstack.org/tc/reference/pti/golang.html [1] https://git.openstack.org/cgit/openstack/golang-client/tree/Makefile -- Dean Troyer dtroyer at gmail.com From cesar.lara at intel.com Wed Dec 19 20:53:10 2018 From: cesar.lara at intel.com (Lara, Cesar) Date: Wed, 19 Dec 2018 20:53:10 +0000 Subject: [Starlingx-discuss] [build] [meetings] Build team meeting minutes 12/13/2018 Message-ID: <0B566C62EC792145B40E29EFEBF1AB4710596BD7@fmsmsx104.amr.corp.intel.com> Build team meeting 12/13/2018 Attendees Victor, Jason, Ken, Saul, Brent, Memo, Erich, Felipe, Scott, Luis, Dean, Cesar Agenda - ISO files retention follow up - Overall picture for next gen build system - Holiday schedule - Opens - Bug triage Notes ISO files retention follow up: Unless we have some disk space issues, we keep milestone ISO files until we retire the release that these files are part of. Once a release is out we will age out the milestone ISOs.
The proposal sent by Ken to the mailing list is as follows: at release N, age out all milestones related to N-2 (N being the release). Archive builds need to have a different discussion, not necessarily with this team but at the TSC level, along with the supported versions of the product. Daily ISOs will be kept for 14 days. Overall picture for next gen build system: There was a presentation around the implementation of a multi-OS build system by VictorR and Saul; some of the highlights of that conversation were: How we build an image (ISO or container) needs to be the same process for any OS. The build process should start with a pile of packages and configurations that we will feed into the build mechanism. The direction should not be monolithic: for every distro, we start from how those distros are being built, and then we figure out how we distribute those (installers, ISO, container). Sysconfig/kickstart setup should be abstracted from the build process, depending on the OS. We should follow a model where we have many Git repos and tags for releases and archives. We also should take into account that the ISO is going to shrink; there are other artifacts, like the containers, that will not necessarily be in the ISO. We need to consider a philosophy in which we provide the same OS in the host and the containers, so we avoid supporting different CVEs and packages for containers and hosts. AR: Victor to break this down into multiple specs rather than just one. Holiday schedule - we will align with the other sub-teams and the community meeting for the December break; we are cancelling our December 27th meeting and should evaluate the January 3rd 2019 one. Bug triage - all bugs for Build are properly assigned and have people working on them. Opens - no opens Regards Cesar Lara Software Engineering Manager OpenSource Technology Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Wed Dec 19 21:22:40 2018 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Wed, 19 Dec 2018 15:22:40 -0600 Subject: [Starlingx-discuss] Release notes/change log creation script In-Reply-To: <64D76F71-F1B2-434F-9EA8-5F8066D26647@intel.com> References: <64D76F71-F1B2-434F-9EA8-5F8066D26647@intel.com> Message-ID: Ken Any update on this? Do you still need this script for the CENGN release notes/changelog? Can we discuss this topic at tomorrow's build meeting? Regards On Thu, Dec 6, 2018 at 10:56 AM Ponce Castaneda, Guillermo A wrote: > > Hello everybody, > > I want to share with you the following script that we use internally to create a Change Log every time we generate a new StarlingX ISO. > Here, internally, we have a Jenkins server that creates (or tries to) a new ISO every day, and with this ISO we also create a manifest.xml file, and by using another job that is triggered just as the ISO job finishes we create the Change Log by using the following script: > https://gist.github.com/gaponcec/99f19e2bc972761e11ccba2260622d10 > > The script has the following requirements: Argparse, gitpython, xmljson, dictdiffer and PTable. > It requires two parameters, the old manifest.xml and the new one, and it should be run like this: > $ python3 create_change_log.py -o old_manifest.xml -n new_manifest.xml > > This will give you the change log on stdout. > > In our Jenkins script we save a file with this and e-mail it to the team afterwards. > > Please let me know what you all think; feedback is really appreciated.
> > - Guillermo Ponce > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ken.Young at windriver.com Wed Dec 19 21:31:04 2018 From: Ken.Young at windriver.com (Young, Ken) Date: Wed, 19 Dec 2018 21:31:04 +0000 Subject: [Starlingx-discuss] Release notes/change log creation script In-Reply-To: References: <64D76F71-F1B2-434F-9EA8-5F8066D26647@intel.com> Message-ID: We haven't gotten to this yet but we will. Let's discuss tomorrow. /KenY On 2018-12-19, 4:23 PM, "Victor Rodriguez" wrote: Ken Any update on this? Do you still need this script for the CENGN release notes/changelog? Can we discuss this topic at tomorrow's build meeting? Regards On Thu, Dec 6, 2018 at 10:56 AM Ponce Castaneda, Guillermo A wrote: > > Hello everybody, > > I want to share with you the following script that we use internally to create a Change Log every time we generate a new StarlingX ISO. > Here, internally, we have a Jenkins server that creates (or tries to) a new ISO every day, and with this ISO we also create a manifest.xml file, and by using another job that is triggered just as the ISO job finishes we create the Change Log by using the following script: > https://gist.github.com/gaponcec/99f19e2bc972761e11ccba2260622d10 > > The script has the following requirements: Argparse, gitpython, xmljson, dictdiffer and PTable. > It requires two parameters, the old manifest.xml and the new one, and it should be run like this: > $ python3 create_change_log.py -o old_manifest.xml -n new_manifest.xml > > This will give you the change log on stdout. > > In our Jenkins script we save a file with this and e-mail it to the team afterwards. > > Please let me know what you all think; feedback is really appreciated. > > - Guillermo Ponce > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
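A minimal sketch of the manifest-diff idea from the thread above — independent of the actual gist, using only the Python standard library, and assuming the usual repo manifest layout of <project name="..." revision="..."/> elements:

# Rough sketch: report projects whose pinned revision changed between
# two repo manifest.xml files.
import sys
import xml.etree.ElementTree as ET

def load_revisions(path):
    root = ET.parse(path).getroot()
    return {p.get('name'): p.get('revision') for p in root.iter('project')}

def main(old_path, new_path):
    old, new = load_revisions(old_path), load_revisions(new_path)
    for name in sorted(old.keys() | new.keys()):
        before, after = old.get(name), new.get(name)
        if before != after:
            print('%s: %s -> %s' % (name, before or '(added)', after or '(removed)'))

if __name__ == '__main__':
    main(sys.argv[1], sys.argv[2])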
> > https://review.openstack.org/626029 > > > > > > Best Regards > > Shuicheng > > > > *From:* Peter Smith [mailto:salerio at gmail.com] > *Sent:* Wednesday, December 19, 2018 6:09 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] Problem building lentos mirror using > supplied Dockerfile > > > > Hi, a fresh clone of six-tools master branch followed by make results in > the following error while attempting to yum install gaolang. On > investigation it seems that the URL does indeed no longer exist. > > > > > http://vault.centos.org/centos/7/cloud/Source/openstack-queens/repodata/repomd.xml: > [Errno 14] HTTP Error 404 - Not Found > > Trying other mirror. > > To address this issue please refer to the below knowledge base article > > > > https://access.redhat.com/articles/1320623 > > > > Peter Smith > > > > -- Best Regards Peter -------------- next part -------------- An HTML attachment was scrubbed... URL: From rtidwell at suse.com Wed Dec 19 22:07:49 2018 From: rtidwell at suse.com (Ryan Tidwell) Date: Wed, 19 Dec 2018 16:07:49 -0600 Subject: [Starlingx-discuss] RFE Review Request for patch 88b7bc7 for StartlingX upstreaming In-Reply-To: References: Message-ID: <3002b8eb-3bd6-ef4a-b490-b9bab3e9de22@suse.com> Chenjie, Thanks for reaching out. I've taken a glance over the RFE, it's very detailed but lacks some context. Just to be sure I understand this correctly, is the idea here to use BGP EVPN as a method for providing l2pop functionality? If so, I've actually been thinking about this for some time now and I think it would be great to provide an alternative to the current l2pop implementation. I could certainly see supporting this functionality in some fashion in neutron-dynamic-routing. It looks like this RFE is one of several in the neutron space that would be required to achieve this functionaliy, is that correct? If I'm not understanding the use case correctly, could you explain what you have in mind? In the mean time I'll look this over again and leave some comments. -Ryan On 12/19/18 2:17 AM, Xu, Chenjie wrote: > > Hi Ryan, > > Could you please help review the RFE for Neutron > (https://bugs.launchpad.net/neutron/+bug/1806316) and leave a comment? > >   > > This RFE proposes to add an RPC query API to l2pop. The API is used to > allow an agent to query the full FDB for a list of network_id values. > This is needed by the agents which want to support l2pop but don't > have ports bound to themselves (*Potential for BGP Dynamic Routing > agent*). The project stx-neutron-dynamic-routing is a fork project for > neutron-dynamic-routing. The project stx-neutron-dynamic-routing > implements the l2pop for BGP Dynamic Routing agent: > > https://github.com/starlingx-staging/stx-neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py#L1053 > > https://github.com/starlingx-staging/stx-neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py#L1180 > >   > > The plan is to upstream these changes to neutron-dynamic-routing. > However, in the short term this is not being prioritized nor has any > attempt been made to approach the individual project teams about > getting this accepted. *Could you leave a comment on the potential use?* > >   > > Best Regards, > > Xu, Chenjie > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chenzz at certusnet.com.cn Thu Dec 20 01:55:40 2018 From: chenzz at certusnet.com.cn (chenzz) Date: Thu, 20 Dec 2018 09:55:40 +0800 Subject: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. References: <201812110931334306841@certusnet.com.cn> Message-ID: <201812200955395695071@certusnet.com.cn> hi there: My deployment environment is Ubuntu 16.04. When making iso images, I executed the ‘make base-build’ command, it reports:make: *** No rule to make target `base-build'. Stop. It seems out of order. So I executed the ‘make’ command to instead and it t seems could work, Please tell me how to execute the ‘make base-build’ command correctly. When I go to the command of ‘build-pkgs’, it reports command not found, I speculated it might be I executed the ‘make’ command,Please tell me how to solve this problem again. Thanks very much Bill chen | CertusNet Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing, 210042, P.R.China Tel: +86 25 6642 3768 -XXXX Mobile: +86 15295592415 Web: www.certusnet.com.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Thu Dec 20 02:38:45 2018 From: yong.hu at intel.com (Hu, Yong) Date: Thu, 20 Dec 2018 02:38:45 +0000 Subject: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. In-Reply-To: <201812200955395695071@certusnet.com.cn> References: <201812110931334306841@certusnet.com.cn> <201812200955395695071@certusnet.com.cn> Message-ID: Bill, I suppose you were running “make” in “stx-tools”. Here are some steps FYI. Especially you need to understand what I made in comments starting with #. --------------------- Start Here -------------------------------------------------- # update localrc if needed – MUST DO # use Makefile to build a docker image based on your settings # NOTES: in advance into Dockerfile, need to add http/https # proxies and "/etc/yum.conf", which will be used in CentOS make all # after quite a while (depending on network), the docker # image should be made, check it by: docker images # if the image is made successfully, to make a container: ./tb.sh run # after the container is made and running, to check by: docker ps # if the container is running, to login in shell in the # container, with user/path defined in your localrc ./tb.sh exec From: chenzz Date: Thursday, 20 December 2018 at 10:14 AM To: starlingx-discuss Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. hi there: My deployment environment is Ubuntu 16.04. When making iso images, I executed the ‘make base-build’ command, it reports:make: *** No rule to make target `base-build'. Stop. It seems out of order. So I executed the ‘make’ command to instead and it t seems could work, Please tell me how to execute the ‘make base-build’ command correctly. When I go to the command of ‘build-pkgs’, it reports command not found, I speculated it might be I executed the ‘make’ command,Please tell me how to solve this problem again. Thanks very much ________________________________ Bill chen | CertusNet Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing, 210042, P.R.China Tel: +86 25 6642 3768 -XXXX Mobile: +86 15295592415 Web: www.certusnet.com.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Thu Dec 20 02:47:20 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 20 Dec 2018 02:47:20 +0000 Subject: [Starlingx-discuss] Problems in build StatlingX mirror. 
Asking for help. In-Reply-To: <201812200955395695071@certusnet.com.cn> References: <201812110931334306841@certusnet.com.cn> <201812200955395695071@certusnet.com.cn> Message-ID: <9700A18779F35F49AF027300A49E7C765FE575E7@SHSMSX101.ccr.corp.intel.com> Hi, The cmd has been removed. Please replace it with “make all”. Hi Abraham, Could you help check the developer guide [0] and update it with latest instruction? Thanks. [0]: https://docs.starlingx.io/developer_guide/index.html Best Regards Shuicheng From: chenzz [mailto:chenzz at certusnet.com.cn] Sent: Thursday, December 20, 2018 9:56 AM To: starlingx-discuss Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. hi there: My deployment environment is Ubuntu 16.04. When making iso images, I executed the ‘make base-build’ command, it reports:make: *** No rule to make target `base-build'. Stop. It seems out of order. So I executed the ‘make’ command to instead and it t seems could work, Please tell me how to execute the ‘make base-build’ command correctly. When I go to the command of ‘build-pkgs’, it reports command not found, I speculated it might be I executed the ‘make’ command,Please tell me how to solve this problem again. Thanks very much ________________________________ Bill chen | CertusNet Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing, 210042, P.R.China Tel: +86 25 6642 3768 -XXXX Mobile: +86 15295592415 Web: www.certusnet.com.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenzz at certusnet.com.cn Thu Dec 20 05:53:34 2018 From: chenzz at certusnet.com.cn (chenzz) Date: Thu, 20 Dec 2018 13:53:34 +0800 Subject: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. References: <201812110931334306841@certusnet.com.cn>, <201812200955395695071@certusnet.com.cn>, Message-ID: <201812201353342045172@certusnet.com.cn> Hi Yong, Thanks for your reply, one more question: in the following build packages steps, I try to execute “build-pkgs” command, it reports command not found, could you answer this question for me? BRs CertusNet 赛特斯信息科技股份有限公司 陈峥峥 | 现场工程师 南京市玄武区玄武大道699-22号18幢 手机:152 9559 2415 公司网址:www.certusnet.com.cn From: Hu, Yong Date: 2018-12-20 10:38 To: chenzz; starlingx-discuss Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. Bill, I suppose you were running “make” in “stx-tools”. Here are some steps FYI. Especially you need to understand what I made in comments starting with #. --------------------- Start Here -------------------------------------------------- # update localrc if needed – MUST DO # use Makefile to build a docker image based on your settings # NOTES: in advance into Dockerfile, need to add http/https # proxies and "/etc/yum.conf", which will be used in CentOS make all # after quite a while (depending on network), the docker # image should be made, check it by: docker images # if the image is made successfully, to make a container: ./tb.sh run # after the container is made and running, to check by: docker ps # if the container is running, to login in shell in the # container, with user/path defined in your localrc ./tb.sh exec From: chenzz Date: Thursday, 20 December 2018 at 10:14 AM To: starlingx-discuss Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. hi there: My deployment environment is Ubuntu 16.04. When making iso images, I executed the ‘make base-build’ command, it reports:make: *** No rule to make target `base-build'. Stop. 
It seems out of order. So I executed the ‘make’ command to instead and it t seems could work, Please tell me how to execute the ‘make base-build’ command correctly. When I go to the command of ‘build-pkgs’, it reports command not found, I speculated it might be I executed the ‘make’ command,Please tell me how to solve this problem again. Thanks very much Bill chen | CertusNet Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing, 210042, P.R.China Tel: +86 25 6642 3768 -XXXX Mobile: +86 15295592415 Web: www.certusnet.com.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Thu Dec 20 06:25:23 2018 From: austin.sun at intel.com (Sun, Austin) Date: Thu, 20 Dec 2018 06:25:23 +0000 Subject: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. In-Reply-To: <201812201353342045172@certusnet.com.cn> References: <201812110931334306841@certusnet.com.cn>, <201812200955395695071@certusnet.com.cn>, <201812201353342045172@certusnet.com.cn> Message-ID: Hi : Could you run “which build-pkgs” in build container and check if the path ? And check if /localdisk/designer/{you name}/starlingx/cgcs-root/build-tools/build-pkgs exists ? Thanks. BR Austin Sun. From: chenzz [mailto:chenzz at certusnet.com.cn] Sent: Thursday, December 20, 2018 1:54 PM To: Hu, Yong ; starlingx-discuss Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. Hi Yong, Thanks for your reply, one more question: in the following build packages steps, I try to execute “build-pkgs” command, it reports command not found, could you answer this question for me? BRs ________________________________ CertusNet 赛特斯信息科技股份有限公司 陈峥峥 | 现场工程师 南京市玄武区玄武大道699-22号18幢 手机:152 9559 2415 公司网址:www.certusnet.com.cn From: Hu, Yong Date: 2018-12-20 10:38 To: chenzz; starlingx-discuss Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. Bill, I suppose you were running “make” in “stx-tools”. Here are some steps FYI. Especially you need to understand what I made in comments starting with #. --------------------- Start Here -------------------------------------------------- # update localrc if needed – MUST DO # use Makefile to build a docker image based on your settings # NOTES: in advance into Dockerfile, need to add http/https # proxies and "/etc/yum.conf", which will be used in CentOS make all # after quite a while (depending on network), the docker # image should be made, check it by: docker images # if the image is made successfully, to make a container: ./tb.sh run # after the container is made and running, to check by: docker ps # if the container is running, to login in shell in the # container, with user/path defined in your localrc ./tb.sh exec From: chenzz > Date: Thursday, 20 December 2018 at 10:14 AM To: starlingx-discuss > Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. hi there: My deployment environment is Ubuntu 16.04. When making iso images, I executed the ‘make base-build’ command, it reports:make: *** No rule to make target `base-build'. Stop. It seems out of order. So I executed the ‘make’ command to instead and it t seems could work, Please tell me how to execute the ‘make base-build’ command correctly. When I go to the command of ‘build-pkgs’, it reports command not found, I speculated it might be I executed the ‘make’ command,Please tell me how to solve this problem again. Thanks very much ________________________________ Bill chen | CertusNet Building 18, No. 
699-22 Xuanwu Avenue, Xuanwu District, Nanjing, 210042, P.R.China Tel: +86 25 6642 3768 -XXXX Mobile: +86 15295592415 Web: www.certusnet.com.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Thu Dec 20 06:28:03 2018 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 20 Dec 2018 06:28:03 +0000 Subject: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. In-Reply-To: <201812201353342045172@certusnet.com.cn> References: <201812110931334306841@certusnet.com.cn>, <201812200955395695071@certusnet.com.cn>, <201812201353342045172@certusnet.com.cn> Message-ID: <9700A18779F35F49AF027300A49E7C765FE5765F@SHSMSX101.ccr.corp.intel.com> Hi, There are two container, and “build-pkgs” is in the build container only. The 1st container is for the mirror, it will not have this cmd. Best Regards Shuicheng From: chenzz [mailto:chenzz at certusnet.com.cn] Sent: Thursday, December 20, 2018 1:54 PM To: Hu, Yong ; starlingx-discuss Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. Hi Yong, Thanks for your reply, one more question: in the following build packages steps, I try to execute “build-pkgs” command, it reports command not found, could you answer this question for me? BRs ________________________________ CertusNet 赛特斯信息科技股份有限公司 陈峥峥 | 现场工程师 南京市玄武区玄武大道699-22号18幢 手机:152 9559 2415 公司网址:www.certusnet.com.cn From: Hu, Yong Date: 2018-12-20 10:38 To: chenzz; starlingx-discuss Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. Bill, I suppose you were running “make” in “stx-tools”. Here are some steps FYI. Especially you need to understand what I made in comments starting with #. --------------------- Start Here -------------------------------------------------- # update localrc if needed – MUST DO # use Makefile to build a docker image based on your settings # NOTES: in advance into Dockerfile, need to add http/https # proxies and "/etc/yum.conf", which will be used in CentOS make all # after quite a while (depending on network), the docker # image should be made, check it by: docker images # if the image is made successfully, to make a container: ./tb.sh run # after the container is made and running, to check by: docker ps # if the container is running, to login in shell in the # container, with user/path defined in your localrc ./tb.sh exec From: chenzz > Date: Thursday, 20 December 2018 at 10:14 AM To: starlingx-discuss > Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. hi there: My deployment environment is Ubuntu 16.04. When making iso images, I executed the ‘make base-build’ command, it reports:make: *** No rule to make target `base-build'. Stop. It seems out of order. So I executed the ‘make’ command to instead and it t seems could work, Please tell me how to execute the ‘make base-build’ command correctly. When I go to the command of ‘build-pkgs’, it reports command not found, I speculated it might be I executed the ‘make’ command,Please tell me how to solve this problem again. Thanks very much ________________________________ Bill chen | CertusNet Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing, 210042, P.R.China Tel: +86 25 6642 3768 -XXXX Mobile: +86 15295592415 Web: www.certusnet.com.cn -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From chenjie.xu at intel.com Thu Dec 20 07:12:41 2018 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Thu, 20 Dec 2018 07:12:41 +0000 Subject: [Starlingx-discuss] RFE Review Request for patch 88b7bc7 for StartlingX Upstreaming Message-ID: Hi Ryan, Thank you for your reply! For your questions: 1. What's the context for this RFE: StarlingX extends Neutron, networking-bgpvpn and neutron-dynamic-routing as stx-neutron, stx-networking-bgpvpn and stx-neutron-dynamic-routing. These extensions enable Neutron network to learn external MAC/IP information from BGPVPN network and publish the external MAC/IP information internally via the L2POP mechanism. Similarly, internal MAC/IP information in Neutron network is published to remote systems via a subscription to L2POP RPC notifications and finally will be received by the BGP agent. 2. It looks like this RFE is one of several in the neutron space that would be required to achieve this functionality, is that correct? Yes, it is. To achieve what I describe in question one, 3 RFE has been posted to Neutron. 1). RFE "Enable other subprojects to extend l2pop fdb information" (https://bugs.launchpad.net/neutron/+bug/1793653) should be used by networking-bgpvpn to publish the (MAC, IP) information in BGPVPN network to Neutron network. 2). RFE "Add l2pop support for floating IP resources" (https://bugs.launchpad.net/neutron/+bug/1803494) extends l2pop FDB information to avoid the broadcast in the scenario that two VMs, residing on different network, communicate by their respective floating IP. 3). This RFE proposes to add an RPC query API to l2pop. This RFE is used by BGP Dynamic Routing agent to pull the l2pop FDB information. For the changes in stx-networking-bgpvpn and stx-neutron-dynamic-routing, the plan is to upstream them to networking-bgpvpn and neutron-dynamic-routing. However, in the short term this is not being prioritized nor has any attempt been made to approach the individual project teams about getting this accepted. I'm not sure if this is the same idea that using BGPVPN to provide an alternative to the current l2pop implementation or not. Looking forward to your reply! Best Regards, Xu, Chenjie From: Ryan Tidwell [mailto:rtidwell at suse.com] Sent: Thursday, December 20, 2018 6:08 AM To: Xu, Chenjie Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] RFE Review Request for patch 88b7bc7 for StartlingX upstreaming Chenjie, Thanks for reaching out. I've taken a glance over the RFE, it's very detailed but lacks some context. Just to be sure I understand this correctly, is the idea here to use BGP EVPN as a method for providing l2pop functionality? If so, I've actually been thinking about this for some time now and I think it would be great to provide an alternative to the current l2pop implementation. I could certainly see supporting this functionality in some fashion in neutron-dynamic-routing. It looks like this RFE is one of several in the neutron space that would be required to achieve this functionaliy, is that correct? If I'm not understanding the use case correctly, could you explain what you have in mind? In the mean time I'll look this over again and leave some comments. -Ryan On 12/19/18 2:17 AM, Xu, Chenjie wrote: Hi Ryan, Could you please help review the RFE for Neutron (https://bugs.launchpad.net/neutron/+bug/1806316) and leave a comment? This RFE proposes to add an RPC query API to l2pop. The API is used to allow an agent to query the full FDB for a list of network_id values. 
This is needed by the agents which want to support l2pop but don't have ports bound to themselves (Potential for BGP Dynamic Routing agent). The project stx-neutron-dynamic-routing is a fork project for neutron-dynamic-routing. The project stx-neutron-dynamic-routing implements the l2pop for BGP Dynamic Routing agent: https://github.com/starlingx-staging/stx-neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py#L1053 https://github.com/starlingx-staging/stx-neutron-dynamic-routing/blob/master/neutron_dynamic_routing/services/bgp/agent/bgp_dragent.py#L1180 The plan is to upstream these changes to neutron-dynamic-routing. However, in the short term this is not being prioritized nor has any attempt been made to approach the individual project teams about getting this accepted. Could you leave a comment on the potential use? Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Thu Dec 20 07:51:38 2018 From: yong.hu at intel.com (Hu, Yong) Date: Thu, 20 Dec 2018 07:51:38 +0000 Subject: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE5765F@SHSMSX101.ccr.corp.intel.com> References: <201812110931334306841@certusnet.com.cn> <201812200955395695071@certusnet.com.cn> <201812201353342045172@certusnet.com.cn> <9700A18779F35F49AF027300A49E7C765FE5765F@SHSMSX101.ccr.corp.intel.com> Message-ID: <5B5A801F-EE3B-40B1-B510-E522D1DBB4B0@intel.com> And moreover, if everything goes well, “./tb.sh exec” will help you go into the build container. Inside the build container you should have the complete build environment. From: "Lin, Shuicheng" Date: Thursday, 20 December 2018 at 2:28 PM To: chenzz , "Hu, Yong" , starlingx-discuss Subject: RE: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. Hi, There are two container, and “build-pkgs” is in the build container only. The 1st container is for the mirror, it will not have this cmd. Best Regards Shuicheng From: chenzz [mailto:chenzz at certusnet.com.cn] Sent: Thursday, December 20, 2018 1:54 PM To: Hu, Yong ; starlingx-discuss Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. Hi Yong, Thanks for your reply, one more question: in the following build packages steps, I try to execute “build-pkgs” command, it reports command not found, could you answer this question for me? BRs ________________________________ CertusNet 赛特斯信息科技股份有限公司 陈峥峥 | 现场工程师 南京市玄武区玄武大道699-22号18幢 手机:152 9559 2415 公司网址:www.certusnet.com.cn From: Hu, Yong Date: 2018-12-20 10:38 To: chenzz; starlingx-discuss Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. Bill, I suppose you were running “make” in “stx-tools”. Here are some steps FYI. Especially you need to understand what I made in comments starting with #. 
--------------------- Start Here -------------------------------------------------- # update localrc if needed – MUST DO # use Makefile to build a docker image based on your settings # NOTES: in advance into Dockerfile, need to add http/https # proxies and "/etc/yum.conf", which will be used in CentOS make all # after quite a while (depending on network), the docker # image should be made, check it by: docker images # if the image is made successfully, to make a container: ./tb.sh run # after the container is made and running, to check by: docker ps # if the container is running, to login in shell in the # container, with user/path defined in your localrc ./tb.sh exec From: chenzz > Date: Thursday, 20 December 2018 at 10:14 AM To: starlingx-discuss > Subject: Re: [Starlingx-discuss] Problems in build StatlingX mirror. Asking for help. hi there: My deployment environment is Ubuntu 16.04. When making iso images, I executed the ‘make base-build’ command, it reports:make: *** No rule to make target `base-build'. Stop. It seems out of order. So I executed the ‘make’ command to instead and it t seems could work, Please tell me how to execute the ‘make base-build’ command correctly. When I go to the command of ‘build-pkgs’, it reports command not found, I speculated it might be I executed the ‘make’ command,Please tell me how to solve this problem again. Thanks very much ________________________________ Bill chen | CertusNet Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing, 210042, P.R.China Tel: +86 25 6642 3768 -XXXX Mobile: +86 15295592415 Web: www.certusnet.com.cn -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Thu Dec 20 00:58:14 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 20 Dec 2018 00:58:14 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX non-OpenStack Distro meeting Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E06E00@SHSMSX104.ccr.corp.intel.com> Enjoy your holiday off! * * Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 5548 bytes Desc: not available URL: From cindy.xie at intel.com Thu Dec 20 00:58:36 2018 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 20 Dec 2018 00:58:36 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX non-OpenStack Distro meeting Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E06E23@SHSMSX104.ccr.corp.intel.com> Enjoy your holiday off! 
* Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 5543 bytes Desc: not available URL: From serverascode at gmail.com Thu Dec 20 12:23:22 2018 From: serverascode at gmail.com (Curtis) Date: Thu, 20 Dec 2018 07:23:22 -0500 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: References: Message-ID: On Thu, Dec 13, 2018 at 2:42 PM Peters, Matt wrote: > Hello, > > Attached are the slides I presented during the TSC call on Dec 13, 2018 > for the proposed improvements to the StarlingX initial bootstrap and system > inventory. As indicated on the call, a detailed stx-spec will follow, but > wanted to share the high-level changes being proposed before the arrival of > the spec to get some early feedback. > Hi Matt, One question I have is around Zero Touch Provisioning (ZTP). Is the overall concept being put forward here a solution for the ZTP problem or is it a replacement of config_controller with an (remotely run?) Ansible playbook, ie. presuming the OS is already available in some capacity. Thanks, Curtis > > > Regards, Matt > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Matt.Peters at windriver.com Thu Dec 20 12:33:42 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Thu, 20 Dec 2018 12:33:42 +0000 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: References: Message-ID: Hello Curtis, No, the proposal is not specifically addressing ZTP. The proposed changes are to make improvements for the initial host deployment so that it permits running locally or remotely and also uses more standard technologies for deployment. In addition, the system inventory changes expand the capabilities for identifying and classifying hardware, simplifying the installation and commissioning steps, which could be leveraged for improved deployment automation. -Matt From: Curtis Date: Thursday, December 20, 2018 at 7:23 AM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] Deployment Improvements Proposal On Thu, Dec 13, 2018 at 2:42 PM Peters, Matt > wrote: Hello, Attached are the slides I presented during the TSC call on Dec 13, 2018 for the proposed improvements to the StarlingX initial bootstrap and system inventory. As indicated on the call, a detailed stx-spec will follow, but wanted to share the high-level changes being proposed before the arrival of the spec to get some early feedback. Hi Matt, One question I have is around Zero Touch Provisioning (ZTP). Is the overall concept being put forward here a solution for the ZTP problem or is it a replacement of config_controller with an (remotely run?) Ansible playbook, ie. 
presuming the OS is already available in some capacity. Thanks, Curtis Regards, Matt _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Thu Dec 20 15:31:18 2018 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 20 Dec 2018 07:31:18 -0800 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: <7a4dd372-c5a0-1a44-f8c6-49543e6f9977@linux.intel.com> References: <3E8825BD-709B-45BD-8890-717740295A16@windriver.com> <7a4dd372-c5a0-1a44-f8c6-49543e6f9977@linux.intel.com> Message-ID: bump On 12/14/18 10:40 AM, Saul Wold wrote: > > See more inline > > On 12/14/18 6:43 AM, Peters, Matt wrote: >> See inline. >> >> *From: *"Wang, Yi C" >> *Date: *Friday, December 14, 2018 at 3:53 AM >> *To: *"Peters, Matt" >> *Cc: *"starlingx-discuss at lists.starlingx.io" >> >> *Subject: *RE: Deployment Improvements Proposal >> >> Hi Matt, >> >> I just went through your slides. And I have a few questions. I >> appreciate if you can share more information about your proposal. Many >> thanks! >> >> 1. We know config_controller will do many things, like bootstrap >> configuration and controller configuration together with required >> hieradata generation. All the jobs of config_controller will be  taken >> over by Ansible, or just part of them? >> >> /MP> Yes most of these tasks will be handled by the Ansible playbook. >> However, much of the existing capabilities may be leveraged in the >> implementation to avoid re-writing everything.  The details will be >> outlined in the forthcoming spec./ >> > We will look forward to the coming spec(s). > > Will you be addressing how to handle different OS setup?  Ie will this > move some of the existing kickstart related configuration into the > Ansible playbook?  I am just starting to look at Anisble, so I am not > sure how much early system configuration it can take over from kickstart > type of scripting. > > This is one of the challenges with supporting multiple os distributions, > not just the build side, but the installation and configuration. > >> 2. Does WindRiver has plan to replace Puppet with Ansible for all >> configuration jobs in the future? >> >> /MP> There are no specific plans to replace Puppet for all >> configuration management.  However, there are several features being >> actively developed in StarlingX that will be changing the existing >> Puppet manifests (e.g. OpenStack Containerization)./ >> > I think this has been mentioned already, a concern is that > containerization won't solve all problems, it just moves where and how > the configuration work happens. I think we may still need to address how > containers are handled as we need to address different OSes inside of > the containers. > >> 3. For the first controller, we still need local execution of Ansible >> playbook for initial bootstrap. Is my understanding correct? >> >> /MP> This is one of the main drivers for changing some of the existing >> config_controller and Puppet manifest handling.  The operator will >> have the ability to run the Ansible playbook locally or remotely. / >> > > Another question is will this work further reduce the need for the > configuration related packages (again multi-os related)?  Can we move > the system utility configuration into this Deployment work? > > Thanks >    Sau! > >> BR. 
>> >> Yi >> >> *From:*Peters, Matt [mailto:Matt.Peters at windriver.com] >> *Sent:* Friday, December 14, 2018 3:11 AM >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* [Starlingx-discuss] Deployment Improvements Proposal >> >> Hello, >> >> Attached are the slides I presented during the TSC call on Dec 13, >> 2018 for the proposed improvements to the StarlingX initial bootstrap >> and system inventory.  As indicated on the call, a detailed stx-spec >> will follow, but wanted to share the high-level changes being proposed >> before the arrival of the spec to get some early feedback. >> >> Regards, Matt >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Brent.Rowsell at windriver.com Thu Dec 20 15:50:43 2018 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Thu, 20 Dec 2018 15:50:43 +0000 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: References: <3E8825BD-709B-45BD-8890-717740295A16@windriver.com> <7a4dd372-c5a0-1a44-f8c6-49543e6f9977@linux.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB36097F@ALA-MBD.corp.ad.wrs.com> Saul, See inline, Brent -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Thursday, December 20, 2018 10:31 AM To: starlingx-discuss at lists.starlingx.io; Peters, Matt Subject: Re: [Starlingx-discuss] Deployment Improvements Proposal bump On 12/14/18 10:40 AM, Saul Wold wrote: > > See more inline > > On 12/14/18 6:43 AM, Peters, Matt wrote: >> See inline. >> >> *From: *"Wang, Yi C" >> *Date: *Friday, December 14, 2018 at 3:53 AM >> *To: *"Peters, Matt" >> *Cc: *"starlingx-discuss at lists.starlingx.io" >> >> *Subject: *RE: Deployment Improvements Proposal >> >> Hi Matt, >> >> I just went through your slides. And I have a few questions. I >> appreciate if you can share more information about your proposal. >> Many thanks! >> >> 1. We know config_controller will do many things, like bootstrap >> configuration and controller configuration together with required >> hieradata generation. All the jobs of config_controller will be   >> taken over by Ansible, or just part of them? >> >> /MP> Yes most of these tasks will be handled by the Ansible playbook. >> However, much of the existing capabilities may be leveraged in the >> implementation to avoid re-writing everything.  The details will be >> outlined in the forthcoming spec./ >> > We will look forward to the coming spec(s). > > Will you be addressing how to handle different OS setup?  Ie will this > move some of the existing kickstart related configuration into the > Ansible playbook?  I am just starting to look at Anisble, so I am not > sure how much early system configuration it can take over from > kickstart type of scripting. > > This is one of the challenges with supporting multiple os > distributions, not just the build side, but the installation and configuration. > [BR] As we discussed a couple of days ago, this does not replace the need for kick starts. >> 2. Does WindRiver has plan to replace Puppet with Ansible for all >> configuration jobs in the future? >> >> /MP> There are no specific plans to replace Puppet for all >> configuration management.  
However, there are several features being >> actively developed in StarlingX that will be changing the existing >> Puppet manifests (e.g. OpenStack Containerization)./ >> > I think this has been mentioned already, a concern is that > containerization won't solve all problems, it just moves where and how > the configuration work happens. I think we may still need to address > how containers are handled as we need to address different OSes inside > of the containers. > [BR] This is really outside the scope of this feature and needs to be covered under the umbrella of the multi-os project >> 3. For the first controller, we still need local execution of Ansible >> playbook for initial bootstrap. Is my understanding correct? >> >> /MP> This is one of the main drivers for changing some of the >> existing config_controller and Puppet manifest handling.  The >> operator will have the ability to run the Ansible playbook locally or >> remotely. / >> > > Another question is will this work further reduce the need for the > configuration related packages (again multi-os related)?  Can we move > the system utility configuration into this Deployment work? [BR] What tool are you referring to ? > > Thanks >    Sau! > >> BR. >> >> Yi >> >> *From:*Peters, Matt [mailto:Matt.Peters at windriver.com] >> *Sent:* Friday, December 14, 2018 3:11 AM >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* [Starlingx-discuss] Deployment Improvements Proposal >> >> Hello, >> >> Attached are the slides I presented during the TSC call on Dec 13, >> 2018 for the proposed improvements to the StarlingX initial bootstrap >> and system inventory.  As indicated on the call, a detailed stx-spec >> will follow, but wanted to share the high-level changes being >> proposed before the arrival of the spec to get some early feedback. >> >> Regards, Matt >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Matt.Peters at windriver.com Thu Dec 20 15:51:41 2018 From: Matt.Peters at windriver.com (Peters, Matt) Date: Thu, 20 Dec 2018 15:51:41 +0000 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: References: <3E8825BD-709B-45BD-8890-717740295A16@windriver.com> <7a4dd372-c5a0-1a44-f8c6-49543e6f9977@linux.intel.com> Message-ID: Hi Saul, Sorry, I missed this somehow. See inline. On 2018-12-20, 10:31 AM, "Saul Wold" wrote: bump On 12/14/18 10:40 AM, Saul Wold wrote: > > See more inline > > On 12/14/18 6:43 AM, Peters, Matt wrote: >> See inline. >> >> *From: *"Wang, Yi C" >> *Date: *Friday, December 14, 2018 at 3:53 AM >> *To: *"Peters, Matt" >> *Cc: *"starlingx-discuss at lists.starlingx.io" >> >> *Subject: *RE: Deployment Improvements Proposal >> >> Hi Matt, >> >> I just went through your slides. And I have a few questions. I >> appreciate if you can share more information about your proposal. Many >> thanks! >> >> 1. We know config_controller will do many things, like bootstrap >> configuration and controller configuration together with required >> hieradata generation. 
All the jobs of config_controller will be taken >> over by Ansible, or just part of them? >> >> /MP> Yes most of these tasks will be handled by the Ansible playbook. >> However, much of the existing capabilities may be leveraged in the >> implementation to avoid re-writing everything. The details will be >> outlined in the forthcoming spec./ >> > We will look forward to the coming spec(s). > > Will you be addressing how to handle different OS setup? Ie will this > move some of the existing kickstart related configuration into the > Ansible playbook? I am just starting to look at Anisble, so I am not > sure how much early system configuration it can take over from kickstart > type of scripting. > > This is one of the challenges with supporting multiple os distributions, > not just the build side, but the installation and configuration. > MP> The current scope is targeting the config_controller logic, so should not be impacting the current kickstart scripts. Incrementally, if it makes sense to move some of the kickstart logic to the Playbook, that can be considered. I would also imagine that some of the kickstart logic may need to be moved to Puppet since that is not being replaced by this proposal. >> 2. Does WindRiver has plan to replace Puppet with Ansible for all >> configuration jobs in the future? >> >> /MP> There are no specific plans to replace Puppet for all >> configuration management. However, there are several features being >> actively developed in StarlingX that will be changing the existing >> Puppet manifests (e.g. OpenStack Containerization)./ >> > I think this has been mentioned already, a concern is that > containerization won't solve all problems, it just moves where and how > the configuration work happens. I think we may still need to address how > containers are handled as we need to address different OSes inside of > the containers. MP> Agreed it doesn't solve it, but it does change how the configuration data is supplied. The containerized service configuration is supplied via Helm overrides (or K8S configmaps), I was just calling out that some of the existing Puppet manifests will be removed as part of the OpenStack containerization features. > >> 3. For the first controller, we still need local execution of Ansible >> playbook for initial bootstrap. Is my understanding correct? >> >> /MP> This is one of the main drivers for changing some of the existing >> config_controller and Puppet manifest handling. The operator will >> have the ability to run the Ansible playbook locally or remotely. / >> > > Another question is will this work further reduce the need for the > configuration related packages (again multi-os related)? Can we move > the system utility configuration into this Deployment work? MP> I'm not familiar with the details of each of the packages. I think this would be out of scope for the current proposed changes. However, I think they could be scrubbed to see if anything could be moved to either Puppet or Ansible depending on the phase of the deployment. > > Thanks > Sau! > >> BR. >> >> Yi >> >> *From:*Peters, Matt [mailto:Matt.Peters at windriver.com] >> *Sent:* Friday, December 14, 2018 3:11 AM >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* [Starlingx-discuss] Deployment Improvements Proposal >> >> Hello, >> >> Attached are the slides I presented during the TSC call on Dec 13, >> 2018 for the proposed improvements to the StarlingX initial bootstrap >> and system inventory. 
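Since the answer above only confirms that the bootstrap playbook will support both local and remote execution, here is a minimal sketch of what those two invocations could look like with stock Ansible. The playbook name (bootstrap.yml), the host name (controller-0) and the user (sysadmin) are illustrative assumptions, not names taken from the forthcoming spec:

# Hypothetical invocations -- playbook, host and user names are placeholders.
# Local run, executed on the controller itself (no SSH involved):
ansible-playbook -i localhost, -c local bootstrap.yml

# Remote run, executed from a separate deployment host over SSH:
ansible-playbook -i controller-0, -u sysadmin bootstrap.yml

The trailing comma in the -i argument is the standard Ansible idiom for an inline, single-host inventory, which is what makes the same playbook usable both ways without a separate inventory file.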
As indicated on the call, a detailed stx-spec
>> will follow, but wanted to share the high-level changes being proposed
>> before the arrival of the spec to get some early feedback.
>>
>> Regards, Matt
>>
>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From scott.little at windriver.com Thu Dec 20 16:13:15 2018
From: scott.little at windriver.com (Scott Little)
Date: Thu, 20 Dec 2018 11:13:15 -0500
Subject: [Starlingx-discuss] Release notes/change log creation script
In-Reply-To:
References: <64D76F71-F1B2-434F-9EA8-5F8066D26647@intel.com>
Message-ID: <9ecaf45c-46f0-aa0a-9539-f5676736258b@windriver.com>

We have been generating a change log for the last few successful builds.

e.g.

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20181214T060000Z/outputs/CHANGELOG.txt

Format is
- one change per line
- tab delimited fields
- Fields:
      <path to root of git>
      <sha>
      <commit date>
      <author name>
      <author email>
      <Title>

./cgcs-root/stx/git/distributedcloud d1e5526d8468b06cc62b81bf2158a565cac94a66 2018-12-12 20:46:58 +0000 Alex Kozyrev alex.kozyrev at windriver.com Add Barbican user to the list of subcloud users.
./cgcs-root/stx/stx-config 9453402511c7858bb1cf52428808b5689a3d95cd 2018-12-13 21:59:38 +0000 Gerrit Code Review review at openstack.org Merge "Enhancements for the rbd-provisioner helm chart"
./cgcs-root/stx/stx-config e8203ff83e733f4b934f3c6d03c8935386003310 2018-12-13 19:50:26 +0000 Gerrit Code Review review at openstack.org Merge "Don't delete nova-ks-endpoint job in nova chart"
./cgcs-root/stx/stx-config 6b2be98f0ddbbee4c507f01d509d5e42153ee606 2018-12-13 19:13:15 +0000 Irina Mihai irina.mihai at windriver.com Enhancements for the rbd-provisioner helm chart
./cgcs-root/stx/stx-config 2f3e6d9915361c1ca065491418510a7680391023 2018-12-13 10:36:07 -0500 Angie Wang angie.wang at windriver.com Don't delete nova-ks-endpoint job in nova chart
./cgcs-root/stx/stx-fault 41f6f2e675f1a7e07b107296eec1e616b6945461 2018-12-13 18:27:59 +0000 Gerrit Code Review review at openstack.org Merge "Standardize install target for fm-common."
./cgcs-root/stx/stx-fault f6d95a0a9d367284c491ae74b1f2ec1c3c531309 2018-12-05 15:06:56 -0600 Erich Cordoba erich.cordoba.malibran at intel.com Standardize install target for fm-common.
./cgcs-root/stx/stx-integ d320036b0be13833bd4dfb8ff9e0f71c1e77473e 2018-12-13 22:52:55 +0000 Gerrit Code Review review at openstack.org Merge "fix tpm certificate handling"
./cgcs-root/stx/stx-integ 14f168ac4b4fdfcb45bdff7e0d82715ec9a2c589 2018-12-13 20:22:44 +0000 Gerrit Code Review review at openstack.org Merge "Fix collectd Memory plugin Strict Mode learning"
./cgcs-root/stx/stx-integ 0ec172537192932c11f7a9cdc799fbc7e49a22e1 2018-12-13 09:31:03 -0500 Eric MacDonald eric.macdonald at windriver.com Fix collectd Memory plugin Strict Mode learning
./cgcs-root/stx/stx-integ 81fded989a237a9b8a3b2998684fd9c0c689f077 2018-12-12 14:48:49 -0500 Paul-Emile Element Paul-Emile.Element at windriver.com fix tpm certificate handling
./cgcs-root/stx/stx-nfv 3ce422d5a2d572b639eba52001444ce3d11a9bec 2018-12-13 21:46:00 +0000 Gerrit Code Review review at openstack.org Merge "Allow VIM to manage services independently"
./cgcs-root/stx/stx-nfv b6f7a850592cc3ba90b7a00874e3a629fffee26a 2018-12-13 08:08:13 -0500 Kevin Smith kevin.smith at windriver.com Allow VIM to manage services independently

On 18-12-19 04:31 PM, Young, Ken wrote:
> We haven't gotten to this yet but we will. Let's discuss tomorrow.
>
> /KenY
>
> On 2018-12-19, 4:23 PM, "Victor Rodriguez" <vm.rod25 at gmail.com> wrote:
>
> Ken
>
> Any update on this? Do you still need this script for the CENGN
> release notes/changelog?
>
> Can we discuss this topic at tomorrow's build meeting?
>
> Regards
>
> On Thu, Dec 6, 2018 at 10:56 AM Ponce Castaneda, Guillermo A
> <guillermo.a.ponce.castaneda at intel.com> wrote:
> >
> > Hello everybody,
> >
> > I want to share with you the following script that we use internally to create a Change Log every time we generate a new StarlingX ISO.
> > Here, internally, we have a Jenkins server that creates (or tries to) a new ISO every day, and with this ISO we also create a manifest.xml file. Using another job that is triggered just as the ISO job finishes, we create the Change Log with the following script:
> > https://gist.github.com/gaponcec/99f19e2bc972761e11ccba2260622d10
> >
> > The script has the following requirements: Argparse, gitpython, xmljson, dictdiffer and PTable.
> > It requires two parameters, the old manifest.xml and the new one, and it should be run like this:
> > $ python3 create_change_log.py -o old_manifest.xml -n new_manifest.xml
> >
> > This will give you the change log on stdout.
> >
> > In our Jenkins script we save this to a file and e-mail it to the team afterwards.
> >
> > Please let me know what you all think about it, feedback is really appreciated.
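Before the thread continues, a small aside on consuming the CHANGELOG.txt layout described at the top of this message: a rough sketch, assuming the tab-delimited fields listed there (in practice the first tab-delimited field carries the space-padded repo path plus sha and date), that prints just the repo path and the commit subject:

# Sketch only: pull the repo path and subject out of a nightly CHANGELOG.txt.
curl -s http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20181214T060000Z/outputs/CHANGELOG.txt \
  | awk -F'\t' '{ split($1, head, " "); print head[1], "--", $NF }'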
> > - Guillermo Ponce
> >
> > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From vm.rod25 at gmail.com Thu Dec 20 16:28:28 2018
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Thu, 20 Dec 2018 10:28:28 -0600
Subject: [Starlingx-discuss] Release notes/change log creation script
In-Reply-To: <9ecaf45c-46f0-aa0a-9539-f5676736258b@windriver.com>
References: <64D76F71-F1B2-434F-9EA8-5F8066D26647@intel.com> <CAK5mtewyDKV2ENO5gJv0Fgak9sKhm8RntK5P6WwBiqNNThoDwA@mail.gmail.com> <D9E85E89-9C31-4E1D-BE4C-A15B68016A90@windriver.com> <9ecaf45c-46f0-aa0a-9539-f5676736258b@windriver.com>
Message-ID: <CAK5mtezBGntPAqd73VG1oiC5D-bnyD4Tsy1uwLuKRR=n0zZYpg@mail.gmail.com>

On Thu, Dec 20, 2018 at 10:15 AM Scott Little <scott.little at windriver.com> wrote:
>
> We have been generating a change log for the last few successful builds.
>
> e.g.
>
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20181214T060000Z/outputs/CHANGELOG.txt
>

Do you have the link to the script that generates this? Can we contribute?

> Format is
> - one change per line
> - tab delimited fields
> - Fields:
>       <path to root of git>
>       <sha>
>       <commit date>
>       <author name>
>       <author email>
>       <Title>
>
> ./cgcs-root/stx/git/distributedcloud d1e5526d8468b06cc62b81bf2158a565cac94a66 2018-12-12 20:46:58 +0000 Alex Kozyrev alex.kozyrev at windriver.com Add Barbican user to the list of subcloud users.
> ./cgcs-root/stx/stx-config 9453402511c7858bb1cf52428808b5689a3d95cd 2018-12-13 21:59:38 +0000 Gerrit Code Review review at openstack.org Merge "Enhancements for the rbd-provisioner helm chart"
> ./cgcs-root/stx/stx-config e8203ff83e733f4b934f3c6d03c8935386003310 2018-12-13 19:50:26 +0000 Gerrit Code Review review at openstack.org Merge "Don't delete nova-ks-endpoint job in nova chart"
> ./cgcs-root/stx/stx-config 6b2be98f0ddbbee4c507f01d509d5e42153ee606 2018-12-13 19:13:15 +0000 Irina Mihai irina.mihai at windriver.com Enhancements for the rbd-provisioner helm chart
> ./cgcs-root/stx/stx-config 2f3e6d9915361c1ca065491418510a7680391023 2018-12-13 10:36:07 -0500 Angie Wang angie.wang at windriver.com Don't delete nova-ks-endpoint job in nova chart
> ./cgcs-root/stx/stx-fault 41f6f2e675f1a7e07b107296eec1e616b6945461 2018-12-13 18:27:59 +0000 Gerrit Code Review review at openstack.org Merge "Standardize install target for fm-common."
> ./cgcs-root/stx/stx-fault f6d95a0a9d367284c491ae74b1f2ec1c3c531309 2018-12-05 15:06:56 -0600 Erich Cordoba erich.cordoba.malibran at intel.com Standardize install target for fm-common.
> ./cgcs-root/stx/stx-integ d320036b0be13833bd4dfb8ff9e0f71c1e77473e 2018-12-13 22:52:55 +0000 Gerrit Code Review review at openstack.org Merge "fix tpm certificate handling" > ./cgcs-root/stx/stx-integ 14f168ac4b4fdfcb45bdff7e0d82715ec9a2c589 2018-12-13 20:22:44 +0000 Gerrit Code Review review at openstack.org Merge "Fix collectd Memory plugin Strict Mode learning" > ./cgcs-root/stx/stx-integ 0ec172537192932c11f7a9cdc799fbc7e49a22e1 2018-12-13 09:31:03 -0500 Eric MacDonald eric.macdonald at windriver.com Fix collectd Memory plugin Strict Mode learning > ./cgcs-root/stx/stx-integ 81fded989a237a9b8a3b2998684fd9c0c689f077 2018-12-12 14:48:49 -0500 Paul-Emile Element Paul-Emile.Element at windriver.com fix tpm certificate handling > ./cgcs-root/stx/stx-nfv 3ce422d5a2d572b639eba52001444ce3d11a9bec 2018-12-13 21:46:00 +0000 Gerrit Code Review review at openstack.org Merge "Allow VIM to manage services independently" > ./cgcs-root/stx/stx-nfv b6f7a850592cc3ba90b7a00874e3a629fffee26a 2018-12-13 08:08:13 -0500 Kevin Smith kevin.smith at windriver.com Allow VIM to manage services independently > > > > On 18-12-19 04:31 PM, Young, Ken wrote: > > We haven't gotten to this yet but we will. Let's discuss tomorrow. > > > > /KenY > > > > On 2018-12-19, 4:23 PM, "Victor Rodriguez" <vm.rod25 at gmail.com> wrote: > > > > Ken > > > > Any update on this? Do you still need this script for the CENGN > > release notes/changelog? > > > > Can we discuss this topic on tomorrow build meeting? > > > > Regards > > > > On Thu, Dec 6, 2018 at 10:56 AM Ponce Castaneda, Guillermo A > > <guillermo.a.ponce.castaneda at intel.com> wrote: > > > > > > Hello everybody, > > > > > > I want to share with you the following script that we use internally to create a Change Log everytime we generate a new StarlingX ISO. > > > Here, internally, we have a Jenkins server that creates (or tries to) a new ISO every day, and with this ISO we also create a manifest.xml file, and by using another job that is triggered just as the ISO Job finishes we create the Change Log by using the following script: > > > https://gist.github.com/gaponcec/99f19e2bc972761e11ccba2260622d10 > > > > > > The script has the following requirements: Argparse, gitpython, xmljson, dictdiffer and PTable. > > > It requires two parameters, the old manifest.xml and the new one and it should be run like this: > > > $ python3 create_change_log.py-o old_manifest.xml -n new_manifest.xml > > > > > > This will give you the change log on stdout. > > > > > > On our Jenkins script we save a file with this and e-mail it to the team afterwards. > > > > > > Please let me know what you all think about, feedback is really appreciated. 
> > - Guillermo Ponce
> >
> > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From scott.little at windriver.com Thu Dec 20 17:01:56 2018
From: scott.little at windriver.com (Scott Little)
Date: Thu, 20 Dec 2018 12:01:56 -0500
Subject: [Starlingx-discuss] Release notes/change log creation script
In-Reply-To: <CAK5mtezBGntPAqd73VG1oiC5D-bnyD4Tsy1uwLuKRR=n0zZYpg@mail.gmail.com>
References: <64D76F71-F1B2-434F-9EA8-5F8066D26647@intel.com> <CAK5mtewyDKV2ENO5gJv0Fgak9sKhm8RntK5P6WwBiqNNThoDwA@mail.gmail.com> <D9E85E89-9C31-4E1D-BE4C-A15B68016A90@windriver.com> <9ecaf45c-46f0-aa0a-9539-f5676736258b@windriver.com> <CAK5mtezBGntPAqd73VG1oiC5D-bnyD4Tsy1uwLuKRR=n0zZYpg@mail.gmail.com>
Message-ID: <edc9dc7c-17a3-dba4-f6c0-f928fd92b42d@windriver.com>

Currently, I have it as a jenkins script on CENGN.  We can discuss the wisdom of placing it in stx-tools at the next build meeting.  I'm a little concerned about blindly pulling scripts off a public git server and running them in a bot like jenkins, even if it's a repo we theoretically control.

---------------------- < snip> ---------------------

MY_REPO_ROOT=/localdisk/designer/$USER/$BRANCH
MY_WORKSPACE=/localdisk/loadbuild/$USER/$BRANCH/$TIMESTAMP

cd $MY_REPO_ROOT

# Pass 1: emit the changes for every git tree under the workspace.
for e in $(find . -type d -name .git)
do
  pushd $e/..
  f=$(/usr/bin/dirname $e)
  echo "$f"
  g=$(printf "%-48s" $f)
  # Look up the last commit recorded for this git by the previous build.
  c=$(grep $(echo $f | sed 's:/:[/]:g' | sed 's:$:[^a-zA-Z0-9/_-]:' | sed 's:^[.][.]:^[.][.]:' | sed 's:^[.]:^[.]:') $MY_WORKSPACE/../LAST_COMMITS | awk ' { print $2 } ')
  # Fallback log: everything committed since yesterday.
  git log --pretty=tformat:"$g  %H  %ci%x09%cn%x09%ce%x09%s" --date=iso --after $(date --date='yesterday' +%Y-%m-%d) > $MY_WORKSPACE/CHANGELOG.PART
  if [ "x$c" != "x" ] ; then
    # The previous build's head is known: log everything newer than it.
    git log --pretty=tformat:"$g  %H  %ci%x09%cn%x09%ce%x09%s" $c.. >> $MY_WORKSPACE/CHANGELOG || true
  else
    # No prior record for this git: fall back to the date-based log.
    cat $MY_WORKSPACE/CHANGELOG.PART >> $MY_WORKSPACE/CHANGELOG
  fi
  popd
done
\rm $MY_WORKSPACE/CHANGELOG.PART

# Pass 2: record the current head of every git for the next build to diff against.
for e in $(find . -type d -name .git)
do
  pushd $e/..
  f=$(/usr/bin/dirname $e)
  echo "$f"
  g=`printf "%-48s" $f`
  git log --pretty=tformat:"$g %H" -n 1 >> $MY_WORKSPACE/LAST_COMMITS
  popd
done

\cp $MY_WORKSPACE/LAST_COMMITS $MY_WORKSPACE/../LAST_COMMITS

On 18-12-20 11:28 AM, Victor Rodriguez wrote:
> On Thu, Dec 20, 2018 at 10:15 AM Scott Little
> <scott.little at windriver.com> wrote:
>> We have been generating a change log for the last few successful builds.
>>
>> e.g.
>>
>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20181214T060000Z/outputs/CHANGELOG.txt
>>
> Do you have the link to the script that generates this? Can we contribute?
>> Format is
>> - one change per line
>> - tab delimited fields
>> - Fields:
>>       <path to root of git>
>>       <sha>
>>       <commit date>
>>       <author name>
>>       <author email>
>>       <Title>
>>
>> ./cgcs-root/stx/git/distributedcloud d1e5526d8468b06cc62b81bf2158a565cac94a66 2018-12-12 20:46:58 +0000 Alex Kozyrev alex.kozyrev at windriver.com Add Barbican user to the list of subcloud users.
>> ./cgcs-root/stx/stx-config 9453402511c7858bb1cf52428808b5689a3d95cd 2018-12-13 21:59:38 +0000 Gerrit Code Review review at openstack.org Merge "Enhancements for the rbd-provisioner helm chart" >> ./cgcs-root/stx/stx-config e8203ff83e733f4b934f3c6d03c8935386003310 2018-12-13 19:50:26 +0000 Gerrit Code Review review at openstack.org Merge "Don't delete nova-ks-endpoint job in nova chart" >> ./cgcs-root/stx/stx-config 6b2be98f0ddbbee4c507f01d509d5e42153ee606 2018-12-13 19:13:15 +0000 Irina Mihai irina.mihai at windriver.com Enhancements for the rbd-provisioner helm chart >> ./cgcs-root/stx/stx-config 2f3e6d9915361c1ca065491418510a7680391023 2018-12-13 10:36:07 -0500 Angie Wang angie.wang at windriver.com Don't delete nova-ks-endpoint job in nova chart >> ./cgcs-root/stx/stx-fault 41f6f2e675f1a7e07b107296eec1e616b6945461 2018-12-13 18:27:59 +0000 Gerrit Code Review review at openstack.org Merge "Standardize install target for fm-common." >> ./cgcs-root/stx/stx-fault f6d95a0a9d367284c491ae74b1f2ec1c3c531309 2018-12-05 15:06:56 -0600 Erich Cordoba erich.cordoba.malibran at intel.com Standardize install target for fm-common. >> ./cgcs-root/stx/stx-integ d320036b0be13833bd4dfb8ff9e0f71c1e77473e 2018-12-13 22:52:55 +0000 Gerrit Code Review review at openstack.org Merge "fix tpm certificate handling" >> ./cgcs-root/stx/stx-integ 14f168ac4b4fdfcb45bdff7e0d82715ec9a2c589 2018-12-13 20:22:44 +0000 Gerrit Code Review review at openstack.org Merge "Fix collectd Memory plugin Strict Mode learning" >> ./cgcs-root/stx/stx-integ 0ec172537192932c11f7a9cdc799fbc7e49a22e1 2018-12-13 09:31:03 -0500 Eric MacDonald eric.macdonald at windriver.com Fix collectd Memory plugin Strict Mode learning >> ./cgcs-root/stx/stx-integ 81fded989a237a9b8a3b2998684fd9c0c689f077 2018-12-12 14:48:49 -0500 Paul-Emile Element Paul-Emile.Element at windriver.com fix tpm certificate handling >> ./cgcs-root/stx/stx-nfv 3ce422d5a2d572b639eba52001444ce3d11a9bec 2018-12-13 21:46:00 +0000 Gerrit Code Review review at openstack.org Merge "Allow VIM to manage services independently" >> ./cgcs-root/stx/stx-nfv b6f7a850592cc3ba90b7a00874e3a629fffee26a 2018-12-13 08:08:13 -0500 Kevin Smith kevin.smith at windriver.com Allow VIM to manage services independently >> >> >> >> On 18-12-19 04:31 PM, Young, Ken wrote: >>> We haven't gotten to this yet but we will. Let's discuss tomorrow. >>> >>> /KenY >>> >>> On 2018-12-19, 4:23 PM, "Victor Rodriguez" <vm.rod25 at gmail.com> wrote: >>> >>> Ken >>> >>> Any update on this? Do you still need this script for the CENGN >>> release notes/changelog? >>> >>> Can we discuss this topic on tomorrow build meeting? >>> >>> Regards >>> >>> On Thu, Dec 6, 2018 at 10:56 AM Ponce Castaneda, Guillermo A >>> <guillermo.a.ponce.castaneda at intel.com> wrote: >>> > >>> > Hello everybody, >>> > >>> > I want to share with you the following script that we use internally to create a Change Log everytime we generate a new StarlingX ISO. >>> > Here, internally, we have a Jenkins server that creates (or tries to) a new ISO every day, and with this ISO we also create a manifest.xml file, and by using another job that is triggered just as the ISO Job finishes we create the Change Log by using the following script: >>> > https://gist.github.com/gaponcec/99f19e2bc972761e11ccba2260622d10 >>> > >>> > The script has the following requirements: Argparse, gitpython, xmljson, dictdiffer and PTable. 
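For anyone wanting to try the manifest-diff script quoted in this thread, a sketch of the setup and invocation follows; the package names are taken verbatim from the requirements list above (Argparse ships with Python 3, so only the other four need installing), and note the space between the script name and -o, which is missing in the quoted command just below:

# Sketch: install the stated dependencies and diff two build manifests.
pip3 install --user gitpython xmljson dictdiffer PTable
python3 create_change_log.py -o old_manifest.xml -n new_manifest.xml > changelog.txt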
>>> > It requires two parameters, the old manifest.xml and the new one, and it should be run like this:
>>> > $ python3 create_change_log.py -o old_manifest.xml -n new_manifest.xml
>>> >
>>> > This will give you the change log on stdout.
>>> >
>>> > In our Jenkins script we save this to a file and e-mail it to the team afterwards.
>>> >
>>> > Please let me know what you all think about it, feedback is really appreciated.
>>> >
>>> > - Guillermo Ponce
>>> >
>>> > _______________________________________________
>>> > Starlingx-discuss mailing list
>>> > Starlingx-discuss at lists.starlingx.io
>>> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>>>
>>>
>>> _______________________________________________
>>> Starlingx-discuss mailing list
>>> Starlingx-discuss at lists.starlingx.io
>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>>
>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From pongsawat at sysware.asia Thu Dec 20 17:41:11 2018
From: pongsawat at sysware.asia (Pongsawat Payungwong)
Date: Fri, 21 Dec 2018 00:41:11 +0700
Subject: [Starlingx-discuss] failed nova enable --stx-sx
Message-ID: <CAMsuUfjz5xUXxZkD1D++jnFyaA8RV=AObR0DppABHNXpDd-z2w@mail.gmail.com>

Hi All,

We have run into problems while testing StarlingX Simplex (SX) on physical hardware and need some advice on troubleshooting.

Image: http://mirror.starlingx.cengn.ca/ 20181216T060000Z
- config_controller -- passed
- controller-0 provisioning -- passed, no errors
- when the system reboots, nova-compute never goes "enabled"

We need comments on the following:
- For SX, can StarlingX support a single disk (with partitions)? All of the failed hardware has 1 or 2 physical disks.
- Any plan for low-profile hardware (Atom, Xeon-D)? --container
- As tested, the StarlingX platform needs 4 cores and about 14 GB of RAM... any way to tune this down for small edge hardware?
- Is it mandatory for NICs to support DPDK (provider network)?
- Can StarlingX connect to an external log collector, such as ELK, Splunk, Grafana or Nagios?

We have prepared hardware for an engineering StarlingX lab environment.

Testing hardware: https://github.com/sysware-asia/stx-sx-testing
1. system-csw-3040 x 1 unit --- failed, single disk using partitions, DPDK okay
2. system-hpe x 4 units -- failed, single disk using partitions, no DPDK
3. system-vpe x 1 unit -- failed, single disk using partitions, DPDK okay
4. system-smc x 4 units -- PASSED / all functional works... 3 physical disks ... plan to test controller-storage (tested 4 units, all work)

Spec.
1. System-CSW-3040
   CPU = 1 x Intel(R) Xeon(R) / 8 cores
   RAM = 32 GB
   Storage: 1x HDD (1TB)
   RAID card: NO
   NIC: 6x 1GE (i210)
2. system-hp
   CPU = 2 x Intel(R) Xeon(R) Gold 6130 CPU @ 2.10GHz / 16 cores
   RAM = 64 GB
   Storage: 2 x HDD 600 GB (10K)
   RAID card: YES, RAID1
   NIC: 4 x 1GE (BCM5719), 2 x 10GE (BCM57810) -- DPDK not supported
3. SWTH-CPE VPE
   CPU = 1 x Intel(R) Xeon(R) D-2187NT CPU @ 2.00GHz / 16 cores
   RAM = 64 GB
   Storage: 1x SSD (1.2TB)
   RAID card: NO
   NIC: 4 x 2 (i350), 2 x 10GE (X7222)
4. system-smc
   This StarlingX SX is running properly.
   CPU = 2 x Intel(R) Xeon(R) CPU E5-2670 0 @ 2.60GHz / 8 cores
   RAM = 256GB
   Storage: 1x SSD (400GB), 3x HDD 1TB (7200)
   RAID card: NO
   NIC: 2 x 2 (i350), 2 x 10GE (82599ES)

Any suggestions are welcome.

--
Warmest Regards,
Pongsawat Payungwong
Sysware Technology

--
This message contains confidential information and is intended only for the individual named.
If you are not the named addressee you should not disseminate, distribute or copy this e-mail. Please notify the sender immediately by e-mail if you have received this e-mail by mistake and delete this e-mail from your system. E-mail transmission cannot be guaranteed to be secure or error-free or virus-free as information could be intercepted, corrupted, lost, destroyed, arrive late or incomplete, or contain viruses. The sender therefore does not accept liability for any errors or omissions in the contents of this message, which arise as a result of e-mail transmission. If verification is required please request a hard-copy version. Sysware (Thailand) Co., Ltd., Bangkok, Thailand. www.sysware.asia <http://www.sysware.asia>
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20181221/beddb952/attachment-0001.html>

From ildiko.vancsa at gmail.com Thu Dec 20 19:08:32 2018
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Thu, 20 Dec 2018 20:08:32 +0100
Subject: [Starlingx-discuss] Edge group use cases mapping to MVP architectures - FEEDBACK NEEDED
Message-ID: <9940349E-5364-40A2-A391-8CF64841EC6A@gmail.com>

Hi,

Hereby I would like to forward you the mapping of use cases to MVP architectures that Gergely Csatari is working on. Please provide feedback on this work item to make sure we are considering all the aspects.

Thanks and Best Regards,
Gergely and Ildikó

In the Edge Computing Group we are collecting use cases for the edge cloud infrastructure. They are recorded in our wiki [1] and they describe high-level scenarios where an edge cloud infrastructure would be needed.

During the second Denver PTG discussions we drafted two MVP architectures that we could build from the current functionality of OpenStack with some slight modifications [2]. These are based on the work of James and his team from Oath. We differentiate between distributed [3] and centralized [4] control plane architecture scenarios.

In one of the Berlin Forum sessions we were asked to map the MVP architecture scenarios to the use cases, so I made an initial mapping and am now looking for feedback. This mapping only means that the listed use case can be implemented using the given MVP architecture scenario. It should be noted that none of the MVP architecture scenarios provides a solution for edge cloud infrastructure upgrade or centralized management.
Here I list the use cases and the mapped architecture scenarios:

* Mobile service provider 5G/4G virtual RAN deployment and Edge Cloud B2B2X [5]
  Both distributed [3] and centralized [4]
* Universal customer premise equipment (uCPE) for Enterprise Network Services [6]
  Both distributed [3] and centralized [4]
* Unmanned Aircraft Systems (Drones) [7]
  None - assuming that this Use Case requires a Small Edge instance which can work in case of a network partitioning event
* Cloud Storage Gateway - Storage at the Edge [8]
  None - assuming that this Use Case requires a Small Edge instance which can work in case of a network partitioning event
* Open Caching - stream/store data at the edge [9]
  Both distributed [3] and centralized [4]
* Smart City as Software-Defined closed-loop system [10]
  The use case is not complete enough to figure out
* Augmented Reality -- Sony Gaming Network [11]
  None - assuming that this Use Case requires a Small Edge instance which can work in case of a network partitioning event
* Analytics/control at the edge [12]
  The use case is not complete enough to figure out
* Manage retail chains - chick-fil-a [13]
  The use case is not complete enough to figure out
  At this moment chick-fil-a uses a different Kubernetes cluster in every edge location and they manage them using Git [14]
* Smart Home [15]
  None - assuming that this Use Case requires a Small Edge instance which can work in case of a network partitioning event
* Data Collection - Smart cooler/cold chain tracking [16]
  None - assuming that this Use Case requires a Small Edge instance which can work in case of a network partitioning event
* VPN Gateway Service Delivery [17]
  The use case is not complete enough to figure out

[1]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases
[2]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures
[3]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures#Distributed_Control_Plane_Scenario
[4]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures#Centralized_Control_Plane_Scenario
[5]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Mobile_service_provider_5G.2F4G_virtual_RAN_deployment_and_Edge_Cloud_B2B2X.
[6]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Universal_customer_premise_equipment_.28uCPE.29_for_Enterprise_Network_Services
[7]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Unmanned_Aircraft_Systems_.28Drones.29
[8]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Cloud_Storage_Gateway_-_Storage_at_the_Edge
[9]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Open_Caching_-_stream.2Fstore_data_at_the_edge
[10]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Smart_City_as_Software-Defined_closed-loop_system
[11]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Augmented_Reality_--_Sony_Gaming_Network
[12]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Analytics.2Fcontrol_at_the_edge
[13]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Manage_retail_chains_-_chick-fil-a
[14]: https://schd.ws/hosted_files/kccna18/34/GitOps.pdf
[15]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Smart_Home
[16]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Data_Collection_-_Smart_cooler.2Fcold_chain_tracking
[17]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#VPN_Gateway_Service_Delivery

_______________________________________________
Edge-computing mailing list
Edge-computing at lists.openstack.org
http://lists.openstack.org/cgi-bin/mailman/listinfo/edge-computing

From vm.rod25 at gmail.com Thu Dec 20 20:45:39 2018
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Thu, 20 Dec 2018 14:45:39 -0600
Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security
Message-ID: <CAK5mtezidsKF61Mpn3YcKbYgB1U0tqGmFC-_Y0AV-fSVRZR7Vg@mail.gmail.com>

Hi StarlingX community

We can all agree that security is an important feature to be taken into consideration in any SW project. With the aim of improving the security of the StarlingX project, we have taken on the task of proposing compiler flags that prevent and detect some security holes, especially buffer overflows that could lead to ROP attacks.

The list of flags we are proposing is:

Stack-based buffer overrun detection: CFLAGS="-fstack-protector-strong"
Fortify source: CFLAGS="-O2 -D_FORTIFY_SOURCE=2"
Format string vulnerabilities: CFLAGS="-Wformat -Wformat-security"
Stack execution protection: LDFLAGS="-z noexecstack"
Data relocation and protection (RELRO): LDFLAGS="-z relro -z now"

These are being analyzed in the following Gerrit reviews (thanks a lot for all the good feedback):

https://review.openstack.org/#/c/623608/
https://review.openstack.org/#/c/623603/
https://review.openstack.org/#/c/623601/
https://review.openstack.org/#/c/623599/

As requested in the Gerrit reviews, we first need to understand what these compiler flags do and what impact they have on the functionality and performance of the project. This is a preliminary report; we will follow up with a test plan for functional & performance testing of the services as a next step.
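To make the flag list above concrete, here is a minimal sketch of compiling a placeholder source file (demo.c, any test program) with all five proposed options at once, and spot-checking that the hardening actually took effect. When invoking gcc directly, the LDFLAGS entries become -Wl,... arguments:

# Build with the proposed hardening flags; demo.c is a placeholder test file.
gcc -O2 -D_FORTIFY_SOURCE=2 -fstack-protector-strong \
    -Wformat -Wformat-security \
    -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now \
    demo.c -o demo

# Spot-check the resulting binary:
readelf -d  demo | grep -E 'BIND_NOW|FLAGS'       # full RELRO binds symbols at load time
readelf -lW demo | grep -E 'GNU_RELRO|GNU_STACK'  # GNU_STACK segment must not carry the E flag
nm demo | grep __stack_chk_fail                   # canary checks from -fstack-protector-strong

Note that _FORTIFY_SOURCE only has an effect at -O1 or higher, which is why the fortify entry above pairs it with -O2.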
This report includes:

* A detailed description of what each compiler flag does
* A code example that shows how it works to prevent attacks
* If there is a change in the binary, a microbenchmark that shows how the flag impacts performance

https://github.com/VictorRodriguez/hobbies/tree/master/c_programing_exercises/cflags_security

As a result of the microbenchmark, the performance impact is not relevant (less than 1%) using an Ubuntu x86 system (GCC 5) (more details on the HW and SW specification upon request).

The areas of the code where we are suggesting the flags in the patches are:

* stx-ha
* stx-metal
* stx-nfv
* stx-fault

We made sure that these flags do not break the following areas after being applied:

* Build process of the image
* Sanity test cases after the image is created

(Ada can give more details on the sanity report of the image generated with these flags.)

If running the sanity tests is not enough to prove that a change in compiler flags does not affect functionality, please give us the right path to follow.

As mentioned before, this is a preliminary report, and we will follow up with a test plan for functional & performance testing of the services as a next step.

Hope this email helps to clarify some questions related to the flags and start the follow-up discussion.

Regards

Victor Rodriguez

From vm.rod25 at gmail.com Thu Dec 20 21:16:19 2018
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Thu, 20 Dec 2018 15:16:19 -0600
Subject: [Starlingx-discuss] Release notes/change log creation script
In-Reply-To: <edc9dc7c-17a3-dba4-f6c0-f928fd92b42d@windriver.com>
References: <64D76F71-F1B2-434F-9EA8-5F8066D26647@intel.com> <CAK5mtewyDKV2ENO5gJv0Fgak9sKhm8RntK5P6WwBiqNNThoDwA@mail.gmail.com> <D9E85E89-9C31-4E1D-BE4C-A15B68016A90@windriver.com> <9ecaf45c-46f0-aa0a-9539-f5676736258b@windriver.com> <CAK5mtezBGntPAqd73VG1oiC5D-bnyD4Tsy1uwLuKRR=n0zZYpg@mail.gmail.com> <edc9dc7c-17a3-dba4-f6c0-f928fd92b42d@windriver.com>
Message-ID: <CAK5mtey5NSAonn61eiUuSB09+0M_mQNdxL5qbPa=nWZ+dzb-kw@mail.gmail.com>

On Thu, Dec 20, 2018 at 11:03 AM Scott Little <scott.little at windriver.com> wrote:
>
> Currently, I have it as a jenkins script on CENGN.  We can discuss the
> wisdom of placing it in stx-tools at the next build meeting.

Sounds good

> I'm a little concerned about blindly pulling scripts off a public git server
> and running them in a bot like jenkins, even if it's a repo we
> theoretically control.

At least that way we would have a place to send PRs to improve the output of the change log.

This is the changelog we generate with the other script:
https://hastebin.com/wodibelano.sql

Open for community consideration.

regards

Here is an example of wh
> ---------------------- < snip> ---------------------
>
> MY_REPO_ROOT=/localdisk/designer/$USER/$BRANCH
> MY_WORKSPACE=/localdisk/loadbuild/$USER/$BRANCH/$TIMESTAMP
>
> cd $MY_REPO_ROOT
>
> for e in $(find . -type d -name .git)
> do
>  pushd $e/..
>  f=$(/usr/bin/dirname $e)
>  echo "$f"
>  g=$(printf "%-48s" $f)
>  c=$(grep $(echo $f | sed 's:/:[/]:g' | sed 's:$:[^a-zA-Z0-9/_-]:' |
> sed 's:^[.][.]:^[.][.]:' | sed 's:^[.]:^[.]:')
> $MY_WORKSPACE/../LAST_COMMITS | awk ' { print $2 } ')
>  git log --pretty=tformat:"$g  %H  %ci%x09%cn%x09%ce%x09%s" --date=iso
> --after $(date --date='yesterday' +%Y-%m-%d) > $MY_WORKSPACE/CHANGELOG.PART
>
>  if [ "x$c" != "x" ] ; then
>    git log --pretty=tformat:"$g  %H  %ci%x09%cn%x09%ce%x09%s" $c..
>> > $MY_WORKSPACE/CHANGELOG || true > else > cat $MY_WORKSPACE/CHANGELOG.PART >> $MY_WORKSPACE/CHANGELOG > fi > popd > done > \rm $MY_WORKSPACE/CHANGELOG.PART > > > for e in $(find . -type d -name .git) > do > pushd $e/.. > f=$(/usr/bin/dirname $e) > echo "$f" > g=`printf "%-48s" $f` > git log --pretty=tformat:"$g %H" -n 1 >> $MY_WORKSPACE/LAST_COMMITS > popd > done > > \cp $MY_WORKSPACE/LAST_COMMITS $MY_WORKSPACE/../LAST_COMMITS > > > > On 18-12-20 11:28 AM, Victor Rodriguez wrote: > > On Thu, Dec 20, 2018 at 10:15 AM Scott Little > > <scott.little at windriver.com> wrote: > >> We have been generating a change log for the last few successful builds. > >> > >> e.g. > >> > >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20181214T060000Z/outputs/CHANGELOG.txt > >> > > Do you have the link to th escript that generates this ? can we contribute ? > >> Format is > >> - one change per line > >> - tab delimited fields > >> - Fields: > >> <path to root of git> > >> <sha> > >> <commit date> > >> <author name> > >> <author email> > >> <Title> > >> > >> ./cgcs-root/stx/git/distributedcloud d1e5526d8468b06cc62b81bf2158a565cac94a66 2018-12-12 20:46:58 +0000 Alex Kozyrev alex.kozyrev at windriver.com Add Barbican user to the list of subcloud users. > >> ./cgcs-root/stx/stx-config 9453402511c7858bb1cf52428808b5689a3d95cd 2018-12-13 21:59:38 +0000 Gerrit Code Review review at openstack.org Merge "Enhancements for the rbd-provisioner helm chart" > >> ./cgcs-root/stx/stx-config e8203ff83e733f4b934f3c6d03c8935386003310 2018-12-13 19:50:26 +0000 Gerrit Code Review review at openstack.org Merge "Don't delete nova-ks-endpoint job in nova chart" > >> ./cgcs-root/stx/stx-config 6b2be98f0ddbbee4c507f01d509d5e42153ee606 2018-12-13 19:13:15 +0000 Irina Mihai irina.mihai at windriver.com Enhancements for the rbd-provisioner helm chart > >> ./cgcs-root/stx/stx-config 2f3e6d9915361c1ca065491418510a7680391023 2018-12-13 10:36:07 -0500 Angie Wang angie.wang at windriver.com Don't delete nova-ks-endpoint job in nova chart > >> ./cgcs-root/stx/stx-fault 41f6f2e675f1a7e07b107296eec1e616b6945461 2018-12-13 18:27:59 +0000 Gerrit Code Review review at openstack.org Merge "Standardize install target for fm-common." > >> ./cgcs-root/stx/stx-fault f6d95a0a9d367284c491ae74b1f2ec1c3c531309 2018-12-05 15:06:56 -0600 Erich Cordoba erich.cordoba.malibran at intel.com Standardize install target for fm-common. 
> >> ./cgcs-root/stx/stx-integ d320036b0be13833bd4dfb8ff9e0f71c1e77473e 2018-12-13 22:52:55 +0000 Gerrit Code Review review at openstack.org Merge "fix tpm certificate handling" > >> ./cgcs-root/stx/stx-integ 14f168ac4b4fdfcb45bdff7e0d82715ec9a2c589 2018-12-13 20:22:44 +0000 Gerrit Code Review review at openstack.org Merge "Fix collectd Memory plugin Strict Mode learning" > >> ./cgcs-root/stx/stx-integ 0ec172537192932c11f7a9cdc799fbc7e49a22e1 2018-12-13 09:31:03 -0500 Eric MacDonald eric.macdonald at windriver.com Fix collectd Memory plugin Strict Mode learning > >> ./cgcs-root/stx/stx-integ 81fded989a237a9b8a3b2998684fd9c0c689f077 2018-12-12 14:48:49 -0500 Paul-Emile Element Paul-Emile.Element at windriver.com fix tpm certificate handling > >> ./cgcs-root/stx/stx-nfv 3ce422d5a2d572b639eba52001444ce3d11a9bec 2018-12-13 21:46:00 +0000 Gerrit Code Review review at openstack.org Merge "Allow VIM to manage services independently" > >> ./cgcs-root/stx/stx-nfv b6f7a850592cc3ba90b7a00874e3a629fffee26a 2018-12-13 08:08:13 -0500 Kevin Smith kevin.smith at windriver.com Allow VIM to manage services independently > >> > >> > >> > >> On 18-12-19 04:31 PM, Young, Ken wrote: > >>> We haven't gotten to this yet but we will. Let's discuss tomorrow. > >>> > >>> /KenY > >>> > >>> On 2018-12-19, 4:23 PM, "Victor Rodriguez" <vm.rod25 at gmail.com> wrote: > >>> > >>> Ken > >>> > >>> Any update on this? Do you still need this script for the CENGN > >>> release notes/changelog? > >>> > >>> Can we discuss this topic on tomorrow build meeting? > >>> > >>> Regards > >>> > >>> On Thu, Dec 6, 2018 at 10:56 AM Ponce Castaneda, Guillermo A > >>> <guillermo.a.ponce.castaneda at intel.com> wrote: > >>> > > >>> > Hello everybody, > >>> > > >>> > I want to share with you the following script that we use internally to create a Change Log everytime we generate a new StarlingX ISO. > >>> > Here, internally, we have a Jenkins server that creates (or tries to) a new ISO every day, and with this ISO we also create a manifest.xml file, and by using another job that is triggered just as the ISO Job finishes we create the Change Log by using the following script: > >>> > https://gist.github.com/gaponcec/99f19e2bc972761e11ccba2260622d10 > >>> > > >>> > The script has the following requirements: Argparse, gitpython, xmljson, dictdiffer and PTable. > >>> > It requires two parameters, the old manifest.xml and the new one and it should be run like this: > >>> > $ python3 create_change_log.py-o old_manifest.xml -n new_manifest.xml > >>> > > >>> > This will give you the change log on stdout. > >>> > > >>> > On our Jenkins script we save a file with this and e-mail it to the team afterwards. > >>> > > >>> > Please let me know what you all think about, feedback is really appreciated. 
> >>> > > >>> > - Guillermo Ponce > >>> > > >>> > _______________________________________________ > >>> > Starlingx-discuss mailing list > >>> > Starlingx-discuss at lists.starlingx.io > >>> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > >>> > >>> > >>> _______________________________________________ > >>> Starlingx-discuss mailing list > >>> Starlingx-discuss at lists.starlingx.io > >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > >> > >> > >> _______________________________________________ > >> Starlingx-discuss mailing list > >> Starlingx-discuss at lists.starlingx.io > >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > From cesar.lara at intel.com Thu Dec 20 21:59:50 2018 From: cesar.lara at intel.com (Lara, Cesar) Date: Thu, 20 Dec 2018 21:59:50 +0000 Subject: [Starlingx-discuss] [build] [meetings] Build team meeting minutes 12/20/2018 Message-ID: <0B566C62EC792145B40E29EFEBF1AB4710598247@fmsmsx104.amr.corp.intel.com> Build team meeting 12/20/2018 Attendees Jason, Scott, Ken, Mario, Erich, Victor, Hayde, Marcela, Chuy, Memo, Luis, Pipo, Cesar Agenda - Release notes follow up - Public Static Analysis - next steps for path to multi-OS builds - opens Notes Release notes follow up: today we are generating a change log, this is being published in the Cengn mirror along with the ISO file, we need to further investigate if those notes are good for the validation team as this was a requirement from them, or if we need to add more information to that report. AR - Scott to reply back to the mailing list thread regarding the release notes Public Static Analysis: the effort is ongoing, we are just waiting for the team on coverity scan team to review our requests for static analysis. This a process that takes some time and not sure how it will fit our automation efforts, since we haven't been able to try it yet. Today, in our request, we are covering C and C++ code as we don't see any Python support on the free online tool yet. Ken offered a resource mid-January if we decided to have a dedicated person to this effort moving forward. next steps for path to multi-OS builds: We are aligned, a decision was made by the TSC to only support CentOS, Clear and Ubuntu for our multi-OS strategy. The best way to create a proposal around this will be to create a PoC for the multi-OS build system just to dimension how big is the effort and the potential issues we might encounter along the way. We also need to include the Non-OpenStack distro team as they are key stakeholders in the multi-OS effort. Opens- Build is currently broken at Cengn since the latest changes around container setup and enablement. Erich did encounter an Issue where new repositories were added to the mirror and were not synced with Cengn mirror, making the builds to fail on intel premises. These issues may not be related and we think the main root cause could be the symlink on the CentOS repo pointing to CentOS 7 not including latest versions of packages or removing older versions that we need. Scott to follow up on both issues. This meeting will be on break through December and the next scheduled session will be on January 10th 2019 Happy holidays and happy new year! Regards Cesar Lara Software Engineering Manager OpenSource Technology Center -------------- next part -------------- An HTML attachment was scrubbed... 
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20181220/11760e38/attachment-0001.html>

From Ken.Young at windriver.com Thu Dec 20 22:15:20 2018
From: Ken.Young at windriver.com (Young, Ken)
Date: Thu, 20 Dec 2018 22:15:20 +0000
Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 12/18/2018 -> fixing date
In-Reply-To: <0E948055-11BF-4E84-9563-596616B972BA@windriver.com>
References: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4B9AF@FMSMSX114.amr.corp.intel.com> <0E948055-11BF-4E84-9563-596616B972BA@windriver.com>
Message-ID: <6CB2CF1A-6EED-4CE7-B330-7549831A29C0@windriver.com>

Ada,

Had a meeting with CENGN today. We are a go for getting you set up for dropping logs onto the CENGN mirror. Who on your team should I connect with Scott to work on this functionality?

Also, we will need a 4096-bit key pair to enable access to the server. Then we will need the public key to create access to the server. Please connect with Scott and me offline to exchange the key.

Regards,
Ken Y

On 2018-12-18, 10:02 AM, "Young, Ken" <Ken.Young at windriver.com> wrote:

    Ada,

    I have a conflict today and cannot make your meeting today. I am meeting with CENGN on Thursday and should have an update on log storage after that.

    Regards,
    Ken Y

    On 2018-12-17, 6:51 PM, "Cabrales, Ada" <ada.cabrales at intel.com> wrote:

        The meeting is tomorrow, Dec 18, not 17. Sorry about that :)

        > -----Original Message-----
        > From: Cabrales, Ada [mailto:ada.cabrales at intel.com]
        > Sent: Monday, December 17, 2018 5:46 PM
        > To: 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io>
        > Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 12/17/2018
        >
        > Agenda for 12/18
        > * Sanity testing: coverage improvement - JC
        > * Update request to CENGN - storage of the sanity logs - Ken
        > * Reminder - Meeting this Friday for checking PyTest
        >   Last meeting of the year, will resume on Jan 8
        > * Opens
        >
        >
        > Regards
        > Ada
        >
        > _______________________________________________
        > Starlingx-discuss mailing list
        > Starlingx-discuss at lists.starlingx.io
        > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

        _______________________________________________
        Starlingx-discuss mailing list
        Starlingx-discuss at lists.starlingx.io
        http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From serverascode at gmail.com Fri Dec 21 13:08:42 2018
From: serverascode at gmail.com (Curtis)
Date: Fri, 21 Dec 2018 08:08:42 -0500
Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security
In-Reply-To: <CAK5mtezidsKF61Mpn3YcKbYgB1U0tqGmFC-_Y0AV-fSVRZR7Vg@mail.gmail.com>
References: <CAK5mtezidsKF61Mpn3YcKbYgB1U0tqGmFC-_Y0AV-fSVRZR7Vg@mail.gmail.com>
Message-ID: <CAJ_JamCVxZf0q3w9276Kky027ApQL7SwqpL7wE2SjqRsZhphQw@mail.gmail.com>

On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez <vm.rod25 at gmail.com> wrote:

> Hi StarlingX community
>
> We can all agree that security is an important feature to be taken
> into consideration in any SW project. In the aim of improving the
> security of the StarlingX project, we have been taking the task to
> propose the use of some compiler flags that prevent and detect some
> security holes, especially by buffer overflow that could lead into ROP
> attacks.
From serverascode at gmail.com Fri Dec 21 13:08:42 2018
From: serverascode at gmail.com (Curtis)
Date: Fri, 21 Dec 2018 08:08:42 -0500
Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security
In-Reply-To: <CAK5mtezidsKF61Mpn3YcKbYgB1U0tqGmFC-_Y0AV-fSVRZR7Vg@mail.gmail.com>
References: <CAK5mtezidsKF61Mpn3YcKbYgB1U0tqGmFC-_Y0AV-fSVRZR7Vg@mail.gmail.com>
Message-ID: <CAJ_JamCVxZf0q3w9276Kky027ApQL7SwqpL7wE2SjqRsZhphQw@mail.gmail.com>

On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez <vm.rod25 at gmail.com> wrote:

> Hi StarlingX community
>
> We can all agree that security is an important feature to be taken
> into consideration in any SW project. With the aim of improving the
> security of the StarlingX project, we have taken on the task of
> proposing compiler flags that prevent and detect some security
> holes, especially buffer overflows that could lead to ROP attacks.
>
> The list of flags we are proposing is:
>
> Stack-based buffer overrun detection: CFLAGS="-fstack-protector-strong"
> Fortify source: CFLAGS="-O2 -D_FORTIFY_SOURCE=2"
> Format string vulnerabilities: CFLAGS="-Wformat -Wformat-security"
> Stack execution protection: LDFLAGS="-z noexecstack"
> Data relocation and protection (RELRO): LDFLAGS="-z relro -z now"
>
> These are being analyzed in the following Gerrit reviews (thanks a lot
> for all the good feedback):
>
> https://review.openstack.org/#/c/623608/
> https://review.openstack.org/#/c/623603/
> https://review.openstack.org/#/c/623601/
> https://review.openstack.org/#/c/623599/
>
> As requested in the Gerrit reviews, we first need to understand what
> these compiler flags do and what impact they have on the functional
> and performance areas of the project. This is a preliminary report;
> we will follow up with functional and performance test plans for the
> affected services as a next step. This report includes:
>
> * A detailed description of what each compiler flag does
> * A code example that shows how it works to prevent attacks
> * If there is a change in the binary, a microbenchmark that shows
>   how the flag impacts performance
>
> https://github.com/VictorRodriguez/hobbies/tree/master/c_programing_exercises/cflags_security
>
> As a result of the microbenchmark, the performance impact is not
> significant (less than 1%) on an Ubuntu x86 system with GCC 5 (more
> details on the HW and SW specification upon request).
>
> The areas of the code we are targeting in the patches are:
>
> * stx-ha
> * stx-metal
> * stx-nfv
> * stx-fault
>
> We took care that these flags do not break the following after being
> applied:
>
> * The build process of the image
> * The sanity test cases after the image is created
> (Ada can give more details on the sanity report of the image generated
> with these flags)
>
> If running the sanity tests is not enough to prove that a change in
> compiler flags does not affect functionality, please give us the right
> path to follow.
>
> As mentioned before, this is a preliminary report, and we will follow
> up with functional and performance test plans for the services as a
> next step.
>
> Hope this email helps to clarify some questions related to the flags
> and starts the follow-up discussion.
>

Thanks for the context Victor, it's very helpful to me. One thing I want to mention is something the Kata Containers team was talking about at the Berlin OpenStack summit: they have to be careful to ensure they don't have a bunch of smallish-looking changes that add up to a large performance hit over a longer period of time.

Overall I'm sure the StarlingX project would like to have some performance testing, if we don't already, though that can be challenging for an open source project. I had mentioned OPNFV's Functest and related projects on the TSC call, but now, seeing which components are affected, I'm not sure that would be directly helpful.

I look forward to further discussions around this area.

Thanks,
Curtis

--
Blog: serverascode.com
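To make the proposed flags concrete, here is a small self-contained sketch of the runtime checks they enable. The program and build line below are illustrative only; they are not part of the stx-ha/stx-metal/stx-nfv/stx-fault changes under review.

---------------------- < snip > ---------------------
/* overflow.c - deliberately overruns an 8-byte stack buffer.
 * Built with the flags below, glibc's fortified strcpy
 * (_FORTIFY_SOURCE=2, active at -O1 or higher) aborts with
 * "*** buffer overflow detected ***" before the copy can corrupt
 * the stack; without FORTIFY, the -fstack-protector-strong canary
 * still catches the overrun when the function returns. */
#include <stdio.h>
#include <string.h>

int main(int argc, char **argv)
{
    char buf[8];

    if (argc < 2)
        return 1;
    strcpy(buf, argv[1]);   /* unbounded copy: the bug under test */
    printf("%s\n", buf);
    return 0;
}
---------------------- < snip > ---------------------

Build and check:

gcc -O2 -D_FORTIFY_SOURCE=2 -fstack-protector-strong -Wformat -Wformat-security -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now overflow.c -o overflow
./overflow AAAAAAAAAAAAAAAAAAAAAAAA    # aborts instead of silently corrupting the stack
readelf -lW overflow | grep GNU_RELRO  # confirms the RELRO program header is present

-------------- next part --------------
An HTML attachment was scrubbed...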
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20181221/d054110c/attachment.html> From vm.rod25 at gmail.com Fri Dec 21 16:20:34 2018 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Fri, 21 Dec 2018 10:20:34 -0600 Subject: [Starlingx-discuss] Release notes/change log creation script In-Reply-To: <CAK5mtey5NSAonn61eiUuSB09+0M_mQNdxL5qbPa=nWZ+dzb-kw@mail.gmail.com> References: <64D76F71-F1B2-434F-9EA8-5F8066D26647@intel.com> <CAK5mtewyDKV2ENO5gJv0Fgak9sKhm8RntK5P6WwBiqNNThoDwA@mail.gmail.com> <D9E85E89-9C31-4E1D-BE4C-A15B68016A90@windriver.com> <9ecaf45c-46f0-aa0a-9539-f5676736258b@windriver.com> <CAK5mtezBGntPAqd73VG1oiC5D-bnyD4Tsy1uwLuKRR=n0zZYpg@mail.gmail.com> <edc9dc7c-17a3-dba4-f6c0-f928fd92b42d@windriver.com> <CAK5mtey5NSAonn61eiUuSB09+0M_mQNdxL5qbPa=nWZ+dzb-kw@mail.gmail.com> Message-ID: <CAK5mtex9WL1YdvaR88ogDEGZ4pMNp2H5e_uwbsgzKiUH7iZP2Q@mail.gmail.com> On Thu, Dec 20, 2018 at 3:16 PM Victor Rodriguez <vm.rod25 at gmail.com> wrote: > > On Thu, Dec 20, 2018 at 11:03 AM Scott Little > <scott.little at windriver.com> wrote: > > > > Currently, I have it as a jenkins script on CENGN. We can discuss the > > wisdom of placing it in stx-tools at the next build meeting. > Sounds good > > > I'm a little concerned about blindly pulling scripts of a public git server > > and running them in a bot like jenkins, even if it's a repo we > > theoretically control. > > At least to have a point where we can send PR to improve the output of > the log change > > This is the changelog we generate with the other script : > https://hastebin.com/wodibelano.sql > Sorry, hastebin has a day or so of duration, here is a github gist https://gist.github.com/VictorRodriguez/404c1e19c19db765dfeb82ee481db6ad regards > Open for comunity consideration > > regards > > Here is an example of wh > > ---------------------- < snip> --------------------- > > > > MY_REPO_ROOT=/localdisk/designer/$USER/$BRANCH > > MY_WORKSPACE=/localdisk/loadbuild/$USER/$BRANCH/$TIMESTAMP > > > > cd $MY_REPO_ROOT > > > > for e in $(find . -type d -name .git) > > do > > pushd $e/.. > > f=$(/usr/bin/dirname $e) > > echo "$f" > > g=$(printf "%-48s" $f) > > c=$(grep $(echo $f | sed 's:/:[/]:g' | sed 's:$:[^a-zA-Z0-9/_-]:' | > > sed 's:^[.][.]:^[.][.]:' | sed 's:^[.]:^[.]:') > > $MY_WORKSPACE/../LAST_COMMITS | awk ' { print $2 } ') > > git log --pretty=tformat:"$g %H %ci%x09%cn%x09%ce%x09%s" --date=iso > > --after $(date --date='yesterday' +%Y-%m-%d) > $MY_WORKSPACE/CHANGELOG.PART > > > > if [ "x$c" != "x" ] ; then > > git log --pretty=tformat:"$g %H %ci%x09%cn%x09%ce%x09%s" $c.. >> > > $MY_WORKSPACE/CHANGELOG || true > > else > > cat $MY_WORKSPACE/CHANGELOG.PART >> $MY_WORKSPACE/CHANGELOG > > fi > > popd > > done > > \rm $MY_WORKSPACE/CHANGELOG.PART > > > > > > for e in $(find . -type d -name .git) > > do > > pushd $e/.. > > f=$(/usr/bin/dirname $e) > > echo "$f" > > g=`printf "%-48s" $f` > > git log --pretty=tformat:"$g %H" -n 1 >> $MY_WORKSPACE/LAST_COMMITS > > popd > > done > > > > \cp $MY_WORKSPACE/LAST_COMMITS $MY_WORKSPACE/../LAST_COMMITS > > > > > > > > On 18-12-20 11:28 AM, Victor Rodriguez wrote: > > > On Thu, Dec 20, 2018 at 10:15 AM Scott Little > > > <scott.little at windriver.com> wrote: > > >> We have been generating a change log for the last few successful builds. > > >> > > >> e.g. > > >> > > >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20181214T060000Z/outputs/CHANGELOG.txt > > >> > > > Do you have the link to th escript that generates this ? can we contribute ? 
> > >> Format is > > >> - one change per line > > >> - tab delimited fields > > >> - Fields: > > >> <path to root of git> > > >> <sha> > > >> <commit date> > > >> <author name> > > >> <author email> > > >> <Title> > > >> > > >> ./cgcs-root/stx/git/distributedcloud d1e5526d8468b06cc62b81bf2158a565cac94a66 2018-12-12 20:46:58 +0000 Alex Kozyrev alex.kozyrev at windriver.com Add Barbican user to the list of subcloud users. > > >> ./cgcs-root/stx/stx-config 9453402511c7858bb1cf52428808b5689a3d95cd 2018-12-13 21:59:38 +0000 Gerrit Code Review review at openstack.org Merge "Enhancements for the rbd-provisioner helm chart" > > >> ./cgcs-root/stx/stx-config e8203ff83e733f4b934f3c6d03c8935386003310 2018-12-13 19:50:26 +0000 Gerrit Code Review review at openstack.org Merge "Don't delete nova-ks-endpoint job in nova chart" > > >> ./cgcs-root/stx/stx-config 6b2be98f0ddbbee4c507f01d509d5e42153ee606 2018-12-13 19:13:15 +0000 Irina Mihai irina.mihai at windriver.com Enhancements for the rbd-provisioner helm chart > > >> ./cgcs-root/stx/stx-config 2f3e6d9915361c1ca065491418510a7680391023 2018-12-13 10:36:07 -0500 Angie Wang angie.wang at windriver.com Don't delete nova-ks-endpoint job in nova chart > > >> ./cgcs-root/stx/stx-fault 41f6f2e675f1a7e07b107296eec1e616b6945461 2018-12-13 18:27:59 +0000 Gerrit Code Review review at openstack.org Merge "Standardize install target for fm-common." > > >> ./cgcs-root/stx/stx-fault f6d95a0a9d367284c491ae74b1f2ec1c3c531309 2018-12-05 15:06:56 -0600 Erich Cordoba erich.cordoba.malibran at intel.com Standardize install target for fm-common. > > >> ./cgcs-root/stx/stx-integ d320036b0be13833bd4dfb8ff9e0f71c1e77473e 2018-12-13 22:52:55 +0000 Gerrit Code Review review at openstack.org Merge "fix tpm certificate handling" > > >> ./cgcs-root/stx/stx-integ 14f168ac4b4fdfcb45bdff7e0d82715ec9a2c589 2018-12-13 20:22:44 +0000 Gerrit Code Review review at openstack.org Merge "Fix collectd Memory plugin Strict Mode learning" > > >> ./cgcs-root/stx/stx-integ 0ec172537192932c11f7a9cdc799fbc7e49a22e1 2018-12-13 09:31:03 -0500 Eric MacDonald eric.macdonald at windriver.com Fix collectd Memory plugin Strict Mode learning > > >> ./cgcs-root/stx/stx-integ 81fded989a237a9b8a3b2998684fd9c0c689f077 2018-12-12 14:48:49 -0500 Paul-Emile Element Paul-Emile.Element at windriver.com fix tpm certificate handling > > >> ./cgcs-root/stx/stx-nfv 3ce422d5a2d572b639eba52001444ce3d11a9bec 2018-12-13 21:46:00 +0000 Gerrit Code Review review at openstack.org Merge "Allow VIM to manage services independently" > > >> ./cgcs-root/stx/stx-nfv b6f7a850592cc3ba90b7a00874e3a629fffee26a 2018-12-13 08:08:13 -0500 Kevin Smith kevin.smith at windriver.com Allow VIM to manage services independently > > >> > > >> > > >> > > >> On 18-12-19 04:31 PM, Young, Ken wrote: > > >>> We haven't gotten to this yet but we will. Let's discuss tomorrow. > > >>> > > >>> /KenY > > >>> > > >>> On 2018-12-19, 4:23 PM, "Victor Rodriguez" <vm.rod25 at gmail.com> wrote: > > >>> > > >>> Ken > > >>> > > >>> Any update on this? Do you still need this script for the CENGN > > >>> release notes/changelog? > > >>> > > >>> Can we discuss this topic on tomorrow build meeting? > > >>> > > >>> Regards > > >>> > > >>> On Thu, Dec 6, 2018 at 10:56 AM Ponce Castaneda, Guillermo A > > >>> <guillermo.a.ponce.castaneda at intel.com> wrote: > > >>> > > > >>> > Hello everybody, > > >>> > > > >>> > I want to share with you the following script that we use internally to create a Change Log everytime we generate a new StarlingX ISO. 
> > >>> > Here, internally, we have a Jenkins server that creates (or tries to) a new ISO every day, and with this ISO we also create a manifest.xml file, and by using another job that is triggered just as the ISO Job finishes we create the Change Log by using the following script: > > >>> > https://gist.github.com/gaponcec/99f19e2bc972761e11ccba2260622d10 > > >>> > > > >>> > The script has the following requirements: Argparse, gitpython, xmljson, dictdiffer and PTable. > > >>> > It requires two parameters, the old manifest.xml and the new one and it should be run like this: > > >>> > $ python3 create_change_log.py-o old_manifest.xml -n new_manifest.xml > > >>> > > > >>> > This will give you the change log on stdout. > > >>> > > > >>> > On our Jenkins script we save a file with this and e-mail it to the team afterwards. > > >>> > > > >>> > Please let me know what you all think about, feedback is really appreciated. > > >>> > > > >>> > - Guillermo Ponce > > >>> > > > >>> > _______________________________________________ > > >>> > Starlingx-discuss mailing list > > >>> > Starlingx-discuss at lists.starlingx.io > > >>> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > >>> > > >>> > > >>> _______________________________________________ > > >>> Starlingx-discuss mailing list > > >>> Starlingx-discuss at lists.starlingx.io > > >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > >> > > >> > > >> _______________________________________________ > > >> Starlingx-discuss mailing list > > >> Starlingx-discuss at lists.starlingx.io > > >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > From scott.little at windriver.com Fri Dec 21 16:37:41 2018 From: scott.little at windriver.com (Scott Little) Date: Fri, 21 Dec 2018 11:37:41 -0500 Subject: [Starlingx-discuss] Release notes/change log creation script In-Reply-To: <CAK5mtex9WL1YdvaR88ogDEGZ4pMNp2H5e_uwbsgzKiUH7iZP2Q@mail.gmail.com> References: <64D76F71-F1B2-434F-9EA8-5F8066D26647@intel.com> <CAK5mtewyDKV2ENO5gJv0Fgak9sKhm8RntK5P6WwBiqNNThoDwA@mail.gmail.com> <D9E85E89-9C31-4E1D-BE4C-A15B68016A90@windriver.com> <9ecaf45c-46f0-aa0a-9539-f5676736258b@windriver.com> <CAK5mtezBGntPAqd73VG1oiC5D-bnyD4Tsy1uwLuKRR=n0zZYpg@mail.gmail.com> <edc9dc7c-17a3-dba4-f6c0-f928fd92b42d@windriver.com> <CAK5mtey5NSAonn61eiUuSB09+0M_mQNdxL5qbPa=nWZ+dzb-kw@mail.gmail.com> <CAK5mtex9WL1YdvaR88ogDEGZ4pMNp2H5e_uwbsgzKiUH7iZP2Q@mail.gmail.com> Message-ID: <2d6cba57-8f0f-0d5f-f66c-5b8c81f1d177@windriver.com> Prettier for human eyes.  Not as convenient for processing downstream scripts.  Perhaps publishing both formats would have value? Can you post your script somewhere? Scott On 18-12-21 11:20 AM, Victor Rodriguez wrote: > On Thu, Dec 20, 2018 at 3:16 PM Victor Rodriguez <vm.rod25 at gmail.com> wrote: >> On Thu, Dec 20, 2018 at 11:03 AM Scott Little >> <scott.little at windriver.com> wrote: >>> Currently, I have it as a jenkins script on CENGN. We can discuss the >>> wisdom of placing it in stx-tools at the next build meeting. >> Sounds good >> >>> I'm a little concerned about blindly pulling scripts of a public git server >>> and running them in a bot like jenkins, even if it's a repo we >>> theoretically control. 
>> At least to have a point where we can send PR to improve the output of >> the log change >> >> This is the changelog we generate with the other script : >> https://hastebin.com/wodibelano.sql >> > Sorry, hastebin has a day or so of duration, here is a github gist > > https://gist.github.com/VictorRodriguez/404c1e19c19db765dfeb82ee481db6ad > > regards > > >> Open for comunity consideration >> >> regards >> >> Here is an example of wh >>> ---------------------- < snip> --------------------- >>> >>> MY_REPO_ROOT=/localdisk/designer/$USER/$BRANCH >>> MY_WORKSPACE=/localdisk/loadbuild/$USER/$BRANCH/$TIMESTAMP >>> >>> cd $MY_REPO_ROOT >>> >>> for e in $(find . -type d -name .git) >>> do >>> pushd $e/.. >>> f=$(/usr/bin/dirname $e) >>> echo "$f" >>> g=$(printf "%-48s" $f) >>> c=$(grep $(echo $f | sed 's:/:[/]:g' | sed 's:$:[^a-zA-Z0-9/_-]:' | >>> sed 's:^[.][.]:^[.][.]:' | sed 's:^[.]:^[.]:') >>> $MY_WORKSPACE/../LAST_COMMITS | awk ' { print $2 } ') >>> git log --pretty=tformat:"$g %H %ci%x09%cn%x09%ce%x09%s" --date=iso >>> --after $(date --date='yesterday' +%Y-%m-%d) > $MY_WORKSPACE/CHANGELOG.PART >>> >>> if [ "x$c" != "x" ] ; then >>> git log --pretty=tformat:"$g %H %ci%x09%cn%x09%ce%x09%s" $c.. >> >>> $MY_WORKSPACE/CHANGELOG || true >>> else >>> cat $MY_WORKSPACE/CHANGELOG.PART >> $MY_WORKSPACE/CHANGELOG >>> fi >>> popd >>> done >>> \rm $MY_WORKSPACE/CHANGELOG.PART >>> >>> >>> for e in $(find . -type d -name .git) >>> do >>> pushd $e/.. >>> f=$(/usr/bin/dirname $e) >>> echo "$f" >>> g=`printf "%-48s" $f` >>> git log --pretty=tformat:"$g %H" -n 1 >> $MY_WORKSPACE/LAST_COMMITS >>> popd >>> done >>> >>> \cp $MY_WORKSPACE/LAST_COMMITS $MY_WORKSPACE/../LAST_COMMITS >>> >>> >>> >>> On 18-12-20 11:28 AM, Victor Rodriguez wrote: >>>> On Thu, Dec 20, 2018 at 10:15 AM Scott Little >>>> <scott.little at windriver.com> wrote: >>>>> We have been generating a change log for the last few successful builds. >>>>> >>>>> e.g. >>>>> >>>>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20181214T060000Z/outputs/CHANGELOG.txt >>>>> >>>> Do you have the link to th escript that generates this ? can we contribute ? >>>>> Format is >>>>> - one change per line >>>>> - tab delimited fields >>>>> - Fields: >>>>> <path to root of git> >>>>> <sha> >>>>> <commit date> >>>>> <author name> >>>>> <author email> >>>>> <Title> >>>>> >>>>> ./cgcs-root/stx/git/distributedcloud d1e5526d8468b06cc62b81bf2158a565cac94a66 2018-12-12 20:46:58 +0000 Alex Kozyrev alex.kozyrev at windriver.com Add Barbican user to the list of subcloud users. >>>>> ./cgcs-root/stx/stx-config 9453402511c7858bb1cf52428808b5689a3d95cd 2018-12-13 21:59:38 +0000 Gerrit Code Review review at openstack.org Merge "Enhancements for the rbd-provisioner helm chart" >>>>> ./cgcs-root/stx/stx-config e8203ff83e733f4b934f3c6d03c8935386003310 2018-12-13 19:50:26 +0000 Gerrit Code Review review at openstack.org Merge "Don't delete nova-ks-endpoint job in nova chart" >>>>> ./cgcs-root/stx/stx-config 6b2be98f0ddbbee4c507f01d509d5e42153ee606 2018-12-13 19:13:15 +0000 Irina Mihai irina.mihai at windriver.com Enhancements for the rbd-provisioner helm chart >>>>> ./cgcs-root/stx/stx-config 2f3e6d9915361c1ca065491418510a7680391023 2018-12-13 10:36:07 -0500 Angie Wang angie.wang at windriver.com Don't delete nova-ks-endpoint job in nova chart >>>>> ./cgcs-root/stx/stx-fault 41f6f2e675f1a7e07b107296eec1e616b6945461 2018-12-13 18:27:59 +0000 Gerrit Code Review review at openstack.org Merge "Standardize install target for fm-common." 
>>>>> ./cgcs-root/stx/stx-fault f6d95a0a9d367284c491ae74b1f2ec1c3c531309 2018-12-05 15:06:56 -0600 Erich Cordoba erich.cordoba.malibran at intel.com Standardize install target for fm-common. >>>>> ./cgcs-root/stx/stx-integ d320036b0be13833bd4dfb8ff9e0f71c1e77473e 2018-12-13 22:52:55 +0000 Gerrit Code Review review at openstack.org Merge "fix tpm certificate handling" >>>>> ./cgcs-root/stx/stx-integ 14f168ac4b4fdfcb45bdff7e0d82715ec9a2c589 2018-12-13 20:22:44 +0000 Gerrit Code Review review at openstack.org Merge "Fix collectd Memory plugin Strict Mode learning" >>>>> ./cgcs-root/stx/stx-integ 0ec172537192932c11f7a9cdc799fbc7e49a22e1 2018-12-13 09:31:03 -0500 Eric MacDonald eric.macdonald at windriver.com Fix collectd Memory plugin Strict Mode learning >>>>> ./cgcs-root/stx/stx-integ 81fded989a237a9b8a3b2998684fd9c0c689f077 2018-12-12 14:48:49 -0500 Paul-Emile Element Paul-Emile.Element at windriver.com fix tpm certificate handling >>>>> ./cgcs-root/stx/stx-nfv 3ce422d5a2d572b639eba52001444ce3d11a9bec 2018-12-13 21:46:00 +0000 Gerrit Code Review review at openstack.org Merge "Allow VIM to manage services independently" >>>>> ./cgcs-root/stx/stx-nfv b6f7a850592cc3ba90b7a00874e3a629fffee26a 2018-12-13 08:08:13 -0500 Kevin Smith kevin.smith at windriver.com Allow VIM to manage services independently >>>>> >>>>> >>>>> >>>>> On 18-12-19 04:31 PM, Young, Ken wrote: >>>>>> We haven't gotten to this yet but we will. Let's discuss tomorrow. >>>>>> >>>>>> /KenY >>>>>> >>>>>> On 2018-12-19, 4:23 PM, "Victor Rodriguez" <vm.rod25 at gmail.com> wrote: >>>>>> >>>>>> Ken >>>>>> >>>>>> Any update on this? Do you still need this script for the CENGN >>>>>> release notes/changelog? >>>>>> >>>>>> Can we discuss this topic on tomorrow build meeting? >>>>>> >>>>>> Regards >>>>>> >>>>>> On Thu, Dec 6, 2018 at 10:56 AM Ponce Castaneda, Guillermo A >>>>>> <guillermo.a.ponce.castaneda at intel.com> wrote: >>>>>> > >>>>>> > Hello everybody, >>>>>> > >>>>>> > I want to share with you the following script that we use internally to create a Change Log everytime we generate a new StarlingX ISO. >>>>>> > Here, internally, we have a Jenkins server that creates (or tries to) a new ISO every day, and with this ISO we also create a manifest.xml file, and by using another job that is triggered just as the ISO Job finishes we create the Change Log by using the following script: >>>>>> > https://gist.github.com/gaponcec/99f19e2bc972761e11ccba2260622d10 >>>>>> > >>>>>> > The script has the following requirements: Argparse, gitpython, xmljson, dictdiffer and PTable. >>>>>> > It requires two parameters, the old manifest.xml and the new one and it should be run like this: >>>>>> > $ python3 create_change_log.py-o old_manifest.xml -n new_manifest.xml >>>>>> > >>>>>> > This will give you the change log on stdout. >>>>>> > >>>>>> > On our Jenkins script we save a file with this and e-mail it to the team afterwards. >>>>>> > >>>>>> > Please let me know what you all think about, feedback is really appreciated. 
>>>>>> > >>>>>> > - Guillermo Ponce >>>>>> > >>>>>> > _______________________________________________ >>>>>> > Starlingx-discuss mailing list >>>>>> > Starlingx-discuss at lists.starlingx.io >>>>>> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>>>> >>>>>> >>>>>> _______________________________________________ >>>>>> Starlingx-discuss mailing list >>>>>> Starlingx-discuss at lists.starlingx.io >>>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>>>> >>>>> _______________________________________________ >>>>> Starlingx-discuss mailing list >>>>> Starlingx-discuss at lists.starlingx.io >>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> From vm.rod25 at gmail.com Fri Dec 21 16:47:50 2018 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Fri, 21 Dec 2018 10:47:50 -0600 Subject: [Starlingx-discuss] Release notes/change log creation script In-Reply-To: <2d6cba57-8f0f-0d5f-f66c-5b8c81f1d177@windriver.com> References: <64D76F71-F1B2-434F-9EA8-5F8066D26647@intel.com> <CAK5mtewyDKV2ENO5gJv0Fgak9sKhm8RntK5P6WwBiqNNThoDwA@mail.gmail.com> <D9E85E89-9C31-4E1D-BE4C-A15B68016A90@windriver.com> <9ecaf45c-46f0-aa0a-9539-f5676736258b@windriver.com> <CAK5mtezBGntPAqd73VG1oiC5D-bnyD4Tsy1uwLuKRR=n0zZYpg@mail.gmail.com> <edc9dc7c-17a3-dba4-f6c0-f928fd92b42d@windriver.com> <CAK5mtey5NSAonn61eiUuSB09+0M_mQNdxL5qbPa=nWZ+dzb-kw@mail.gmail.com> <CAK5mtex9WL1YdvaR88ogDEGZ4pMNp2H5e_uwbsgzKiUH7iZP2Q@mail.gmail.com> <2d6cba57-8f0f-0d5f-f66c-5b8c81f1d177@windriver.com> Message-ID: <CAK5mtewVQmRnKr6qK3w6hLhhDsP=Fp2wFKx9iEohByMUpaMfPg@mail.gmail.com> On Fri, Dec 21, 2018 at 10:39 AM Scott Little <scott.little at windriver.com> wrote: > > Prettier for human eyes. Thanks !! Not as convenient for processing downstream > scripts. Perhaps publishing both formats would have value? > Yes , that culd be :) > Can you post your script somewhere? > Gullermo's script : https://gist.github.com/gaponcec/99f19e2bc972761e11ccba2260622d10 > Scott > > > > On 18-12-21 11:20 AM, Victor Rodriguez wrote: > > On Thu, Dec 20, 2018 at 3:16 PM Victor Rodriguez <vm.rod25 at gmail.com> wrote: > >> On Thu, Dec 20, 2018 at 11:03 AM Scott Little > >> <scott.little at windriver.com> wrote: > >>> Currently, I have it as a jenkins script on CENGN. We can discuss the > >>> wisdom of placing it in stx-tools at the next build meeting. > >> Sounds good > >> > >>> I'm a little concerned about blindly pulling scripts of a public git server > >>> and running them in a bot like jenkins, even if it's a repo we > >>> theoretically control. > >> At least to have a point where we can send PR to improve the output of > >> the log change > >> > >> This is the changelog we generate with the other script : > >> https://hastebin.com/wodibelano.sql > >> > > Sorry, hastebin has a day or so of duration, here is a github gist > > > > https://gist.github.com/VictorRodriguez/404c1e19c19db765dfeb82ee481db6ad > > > > regards > > > > > >> Open for comunity consideration > >> > >> regards > >> > >> Here is an example of wh > >>> ---------------------- < snip> --------------------- > >>> > >>> MY_REPO_ROOT=/localdisk/designer/$USER/$BRANCH > >>> MY_WORKSPACE=/localdisk/loadbuild/$USER/$BRANCH/$TIMESTAMP > >>> > >>> cd $MY_REPO_ROOT > >>> > >>> for e in $(find . -type d -name .git) > >>> do > >>> pushd $e/.. 
> >>> f=$(/usr/bin/dirname $e) > >>> echo "$f" > >>> g=$(printf "%-48s" $f) > >>> c=$(grep $(echo $f | sed 's:/:[/]:g' | sed 's:$:[^a-zA-Z0-9/_-]:' | > >>> sed 's:^[.][.]:^[.][.]:' | sed 's:^[.]:^[.]:') > >>> $MY_WORKSPACE/../LAST_COMMITS | awk ' { print $2 } ') > >>> git log --pretty=tformat:"$g %H %ci%x09%cn%x09%ce%x09%s" --date=iso > >>> --after $(date --date='yesterday' +%Y-%m-%d) > $MY_WORKSPACE/CHANGELOG.PART > >>> > >>> if [ "x$c" != "x" ] ; then > >>> git log --pretty=tformat:"$g %H %ci%x09%cn%x09%ce%x09%s" $c.. >> > >>> $MY_WORKSPACE/CHANGELOG || true > >>> else > >>> cat $MY_WORKSPACE/CHANGELOG.PART >> $MY_WORKSPACE/CHANGELOG > >>> fi > >>> popd > >>> done > >>> \rm $MY_WORKSPACE/CHANGELOG.PART > >>> > >>> > >>> for e in $(find . -type d -name .git) > >>> do > >>> pushd $e/.. > >>> f=$(/usr/bin/dirname $e) > >>> echo "$f" > >>> g=`printf "%-48s" $f` > >>> git log --pretty=tformat:"$g %H" -n 1 >> $MY_WORKSPACE/LAST_COMMITS > >>> popd > >>> done > >>> > >>> \cp $MY_WORKSPACE/LAST_COMMITS $MY_WORKSPACE/../LAST_COMMITS > >>> > >>> > >>> > >>> On 18-12-20 11:28 AM, Victor Rodriguez wrote: > >>>> On Thu, Dec 20, 2018 at 10:15 AM Scott Little > >>>> <scott.little at windriver.com> wrote: > >>>>> We have been generating a change log for the last few successful builds. > >>>>> > >>>>> e.g. > >>>>> > >>>>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20181214T060000Z/outputs/CHANGELOG.txt > >>>>> > >>>> Do you have the link to th escript that generates this ? can we contribute ? > >>>>> Format is > >>>>> - one change per line > >>>>> - tab delimited fields > >>>>> - Fields: > >>>>> <path to root of git> > >>>>> <sha> > >>>>> <commit date> > >>>>> <author name> > >>>>> <author email> > >>>>> <Title> > >>>>> > >>>>> ./cgcs-root/stx/git/distributedcloud d1e5526d8468b06cc62b81bf2158a565cac94a66 2018-12-12 20:46:58 +0000 Alex Kozyrev alex.kozyrev at windriver.com Add Barbican user to the list of subcloud users. > >>>>> ./cgcs-root/stx/stx-config 9453402511c7858bb1cf52428808b5689a3d95cd 2018-12-13 21:59:38 +0000 Gerrit Code Review review at openstack.org Merge "Enhancements for the rbd-provisioner helm chart" > >>>>> ./cgcs-root/stx/stx-config e8203ff83e733f4b934f3c6d03c8935386003310 2018-12-13 19:50:26 +0000 Gerrit Code Review review at openstack.org Merge "Don't delete nova-ks-endpoint job in nova chart" > >>>>> ./cgcs-root/stx/stx-config 6b2be98f0ddbbee4c507f01d509d5e42153ee606 2018-12-13 19:13:15 +0000 Irina Mihai irina.mihai at windriver.com Enhancements for the rbd-provisioner helm chart > >>>>> ./cgcs-root/stx/stx-config 2f3e6d9915361c1ca065491418510a7680391023 2018-12-13 10:36:07 -0500 Angie Wang angie.wang at windriver.com Don't delete nova-ks-endpoint job in nova chart > >>>>> ./cgcs-root/stx/stx-fault 41f6f2e675f1a7e07b107296eec1e616b6945461 2018-12-13 18:27:59 +0000 Gerrit Code Review review at openstack.org Merge "Standardize install target for fm-common." > >>>>> ./cgcs-root/stx/stx-fault f6d95a0a9d367284c491ae74b1f2ec1c3c531309 2018-12-05 15:06:56 -0600 Erich Cordoba erich.cordoba.malibran at intel.com Standardize install target for fm-common. 
> >>>>> ./cgcs-root/stx/stx-integ d320036b0be13833bd4dfb8ff9e0f71c1e77473e 2018-12-13 22:52:55 +0000 Gerrit Code Review review at openstack.org Merge "fix tpm certificate handling" > >>>>> ./cgcs-root/stx/stx-integ 14f168ac4b4fdfcb45bdff7e0d82715ec9a2c589 2018-12-13 20:22:44 +0000 Gerrit Code Review review at openstack.org Merge "Fix collectd Memory plugin Strict Mode learning" > >>>>> ./cgcs-root/stx/stx-integ 0ec172537192932c11f7a9cdc799fbc7e49a22e1 2018-12-13 09:31:03 -0500 Eric MacDonald eric.macdonald at windriver.com Fix collectd Memory plugin Strict Mode learning > >>>>> ./cgcs-root/stx/stx-integ 81fded989a237a9b8a3b2998684fd9c0c689f077 2018-12-12 14:48:49 -0500 Paul-Emile Element Paul-Emile.Element at windriver.com fix tpm certificate handling > >>>>> ./cgcs-root/stx/stx-nfv 3ce422d5a2d572b639eba52001444ce3d11a9bec 2018-12-13 21:46:00 +0000 Gerrit Code Review review at openstack.org Merge "Allow VIM to manage services independently" > >>>>> ./cgcs-root/stx/stx-nfv b6f7a850592cc3ba90b7a00874e3a629fffee26a 2018-12-13 08:08:13 -0500 Kevin Smith kevin.smith at windriver.com Allow VIM to manage services independently > >>>>> > >>>>> > >>>>> > >>>>> On 18-12-19 04:31 PM, Young, Ken wrote: > >>>>>> We haven't gotten to this yet but we will. Let's discuss tomorrow. > >>>>>> > >>>>>> /KenY > >>>>>> > >>>>>> On 2018-12-19, 4:23 PM, "Victor Rodriguez" <vm.rod25 at gmail.com> wrote: > >>>>>> > >>>>>> Ken > >>>>>> > >>>>>> Any update on this? Do you still need this script for the CENGN > >>>>>> release notes/changelog? > >>>>>> > >>>>>> Can we discuss this topic on tomorrow build meeting? > >>>>>> > >>>>>> Regards > >>>>>> > >>>>>> On Thu, Dec 6, 2018 at 10:56 AM Ponce Castaneda, Guillermo A > >>>>>> <guillermo.a.ponce.castaneda at intel.com> wrote: > >>>>>> > > >>>>>> > Hello everybody, > >>>>>> > > >>>>>> > I want to share with you the following script that we use internally to create a Change Log everytime we generate a new StarlingX ISO. > >>>>>> > Here, internally, we have a Jenkins server that creates (or tries to) a new ISO every day, and with this ISO we also create a manifest.xml file, and by using another job that is triggered just as the ISO Job finishes we create the Change Log by using the following script: > >>>>>> > https://gist.github.com/gaponcec/99f19e2bc972761e11ccba2260622d10 > >>>>>> > > >>>>>> > The script has the following requirements: Argparse, gitpython, xmljson, dictdiffer and PTable. > >>>>>> > It requires two parameters, the old manifest.xml and the new one and it should be run like this: > >>>>>> > $ python3 create_change_log.py-o old_manifest.xml -n new_manifest.xml > >>>>>> > > >>>>>> > This will give you the change log on stdout. > >>>>>> > > >>>>>> > On our Jenkins script we save a file with this and e-mail it to the team afterwards. > >>>>>> > > >>>>>> > Please let me know what you all think about, feedback is really appreciated. 
> >>>>>> >
> >>>>>> > - Guillermo Ponce
> >>>>>> >
> >>>>>> > _______________________________________________
> >>>>>> > Starlingx-discuss mailing list
> >>>>>> > Starlingx-discuss at lists.starlingx.io
> >>>>>> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> >>>>>>
> >>>>>> _______________________________________________
> >>>>>> Starlingx-discuss mailing list
> >>>>>> Starlingx-discuss at lists.starlingx.io
> >>>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> >>>>>
> >>>>> _______________________________________________
> >>>>> Starlingx-discuss mailing list
> >>>>> Starlingx-discuss at lists.starlingx.io
> >>>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> >>>

From Frank.Miller at windriver.com Fri Dec 21 19:48:50 2018
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Fri, 21 Dec 2018 19:48:50 +0000
Subject: [Starlingx-discuss] FYI: Many cores on vacation until Jan 2nd
Message-ID: <A43F4E51FB41274EA099B52A6EE25DC196BAAE8E@ALA-MBD.corp.ad.wrs.com>

StarlingX Community:

Just a heads up that for many of our repos, most of the cores will be on holidays until January 2nd, so it may not be possible for new commits to be merged until then.

For those taking time off, have a very nice break and see you in the new year...

Frank
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20181221/f2936d32/attachment.html>

From Ghada.Khalil at windriver.com Fri Dec 21 21:21:54 2018
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Fri, 21 Dec 2018 21:21:54 +0000
Subject: [Starlingx-discuss] Kernel upgrade status & DPDK need be upgraded
References: <9700A18779F35F49AF027300A49E7C765FE56853@SHSMSX101.ccr.corp.intel.com>
Message-ID: <151EE31B9FCCA54397A757BC674650F0BA499103@ALA-MBD.corp.ad.wrs.com>

Hi Shuicheng,
As discussed in the networking meeting yesterday, please apply a patch to the CentOS 7.6 feature branch to temporarily disable the Mellanox drivers in the openvswitch package. This is the explicit patch in STX that enables them currently:
https://github.com/openstack/stx-integ/blob/master/networking/openvswitch/centos/meta_patches/0005-enable-mlx-pmds.patch

Please see if this addresses the compile issues you are facing.

The longer-term plan is to upgrade to a new version of openvswitch which has support for DPDK 18.11. Looking at the ovs releases, it seems that the next major release is planned for mid-February.
http://docs.openvswitch.org/en/latest/internals/release-process/#release-scheduling

Please note that my team is out of the office until Jan 2. If you need help before then, please contact Forrest.

Regards,
Ghada

-----Original Message-----
From: Khalil, Ghada
Sent: Monday, December 17, 2018 11:38 AM
To: 'Lin, Shuicheng'; starlingx-discuss at lists.starlingx.io
Subject: RE: Kernel upgrade status & DPDK need be upgraded

Hi Shuicheng,
You are correct. The Mellanox drivers are tied to DPDK as well as the kernel.

At a high level, I see no option but to upgrade DPDK/OVS to 18.11 to align with the newer kernel and Mellanox drivers. Is there a version available for ovs/ovs-dpdk that supports 18.11 yet? If not, is there information on when one would be available?

I added this as an agenda item for the next networking team meeting on Dec 20 at 9:15am Eastern Time.
https://etherpad.openstack.org/p/stx-networking
We will discuss this in more detail then. Feel free to join us.
Zoom details are on the wiki:
https://wiki.openstack.org/wiki/Starlingx/Meetings#0615am_PDT_.2F_1415_UTC_-_Networking_Team_Call_.28Bi-weekly.29

Regards,
Ghada

--------------
From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com]
Sent: Sunday, December 16, 2018 7:55 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Kernel upgrade status & DPDK need be upgraded

Hi all,
We have been working on the kernel upgrade task recently [0]. After upgrading the kernel, we found several modules that cannot pass the build, due to data structure/function API changes in the kernel.
Here is the list of modules that fail to build with the new kernel:

Mlnx-ofa_kernel
Intel-i40e
Intel-i40evf
Tpmdd
Intel-ixgbe
drbd
openvswitch

To fix the build failures, I plan to upgrade these packages to newer versions that support CentOS 7.6. This upgrade may require other packages that depend on them to be upgraded as well.
Take Mlnx-ofa as an example: it is bound to DPDK. Per [1], MLNX_OFED 4.5-1.0.1.0 supports CentOS 7.6. Per [2], DPDK should be upgraded to 18.11, while our current DPDK is 17.11 and is bound to OVS. And an OVS upgrade may affect Neutron.
I need the network team to help decide the upgrade strategy for DPDK/OVS. Thanks.

[0]: https://storyboard.openstack.org/#!/story/2004521
[1]: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
[2]: https://doc.dpdk.org/guides-18.11/rel_notes/release_18_11.html

Best Regards
Shuicheng

From kailun.qin at intel.com Fri Dec 21 22:51:59 2018
From: kailun.qin at intel.com (Qin, Kailun)
Date: Fri, 21 Dec 2018 22:51:59 +0000
Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming
In-Reply-To: <B79033180B571C49888593DE9D1AB78E0599FC6E@shsmsx102.ccr.corp.intel.com>
References: <B79033180B571C49888593DE9D1AB78E05972119@shsmsx102.ccr.corp.intel.com> <70A7408C6E1BFB41B192A929744D8523BAC380CE@ALA-MBD.corp.ad.wrs.com> <B79033180B571C49888593DE9D1AB78E05973AAA@shsmsx102.ccr.corp.intel.com> <B79033180B571C49888593DE9D1AB78E0599E7E1@shsmsx102.ccr.corp.intel.com> <70A7408C6E1BFB41B192A929744D8523BAC5363A@ALA-MBD.corp.ad.wrs.com> <B79033180B571C49888593DE9D1AB78E0599F509@shsmsx102.ccr.corp.intel.com> <70A7408C6E1BFB41B192A929744D8523BAC543E0@ALA-MBD.corp.ad.wrs.com> <B79033180B571C49888593DE9D1AB78E0599FC6E@shsmsx102.ccr.corp.intel.com>
Message-ID: <B79033180B571C49888593DE9D1AB78E059A2E35@shsmsx102.ccr.corp.intel.com>

Hi Allain, Matt,

I discussed this RFE in the Neutron driver meeting last night. It was a heated discussion and took up almost all the meeting time. However, the Neutron driver team thought the delay approach was not reliable and would not perform predictably in all situations (there is no perfect setting for every deployment), along with some other concerns. If we do want the RFE to move forward, they would prefer approaches like purge_queue in rabbitmq, possibly exposed in oslo_messaging (https://www.rabbitmq.com/rabbitmqctl.8.html#purge_queue), OR using a resource queue as the l3-agent does.

Please kindly see the meeting minutes for further details: http://eavesdrop.openstack.org/meetings/neutron_drivers/2018/neutron_drivers.2018-12-21-14.00.log.html

What do you think or suggest? Thanks.

BR,
Kailun

From: Qin, Kailun
Sent: Wednesday, December 19, 2018 10:12 AM
To: Legacy, Allain <Allain.Legacy at windriver.com>; Peters, Matt <Matt.Peters at windriver.com>
Cc: 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io>
Subject: RE: Questions about patch fd6cfc upstreaming

Allain,

Thanks a lot for the feedback!
Excuse me that I missed the "_wait_if_syncing" decorator somehow. Exactly - with this wrapper we should not have any problem for the case cited by the community.

BR,
Kailun

From: Legacy, Allain [mailto:Allain.Legacy at windriver.com]
Sent: Tuesday, December 18, 2018 9:23 PM
To: Qin, Kailun <kailun.qin at intel.com<mailto:kailun.qin at intel.com>>; Peters, Matt <Matt.Peters at windriver.com<mailto:Matt.Peters at windriver.com>>
Cc: 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>>
Subject: RE: Questions about patch fd6cfc upstreaming

The RPC handlers (e.g., port_update_end) are all wrapped with "_wait_if_syncing", so they don't actually start processing until after sync has completed. We are only trying to prevent messages from being processed between the start of the process lifetime and the beginning of the initial sync. That window is what leads to the issues we have noted.

To address yesterday's question about the initial delay being long: I don't think it needs to be more than ~10 seconds. Any stale RPC messages would be consumed quickly, since they are discarded without doing any real work in the agent.

Regards,
Allain

Allain Legacy, Software Developer, Wind River
direct 613.270.2279 fax 613.492.7870 skype allain.legacy
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
[WIND]<http://www.windriver.com/>

From: Qin, Kailun [mailto:kailun.qin at intel.com]
Sent: Tuesday, December 18, 2018 6:42 AM
To: Legacy, Allain; Peters, Matt
Cc: 'starlingx-discuss at lists.starlingx.io'
Subject: RE: Questions about patch fd6cfc upstreaming

Hello Allain,

The community responded with another question/case:

1. The agent starts and is doing a full sync - so it gets the list of ports and networks from the server and starts configuring them one by one, right?
2. During this time, processing of incoming RPC messages is blocked, right?
3. Now (still during the initial full sync) someone deletes ports, so a port-delete-end message is sent to the DHCP agent, but the agent refuses to process this message, right?
4. The full sync ends and the agent is still handling the port which was deleted in 3. - am I right? Or will it be cleaned up somehow?

It sounds like a good question to me based on our current implementation. What do you think?

BR,
Kailun

From: Qin, Kailun
Sent: Tuesday, December 18, 2018 8:36 AM
To: 'Legacy, Allain' <Allain.Legacy at windriver.com<mailto:Allain.Legacy at windriver.com>>; Peters, Matt <Matt.Peters at windriver.com<mailto:Matt.Peters at windriver.com>>
Cc: 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>>
Subject: RE: Questions about patch fd6cfc upstreaming

Allain,

Thanks a lot for your comments. Makes sense to me. Let's keep with the proposed agent delay approach and see how it goes with the Neutron team.

BR,
Kailun

From: Legacy, Allain [mailto:Allain.Legacy at windriver.com]
Sent: Monday, December 17, 2018 11:11 PM
To: Qin, Kailun <kailun.qin at intel.com<mailto:kailun.qin at intel.com>>; Peters, Matt <Matt.Peters at windriver.com<mailto:Matt.Peters at windriver.com>>
Cc: 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>>
Subject: RE: Questions about patch fd6cfc upstreaming

In my opinion, it does not matter how long the full sync takes. Processing any RPC messages, even ones that are not stale, before the initial full sync completes is not guaranteed to provide consistent results.

For example, if a port-update-end arrives before that port is received as part of the initial sync, it will unnecessarily result in a full resync on that port's network. Similarly, if a port-delete-end arrives before that port is received as part of the initial sync, then it will be added to the "deleted_ports" list; but that list is not referenced during the full sync, so the information for that port will remain in the DHCP configuration for that network even though the port no longer exists. That will cause issues later when a new port is created and uses the IP address of that deleted port.

If the core reviewers prefer using timestamps embedded within the RPC payload, then we can explore that option, but that will come with backward-compatibility constraints and additional complexity.

Regards,
Allain

Allain Legacy, Software Developer, Wind River
direct 613.270.2279 fax 613.492.7870 skype allain.legacy
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
[WIND]<http://www.windriver.com/>

From: Qin, Kailun [mailto:kailun.qin at intel.com]
Sent: Monday, December 17, 2018 9:29 AM
To: Legacy, Allain; Peters, Matt
Cc: 'starlingx-discuss at lists.starlingx.io'
Subject: RE: Questions about patch fd6cfc upstreaming

Hi Allain, Matt,

I followed up on this RFE (https://bugs.launchpad.net/neutron/+bug/1795212) and received one piece of feedback from a Neutron core reviewer. He thinks that the delay isn't a good approach, because how much time the agent will need to do a full sync after restart is unknown. He prefers something based on timestamps, discarding messages which came before the agent was started.

I believe you've also considered the timestamp/sequence/lifetime-number based approach, so that stale messages can be discarded with more certainty. What's your opinion? Should we keep the delay approach for the DHCP agent and discuss further in the driver meeting to get more feedback, in the spirit of avoiding compatibility changes and changes that would impact running against an unmodified server? Or should we change our investigation direction to the timestamp-based solution?

Thanks!

BR,
Kailun

From: Qin, Kailun
Sent: Wednesday, November 21, 2018 11:01 AM
To: Legacy, Allain <Allain.Legacy at windriver.com<mailto:Allain.Legacy at windriver.com>>; Peters, Matt <Matt.Peters at windriver.com<mailto:Matt.Peters at windriver.com>>
Cc: starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>
Subject: RE: Questions about patch fd6cfc upstreaming

Allain,

Great, thanks for the information! The scenario is reasonable and detailed enough for me. Let's feed this back, along with some other follow-up answers, to the Neutron team and see how it goes.

BR,
Kailun

From: Legacy, Allain [mailto:Allain.Legacy at windriver.com]
Sent: Wednesday, November 21, 2018 2:13 AM
To: Qin, Kailun <kailun.qin at intel.com<mailto:kailun.qin at intel.com>>; Peters, Matt <Matt.Peters at windriver.com<mailto:Matt.Peters at windriver.com>>
Cc: starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>
Subject: RE: Questions about patch fd6cfc upstreaming

We only observed this type of issue in a large office configuration where the neutron-server is overloaded during a DOR test (dead office recovery), where all nodes are powered off and back on. In such a scenario the system is overloaded for an extended period, and there is a long delay between when events occur and when notifications are received by subscribers. It is difficult to reproduce this on small systems where the time between event and notification is short.

I don't remember the exact details of the entire scenario, but the high-level issue was that we wanted to avoid agents receiving and processing RPC messages that were sent to them before they started up. That happens more frequently in a DOR test because the server has a stale view of the system state and can send RPC messages to nodes that are not enabled yet. That is, its agent DB table may show that all agents are healthy, depending on how long it took for the DOR to recover the controller node.

What we found was that it was possible for the server to think that the agent was up when it was actually down. During the window where the server sees the agent as up, it can send it RPC messages. Those messages get queued up and delivered to the agent once it is finally up. The problem is that since the agent was not actually up in the first place, those messages were never really valid. Therefore we wanted the agent to discard any RPC requests until after it was able to resync to the server. This allowed the system to avoid unnecessary transitions based on old data.

One of the specific problems that this was addressing was something like this:

1. A subnet had no remaining IP addresses to allocate.
2. A DHCP agent (agent-X) received a stale message to "create network", so it reserved a DHCP port with an IP address (this used the last available IP address).
3. Meanwhile, the DHCP agent (agent-Y) that actually was assigned the network came up and was not able to reserve a DHCP port because there were no IP addresses available.
4. The first agent (agent-X) was taken down because its node was rebooted by system maintenance.
5. The second agent (agent-Y) never retries the DHCP port creation, because the DHCP agent has no periodic audit, so there was no DHCP server servicing the network.

Regards,
Allain

Allain Legacy, Software Developer, Wind River
direct 613.270.2279 fax 613.492.7870 skype allain.legacy
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
[WIND]<http://www.windriver.com/>

From: Qin, Kailun [mailto:kailun.qin at intel.com]
Sent: Tuesday, November 20, 2018 2:02 AM
To: Peters, Matt
Cc: starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>
Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming

Hi Matt,

I'm working on the patch fd6cfc upstreaming, which tries to address the stale RPC message issue seen when the DHCP agent restarts. The patch is in good shape (https://review.openstack.org/609463/), but the neutron community is questioning the exact failure modes of this issue. The DHCP agent performs a full sync after the agent restarts (https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L195); what kind of corner cases and negative behaviors could happen even with this full sync?

Based on the commit message, I tried to reproduce this issue with the following steps:

1. Schedule network1 to agent1.
2. Turn down agent1 at almost the same time.
3. network1 is rescheduled to agent2 after finding that agent1 is dead.
4. Turn up agent1, expecting stale RPC messages to be received by agent1 so that both agent1 and agent2 are servicing network1.

However, I could only reproduce the described failure mode by sending another scheduling operation (network1->agent1) after step 2) is done. For the others, they seem to work as expected.

Would you please kindly help provide more details about the failure pattern of this issue and/or the reproduction steps? Thanks a lot.

BR,
Kailun
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20181221/72de203c/attachment-0001.html>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 1807 bytes
Desc: image001.png
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20181221/72de203c/attachment-0001.png>
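To make the timestamp option discussed above concrete, here is a minimal sketch of how an agent could discard notifications stamped before its own start time. All names are illustrative; this is not the actual neutron DHCP-agent code, it assumes the server side stamps a "sent_at" field into each payload (the backward-compatibility cost Allain mentions), and it ignores clock skew between server and agent. The alternatives the Neutron drivers preferred (rabbitmqctl purge_queue, or an l3-agent-style resource queue) avoid those problems at the cost of other machinery.

---------------------- < snip > ---------------------
# stale_filter.py - illustrative sketch only, not neutron code
import functools
import time


def reject_stale(handler):
    """Drop RPC notifications sent before this agent started."""
    @functools.wraps(handler)
    def wrapper(self, context, payload, **kwargs):
        sent_at = payload.get('sent_at')  # assumed server-side stamp
        # A missing stamp means an unmodified (older) server:
        # fall through and accept, preserving compatibility.
        if sent_at is not None and sent_at < self.started_at:
            return None  # stale: queued while the agent was down
        return handler(self, context, payload, **kwargs)
    return wrapper


class DhcpAgentSketch(object):
    def __init__(self):
        # Recorded once, before the initial full sync begins.
        self.started_at = time.time()

    @reject_stale
    def port_update_end(self, context, payload, **kwargs):
        pass  # normal processing would go here
---------------------- < snip > ---------------------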
From ada.cabrales at intel.com Fri Dec 21 22:59:15 2018
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Fri, 21 Dec 2018 22:59:15 +0000
Subject: [Starlingx-discuss] [ Test ][ discussion ] Unified test framework
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4E41A@FMSMSX114.amr.corp.intel.com>

Hello,

We currently have 2 testing frameworks proposed:

- Robot [0] - the sanity check at Intel's premises is done using it
  - Deployment on a virtual environment and running the tests is automated.
  - ~200 tests automated so far.
- PyTest [1] - used by Wind River for their testing
  - A large number of test cases automated (Numan, can you provide a number?)

Both frameworks are similar; some re-work will be required on one of the sides to align with the chosen one.

What I would like to have is an informed decision, bringing the best impact to the project and thinking about the future, not only the current picture.

Even knowing these days are going to be quiet, I want to continue the conversation: which one best serves StarlingX?

Regards
Ada

[0] http://robotframework.org/
[1] https://docs.pytest.org/en/latest/
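For readers weighing the two options, a test in the second style looks roughly like the sketch below. It is illustrative only - not taken from the Wind River suite; the fixture and its values are invented for the example.

---------------------- < snip > ---------------------
# test_sketch.py - what a pytest-style StarlingX check might look like
import pytest


@pytest.fixture
def controller():
    # A real suite would ssh to the lab, query the system,
    # and yield a live handle; this stub stands in for that.
    return {"name": "controller-0", "availability": "available"}


def test_controller_available(controller):
    assert controller["availability"] == "available"
---------------------- < snip > ---------------------

Run with: pytest test_sketch.py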
From juan.carlos.alonso at intel.com Fri Dec 21 23:23:30 2018
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Fri, 21 Dec 2018 23:23:30 +0000
Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20181221
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C7DE3A@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2018-Dec-21 (link<http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20181221T060000Z/>)

Sanity Test is executed in a Virtual Environment

Status: GREEN

Simplex
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           18 TCs [PASS]
TOTAL: [ 23 TCs PASS ]

Duplex
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

Multinode Controller Storage
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

Multinode Dedicated Storage
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

------------------------------------------------------------------
The compute host personality changed from "compute" to "worker". Take it into consideration if you deploy the system manually. An update was sent to the test suite but is not merged yet.

Regards.
Juan Carlos Alonso
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20181221/800a5b75/attachment.html>

From Numan.Waheed at windriver.com Mon Dec 24 15:26:03 2018
From: Numan.Waheed at windriver.com (Waheed, Numan)
Date: Mon, 24 Dec 2018 15:26:03 +0000
Subject: [Starlingx-discuss] Test Stories for stx.2019.05 and Test Infrastructure
Message-ID: <3CAA827B7A79BA46B15B280EC82088FE4824E574@ALA-MBD.corp.ad.wrs.com>

Hi,

I have created three test stories for the upcoming stx release and for test infrastructure. Please take a look and feel free to comment.

2004671: [Test] stx - Creation of Test Dashboard
https://storyboard.openstack.org/#!/story/2004671

2004672: [Test] stx - Creating Regression Test Suite for stx.2019.05 Release
https://storyboard.openstack.org/#!/story/2004672

2004674: [Test] stx - Creation of Test Artifacts Repository
https://storyboard.openstack.org/#!/story/2004674

Thanks,
Numan.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20181224/874e0c77/attachment.html>

From chenzz at certusnet.com.cn Tue Dec 25 06:19:08 2018
From: chenzz at certusnet.com.cn (chenzz)
Date: Tue, 25 Dec 2018 14:19:08 +0800
Subject: [Starlingx-discuss] Problems in build StarlingX mirror. Asking for help.
References: <201812110931334306841@certusnet.com.cn>, <201812200955395695071@certusnet.com.cn>, <C3BACC2D-9A94-46D5-A545-256C73C82EC2@intel.com>, <201812201353342045172@certusnet.com.cn>, <D85CA32710171642A61ACC671177FF566872C5D3@SHSMSX103.ccr.corp.intel.com>
Message-ID: <201812251419079227001@certusnet.com.cn>

Hi:

I found there is no "build-pkgs" command on the PATH. In addition, in my environment the "build-pkgs" file is in the directory /root/cgcs-root/build-tools, not in /localdisk/designer/{your name}/starlingx/cgcs-root/build-tools/build-pkgs. After manually executing "./build-pkgs", the following error is printed:

Dependency cache is missing.  Creating it now.
Traceback (most recent call last):
  File "/root/cgcs-root/build-tools/create_dependancy_cache.py", line 75, in <module>
    workspace_repo_dirs[rt][bt]="%s/%s/rpmbuild/%sS" % (os.environ['MY_WORKSPACE'], bt, rt)
  File "/usr/lib64/python2.7/UserDict.py", line 23, in __getitem__
    raise KeyError(key)
KeyError: 'MY_WORKSPACE'
Dependency cache created.
build-pkgs-parallel
./build-pkgs: line 54: build-pkgs-parallel: command not found

CertusNet
CertusNet Information Technology Co., Ltd.
Chen Zhengzheng | Field Engineer
Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing
Mobile: 152 9559 2415
Website: www.certusnet.com.cn

From: Sun, Austin
Date: 2018-12-20 14:25
To: chenzz; Hu, Yong; starlingx-discuss
Subject: RE: [Starlingx-discuss] Problems in build StarlingX mirror. Asking for help.

Hi:
Could you run "which build-pkgs" in the build container and check the path? And check whether /localdisk/designer/{your name}/starlingx/cgcs-root/build-tools/build-pkgs exists?

Thanks.
BR
Austin Sun.

From: chenzz [mailto:chenzz at certusnet.com.cn]
Sent: Thursday, December 20, 2018 1:54 PM
To: Hu, Yong <yong.hu at intel.com>; starlingx-discuss <starlingx-discuss at lists.starlingx.io>
Subject: Re: [Starlingx-discuss] Problems in build StarlingX mirror. Asking for help.

Hi Yong,

Thanks for your reply; one more question: in the following build-packages steps, I try to execute the "build-pkgs" command and it reports command not found. Could you answer this question for me?

BRs

CertusNet
CertusNet Information Technology Co., Ltd.
Chen Zhengzheng | Field Engineer
Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing
Mobile: 152 9559 2415
Website: www.certusnet.com.cn

From: Hu, Yong
Date: 2018-12-20 10:38
To: chenzz; starlingx-discuss
Subject: Re: [Starlingx-discuss] Problems in build StarlingX mirror. Asking for help.

Bill,
I suppose you were running "make" in "stx-tools". Here are some steps FYI. In particular, make sure you understand the comments starting with #.
--------------------- Start Here --------------------------------------------------
# update localrc if needed - MUST DO

# use Makefile to build a docker image based on your settings
# NOTES: in advance, add http/https proxies and "/etc/yum.conf"
# into the Dockerfile; these will be used in CentOS
make all

# after quite a while (depending on network), the docker
# image should be made; check it by:
docker images

# if the image is made successfully, to make a container:
./tb.sh run

# after the container is made and running, to check:
docker ps

# if the container is running, to log in to a shell in the
# container, with user/path defined in your localrc:
./tb.sh exec

From: chenzz <chenzz at certusnet.com.cn>
Date: Thursday, 20 December 2018 at 10:14 AM
To: starlingx-discuss <starlingx-discuss at lists.starlingx.io>
Subject: Re: [Starlingx-discuss] Problems in build StarlingX mirror. Asking for help.

hi there:

My deployment environment is Ubuntu 16.04. When making ISO images, I executed the 'make base-build' command and it reports: make: *** No rule to make target `base-build'. Stop. It seems out of order. So I executed the 'make' command instead, and that seems to work. Please tell me how to execute the 'make base-build' command correctly. When I get to the 'build-pkgs' command, it reports command not found; I suspect it might be because I executed the 'make' command. Please tell me how to solve this problem as well.

Thanks very much

Bill chen | CertusNet
Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing, 210042, P.R.China
Tel: +86 25 6642 3768 -XXXX
Mobile: +86 15295592415
Web: www.certusnet.com.cn
-------------- next part --------------
An HTML attachment was scrubbed...
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20181225/5661041c/attachment-0001.html>

From austin.sun at intel.com Tue Dec 25 06:51:31 2018
From: austin.sun at intel.com (Sun, Austin)
Date: Tue, 25 Dec 2018 06:51:31 +0000
Subject: [Starlingx-discuss] Problems in build StarlingX mirror. Asking for help.
In-Reply-To: <201812251419079227001@certusnet.com.cn>
References: <201812110931334306841@certusnet.com.cn>, <201812200955395695071@certusnet.com.cn>, <C3BACC2D-9A94-46D5-A545-256C73C82EC2@intel.com>, <201812201353342045172@certusnet.com.cn>, <D85CA32710171642A61ACC671177FF566872C5D3@SHSMSX103.ccr.corp.intel.com> <201812251419079227001@certusnet.com.cn>
Message-ID: <D85CA32710171642A61ACC671177FF56687313B2@SHSMSX103.ccr.corp.intel.com>

Hi:
Are you running this command in Docker? Did you run "bash tb.sh exec" to enter Docker?

Thanks.
BR
Austin Sun

From: chenzz [mailto:chenzz at certusnet.com.cn]
Sent: Tuesday, December 25, 2018 2:19 PM
To: Sun, Austin <austin.sun at intel.com>; Hu, Yong <yong.hu at intel.com>; starlingx-discuss <starlingx-discuss at lists.starlingx.io>
Subject: Re: RE: [Starlingx-discuss] Problems in build StarlingX mirror. Asking for help.

Hi:

I found there is no "build-pkgs" command on the PATH. In addition, in my environment the "build-pkgs" file is in the directory /root/cgcs-root/build-tools, not in /localdisk/designer/{your name}/starlingx/cgcs-root/build-tools/build-pkgs. After manually executing "./build-pkgs", the following error is printed:

Dependency cache is missing.  Creating it now.
Traceback (most recent call last):
  File "/root/cgcs-root/build-tools/create_dependancy_cache.py", line 75, in <module>
    workspace_repo_dirs[rt][bt]="%s/%s/rpmbuild/%sS" % (os.environ['MY_WORKSPACE'], bt, rt)
  File "/usr/lib64/python2.7/UserDict.py", line 23, in __getitem__
    raise KeyError(key)
KeyError: 'MY_WORKSPACE'
Dependency cache created.
build-pkgs-parallel
./build-pkgs: line 54: build-pkgs-parallel: command not found

CertusNet Information Technology Co., Ltd.
Bill Chen | Field Engineer
Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing
Mobile: 152 9559 2415
Web: www.certusnet.com.cn
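A note for readers hitting the same traceback: the KeyError means only that MY_WORKSPACE is not set in the environment of the shell running the script; it is not a bug in create_dependancy_cache.py itself. A minimal illustration, runnable in any Python 2 environment:

$ python2 -c "import os; os.environ['MY_WORKSPACE']"
Traceback (most recent call last):
  File "<string>", line 1, in <module>
  File "/usr/lib64/python2.7/UserDict.py", line 23, in __getitem__
    raise KeyError(key)
KeyError: 'MY_WORKSPACE'

In a tbuilder container created with a proper localrc, this variable is normally exported for you; seeing the error usually means the build scripts are being run outside that environment (here, as root under /root).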
From ada.cabrales at intel.com Fri Dec 21 22:04:36 2018
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Fri, 21 Dec 2018 22:04:36 +0000
Subject: [Starlingx-discuss] [ Test ] PyTest framework overview - minutes
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4E3A5@FMSMSX114.amr.corp.intel.com>

Meeting on 12/21
Attendees: Yang Liu, Elio, Jose, Bill, Bruce, JC, Numan, Victor

* Presentation about PyTest, by Yang Liu from Wind River
  o Criteria for choosing a test framework
    * Maintainability
    * Debuggability
    * Flexibility
    * Scalability
  o Easy to write and maintain tests
  o Flexibility in test execution
  o Strong debug support

Thank you, Yang! An email thread for deciding which framework to use will be sent to the mailing list.
- Ada

Attaching the presentation for more information.
Happy Holidays!
Ada

-----Original Appointment-----
From: Cabrales, Ada
Sent: Monday, December 17, 2018 4:11 PM
To: Cabrales, Ada; 'starlingx-discuss at lists.starlingx.io'
Cc: 'Waheed, Numan'; Hernandez Gonzalez, Fernando; Cobbley, David A; Jones, Bruce E; Armstrong, Robert H; Gomez, Juan P; Alonso, Juan Carlos; 'Rowsell, Brent'; Hu, Wei W; 'Young, Ken'; Perez Carranza, Jose; 'Carlos Cebrian'; Botello Ortega, Luis; Seiler, Glenn; Liu, Yang; Waines, Greg; Chen, Jacky; Martinez Monroy, Elio
Subject: [ Test ] PyTest framework overview
When: Friday, December 21, 2018 10:00 AM-11:00 AM (UTC-06:00) Guadalajara, Mexico City, Monterrey.
Where: Zoom info included

Hello, StarlingX community
Continuing with the topic of a unified testing framework, Numan will present PyTest and the good things they've found about it.
Join us this Friday.
Ada

Zoom link: https://zoom.us/j/342730236
* Dialing in from phone:
* Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
* Meeting ID: 342 730 236
* International numbers available: https://zoom.us/u/ed95sU7aQ

-------------- next part --------------
A non-text attachment was scrubbed...
Name: stx_AutomationFWComparison.pdf
Type: application/pdf
Size: 631425 bytes
Desc: stx_AutomationFWComparison.pdf
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20181221/03fae7db/attachment-0001.pdf>

From chenzz at certusnet.com.cn Tue Dec 25 06:56:06 2018
From: chenzz at certusnet.com.cn (chenzz)
Date: Tue, 25 Dec 2018 14:56:06 +0800
Subject: [Starlingx-discuss] Problems building the StarlingX mirror. Asking for help.
Message-ID: <201812251456063332274@certusnet.com.cn>

Hi:
Yes, it's running in Docker; I entered the container with "bash tb.sh exec".

CertusNet Information Technology Co., Ltd.
Bill Chen | Field Engineer
Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing
Mobile: 152 9559 2415
Web: www.certusnet.com.cn

From: Sun, Austin
Date: 2018-12-25 14:51
To: chenzz; Hu, Yong; starlingx-discuss
Subject: RE: RE: [Starlingx-discuss] Problems building the StarlingX mirror. Asking for help.
Hi:
Are you running this command in Docker? Did you run "bash tb.sh exec" to enter the container?
Thanks.
BR
Austin Sun
From austin.sun at intel.com Wed Dec 26 01:05:08 2018
From: austin.sun at intel.com (Sun, Austin)
Date: Wed, 26 Dec 2018 01:05:08 +0000
Subject: [Starlingx-discuss] Problems building the StarlingX mirror. Asking for help.
In-Reply-To: <201812251456063332274@certusnet.com.cn>
Message-ID: <D85CA32710171642A61ACC671177FF5668731A74@SHSMSX103.ccr.corp.intel.com>

It seems your environment is not configured correctly. Could you run 'bash tb.sh env' to check your configuration?

From: chenzz [mailto:chenzz at certusnet.com.cn]
Sent: Tuesday, December 25, 2018 2:56 PM
To: Sun, Austin <austin.sun at intel.com>; Hu, Yong <yong.hu at intel.com>; starlingx-discuss <starlingx-discuss at lists.starlingx.io>
Subject: Re: RE: [Starlingx-discuss] Problems building the StarlingX mirror. Asking for help.
Hi:
Yes, it's running in Docker; I entered the container with "bash tb.sh exec".

CertusNet Information Technology Co., Ltd.
Bill Chen | Field Engineer
Building 18, No. 699-22 Xuanwu Avenue, Xuanwu District, Nanjing
Mobile: 152 9559 2415
Web: www.certusnet.com.cn
From yong.hu at intel.com Wed Dec 26 01:26:31 2018
From: yong.hu at intel.com (Hu, Yong)
Date: Wed, 26 Dec 2018 01:26:31 +0000
Subject: [Starlingx-discuss] Problems building the StarlingX mirror. Asking for help.
Message-ID: <EBF0EB6F-DC12-4D79-82C0-99BBDCE404B8@intel.com>

Looking at "/root/cgcs-root/build-tools": are you using "root"? We cannot build as root!
In localrc, MYUNAME must be a normal, non-root user, for example:

ec at ec-dpl-nuc04:~/stx-tools$ cat localrc
# tbuilder localrc
MYUNAME=your_normal_nonroot_user_name
PROJECT=starlingx
HOST_PREFIX=$HOME/starlingx/workspace
HOST_MIRROR_DIR=$HOME/starlingx/mirror

From: "Sun, Austin" <austin.sun at intel.com>
Date: Wednesday, 26 December 2018 at 9:05 AM
To: chenzz <chenzz at certusnet.com.cn>, "Hu, Yong" <yong.hu at intel.com>, starlingx-discuss <starlingx-discuss at lists.starlingx.io>
Subject: RE: RE: [Starlingx-discuss] Problems building the StarlingX mirror. Asking for help.
It seems your environment is not configured correctly. Could you run 'bash tb.sh env' to check your configuration?
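For reference, this is roughly what a healthy tbuilder setup looks like from inside the container (a sketch only: the exact paths follow from MYUNAME and PROJECT in your localrc, and <yourname> below is a placeholder):

$ bash tb.sh exec
# now inside the container; the tbuilder profile should have exported
# the build variables and put the build-tools directory on PATH
$ env | grep -E '^MY_(REPO|WORKSPACE)='
MY_REPO=/localdisk/designer/<yourname>/starlingx/cgcs-root
MY_WORKSPACE=/localdisk/loadbuild/<yourname>/starlingx
$ which build-pkgs
/localdisk/designer/<yourname>/starlingx/cgcs-root/build-tools/build-pkgs

If MY_WORKSPACE is unset or "which build-pkgs" returns nothing, the KeyError and "command not found" reported earlier in this thread are expected, and it is the container environment, not the build scripts, that needs fixing.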
From zhiwei.he at intel.com Thu Dec 27 07:34:02 2018
From: zhiwei.he at intel.com (He, Zhiwei)
Date: Thu, 27 Dec 2018 07:34:02 +0000
Subject: [Starlingx-discuss] The install progress information is missing when installing compute node, storage node, and new controller node
Message-ID: <09777F12A863964D9F4AE780D4FEA3E83E6337D5@SHSMSX104.ccr.corp.intel.com>

Hi experts,

Issue description:
===================================================================================================================================
Greetings from Intel NPG's Zhiwei. We are trying to use the StarlingX build "stx-2018-11-13-nova-ibrs.iso" to enable compute nodes, Ceph storage nodes, etc.
We hit an issue: no install progress information is shown while a node is installing. When installing a controller node, Ceph storage node or compute node, the host just shows status "offline"; the install progress information is missing and the progress status bar does not show on the web page.
Only once the install succeeds does the host show "availability | online".
The issue does not happen in Titanium Cloud 18.09.

Issue logs:
===================================================================================================================================
[root at controller-0 ~(keystone_admin)]# system host-show compute-1
+---------------------+--------------------------------------+
| Property            | Value                                |
+---------------------+--------------------------------------+
| action              | none                                 |
| administrative      | locked                               |
| availability        | offline                              |
| bm_ip               | None                                 |
| bm_type             | None                                 |
| bm_username         | None                                 |
| boot_device         | sda                                  |
| capabilities        | {}                                   |
| config_applied      | None                                 |
| config_status       | None                                 |
| config_target       | None                                 |
| console             | ttyS0,115200                         |
| created_at          | 2018-12-27T13:54:56.393550+00:00     |
| hostname            | compute-1                            |
| id                  | 24                                   |
| install_output      | text                                 |
| install_state       | None                                 |
| install_state_info  | None                                 |
| invprovision        | None                                 |
| location            | {}                                   |
| mgmt_ip             | 192.168.204.15                       |
| mgmt_mac            | b4:96:91:0d:ee:0c                    |
| operational         | disabled                             |
| personality         | compute                              |
| reserved            | False                                |
| rootfs_device       | sda                                  |
| serialid            | None                                 |
| software_load       | 18.10                                |
| subfunctions        | compute,lowlatency                   |
| task                |                                      |
| tboot               | false                                |
| ttys_dcd            | None                                 |
| updated_at          | 2018-12-27T13:56:36.260324+00:00     |
| uptime              | 0                                    |
| uuid                | ec03234c-29b6-4df4-b027-f4f51a4ffb85 |
| vim_progress_status | None                                 |
+---------------------+--------------------------------------+

Thanks!
zhiwei

From cindy.xie at intel.com Thu Dec 27 09:10:52 2018
From: cindy.xie at intel.com (Xie, Cindy)
Date: Thu, 27 Dec 2018 09:10:52 +0000
Subject: [Starlingx-discuss] The install progress information is missing when installing compute node, storage node, and new controller node
In-Reply-To: <09777F12A863964D9F4AE780D4FEA3E83E6337D5@SHSMSX104.ccr.corp.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E1CF0C@SHSMSX104.ccr.corp.intel.com>

+ Yan: is this something related to stx-gui?

From: He, Zhiwei [mailto:zhiwei.he at intel.com]
Sent: Thursday, December 27, 2018 3:34 PM
To: starlingx-discuss <starlingx-discuss at lists.starlingx.io>
Cc: Li, Baoqian <baoqian.li at intel.com>
Subject: [Starlingx-discuss] The install progress information is missing when installing compute node, storage node, and new controller node
Hi experts,
We are trying to use the StarlingX build "stx-2018-11-13-nova-ibrs.iso" to enable compute nodes, Ceph storage nodes, etc. No install progress information is shown while a node is installing; the host just shows status "offline", and the progress status bar does not show on the web page. The issue does not happen in Titanium Cloud 18.09.
From Don.Penney at windriver.com Thu Dec 27 15:02:18 2018
From: Don.Penney at windriver.com (Penney, Don)
Date: Thu, 27 Dec 2018 15:02:18 +0000
Subject: [Starlingx-discuss] The install progress information is missing when installing compute node, storage node, and new controller node
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35E1CF0C@SHSMSX104.ccr.corp.intel.com>
Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA40C944@ALA-MBD.corp.ad.wrs.com>

There are multiple components to the progress notifications.

1. The pxelinux.cfg files have a tisnotify=URL parameter that the patched Anaconda uses as a destination URL for notifications (look in the /pxeboot/pxelinux.cfg/ files on the active controller):

tisnotify=http://pxecontroller:6385/v1/ihosts/81adb329-0641-496e-acce-04a9d13cf865/install_progress

2. Anaconda has a patch (stx-integ/base/anaconda/centos/patches/0001-TIS-Progress-and-error-handling.patch) that adds notification support.
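A quick, illustrative way to confirm item 1 on the active controller (the per-host config file name and the host UUID below are placeholders; pxelinux names the files after the node's MAC address):

[root at controller-0 ~]# grep -o 'tisnotify=[^ ]*' /pxeboot/pxelinux.cfg/01-b4-96-91-0d-ee-0c
tisnotify=http://pxecontroller:6385/v1/ihosts/<host-uuid>/install_progress

If no tisnotify parameter shows up, the installer was never told where to report progress, and the dashboard has nothing to display.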
While your host is installing, you should be able to access the installer shell and check that the source code is patched, or check the content of the squashfs.img (/www/pages/feed/rel-19.01/LiveOS/squashfs.img).

Example:
controller-1:~/dpenney# ORIG_SQUASHFS=/www/pages/feed/rel-19.01/LiveOS/squashfs.img
controller-1:~/dpenney# mkdir squashfs.mnt
controller-1:~/dpenney# mount -o loop -t squashfs $ORIG_SQUASHFS squashfs.mnt
controller-1:~/dpenney# mkdir LiveOS
controller-1:~/dpenney# cp squashfs.mnt/LiveOS/rootfs.img LiveOS/
controller-1:~/dpenney# umount squashfs.mnt
controller-1:~/dpenney# mkdir squashfs.work
controller-1:~/dpenney# mount -o loop LiveOS/rootfs.img squashfs.work
controller-1:~/dpenney# cd squashfs.work
controller-1:~/dpenney/squashfs.work# ls usr/lib64/python2.7/site-packages/pyanaconda/tisnotify.py
usr/lib64/python2.7/site-packages/pyanaconda/tisnotify.py
controller-1:~/dpenney/squashfs.work# cd ..
controller-1:~/dpenney# umount squashfs.work

3. Sysinv records the install_progress notification in the database as install_state, whether it's a string state or a number (percentage).

4. The dashboard reads and displays the install_state, with a progress bar if it's a percentage (stx-gui/starlingx-dashboard/starlingx-dashboard/starlingx_dashboard/dashboards/admin/inventory/tables.py).

If your installer rootfs (squashfs.img) doesn't have the patch, you won't get notifications. Similarly, if the restructured dashboard isn't loading this properly, you won't see a progress bar.

From: Xie, Cindy [mailto:cindy.xie at intel.com]
Sent: Thursday, December 27, 2018 4:11 AM
To: He, Zhiwei; starlingx-discuss; Chen, Yan
Cc: Li, Baoqian
Subject: Re: [Starlingx-discuss] The install progress information is missing when installing compute node, storage node, and new controller node
+ Yan: is this something related to stx-gui?
From Bill.Zvonar at windriver.com Fri Dec 28 20:24:31 2018
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Fri, 28 Dec 2018 20:24:31 +0000
Subject: [Starlingx-discuss] [ Test ][ discussion ] Unified test framework
In-Reply-To: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4E41A@FMSMSX114.amr.corp.intel.com>
References: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4E41A@FMSMSX114.amr.corp.intel.com>
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC09D75EA@ALA-MBD.corp.ad.wrs.com>

Hi Ada/Numan - apologies if this was discussed & I don't recall - is it an option for us to carry on with both (as long as they can both feed up into the same dashboard)?

-----Original Message-----
From: Cabrales, Ada <ada.cabrales at intel.com>
Sent: Friday, December 21, 2018 5:59 PM
To: 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io>
Subject: [Starlingx-discuss] [ Test ][ discussion ] Unified test framework

Hello,

We currently have 2 testing frameworks proposed:

- Robot [0]
  - Sanity check at Intel's premises is done using it
  - Deployment on a virtual environment, and running the tests, is automated
  - ~200 tests automated so far
- PyTest [1]
  - Used by Wind River for their testing
  - A large number of test cases automated (Numan, can you provide a number?)

Both frameworks are similar; some re-work will be required on one side to align with the chosen one.
What I would like to have is an informed decision, bringing the best impact to the project and thinking about the future, not only the current picture.
Even knowing these days are going to be quiet, I want to continue the conversation: which one best serves StarlingX?
Regards
Ada

[0] http://robotframework.org/
[1] https://docs.pytest.org/en/latest/

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From vm.rod25 at gmail.com Sat Dec 29 00:32:53 2018
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Fri, 28 Dec 2018 18:32:53 -0600
Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security
In-Reply-To: <CAJ_JamCVxZf0q3w9276Kky027ApQL7SwqpL7wE2SjqRsZhphQw@mail.gmail.com>
References: <CAK5mtezidsKF61Mpn3YcKbYgB1U0tqGmFC-_Y0AV-fSVRZR7Vg@mail.gmail.com> <CAJ_JamCVxZf0q3w9276Kky027ApQL7SwqpL7wE2SjqRsZhphQw@mail.gmail.com>
Message-ID: <CAK5mtew+jC02UnfUm+X2_NEWcKWf1CSJ9KKB_0tbvcoiMRV67A@mail.gmail.com>

On Fri, Dec 21, 2018, 07:08 Curtis <serverascode at gmail.com> wrote:

> On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez <vm.rod25 at gmail.com> wrote:
>
>> Hi StarlingX community
>>
>> We can all agree that security is an important feature to be taken
>> into consideration in any SW project. Aiming to improve the security
>> of the StarlingX project, we have taken on the task of proposing
>> compiler flags that prevent and detect some security holes,
>> especially buffer overflows that could lead to ROP attacks.
>>
>> The flags we are proposing are:
>>
>> Stack-based buffer overrun detection: CFLAGS="-fstack-protector-strong"
>> Fortify source: CFLAGS="-O2 -D_FORTIFY_SOURCE=2"
>> Format string vulnerabilities: CFLAGS="-Wformat -Wformat-security"
>> Stack execution protection: LDFLAGS="-z noexecstack"
>> Data relocation and protection (RELRO): LDFLAGS="-z relro -z now"
>>
>> These are being analyzed in the following Gerrit reviews (thanks a lot
>> for all the good feedback):
>>
>> https://review.openstack.org/#/c/623608/
>> https://review.openstack.org/#/c/623603/
>> https://review.openstack.org/#/c/623601/
>> https://review.openstack.org/#/c/623599/
>>
>> As requested in the Gerrit reviews, we first need to understand what
>> these compiler flags do and what impact they have on the functional
>> and performance behavior of the project. This is a preliminary
>> report; we will follow up with functional and performance test plans
>> for the services as a next step. This report includes:
>>
>> * A detailed description of what each compiler flag does
>> * A code example that shows how it works to prevent attacks
>> * If there is a change in the binary, a microbenchmark that shows how
>>   the flag impacts performance
>>
>> https://github.com/VictorRodriguez/hobbies/tree/master/c_programing_exercises/cflags_security
>>
>> As a result of the microbenchmarks, the performance impact is not
>> significant (less than 1%) on an Ubuntu x86 system with GCC 5 (more
>> details on the HW and SW specification upon request).
>>
>> The areas of the code we are targeting in the patches are:
>>
>> * stx-ha
>> * stx-metal
>> * stx-nfv
>> * stx-fault
>>
>> We verified that these flags do not break the following after being
>> applied:
>>
>> * The build process of the image
>> * The sanity test cases run after the image is created
>> (Ada can give more details on the sanity report of the image generated
>> with these flags)
>>
>> If running the sanity tests is not enough to prove that a change in
>> compiler flags does not affect functionality, please give us the right
>> path to follow.
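To make these flags concrete, here is a tiny sketch of the class of bug they catch (illustrative only, not taken from the report above; the behavior shown is typical of modern GCC/glibc, and the exact abort message varies by glibc version):

$ cat > overflow_demo.c <<'EOF'
#include <string.h>

int main(int argc, char **argv)
{
    char buf[8];
    if (argc > 1)
        strcpy(buf, argv[1]);   /* overflows buf for arguments longer than 7 chars */
    return 0;
}
EOF
$ gcc -O2 -D_FORTIFY_SOURCE=2 -fstack-protector-strong -Wformat -Wformat-security \
      -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now -o overflow_demo overflow_demo.c
$ ./overflow_demo AAAAAAAAAAAAAAAAAAAAAAAA
*** buffer overflow detected ***: ./overflow_demo terminated
Aborted (core dumped)

Without -D_FORTIFY_SOURCE=2 the overflow would instead be caught at function exit by the stack protector ("stack smashing detected"); with neither flag, it would silently corrupt the stack, which is exactly the kind of primitive ROP attacks build on.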
>>
>> As mentioned before, this is a preliminary report, and we will be
>> following up with functional and performance test plans for the
>> services as a next step.
>>
>> Hope this email helps to clarify some questions related to the flags
>> and starts the follow-up discussion.
>>
>
> Thanks for the context Victor, it's very helpful to me.
>

Hi Curtis, glad it helps. It was fun to do the research.

> One thing I want to mention is something the Kata Containers team was
> talking about at the Berlin OpenStack summit, which is when many small
> performance hits start to add up. They have to be careful to ensure they
> don't have a bunch of smallish-looking changes that add up to a large
> performance hit over a longer period of time.
>

You are right, it's a valid point that we need to take care of too.

> Overall I'm sure the StarlingX project would like to have some performance
> testing, if we don't already, though that can be challenging for an open
> source project. I had mentioned OPNFV's Functest and related projects on
> the TSC call, but now seeing which components are affected I'm not sure
> that would be directly helpful. I look forward to further discussions
> around this area.
>

Thanks for letting me know. I will take a look at OPNFV's Functest and
other projects before the first TSC meeting of 2019, and I will do my
best to come up with a proposal for better performance testing.

Thanks

Victor Rodriguez

> Thanks,
> Curtis
>
>> Regards
>>
>> Victor Rodriguez
>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
> --
> Blog: serverascode.com

From Numan.Waheed at windriver.com Mon Dec 31 14:26:55 2018
From: Numan.Waheed at windriver.com (Waheed, Numan)
Date: Mon, 31 Dec 2018 14:26:55 +0000
Subject: [Starlingx-discuss] [ Test ][ discussion ] Unified test framework
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC09D75EA@ALA-MBD.corp.ad.wrs.com>
References: <4F6AACE4B0F173488D033B02A8BB5B7E7CD4E41A@FMSMSX114.amr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC09D75EA@ALA-MBD.corp.ad.wrs.com>
Message-ID: <3CAA827B7A79BA46B15B280EC82088FE4824FC1C@ALA-MBD.corp.ad.wrs.com>

Yes, that is one possibility. As far as our investigation goes, the reportportal.io dashboard has the capability to integrate with both PyTest and Robot Framework.

Thanks,

Numan.
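For what it's worth, a sketch of what that dual integration could look like (pytest-reportportal and robotframework-reportportal are the published PyPI plugins; the endpoint, project, and token values below are placeholders, and option names may differ between plugin versions):

$ pip install pytest-reportportal robotframework-reportportal

$ cat pytest.ini
[pytest]
# ReportPortal connection for the PyTest suites (values are illustrative)
rp_endpoint = http://reportportal.example.com:8080
rp_uuid = <your-api-token>
rp_project = starlingx
rp_launch = stx_regression

# run the PyTest suites, pushing results to the dashboard
$ pytest --reportportal testcases/

# run the Robot suites against the same project via the plugin's listener
$ robot --listener robotframework_reportportal.listener \
        --variable RP_UUID:<your-api-token> \
        --variable RP_ENDPOINT:http://reportportal.example.com:8080 \
        --variable RP_PROJECT:starlingx \
        --variable RP_LAUNCH:stx_regression \
        tests/

Both runners then report into the same ReportPortal project, which would let the two frameworks coexist behind a single dashboard while a longer-term decision is made.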
-----Original Message-----
From: Zvonar, Bill
Sent: December-28-18 3:25 PM
To: Cabrales, Ada <ada.cabrales at intel.com>; Waheed, Numan <Numan.Waheed at windriver.com>; 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io>
Subject: RE: [ Test ][ discussion ] Unified test framework

Hi Ada/Numan - apologies if this was discussed & I don't recall - is it an option for us to carry on with both (as long as they can both feed up into the same dashboard)?

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss