From himanshugoyal500 at gmail.com  Tue Jan  1 10:04:15 2019
From: himanshugoyal500 at gmail.com (Himanshu Goyal)
Date: Tue, 1 Jan 2019 15:34:15 +0530
Subject: [Starlingx-discuss] Deployment Option

Hi,

Can we deploy StarlingX with two machines: one controller and one compute
node (both nodes on different physical machines)?

Many Thanks,
Himanshu Goyal

From yong.hu at intel.com  Wed Jan  2 00:58:25 2019
From: yong.hu at intel.com (Hu, Yong)
Date: Wed, 2 Jan 2019 00:58:25 +0000
Subject: [Starlingx-discuss] Deployment Option
Message-ID: <2BF69081-E489-4A2D-AF00-624CA869D486@intel.com>

Yes, you can. Though, you'd better have 2 disks on each compute.

From: Himanshu Goyal
Date: Tuesday, 1 January 2019 at 6:06 PM
Subject: [Starlingx-discuss] Deployment Option

> Can we deploy StarlingX with two machines: one controller and one compute
> node (both nodes on different physical machines)?

From shuicheng.lin at intel.com  Wed Jan  2 05:25:36 2019
From: shuicheng.lin at intel.com (Lin, Shuicheng)
Date: Wed, 2 Jan 2019 05:25:36 +0000
Subject: [Starlingx-discuss] Kernel upgrade status & DPDK need be upgraded
Message-ID: <9700A18779F35F49AF027300A49E7C765FE67FAC@SHSMSX101.ccr.corp.intel.com>

Hi Ghada,
Happy New Year!

With the patch disabled, OVS/DPDK now passes the build on CentOS 7.6. Here
is the patch in review: https://review.openstack.org/627749

I plan to upgrade the Mellanox driver itself to 4.5-1.0.1.0 in order to fix
the driver build failure. Then, after the OVS/DPDK upgrade to a new
release, Mellanox adapters will be supported again.

Feel free to contact me if you have any questions on it. Thanks.

Best Regards
Shuicheng

-----Original Message-----
From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com]
Sent: Saturday, December 22, 2018 5:22 AM
Subject: RE: Kernel upgrade status & DPDK need be upgraded

Hi Shuicheng,
As discussed in the networking meeting yesterday, please apply a patch to
the CentOS 7.6 feature branch to temporarily disable the Mellanox drivers
in the openvswitch package. This is the explicit patch in STX that
currently enables them:
https://github.com/openstack/stx-integ/blob/master/networking/openvswitch/centos/meta_patches/0005-enable-mlx-pmds.patch

Please see if this addresses the compile issues you are facing.

The longer-term plan is to upgrade to a new version of openvswitch that
supports DPDK 18.11. Looking at the ovs releases, it seems the next major
release is planned for mid-February.
http://docs.openvswitch.org/en/latest/internals/release-process/#release-scheduling

Please note that my team is out of the office until Jan 2. If you need help
before then, please contact Forrest.

Regards,
Ghada

-----Original Message-----
From: Khalil, Ghada
Sent: Monday, December 17, 2018 11:38 AM
Subject: RE: Kernel upgrade status & DPDK need be upgraded

Hi Shuicheng,
You are correct. The Mellanox drivers are tied to DPDK as well as the
kernel.
At a high level, I see no option but to upgrade DPDK/OVS to 18.11 to align
with the newer kernel and Mellanox drivers. Is there an ovs/ovs-dpdk
release that supports DPDK 18.11 yet? If not, is there information on when
one would be available?

I added this as an agenda item for the next networking team meeting on Dec
20 at 9:15am Eastern Time. https://etherpad.openstack.org/p/stx-networking
We will discuss this in more detail then. Feel free to join us. Zoom
details are on the wiki:
https://wiki.openstack.org/wiki/Starlingx/Meetings#0615am_PDT_.2F_1415_UTC_-_Networking_Team_Call_.28Bi-weekly.29

Regards,
Ghada

--------------
From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com]
Sent: Sunday, December 16, 2018 7:55 PM
Subject: [Starlingx-discuss] Kernel upgrade status & DPDK need be upgraded

Hi all,
We have been working on the kernel upgrade task recently [0]. After
upgrading the kernel, we found that several modules fail to build due to
data structure and function API changes in the kernel. Here is the list of
modules that fail to build with the new kernel:

    mlnx-ofa_kernel
    intel-i40e
    intel-i40evf
    tpmdd
    intel-ixgbe
    drbd
    openvswitch

To fix the build failures, I plan to upgrade these packages to newer
versions that support CentOS 7.6. This upgrade may require packages that
depend on them to be upgraded as well.

Take mlnx-ofa as an example: it is bound to DPDK. Per [1], MLNX_OFED
4.5-1.0.1.0 supports CentOS 7.6. Per [2], DPDK should be upgraded to 18.11,
while our current DPDK is 17.11 and is bound to OVS. And an OVS upgrade may
affect Neutron.

I need the networking team's help to decide the upgrade strategy for
DPDK/OVS. Thanks.

[0]: https://storyboard.openstack.org/#!/story/2004521
[1]: http://www.mellanox.com/page/products_dyn?product_family=26&mtag=linux_sw_drivers
[2]: https://doc.dpdk.org/guides-18.11/rel_notes/release_18_11.html

Best Regards
Shuicheng

From Ken.Young at windriver.com  Wed Jan  2 14:18:11 2019
From: Ken.Young at windriver.com (Young, Ken)
Date: Wed, 2 Jan 2019 14:18:11 +0000
Subject: [Starlingx-discuss] [ Test ][ discussion ] Unified test framework

Numan,

Do you have a pointer to the report portal? I am interested in the
dashboard.

Thanks!
/KenY

On 2018-12-31, 9:28 AM, "Waheed, Numan" wrote:

    Yes. That is one possibility. As far as our investigation goes, the
    reportportal.io dashboard has the capability to integrate with both
    PyTest and RobotFW.

    Thanks,
    Numan.

    -----Original Message-----
    From: Zvonar, Bill
    Sent: December-28-18 3:25 PM
    Subject: RE: [ Test ][ discussion ] Unified test framework

    Hi Ada/Numan - apologies if this was discussed & I don't recall - is
    it an option for us to carry on with both (as long as they can both
    feed up into the same dashboard)?
    -----Original Message-----
    From: Cabrales, Ada
    Sent: Friday, December 21, 2018 5:59 PM
    Subject: [Starlingx-discuss] [ Test ][ discussion ] Unified test framework

    Hello,

    We currently have 2 testing frameworks proposed:

    - Robot [0] - Sanity check at Intel's premises is done using it
      - Deployment on a virtual environment, and running the tests, is
        automated.
      - ~200 tests automated so far.
    - PyTest [1] - used by Wind River for their testing
      - A large number of test cases automated (Numan, can you provide a
        number?)

    Both frameworks are similar; some re-work will be required on one of
    the sides to align with the chosen one.

    What I would like to have is an informed decision, bringing the best
    impact to the project and thinking about the future, not only the
    current picture.

    Even knowing these days are going to be quiet, I want to continue the
    conversation: which one best serves StarlingX?

    Regards
    Ada

    [0] http://robotframework.org/
    [1] https://docs.pytest.org/en/latest/

From scott.little at windriver.com  Wed Jan  2 14:36:04 2019
From: scott.little at windriver.com (Scott Little)
Date: Wed, 2 Jan 2019 09:36:04 -0500
Subject: [Starlingx-discuss] Removing version from installer files
Message-ID: <3b1b7fec-1c8e-721c-a24d-18633549ab3c@windriver.com>

This did cause breakage for the official build at starlingx.cengn.ca over
the holidays.

I was badly behind on my starlingx-discuss reading due to other work
deadlines, and missed this e-mail.

In the future, I'd encourage folks to add '[build]' and 'action required'
to the subject of e-mails of this type.

Thanks

Scott

On 2018-12-07 12:15 p.m., Cordoba Malibran, Erich wrote:
> Hi all,
>
> I'm sending this review to remove the installer version "stx-0.2" from
> the filenames required to create the installer. This could be a
> breaking change for some people, as the tis-installer folder and the
> files inside it are populated manually.
>
> The changes needed to avoid a broken build are:
>
>   mv tis-installer stx-installer
>   mv stx-installer/vmlinuz-stx-0.2 stx-installer/vmlinuz
>   mv stx-installer/squashfs.img-stx-0.2 stx-installer/squashfs.img
>   mv stx-installer/initrd.img-stx-0.2 stx-installer/initrd.img
>
> Thanks
>
> -Erich

From Numan.Waheed at windriver.com  Wed Jan  2 15:13:19 2019
From: Numan.Waheed at windriver.com (Waheed, Numan)
Date: Wed, 2 Jan 2019 15:13:19 +0000
Subject: [Starlingx-discuss] [ Test ][ discussion ] Unified test framework
Message-ID: <3CAA827B7A79BA46B15B280EC82088FE482507A5@ALA-MBD.corp.ad.wrs.com>

http://reportportal.io/

-----Original Message-----
From: Young, Ken
Sent: January-02-19 9:18 AM
Subject: Re: [Starlingx-discuss] [ Test ][ discussion ] Unified test framework

Numan,

Do you have a pointer to the report portal? I am interested in the
dashboard.

[...]
From Ken.Young at windriver.com  Wed Jan  2 16:35:08 2019
From: Ken.Young at windriver.com (Young, Ken)
Date: Wed, 2 Jan 2019 16:35:08 +0000
Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security

Victor,

Security work is never completed. There is always a long list of inventive
new vulnerabilities and a laundry list of hardening work to be completed.
The vulnerability work, considering the severity, is generally urgent.
Hardening work is not urgent but important. In this case, we are dealing
with a hardening initiative that focuses on a small area of the code.

The challenge is that these small changes proposed have larger
implications. As was pointed out on the gerrit reviews, performance and/or
functional testing is required. My concern is that we affect the timing /
behaviour of stx-ha and stx-metal such that they do not work together in
some scenarios. This will need to be tested and is certainly larger than a
sanity.

Also, I am wondering if there is a way to phase the effort. For example, is
there a way to break up the flag changes such that the warnings are
separated from the flags which change the compiled code? That way, we are
not trying to jam everything through at once.

Hope this helps. Happy to discuss when you return from holiday.

Regards,
Ken Y

From: Victor Rodriguez
Date: Friday, December 28, 2018 at 7:34 PM
Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for security

On Fri, Dec 21, 2018, 07:08 Curtis wrote:

On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez wrote:

Hi StarlingX community

We can all agree that security is an important feature to be taken into
consideration in any SW project. With the aim of improving the security of
the StarlingX project, we have taken on the task of proposing the use of
some compiler flags that prevent and detect some security holes, especially
buffer overflows that could lead to ROP attacks.

The list of flags that we are proposing is:

    Stack-based buffer overrun detection:    CFLAGS="-fstack-protector-strong"
    Fortify source:                          CFLAGS="-O2 -D_FORTIFY_SOURCE=2"
    Format string vulnerabilities:           CFLAGS="-Wformat -Wformat-security"
    Stack execution protection:              LDFLAGS="-z noexecstack"
    Data relocation and protection (RELRO):  LDFLAGS="-z relro -z now"

These are being analyzed in the following Gerrit reviews (thanks a lot for
all the good feedback):

https://review.openstack.org/#/c/623608/
https://review.openstack.org/#/c/623603/
https://review.openstack.org/#/c/623601/
https://review.openstack.org/#/c/623599/

As requested in the Gerrit reviews, there is a need to first understand
what these compiler flags do and what impact they have on the functional
and performance areas of the project. This is a preliminary report; we will
follow up with functional and performance test plans for the services as a
next step.
This report includes:

* A detailed description of what each compiler flag does
* A code example that shows how it works to prevent attacks
* If there is a change in the binary, a microbenchmark that shows how the
  flag impacts performance

https://github.com/VictorRodriguez/hobbies/tree/master/c_programing_exercises/cflags_security

As a result of the microbenchmark, the performance impact is not
significant (less than 1%) on an Ubuntu x86 system (GCC 5) (more details on
the HW and SW specification upon request).

The areas of the code we are suggesting in the patches are:

* stx-ha
* stx-metal
* stx-nfv
* stx-fault

We do take care that these flags do not break the following areas after
being applied:

* The build process of the image
* The sanity test cases after the image is created
(Ada can give more details on the sanity report of the image generated with
these flags)

If running the sanity tests is not enough to prove that a change in
compiler flags does not affect functionality, please give us the right path
to follow.

As mentioned before, this is a preliminary report, and we will follow up
with functional and performance test plans for the services as a next step.

Hope this email helps to clarify some questions related to the flags and
start the follow-up discussion.

Thanks for the context Victor, it's very helpful to me.

Hi Curtis, glad it helps, it was fun to do the research.

One thing I want to mention is something the Kata Containers team was
talking about at the Berlin OpenStack summit, which is when many small
performance hits start to add up. They have to be careful to ensure they
don't have a bunch of smallish-looking changes that add up to a large
performance hit over a longer period of time.

You are right, it's a valid point that we need to take care of too.

Overall I'm sure the StarlingX project would like to have some performance
testing, if we don't already, though that can be challenging for an open
source project. I had mentioned OPNFV's Functest and related projects on
the TSC call, but now seeing which components are affected I'm not sure
that would be directly helpful. I look forward to further discussions
around this area.

Thanks for letting me know; I will take a look at OPNFV's Functest and
other projects before the next TSC of 2019.

I will do my best to come up with a proposal for better performance
testing.

Thanks

Victor Rodriguez

Thanks,
Curtis

--
Blog: serverascode.com
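For reference, one quick way to confirm that these hardening flags actually
land in a produced binary is to inspect the ELF program headers and dynamic
section. A minimal sketch, assuming a GCC toolchain with readelf available
(file names are placeholders, not part of any STX build script):

    echo 'int main(void) { return 0; }' > flags_test.c
    gcc -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 \
        -Wformat -Wformat-security \
        -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now \
        -o flags_test flags_test.c
    # Full RELRO: expect a GNU_RELRO segment plus the BIND_NOW flag
    readelf -lW flags_test | grep GNU_RELRO
    readelf -dW flags_test | grep -E 'BIND_NOW|FLAGS'
    # Non-executable stack: GNU_STACK should show RW, with no E bit
    readelf -lW flags_test | grep GNU_STACK

Note that the -Wformat options are compile-time diagnostics only and leave
no trace in the binary, which is one reason they could be phased in
separately.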
From Ken.Young at windriver.com  Wed Jan  2 20:31:21 2019
From: Ken.Young at windriver.com (Young, Ken)
Date: Wed, 2 Jan 2019 20:31:21 +0000
Subject: [Starlingx-discuss] Banned C-Functions
Message-ID: <06705293-6C89-4CFF-AFDE-31F84A45FE52@windriver.com>

Chris,

Thank you for the detailed review. I updated the policy for sscanf to allow
its use with core reviewer approval, and added scanf / vscanf to this line
as well. I did not add fscanf / vsscanf; these functions are not on the
banned C list from Microsoft
(https://msdn.microsoft.com/en-us/library/bb288454.aspx). I also updated
strncat to require inspection for buffer overflow as well.

The remaining functions mentioned are not covered on the Microsoft banned C
list. If there are issues with these functions, perhaps we can create a
coding guide.

Regards,
Ken Y

On 2018-12-17, 1:22 PM, "Chris Friesen" wrote:

    On 12/17/2018 9:19 AM, Young, Ken wrote:
    > All,
    >
    > As was discussed on the community call, the StarlingX security team
    > has been working on a banned C-function policy to help avoid the
    > introduction of security vulnerabilities. Up to now, this policy has
    > been a draft. We have resolved all outstanding issues with the policy
    > and we are currently looking for community feedback on the policy
    > before asking the cores to enact it. It can be found here:
    >
    > https://wiki.openstack.org/wiki/StarlingX/Security/Banned_C_Functions
    >
    > The goal is to gather and resolve any community issues by January
    > 9th. These can be discussed either on the mailing list or in the
    > community meetings on Wednesday, 10 AM EDT. After this point, the ask
    > would be for the cores to ensure that no *new* instances of banned
    > functions are added to the code.

    The "sscanf" one doesn't suggest what to use instead. Also, "sscanf" is
    not necessarily unbounded: it allows the caller to specify field
    widths, but they're optional. So it might make sense to allow it with
    approval from core reviewers.

    The other problem with all the "scanf" family is that the arithmetic
    conversions don't protect against arithmetic overflow, so the "strto*"
    type functions are more robust for use with unknown inputs.

    What about scanf, fscanf, vscanf, vsscanf?

    What about tmpfile() and mktemp(), which can easily introduce security
    issues if used carelessly? (Should use mkstemp() instead.)

    What about gethostbyaddr() and gethostbyname(), which are non-reentrant
    and don't support IPv6 well? (Replaced by getaddrinfo() and
    freeaddrinfo().)

    strncat() should also be inspected for overflow. A call to "strncat(s1,
    s2, n)" can end up writing strlen(s1)+n+1 characters to the buffer.

    setjmp()/longjmp() should be reviewed *extremely* carefully, especially
    if combined with threaded code.

    system() should be used very cautiously.

    Chris
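As a rough illustration of how a policy like this could be checked
mechanically ahead of review, a recursive grep over the affected repos can
flag candidate call sites. This is only a sketch; the function list below
is a sample, not the official banned list from the wiki, and the repo paths
are examples:

    # Sample scan; extend the alternation with the functions from the policy.
    BANNED='strcpy|strcat|sprintf|vsprintf|gets|scanf|sscanf|vscanf'
    grep -rnE "\b($BANNED)[[:space:]]*\(" \
        --include='*.c' --include='*.h' \
        stx-ha/ stx-metal/ stx-nfv/ stx-fault/

A scan like this produces false positives (comments, already-approved
uses), so it would be an aid for reviewers rather than a hard gate.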
The concern you have about timing /behavior of stx-ha and stx-metal is a key point that I would like to understand more, the idea is to improve security without affecting functionality at all > > > Also, I am wondering if there is a way to phase the effort. For example, is there a way to break up the flag changes such that the warnings are separated from the flags which change the compiled code? That way, we are not trying to jam everything through at once. We could came up with a V2 of the patches with just the warning flags and the fixes to those warnings, is that ok? > > > > Hope this helps. Happy to discuss when you return from Holliday. Sure, thanks for the feedback ( I will be fully back Monday ) > > > > Regards, > > Ken Y > > > > From: Victor Rodriguez > Date: Friday, December 28, 2018 at 7:34 PM > To: Curtis > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for security > > > > > > On Fri, Dec 21, 2018, 07:08 Curtis > > > > > On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez wrote: > > Hi StarlingX community > > We can all agree that security is an important feature to be taken > into consideration in any SW project. In the aim of improving the > security of the StarlingX project, we have been taking the task to > propose the use of some compiler flags that prevent and detect some > security holes, especially by buffer overflow that could lead into ROP > attacks. > > The list of flags that we are proposing are : > > Stack-based Buffer Overrun Detection: CFLAGS=”-fstack-protector-strong” > > Fortify source: CFLAGS="-O2 -D_FORTIFY_SOURCE=2" > Format string vulnerabilities: CFLAGS="-Wformat -Wformat-security" > Stack execution protection: LDFLAGS="-z noexecstack" > Data relocation and protection (RELRO): LDLFAGS="-z relro -z now" > > > These are being analyzed in the following Gerrit reviews (thanks a lot > for all the good feedback) > > https://review.openstack.org/#/c/623608/ > https://review.openstack.org/#/c/623603/ > https://review.openstack.org/#/c/623601/ > https://review.openstack.org/#/c/623599/ > > As requested in the Gerrit reviews, there is a proper need to first > understand what these compiler flags do and what is the impact they > have at the functional and performance area of the project. This is a > preliminary report, we will be following up with a test plan for > functional & performance test plans for the services as a next step. > This report includes: > > * Detailed description of what the compiler flag does > * Code example that shows how does it work to prevent attacks > * If there is a change in the binary, we create a microbenchmark that > shows us how the flag impact the performance > > https://github.com/VictorRodriguez/hobbies/tree/master/c_programing_exercises/cflags_security > > As a result of the microbenchmark, the performance impact is not > relevant ( less than 1% ) using an Ubuntu x86 system ( GCC 5 ) (more > details on the HW and SW specification upon requests) > > The areas of the code we are suggesting on the patches are: > > * stx-ha > * stx-metal > * stx-nfv > * stx-fault > > We do take care that these flags are not breaking the following areas > after being applied. 
From juan.carlos.alonso at intel.com  Thu Jan  3 01:26:19 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Thu, 3 Jan 2019 01:26:19 +0000
Subject: [Starlingx-discuss] Sanity Test - ISO 20190102
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C7ED65@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from
2019-Jan-02 (link)

Sanity Test is executed in a Virtual Environment

Status: GREEN

Simplex
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           18 TCs [PASS]
TOTAL: [ 23 TCs PASS ]

Duplex
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

Multinode Controller Storage
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

Multinode Dedicated Storage
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

Regards.
Juan Carlos Alonso
From gwanmax at gmail.com  Thu Jan  3 09:06:14 2019
From: gwanmax at gmail.com (wang guo)
Date: Thu, 3 Jan 2019 17:06:14 +0800
Subject: [Starlingx-discuss] build-pkgs failed when building iso

Hi everyone,

I'm trying to build the StarlingX ISO by following the guide step by step,
and got some error messages.

When executing the command "build-pkgs", the message "build-srpm-parallel
--std failed with rc=1" was printed.
When executing the command "generate-cgcs-tis-repo", the message "find:
'//localdisk/loadbuild/root/starlingx/rt/rpmbuild/RPMS': No such file or
directory" was printed.
When executing the command "build-iso", the message "Error -- could not
install all explicitly listed packages" was printed.

The logs attached to this email are the output of those commands. Does
anyone know what these error messages mean, or where to find the build
details?

thanks

[Attachments scrubbed: generate-cgcs-tis-repo.log, build-pkgs.log,
build-iso.log]

From erich.cordoba.malibran at intel.com  Thu Jan  3 14:39:15 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Thu, 3 Jan 2019 14:39:15 +0000
Subject: [Starlingx-discuss] build-pkgs failed when building iso
Message-ID: <1919AF31-6C01-49BE-9964-0544AB1F7F48@intel.com>

Hi Wang,

It seems that something happened in your environment. In the build-pkgs.log
file, see:

    08:47:59 build-srpms-parallel --std
    08:47:59 Error: MY_REPO changed since last build
    08:47:59    old path: /root/starlingx/workspace/localdisk/designer/root/starlingx/cgcs-root
    08:47:59    new path: //localdisk/designer/root/starlingx/cgcs-root

The MY_REPO env variable is automatically set every time you open a shell
(or a docker exec). It should point to:

    /localdisk/designer/<user>/<project>/cgcs-root

where <user> and <project> are defined in the localrc file you created at
the beginning of the setup.

You can try to:

- Review the content of the localrc file.
- Kill the container, start it again, and check the value of MY_REPO.

The additional steps you mentioned failed because this one wasn't able to
complete.

If you need more help, please let us know.

-Erich
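For instance, the variable can be checked quickly from inside the build
container; the expected value below is derived from the paths in the log
above, and the localrc location is an assumption that may vary with your
setup:

    # Inside the build container:
    echo $MY_REPO
    # expected for this setup: /localdisk/designer/root/starlingx/cgcs-root

    # On the host, verify the values that feed it (file location may vary):
    grep -E 'MYUNAME|PROJECT' localrc

If the old and new paths in the error differ only by a stale prefix, as
here, restarting the container with a consistent localrc is usually enough.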
From juan.carlos.alonso at intel.com  Thu Jan  3 19:35:34 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Thu, 3 Jan 2019 19:35:34 +0000
Subject: [Starlingx-discuss] How to make controller-0 enabled and available
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C85F7B@FMSMSX108.amr.corp.intel.com>

Hello,

I performed a controller swact: controller-1 became the active controller,
then controller-0 went disabled and offline, and I could not perform the
swact again.

I have this output from system host-list:

+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | disabled    | offline      |
| 2  | controller-1 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+

Do you know how to make controller-0 enabled and available again in order
to perform the controller swact?

Regards.
Juan Carlos Alonso
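One possible recovery path, assuming controller-0 is simply down (powered
off or unreachable on the management network) rather than misconfigured, is
to power it back on or reboot it from its console and watch it rejoin from
the active controller. The commands below are illustrative, not a verified
procedure:

    # From controller-1 (the active controller):
    fm alarm-list                   # look for alarms explaining the failure
    system host-list                # wait for controller-0 to go online/enabled
    # If it never recovers, a force lock followed by a reinstall of the
    # node may be needed:
    system host-lock --force controller-0

Once controller-0 reports unlocked/enabled/available again, the swact
should be possible.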
From juan.carlos.alonso at intel.com  Thu Jan  3 19:59:57 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Thu, 3 Jan 2019 19:59:57 +0000
Subject: [Starlingx-discuss] STX install_state status
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C87FA0@FMSMSX108.amr.corp.intel.com>

Hi,

Has anyone faced an issue with the install state status when a new host is
added?

During controller, compute or storage host installation:

    system host-show controller-1 | grep install
    | install_output      | text |
    | install_state       | None |
    | install_state_info  | None |

During installation, several values should be shown, such as pre-install,
installing, and completed. At the end of the installation the output must
be install_state = completed.

Regards.
Juan Carlos Alonso

From Don.Penney at windriver.com  Thu Jan  3 20:06:14 2019
From: Don.Penney at windriver.com (Penney, Don)
Date: Thu, 3 Jan 2019 20:06:14 +0000
Subject: [Starlingx-discuss] STX install_state status
Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA40F1E0@ALA-MBD.corp.ad.wrs.com>

Please see my reply (attached) to the thread "the install progress
information has missing when install computer node , storage node, and new
controller node". In particular, please check that your installer has the
required patch. If you're using a stock CentOS installer image, you will
not get the install-state notifications.

Cheers,
Don.

[An embedded message was scrubbed: Penney, Don, Thu, 27 Dec 2018, RE:
[Starlingx-discuss] the install progress information has missing when
install computer node , storage node, and new controller node]

From juan.carlos.alonso at intel.com  Thu Jan  3 20:17:43 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Thu, 3 Jan 2019 20:17:43 +0000
Subject: [Starlingx-discuss] STX install_state status
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C87FC6@FMSMSX108.amr.corp.intel.com>

Thank you for your help.

I am using the latest ISO from CENGN and it has the same issue. Do you know
if there is a Launchpad already open for this issue?

Regards.
Juan Carlos Alonso
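Once a load with the patched installer is in place, the state transitions
can be watched from the active controller while the new host installs. A
simple loop such as the following (the host name is an example) should
cycle through pre-install, installing, and post-install before settling on
completed:

    watch -n 10 "system host-show controller-1 | grep install_state"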
From Don.Penney at windriver.com  Thu Jan  3 20:59:11 2019
From: Don.Penney at windriver.com (Penney, Don)
Date: Thu, 3 Jan 2019 20:59:11 +0000
Subject: [Starlingx-discuss] STX install_state status
Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA40F263@ALA-MBD.corp.ad.wrs.com>

I've downloaded the CENGN ISO from:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/
and it does not contain the installer patches. However, the installer image
here is patched:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/installer/

We'll have to look at the build to see what's going on here.

From scott.little at windriver.com  Thu Jan  3 21:30:41 2019
From: scott.little at windriver.com (Scott Little)
Date: Thu, 3 Jan 2019 16:30:41 -0500
Subject: [Starlingx-discuss] STX install_state status
Message-ID: <1488509b-5510-b936-9d3b-e398f96de5e8@windriver.com>

I've updated the build scripts at CENGN. We were still using the -0.2
qualified names at one point in the scripts.

The next build should rebuild pxe-network-installer with the correct kernel
and ram disk.

Scott
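A quick way to check whether a given build published the renamed
(unqualified) installer files is to list the installer directory on the
mirror. This one-liner is only a sketch and assumes a standard HTML
directory index:

    curl -s http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/installer/ \
        | grep -oE '(vmlinuz|initrd\.img|squashfs\.img)[^"<]*' | sort -u

Bare vmlinuz, initrd.img, and squashfs.img entries would indicate the
rename took effect; names still carrying a -stx-0.2 suffix would not.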
From juan.carlos.alonso at intel.com  Fri Jan  4 00:07:05 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Fri, 4 Jan 2019 00:07:05 +0000
Subject: [Starlingx-discuss] FW: Sanity Test - ISO 20190103
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C89047@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from
2019-Jan-03 (link)

Sanity Test is executed in a Virtual Environment

Status: YELLOW

Simplex
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           18 TCs [PASS]
TOTAL: [ 23 TCs PASS ]

Duplex
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

Multinode Controller Storage
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

Multinode Dedicated Storage
Setup            04 TCs [PASS]
Provisioning     01 TCs [PASS]
Sanity           19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

During installation of a new host (controller, compute or storage), the
install states should show the status of the node's installation:
pre-install, installation progress, post-installation, etc. Currently the
install state stays at None during the whole host installation process.
Updates were made to the CENGN scripts and will be applied in the next
build (see the attached thread).

Regards.
Juan Carlos Alonso

From ran1.an at intel.com  Fri Jan  4 08:28:23 2019
From: ran1.an at intel.com (An, Ran1)
Date: Fri, 4 Jan 2019 08:28:23 +0000
Subject: [Starlingx-discuss] discuss about initial value of TIS_PATCH_VER
 when upgrading packages
Message-ID: <9BAB5B7CAF57C3459E4636391F1071CE05290C42@shsmsx102.ccr.corp.intel.com>

Hi all,

I'm sending this to discuss the rule for the initial value of TIS_PATCH_VER
when an srpm package is upgraded. "TIS_PATCH_VER" is a counter indicating
changes within a major version of a package on which we put patches.

When I upgraded srpms (related to CentOS) from CentOS 7.5 to 7.6, there
were different voices about the initial value of TIS_PATCH_VER (comments on
[1][2][3][4]):

a) reset it to 0
b) reset to the number of STX patches remaining (source patches and meta
   patches together)
c) reset to the number of STX patches remaining (source patches only)
d) reset to the number of STX patches remaining (meta patches only)
e) case by case; better not to reset.

It is not a technical issue, but we will face it each time we upgrade
packages, so which would you like to choose?

[1] https://review.openstack.org/#/c/627760/
[2] https://review.openstack.org/#/c/627750/
[3] https://review.openstack.org/#/c/627156/
[4] https://review.openstack.org/#/c/627770/

Thanks
Ran

From chris.friesen at windriver.com  Fri Jan  4 14:46:22 2019
From: chris.friesen at windriver.com (Chris Friesen)
Date: Fri, 4 Jan 2019 08:46:22 -0600
Subject: [Starlingx-discuss] discuss about initial value of TIS_PATCH_VER
 when upgrading packages

When we customize an upstream package for the first time, TIS_PATCH_VER
gets set to 1, then generally gets incremented on each subsequent change.
Thus, prior to a package upgrade, TIS_PATCH_VER reflects the number of
changes that were made to the upstream package. This can be used to tell at
a glance how customized a given package is.

When upgrading, it's possible that some customizations are no longer
applicable, while others are. Thus, I think options "a" and "e" don't make
sense, as they remove the "how customized is this package" meaning.

Of the options below, I think option "c" is probably the best, since for an
upgrade we might create a single meta patch to add all the source patches.

I think the most accurate value would probably be "number of source
patches" plus "number of meta patches that don't add/remove source
patches". But we probably don't really need that level of accuracy.

Chris
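To make the options concrete: the counter lives in each package's build
data file. For a package that carries, say, two source patches after a
rebase, option (c) would restart the counter at the source-patch count. An
illustrative fragment, not an actual StarlingX file:

    # centos/build_srpm.data (illustrative fragment)
    TIS_PATCH_VER=2   # option (c): two source patches carried across the upgrade

From then on, the counter increments on every further change, so it only
equals the patch count at the moment of the upgrade.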
From Don.Penney at windriver.com  Fri Jan  4 15:12:53 2019
From: Don.Penney at windriver.com (Penney, Don)
Date: Fri, 4 Jan 2019 15:12:53 +0000
Subject: [Starlingx-discuss] discuss about initial value of TIS_PATCH_VER
 when upgrading packages
Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA40F62C@ALA-MBD.corp.ad.wrs.com>

From a patching perspective, which is why TIS_PATCH_VER was introduced
originally, it can be reset to 0 when the source package is upversioned.
But I see Scott's point from his review comment about indicating a revision
from source, and Chris's below. Setting it to 1 to show modification from
the original source seems reasonable to me.

Given that it will get incremented and veer from the patch count, I don't
see a lot of benefit in needing to count the patches to determine an
initial version. But if we're going that route, I'd vote for b: count the
total number of patch files.

From Ken.Young at windriver.com  Fri Jan  4 18:47:15 2019
From: Ken.Young at windriver.com (Young, Ken)
Date: Fri, 4 Jan 2019 18:47:15 +0000
Subject: [Starlingx-discuss] CENGN Mirror Robustness
Message-ID: <177FB3B9-4EE5-40B0-9871-442FF48AA4FF@windriver.com>

All,

We are working on a plan to make the mirror more robust for the community
as a whole. In co-operation with CENGN, we are moving the mirror from the
current bare metal server to a Kubernetes container with a Ceph backend.
From a community perspective, we expect the transition to be seamless. We
are sending this email for awareness.

The current timeline we are targeting is:

* Have the environment ready by Jan 8th
* Perform testing in parallel with the existing and the new environment
  until Jan 11th
* Transition to the new implementation of the mirror on Jan 14th

We are planning to maintain the current environment in case we need to
switch back to the existing implementation.
We will keep you in the loop as we progress towards this transition.

Regards,
Ken Y

From Ken.Young at windriver.com  Fri Jan  4 18:57:00 2019
From: Ken.Young at windriver.com (Young, Ken)
Date: Fri, 4 Jan 2019 18:57:00 +0000
Subject: [Starlingx-discuss] CENGN Build Aging
Message-ID: <5733B9C8-4346-43D8-BFCF-0612B291B32D@windriver.com>

All,

The build team has been discussing the frequency of builds and how long
they are maintained on the mirror. Our initial ideas are captured here:

https://wiki.openstack.org/wiki/StarlingX/Build/EventBuildCadence

This captures the initial view of the team. We can evolve it as we get
clear direction from the release team and the community as a whole.

Comments welcome.

Regards,
Ken Y

From juan.carlos.alonso at intel.com  Fri Jan  4 19:46:25 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Fri, 4 Jan 2019 19:46:25 +0000
Subject: [Starlingx-discuss] STX install_state status
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C89181@FMSMSX108.amr.corp.intel.com>

Hi,

It seems that the issue is still present in the latest CENGN ISO:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/

Can you help me verify whether the patches were integrated into the ISO?

Regards.
From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Thursday, January 03, 2019 3:00 PM To: starlingx Subject: [Starlingx-discuss] STX install_state status Hi, Do someone have faced an issue with install states status when a new host is added? During controller, compute or storage host installation: system host-show controller-1 | grep install | install_output | text | install_state | None | install_state_info | None During installation several values should be showed as pre-install, installing, completed. At the end of installation the output must be install_state = completed. Regards. Juan Carlos Alonso _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Fri Jan 4 19:51:19 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 4 Jan 2019 19:51:19 +0000 Subject: [Starlingx-discuss] STX install_state status In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C89181@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C87FA0@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F1E0@ALA-MBD.corp.ad.wrs.com> <8557B550001AFB46A43A0CCC314BF85153C87FC6@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F263@ALA-MBD.corp.ad.wrs.com> <1488509b-5510-b936-9d3b-e398f96de5e8@windriver.com> <8557B550001AFB46A43A0CCC314BF85153C89181@FMSMSX108.amr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA40F730@ALA-MBD.corp.ad.wrs.com> Looking at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/installer/ There are still files with -stx-0.2 suffix, and some with new- prefix. The pxe-network-installer RPM is looking for the files with neither: https://github.com/openstack/stx-metal/blob/master/installer/pxe-network-installer/centos/build_srpm.data So it's likely still picking up the stock images, rather than the patched ones. From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Friday, January 04, 2019 2:46 PM To: Little, Scott; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX install_state status Hi, Seems that the issue is still present on the last CENGN ISO: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/ Can you help me to verify if patches were integrated on the ISO? Regards. Juan Carlos Alonso From: Scott Little [mailto:scott.little at windriver.com] Sent: Thursday, January 3, 2019 3:31 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX install_state status I've updated the build scripts at CENGN. We were still using the -0.2 qualified names at one point in the scripts. The next build should rebuild pxe-network-installer with the the correct kernel and ram disk. Scott On 2019-01-03 3:59 p.m., Penney, Don wrote: I've downloaded the CENGN ISO from: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/ and it does not contain the installer patches. However, the installer image here is patched: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/installer/ We'll have to look at the build to see what's going on here. 
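For anyone who wants to make the same check against a local build, a quick way (a rough sketch -- the RPM file glob and artifact names are assumptions based on the build_srpm.data referenced above) is to list what the built package actually contains:

# the patched installer kernel and ram disk should show up under their
# unqualified names, with no new- prefix and no -stx-0.2 suffix
rpm -qlp pxe-network-installer-*.rpm | grep -E 'vmlinuz|initrd'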
From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Thursday, January 03, 2019 3:18 PM To: Penney, Don; starlingx Subject: RE: STX install_state status Thank You for your help, I am using the latest ISO from CENGN and has the same issue. Do you know if there is a Launchpad already open for this issue? Regards. Juan Carlos Alonso From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, January 3, 2019 2:06 PM To: Alonso, Juan Carlos ; starlingx Subject: RE: STX install_state status Please see my reply (attached) to the thread "the install progress information has missing when install computer node , storage node, and new controller node". In particular, please check that your installer has the required patch. If you're using a stock CentOS installer image, you will not get the install-state notifications. Cheers, Don. From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Thursday, January 03, 2019 3:00 PM To: starlingx Subject: [Starlingx-discuss] STX install_state status Hi, Do someone have faced an issue with install states status when a new host is added? During controller, compute or storage host installation: system host-show controller-1 | grep install | install_output | text | install_state | None | install_state_info | None During installation several values should be showed as pre-install, installing, completed. At the end of installation the output must be install_state = completed. Regards. Juan Carlos Alonso _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Fri Jan 4 19:55:35 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Fri, 4 Jan 2019 19:55:35 +0000 Subject: [Starlingx-discuss] STX install_state status In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA40F730@ALA-MBD.corp.ad.wrs.com> References: <8557B550001AFB46A43A0CCC314BF85153C87FA0@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F1E0@ALA-MBD.corp.ad.wrs.com> <8557B550001AFB46A43A0CCC314BF85153C87FC6@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F263@ALA-MBD.corp.ad.wrs.com> <1488509b-5510-b936-9d3b-e398f96de5e8@windriver.com> <8557B550001AFB46A43A0CCC314BF85153C89181@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F730@ALA-MBD.corp.ad.wrs.com> Message-ID: <8557B550001AFB46A43A0CCC314BF85153C891A2@FMSMSX108.amr.corp.intel.com> I think I will open a Launchpad to track the progress on it :) Regards. Juan Carlos Alonso From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Friday, January 4, 2019 1:51 PM To: Alonso, Juan Carlos ; Little, Scott ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] STX install_state status Looking at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/installer/ There are still files with -stx-0.2 suffix, and some with new- prefix. The pxe-network-installer RPM is looking for the files with neither: https://github.com/openstack/stx-metal/blob/master/installer/pxe-network-installer/centos/build_srpm.data So it's likely still picking up the stock images, rather than the patched ones. 
From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Friday, January 04, 2019 2:46 PM To: Little, Scott; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX install_state status Hi, Seems that the issue is still present on the last CENGN ISO: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/ Can you help me to verify if patches were integrated on the ISO? Regards. Juan Carlos Alonso From: Scott Little [mailto:scott.little at windriver.com] Sent: Thursday, January 3, 2019 3:31 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX install_state status I've updated the build scripts at CENGN. We were still using the -0.2 qualified names at one point in the scripts. The next build should rebuild pxe-network-installer with the the correct kernel and ram disk. Scott On 2019-01-03 3:59 p.m., Penney, Don wrote: I've downloaded the CENGN ISO from: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/ and it does not contain the installer patches. However, the installer image here is patched: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/installer/ We'll have to look at the build to see what's going on here. From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Thursday, January 03, 2019 3:18 PM To: Penney, Don; starlingx Subject: RE: STX install_state status Thank You for your help, I am using the latest ISO from CENGN and has the same issue. Do you know if there is a Launchpad already open for this issue? Regards. Juan Carlos Alonso From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, January 3, 2019 2:06 PM To: Alonso, Juan Carlos ; starlingx Subject: RE: STX install_state status Please see my reply (attached) to the thread "the install progress information has missing when install computer node , storage node, and new controller node". In particular, please check that your installer has the required patch. If you're using a stock CentOS installer image, you will not get the install-state notifications. Cheers, Don. From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Thursday, January 03, 2019 3:00 PM To: starlingx Subject: [Starlingx-discuss] STX install_state status Hi, Do someone have faced an issue with install states status when a new host is added? During controller, compute or storage host installation: system host-show controller-1 | grep install | install_output | text | install_state | None | install_state_info | None During installation several values should be showed as pre-install, installing, completed. At the end of installation the output must be install_state = completed. Regards. Juan Carlos Alonso _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Don.Penney at windriver.com Fri Jan 4 20:01:27 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 4 Jan 2019 20:01:27 +0000 Subject: [Starlingx-discuss] STX install_state status In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C891A2@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C87FA0@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F1E0@ALA-MBD.corp.ad.wrs.com> <8557B550001AFB46A43A0CCC314BF85153C87FC6@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F263@ALA-MBD.corp.ad.wrs.com> <1488509b-5510-b936-9d3b-e398f96de5e8@windriver.com> <8557B550001AFB46A43A0CCC314BF85153C89181@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F730@ALA-MBD.corp.ad.wrs.com> <8557B550001AFB46A43A0CCC314BF85153C891A2@FMSMSX108.amr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA40F77B@ALA-MBD.corp.ad.wrs.com> It may have been better for the recent change to pxe-network-installer to have dropped just the -0.2, but kept the -stx suffix. This would more clearly differentiate between the stock CentOS installer images and the StarlingX modified ones. From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Friday, January 04, 2019 2:56 PM To: Penney, Don; Little, Scott; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] STX install_state status I think I will open a Launchpad to track the progress on it :) Regards. Juan Carlos Alonso From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Friday, January 4, 2019 1:51 PM To: Alonso, Juan Carlos ; Little, Scott ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] STX install_state status Looking at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/installer/ There are still files with -stx-0.2 suffix, and some with new- prefix. The pxe-network-installer RPM is looking for the files with neither: https://github.com/openstack/stx-metal/blob/master/installer/pxe-network-installer/centos/build_srpm.data So it's likely still picking up the stock images, rather than the patched ones. From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Friday, January 04, 2019 2:46 PM To: Little, Scott; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX install_state status Hi, Seems that the issue is still present on the last CENGN ISO: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/ Can you help me to verify if patches were integrated on the ISO? Regards. Juan Carlos Alonso From: Scott Little [mailto:scott.little at windriver.com] Sent: Thursday, January 3, 2019 3:31 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX install_state status I've updated the build scripts at CENGN. We were still using the -0.2 qualified names at one point in the scripts. The next build should rebuild pxe-network-installer with the the correct kernel and ram disk. Scott On 2019-01-03 3:59 p.m., Penney, Don wrote: I've downloaded the CENGN ISO from: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/ and it does not contain the installer patches. However, the installer image here is patched: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/installer/ We'll have to look at the build to see what's going on here. 
From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Thursday, January 03, 2019 3:18 PM To: Penney, Don; starlingx Subject: RE: STX install_state status Thank You for your help, I am using the latest ISO from CENGN and has the same issue. Do you know if there is a Launchpad already open for this issue? Regards. Juan Carlos Alonso From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, January 3, 2019 2:06 PM To: Alonso, Juan Carlos ; starlingx Subject: RE: STX install_state status Please see my reply (attached) to the thread "the install progress information has missing when install computer node , storage node, and new controller node". In particular, please check that your installer has the required patch. If you're using a stock CentOS installer image, you will not get the install-state notifications. Cheers, Don. From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Thursday, January 03, 2019 3:00 PM To: starlingx Subject: [Starlingx-discuss] STX install_state status Hi, Do someone have faced an issue with install states status when a new host is added? During controller, compute or storage host installation: system host-show controller-1 | grep install | install_output | text | install_state | None | install_state_info | None During installation several values should be showed as pre-install, installing, completed. At the end of installation the output must be install_state = completed. Regards. Juan Carlos Alonso _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Fri Jan 4 20:48:34 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Fri, 4 Jan 2019 20:48:34 +0000 Subject: [Starlingx-discuss] Meeting Reminder: StarlingX Infrastructure Containerization Message-ID: Just a reminder that our next meeting will be Monday Jan 7th. The agenda is posted here: https://etherpad.openstack.org/p/stx-containerization If anyone would like to add an agenda topic please update the etherpad. Frank -----Original Appointment----- From: Miller, Frank Sent: Thursday, November 29, 2018 4:55 PM To: starlingx-discuss at lists.starlingx.io Subject: StarlingX Infrastructure Containerization When: Occurs every Monday effective 12/3/2018 until 3/25/2019 from 11:00 AM to 11:30 AM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236 For those contributing to or interested in the Containerization subproject a weekly meeting has been set up: Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sgw at linux.intel.com Fri Jan 4 21:52:37 2019 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 4 Jan 2019 13:52:37 -0800 Subject: [Starlingx-discuss] discuss about initial value of TIS_PATCH_VER when upgrade packages In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA40F62C@ALA-MBD.corp.ad.wrs.com> References: <9BAB5B7CAF57C3459E4636391F1071CE05290C42@shsmsx102.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F62C@ALA-MBD.corp.ad.wrs.com> Message-ID: <8e27b76c-ce2a-039c-f51c-b00087dad813@linux.intel.com> On 1/4/19 7:12 AM, Penney, Don wrote: > From a patching perspective, which is why TIS_PATCH_VER was introduced originally, it can be reset to 0 when the source package is upversioned. But I see Scott's point from his review comment about indicating a revision from source, and Chris's below. > > Setting it to 1 to show modification from original source seems reasonable to me. Given that it will get incremented and veer from the patch count, I don't see a lot of benefit to needing to count the patches to determine an initial version. But if we're going that route, I'd vote for b - count the number of patch files total. > I am not sure I agree with any of this. First off, just the fact that we have an SRPM and the TIS_PATCH_VER indicates that it's been patched; I really don't see the value in having the patch count indicated as a "Version" item. It makes more sense to start from 0 (option a) and that way we can track each subsequent change to that package with an increment. This issue did not come up at all in past updates, I am not sure why it's becoming an issue now. See below for additional comments > -----Original Message----- > From: Friesen, Chris > Sent: Friday, January 04, 2019 9:46 AM > To: An, Ran1; Lin, Shuicheng; Penney, Don; Saul Wold; Little, Scott; Church, Robert; Bailey, Henry Albert (Al) > Cc: starlingx-discuss at lists.starlingx.io; Chen, Haochuan Z > Subject: Re: [Starlingx-discuss]discuss about initial value of TIS_PATCH_VER when upgrade packages > > When we customize an upstream package for the first time, TIS_PATCH_VER > gets set to 1, then generally gets incremented on each subsequent > change.  Thus, prior to package upgrade TIS_PATCH_VER reflects the > number of changes that were made to the upstream package.  This can be > used to tell at a glance how customized a given package is. > > When upgrading, it's possible that some customizations are no longer > applicable, while others are.  Thus, I think options "a" and "e" don't > make sense as they remove the "how customized is this package" meaning. > As mentioned above, just having that additional tis. in the file name indicates that it's been modified. > Of the options below, I think option "c" is probably the best since for > an upgrade we might create a single meta-patch to add all the source > patches. > And what happens when a modification is needed to the Specfile or patch without increasing the actual number of patches? Then the value of TIS_PATCH_VER increments and no longer matches the patch count. Therefore, a version should be incremental from 0. Sau! > I think the most accurate value would probably be "number of source > patches" plus "number of meta patches that don't add/remove source > patches".  But we probably don't really need that level of accuracy. > > Chris > > On 1/4/2019 2:28 AM, An, Ran1 wrote: >> Hi all >> I'm sending this to discuss about the rule of initial value of TIS_PATCH_VER when srpm package is upgraded.
>> "TIS_PATCH_VER" is a counter to indicate change within a major version of the package, on which we put patches. >> >> When I upgraded srpms(related to CentOS) from CentOS 7.5 to 7.6, there are different voices about the initial value of TIS_PATCH_VER(comments on [1][2][3][4]): >> a). reset it to 0 >> b). reset to the number of STX patches remaining (source patches and meta_patches together) >> c). reset to the number of STX patches remaining (source patches only) >> d). reset to the number of STX patches remaining (meta patches only) >> e). case by case, better do not reset. >> >> It is not a technical issue, but we will face it each time we upgrade packages, so which would you like to choose? >> >> [1] https://review.openstack.org/#/c/627760/ >> [2] https://review.openstack.org/#/c/627750/ >> [3] https://review.openstack.org/#/c/627156/ >> [4] https://review.openstack.org/#/c/627770/ >> >> Thanks >> Ran > > From juan.carlos.alonso at intel.com Fri Jan 4 22:47:24 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Fri, 4 Jan 2019 22:47:24 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20190104 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C891FC@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jan-04 (link) Sanity Test is executed in a Virtual Environment Status: YELLOW Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 18 TCs [PASS] TOTAL: [ 23 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] ------------------------------------------------------------------ When installing a new host (controller, compute or storage) the 'install_state' fields don't show values about progress installation. They stay as 'None'. Check thread attached for more details. Launchpad: https://bugs.launchpad.net/starlingx/+bug/1810553 Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An embedded message was scrubbed... From: "Penney, Don" Subject: RE: [Starlingx-discuss] STX install_state status Date: Fri, 4 Jan 2019 20:01:27 +0000 Size: 28276 URL: From haochuan.z.chen at intel.com Mon Jan 7 01:43:39 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Mon, 7 Jan 2019 01:43:39 +0000 Subject: [Starlingx-discuss] binary image upload to cengn for cenos 7.6 upgrade Message-ID: <56829C2A36C2E542B0CCB9854828E4D8561E21DC@CDSMSX101.ccr.corp.intel.com> Hi I works on centos7.6 upgrade, and focus anaconda upgrade from 21.48.22.121 to 21.48.22.147. It request to upgrade binary to centos 7.6. http://mirror.centos.org/centos/7.6.1810/os/x86_64/LiveOS/ http://mirror.centos.org/centos/7.6.1810/os/x86_64/images/ http://mirror.centos.org/centos/7.6.1810/os/x86_64/isolinux/ But currently on cengn, there is only rpms for 7.6, please help to upload. http://mirror.starlingx.cengn.ca/mirror/centos/centos/mirror.centos.org/7.6.1810/os/x86_64/ Thanks! Martin, Chen SSG OTC, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From changcheng.liu at intel.com Mon Jan 7 03:57:16 2019 From: changcheng.liu at intel.com (Liu, Changcheng) Date: Mon, 7 Jan 2019 03:57:16 +0000 Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F3EAA5@SHSMSX103.ccr.corp.intel.com> Hi all, What's the function of the patches directory in StarlingX source code? For example: cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch Regards, Changcheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From changcheng.liu at intel.com Mon Jan 7 05:14:04 2019 From: changcheng.liu at intel.com (Liu, Changcheng) Date: Mon, 7 Jan 2019 05:14:04 +0000 Subject: [Starlingx-discuss] STX install_state status In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA40F77B@ALA-MBD.corp.ad.wrs.com> References: <8557B550001AFB46A43A0CCC314BF85153C87FA0@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F1E0@ALA-MBD.corp.ad.wrs.com> <8557B550001AFB46A43A0CCC314BF85153C87FC6@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F263@ALA-MBD.corp.ad.wrs.com> <1488509b-5510-b936-9d3b-e398f96de5e8@windriver.com> <8557B550001AFB46A43A0CCC314BF85153C89181@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F730@ALA-MBD.corp.ad.wrs.com> <8557B550001AFB46A43A0CCC314BF85153C891A2@FMSMSX108.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F77B@ALA-MBD.corp.ad.wrs.com> Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F3EB22@SHSMSX103.ccr.corp.intel.com> If there's any change to the build process, please update the document below: https://docs.starlingx.io/developer_guide/index.html From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Saturday, January 5, 2019 4:01 AM To: Alonso, Juan Carlos ; Little, Scott ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX install_state status It may have been better for the recent change to pxe-network-installer to have dropped just the -0.2, but kept the -stx suffix. This would more clearly differentiate between the stock CentOS installer images and the StarlingX modified ones. From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Friday, January 4, 2019 2:56 PM To: Penney, Don; Little, Scott; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] STX install_state status I think I will open a Launchpad to track the progress on it :) Regards. Juan Carlos Alonso From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Friday, January 4, 2019 1:51 PM To: Alonso, Juan Carlos ; Little, Scott ; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] STX install_state status Looking at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/installer/ There are still files with -stx-0.2 suffix, and some with new- prefix. The pxe-network-installer RPM is looking for the files with neither: https://github.com/openstack/stx-metal/blob/master/installer/pxe-network-installer/centos/build_srpm.data So it's likely still picking up the stock images, rather than the patched ones.
From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Friday, January 04, 2019 2:46 PM To: Little, Scott; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX install_state status Hi, Seems that the issue is still present on the last CENGN ISO: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/ Can you help me to verify if patches were integrated on the ISO? Regards. Juan Carlos Alonso From: Scott Little [mailto:scott.little at windriver.com] Sent: Thursday, January 3, 2019 3:31 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX install_state status I've updated the build scripts at CENGN. We were still using the -0.2 qualified names at one point in the scripts. The next build should rebuild pxe-network-installer with the the correct kernel and ram disk. Scott On 2019-01-03 3:59 p.m., Penney, Don wrote: I've downloaded the CENGN ISO from: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/ and it does not contain the installer patches. However, the installer image here is patched: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/installer/ We'll have to look at the build to see what's going on here. From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Thursday, January 03, 2019 3:18 PM To: Penney, Don; starlingx Subject: RE: STX install_state status Thank You for your help, I am using the latest ISO from CENGN and has the same issue. Do you know if there is a Launchpad already open for this issue? Regards. Juan Carlos Alonso From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, January 3, 2019 2:06 PM To: Alonso, Juan Carlos ; starlingx Subject: RE: STX install_state status Please see my reply (attached) to the thread "the install progress information has missing when install computer node , storage node, and new controller node". In particular, please check that your installer has the required patch. If you're using a stock CentOS installer image, you will not get the install-state notifications. Cheers, Don. From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Thursday, January 03, 2019 3:00 PM To: starlingx Subject: [Starlingx-discuss] STX install_state status Hi, Do someone have faced an issue with install states status when a new host is added? During controller, compute or storage host installation: system host-show controller-1 | grep install | install_output | text | install_state | None | install_state_info | None During installation several values should be showed as pre-install, installing, completed. At the end of installation the output must be install_state = completed. Regards. Juan Carlos Alonso _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Al.Bailey at windriver.com Mon Jan 7 13:50:16 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Mon, 7 Jan 2019 13:50:16 +0000 Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code In-Reply-To: <0D7994A90DD70040A9F5E77C4D23C57D50F3EAA5@SHSMSX103.ccr.corp.intel.com> References: <0D7994A90DD70040A9F5E77C4D23C57D50F3EAA5@SHSMSX103.ccr.corp.intel.com> Message-ID: I believe the contents of that folder are installed into a patches folder, and acted upon by the sm-patch utility https://github.com/openstack/stx-ha/blob/master/service-mgmt-tools/sm-tools/sm_tools/sm_patch.py It looks like they populate "V1" tables. Bin should be able to indicate if we still make use of the V1 tables or not. Al From: Liu, Changcheng [mailto:changcheng.liu at intel.com] Sent: Sunday, January 06, 2019 10:57 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi all, What's the function of patches directory in StartlingX source code? For example: cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch Regards, Changcheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Mon Jan 7 15:10:43 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 7 Jan 2019 10:10:43 -0500 Subject: [Starlingx-discuss] discuss about initial value of TIS_PATCH_VER when upgrade packages In-Reply-To: References: <9BAB5B7CAF57C3459E4636391F1071CE05290C42@shsmsx102.ccr.corp.intel.com> Message-ID: <266d1987-b300-97a3-e84e-147ff16058c8@windriver.com> 100% agree. Scott On 2019-01-04 9:46 a.m., Chris Friesen wrote: > When we customize an upstream package for the first time, > TIS_PATCH_VER gets set to 1, then generally gets incremented on each > subsequent change.  Thus, prior to package upgrade TIS_PATCH_VER > reflects the number of changes that were made to the upstream > package.  This can be used to tell at a glance how customized a given > package is. > > When upgrading, it's possible that some customizations are no longer > applicable, while others are.  Thus, I think options "a" and "e" don't > make sense as they remove the "how customized is this package" meaning. > > Of the options below, I think option "c" is probably the best since > for an upgrade we might create a single meta-patch to add all the > source patches. > > I think the most accurate value would probably be "number of source > patches" plus "number of meta patches that don't add/remove source > patches".  But we probably don't really need that level of accuracy. > > Chris > > On 1/4/2019 2:28 AM, An, Ran1 wrote: >> Hi all >>    I'm sending this to discuss about the rule of initial value of >> TIS_PATCH_VER when srpm package is upgraded. >> "TIS_PATCH_VER" is a counter to indicate change within a major >> version of the package, on which we put patches. >>    When I upgraded srpms(related to CentOS) from CentOS 7.5 to 7.6, >> there are different voices about the initial value of >> TIS_PATCH_VER(comments on [1][2][3][4]): >>      a). reset it to 0 >>      b). reset to the number of STX patches remaining (source patches >> and meta_patches together) >>      c). reset to the number of STX patches remaining (source patches >> only) >>      d). reset to the number of STX patches remaining (meta patches >> only) >>      e). case by case, better do not reset. 
>> >> It is not a technical issue, but we will face it each time we upgrade >> packages, so which would you like to choose? >> >> [1] https://review.openstack.org/#/c/627760/ >> [2] https://review.openstack.org/#/c/627750/ >> [3] https://review.openstack.org/#/c/627156/ >> [4] https://review.openstack.org/#/c/627770/ >> >> Thanks >> Ran From scott.little at windriver.com Mon Jan 7 15:28:36 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 7 Jan 2019 10:28:36 -0500 Subject: Re: [Starlingx-discuss] discuss about initial value of TIS_PATCH_VER when upgrade packages In-Reply-To: <8e27b76c-ce2a-039c-f51c-b00087dad813@linux.intel.com> References: <9BAB5B7CAF57C3459E4636391F1071CE05290C42@shsmsx102.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F62C@ALA-MBD.corp.ad.wrs.com> <8e27b76c-ce2a-039c-f51c-b00087dad813@linux.intel.com> Message-ID: <36d5be50-348e-a5a5-6d63-c4be2543f4f4@windriver.com> I disagree. Our experience in the past is that putting a tis.0 on a package raises questions from both customers and designers: why are you compiling this at all if you aren't changing it? A little digging, and some wasted cycles, and the answer is, "Oh, we are changing it. We still have 3 patches against it. Sorry for the confusion." Now, as you point out, we might remove a patch in a non-rebase context. In that case we are compelled to increment, rather than decrement, TIS_PATCH_VER, and we have to live with the misleadingly high number until the next rebase. That's OK. No one has complained about that. I should have been flagging this in earlier code reviews. I wasn't. My error. Had bigger fish to fry in the early months of going open source. If the community wants to overrule, that's fine. I'm just trying to share my hard-won experience as 'the rebase guy' for 4 years prior to open sourcing. Scott On 2019-01-04 4:52 p.m., Saul Wold wrote: > > I am not sure I agree with any of this, first off, just the fact that > we have an SRPM and the TIS_PACTH_VER indicates that it's been > patched, I really don't see the value in having the patch count > indicated as a "Version" item. > > It makes more sense to start from 0 (option a) and that way we can > track each subsequent change to that package with an increment. > > This issue did not come up at all in past updates, I am not sure why > it's becoming an issue now. > > See below for additional comments From Marvin.Huang at windriver.com Mon Jan 7 15:51:21 2019 From: Marvin.Huang at windriver.com (Huang, Marvin) Date: Mon, 7 Jan 2019 15:51:21 +0000 Subject: [Starlingx-discuss] latest issue regarding devstack/stx support In-Reply-To: References: <9E7365F4-4B68-4DAB-AF76-057C7D2241D3@intel.com> <74D9C1EDDC44EF468303629CF9A2832C9CE1633E@ALA-MBD.corp.ad.wrs.com> <74D9C1EDDC44EF468303629CF9A2832C9CE178E4@ALA-MBD.corp.ad.wrs.com> Message-ID: <74D9C1EDDC44EF468303629CF9A2832C9CE19864@ALA-MBD.corp.ad.wrs.com> Thanks Al! I gave another try to bring up DevStack but found a different issue this time with stx-fault (after fixing about 5 or 6 issues, all file permission problems that were easy to fix): g++ -pthread -shared -Wl,-z,relro build/temp.linux-x86_64-2.7/fm_python_mod_main.o -L. -L/usr/lib64 -lpq -lfmcommon -lpython2.7 -o build/lib.linux-x86_64-2.7/fm_core.so ++/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_common:245 sudo make DEST_DIR=/usr BIN_DIR=/bin LIB_DIR=/lib INC_DIR=/include MAJOR=1 MINOR=0 install_non_bb make: *** No rule to make target `install_non_bb'. Stop.
+/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_common:1 exit_trap Anybody hit similar issue? Or what am I missing here? I’m using https://wiki.openstack.org/wiki/StarlingX/Devstack/stx-config/localrc as local.conf. BRs, Marvin From: Bailey, Henry Albert (Al) Sent: Friday, December 14, 2018 1:57 PM To: Huang, Marvin; starlingx Subject: RE: [Starlingx-discuss] latest issue regarding devstack/stx support I’m not entirely sure I understand the question. The devstack plugin was impacted by a Makefile change here https://github.com/openstack/stx-fault/commit/93f316da167a5dbb99a234e022a134d16baa5449 There is currently a devstack review for stx-fault here https://review.openstack.org/#/c/623590/ The code in that review is adding devstack as a zuul job, so it will have include fixes related to the Makefile in order for devstack to pass, and it to be able to merge. Al From: Huang, Marvin Sent: Thursday, December 13, 2018 5:08 PM To: Bailey, Henry Albert (Al); starlingx Cc: Huang, Marvin Subject: RE: [Starlingx-discuss] latest issue regarding devstack/stx support Hi Eric/Al, Can you point me the review this issue was involved? So that I can check the status to see I can try it again. Thanks! Marvin From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Tuesday, December 11, 2018 12:03 PM To: Cordoba Malibran, Erich; Bailey, Henry Albert (Al); starlingx Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support Thanks all! The good news is that it looks the current codes fixed some old issues I hit before (or this time it broke before the previous failing point). Marvin From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] Sent: Tuesday, December 11, 2018 11:31 AM To: Bailey, Henry Albert (Al); Huang, Marvin; starlingx Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support Hi My bad, I wasn’t of this required changes on devstack. I’ll send the patch to solve it. -Erich From: "Bailey, Henry Albert (Al)" Date: Tuesday, December 11, 2018 at 10:03 AM To: "Huang, Marvin" , starlingx Subject: Re: [Starlingx-discuss] latest issue regarding devstack/stx support Marvin, A change was merged on Dec 10 which changed the install_non_bb target in the Makefile for fm-mgr There is an open review for adding a devstack job to zuul for stx-fault, so in order for that review to pass zuul, it will need to include the fix. Al From: Huang, Marvin [mailto:Marvin.Huang at windriver.com] Sent: Tuesday, December 11, 2018 10:52 AM To: starlingx Subject: [Starlingx-discuss] latest issue regarding devstack/stx support Hi all, I tried to bring up Devstack/STX this morning, but got the following error, which broke the execution of ./stack.sh. g++ -o fmManager fm_main.o -lfmcommon -lrt -lpthread -luuid ++/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:288 sudo make BIN_DIR=/bin LIB_DIR=/lib INC_DIR=/include MAJOR=1 MINOR=0 install_non_bb make: *** No rule to make target 'install_non_bb'. Stop. +/opt/stack/stx-fault/devstack/lib/stx-fault:install_fm_mgr:1 exit_trap +./stack.sh:exit_trap:522 local r=2 ++./stack.sh:exit_trap:523 jobs -p +./stack.sh:exit_trap:523 jobs= +./stack.sh:exit_trap:526 [[ -n '' ]] +./stack.sh:exit_trap:532 '[' -f '' ']' +./stack.sh:exit_trap:537 kill_spinner +./stack.sh:kill_spinner:432 '[' '!' 
-z '' ']' +./stack.sh:exit_trap:539 [[ 2 -ne 0 ]] +./stack.sh:exit_trap:540 echo 'Error on exit' Error on exit +./stack.sh:exit_trap:542 type -p generate-subunit +./stack.sh:exit_trap:543 generate-subunit 1544541794 890 fail +./stack.sh:exit_trap:545 [[ -z /opt/stack/logs ]] +./stack.sh:exit_trap:548 /opt/stack/devstack/tools/worlddump.py -d /opt/stack/logs +./stack.sh:exit_trap:557 exit 2 stack at ubuntu16045server1:~/devstack$ I'm using the contents of https://wiki.openstack.org/wiki/StarlingX/Devstack/stx-config/localrc and created a local.conf. System: a VirtualBox VM: Ubuntu VERSION="16.04.5 LTS (Xenial Xerus)" Does anybody know if this is a known issue? Any more information regarding which version is working? Thanks! Marvin -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Mon Jan 7 16:53:51 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 7 Jan 2019 11:53:51 -0500 Subject: [Starlingx-discuss] Where to send build failure reports from starlingx.cengn.ca Message-ID: <70eead44-fba3-f4ac-799c-7ec0b52adb5b@windriver.com> Hi folks, I'm thinking build failure reports from starlingx.cengn.ca need to be sent somewhere. I think our options are 1) create a separate mailing list, e.g. build-report at lists.starlingx.io 2) use this list, but prefix the mailings with '[build-report]' so it's easy to filter Preferences? I'm also thinking we might want a build-admin at lists.starlingx.io list for folks with admin powers on starlingx.cengn.ca who can fix the mirror or build. Finally, we don't have a proper smtp host for starlingx.cengn.ca. If anyone can offer an smtp alternative, great! Otherwise we can explore the use of a public service, e.g. smtp.google.com, but the free version comes with a pretty tight capacity cap. I've taken the liberty of reserving build.starlingx at gmail.com for initial experimentation. Thoughts welcome. Scott From scott.little at windriver.com Mon Jan 7 17:05:57 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 7 Jan 2019 12:05:57 -0500 Subject: [Starlingx-discuss] download_mirrors.sh failing due to relocation of 7.5.1804 Message-ID: <567559f6-3084-9748-807a-3c92d7daa97b@windriver.com> CentOS 7.5.1804 has been relocated from mirror.centos.org/centos/7.5.1804 to vault.centos.org/7.5.1804. You may be seeing errors from download_mirrors.sh like ... http://mirror.centos.org/centos/7.5.1804/cloud/x86_64/openstack-pike/repodata/repomd.xml: [Errno 14] HTTP Error 404 - Not Found I'm preparing an stx-tools update to our yum repo config to correct this error. Also, mirror.starlingx.cengn.ca is in the process of downloading all the new (relocated) content. You will see it under the subdirectory /mirror/centos/centos/vault.centos.org/7.5.1804/ . Scott From scott.little at windriver.com Mon Jan 7 17:22:05 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 7 Jan 2019 12:22:05 -0500 Subject: Re: [Starlingx-discuss] download_mirrors.sh failing due to relocation of 7.5.1804 In-Reply-To: <567559f6-3084-9748-807a-3c92d7daa97b@windriver.com> References: <567559f6-3084-9748-807a-3c92d7daa97b@windriver.com> Message-ID: PS. As a workaround, you can use 'download_mirror.sh -s'. This means you will download exclusively from mirror.starlingx.cengn.ca. Our mirror will retain a copy of mirror.centos.org/centos/7.5.1804. The missing repos under mirror.centos.org should not affect you, unless there has been an .lst file change since the last snapshot.
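For anyone who wants to patch a local repo config before the stx-tools update lands, the fix amounts to a URL swap; roughly (a sketch -- the exact location of the yum repo files in an stx-tools checkout is an assumption):

# point any 7.5.1804 repo entries at the vault, where the content now lives
sed -i 's|mirror.centos.org/centos/7.5.1804|vault.centos.org/7.5.1804|g' \
    stx-tools/centos-mirror-tools/yum.repos.d/*.repo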
Scott On 2019-01-07 12:05 p.m., Scott Little wrote: > CentOS 7.5.1804 has been relocated from > mirror.centos.org/centos/7.5.1804 to vault.centos.org/7.5.1804. > > You may be seeing errors from download_mirrors.sh like ... > > http://mirror.centos.org/centos/7.5.1804/cloud/x86_64/openstack-pike/repodata/repomd.xml: > [Errno 14] HTTP Error 404 - Not Found > > I'm preparing an stx-tools update to our yum repo config to correct > this error. > > Also mirror.starlingx.cengn.ca is in the process of downloading all > the new (reloacated) content.  You will see it under the subdirectory > /mirror/centos/centos/vault.centos.org/7.5.1804/ . > > > Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Mon Jan 7 20:40:35 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 7 Jan 2019 21:40:35 +0100 Subject: [Starlingx-discuss] [test][docs] Zoom conflicts during the Contributor Meetup Message-ID: Hi StarlingX Community, The StarlingX Contributor Meetup[1] is approaching quickly and we are working on the last bits of logistics. We would like to provide the option to participate remotely for those of you who are not able to attend in person. An option is to use the Zoom account from which we currently run the weekly project and team calls, which results in not being able to run those calls during the meetup hours on Tuesday and Wednesday next week. The affected calls are the Test and Documentation team calls. What would be the teams' preference? Would you like to keep the weekly team call or cancel it in favor of the contributor meetup? Please respond to this thread and we will also discuss the topic on the project call this Wednesday. Please flag if I missed any other colliding calls. Thanks and Best Regards, Ildikó [1] https://www.eventbrite.com/e/starlingx-contributor-meetup-january-2019-tickets-53250423450 From scott.little at windriver.com Mon Jan 7 21:01:41 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 7 Jan 2019 16:01:41 -0500 Subject: [Starlingx-discuss] FW: binary image upload to cengn for cenos 7.6 upgrade In-Reply-To: References: <56829C2A36C2E542B0CCB9854828E4D8561E21DC@CDSMSX101.ccr.corp.intel.com> Message-ID: <4848cba0-8912-302d-7143-1e648c934c7c@windriver.com> A scripting fix was required. https://review.openstack.org/629047 I have tested it, and the installer directories he lists are now present. Scott On 2019-01-07 10:18 a.m., Young, Ken wrote: > > Do we have any action on this or will this happen automatically? > > /KenY > > *From: *Haochuan Chen > *Date: *Sunday, January 6, 2019 at 8:44 PM > *To: *"starlingx-discuss at lists.starlingx.io" > > *Subject: *[Starlingx-discuss] binary image upload to cengn for cenos > 7.6 upgrade > > Hi > > I works on centos7.6 upgrade, and focus anaconda upgrade from  > 21.48.22.121 to 21.48.22.147. It request to upgrade binary to centos 7.6. > > http://mirror.centos.org/centos/7.6.1810/os/x86_64/LiveOS/ > > http://mirror.centos.org/centos/7.6.1810/os/x86_64/images/ > > http://mirror.centos.org/centos/7.6.1810/os/x86_64/isolinux/ > > But currently on cengn, there is only rpms for 7.6, please help to upload. > > http://mirror.starlingx.cengn.ca/mirror/centos/centos/mirror.centos.org/7.6.1810/os/x86_64/ > > Thanks!
> > Martin, Chen > > SSG OTC, Software Engineer > > 021-61164330 > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ian.Jolliffe at windriver.com Mon Jan 7 21:06:26 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Mon, 7 Jan 2019 21:06:26 +0000 Subject: [Starlingx-discuss] TSC meeting minutes - Dec 20th In-Reply-To: <76B25E27-CA84-480D-A540-0E2FD33A0B09@windriver.com> References: <76B25E27-CA84-480D-A540-0E2FD33A0B09@windriver.com> Message-ID: <2D3B5ACA-D54B-4EEB-87F8-DE0AC85FD08C@windriver.com> Hi all; Below are the minutes of our last TSC meeting of 2018 – looking forward to an exciting 2019 with the community. Our next meeting in the normal slot will be on Jan 10th. The Jan 17th meeting will be cancelled as the TSC will be meeting at the F2F in Chandler on Tuesday and Wednesday next week – the agenda is here [0]. Meeting minutes: Dec 20th, 2018. Compiler option item from the prioritization SS * It sounds like there are patches out there? What projects are compiled with these options? – Curtis * Can we leverage OPNFV performance test frameworks? * On going evolution of community test infrastructure is important to make these things easier to assess. * Need to work with technical leads and due to lower priority this work fall after the higher priority items Help requested for Nova spec review: https://review.openstack.org/#/c/620959/ (brucej) * Chris to review * This change has been merged OpenStack Pilot Project Promotion Process update - brucej - 10 mins Current draft proposed by Intel to the BoD includes this: Pilot Projects can be promoted to a Confirmed Project when the Officers determine that the following Criteria have been met: 1) The project continues to be strategic to the Foundation’s mission, falls within the scope of software infrastructure and is aligned with one of the OSF strategic focus areas. 2) The project has adopted the Four Opens as defined in [1]: All code is Open Source (not “open core”); uses Open Design & Open Development processes; and Open Community practices are upheld. 3) The project has contribution and engagement from a variety of companies in order to appeal to a broad ecosystem. Needs to be a focus area for community and TSC 4) The project has well defined governance spearheaded by technical leaders who set the project’s direction and own its roadmap while facilitating the Four Opens. The project’s governance shall be documented, maintained and approved by the project’s leadership and the Foundation. See Exhibit A for more indicators of open governance. 5) The project does regular and periodic releases at least twice per year (more releases is better). If the project is aligned with OpenStack and subject to agreement between the Officers and the Pilot Project team, the project’s releases and CI/CD framework should be coordinated with that of the OpenStack project to ensure compatibility with relevant OpenStack components. 6) The project can show proof of having users of code in production environments (one metric for maturity, there may be others.) Action: Need a mechanism to track this - should we consider an adopters file? Logo page in our documentation and web site for StarlingX Action: Bruce to take action back to Doc team 7) The project adheres to the OpenStack Foundation Code of Conduct. Action: We should start tracking users for this metric. I suggest asking users to self register in an Adopters file and/or wiki page. 
Other items: Should we have a home location - for miscellaneous items Dean is looking for input - add to F2F agenda No meetings - Dec 27 or Jan 3rd Victor and Brent met to discuss big picture multi OS Encourage community members to attend Multi-OS meeting Regards; Ian [0] https://etherpad.openstack.org/p/stx-chandler-meetup -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Jan 7 21:40:59 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 7 Jan 2019 21:40:59 +0000 Subject: [Starlingx-discuss] CENGN Build Aging In-Reply-To: <5733B9C8-4346-43D8-BFCF-0612B291B32D@windriver.com> References: <5733B9C8-4346-43D8-BFCF-0612B291B32D@windriver.com> Message-ID: <9A85D2917C58154C960D95352B22818BB28C7655@fmsmsx121.amr.corp.intel.com> This looks entirely sensible to me. Very well done, thanks Ken & team! brucej From: Young, Ken [mailto:Ken.Young at windriver.com] Sent: Friday, January 4, 2019 10:57 AM To: starlingx Subject: [Starlingx-discuss] CENGN Build Aging All, The build team has been discussing the frequency of the build and how long they are maintained on the Mirror. Our initial ideas are capture here: https://wiki.openstack.org/wiki/StarlingX/Build/EventBuildCadence This captures the initial view of the team. We can evolve this as we get clear direction from the Release team and the community as whole. Comments welcome. Regards, Ken Y -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Mon Jan 7 21:57:37 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 7 Jan 2019 21:57:37 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Infrastructure Containerization Message-ID: Minutes from today's meeting are here: https://etherpad.openstack.org/p/stx-containerization Key outcomes from today's meeting: * Containerized services can now be run by any community member using this link off the containerization wiki: https://wiki.openstack.org/wiki/StarlingX/Containers/Installation * The containerization team is looking for volunteers to take on the following - please let me know if you are interested in getting involved in this project: o 2004008: [Feature] Create HELM chart for Fault project https://storyboard.openstack.org/#!/story/2004008 o 2004470: [Feature] Add support for k8s labels to inventory panel https://storyboard.openstack.org/#!/story/2004470 o 2004711: Add support for a local mirror of docker images https://storyboard.openstack.org/#!/story/2004711 Frank _____________________________________________ From: Miller, Frank Sent: Friday, December 07, 2018 4:43 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: Meeting Agenda: StarlingX Infrastructure Containerization Just a reminder that our next meeting will be Monday Dec 10th. The agenda is posted here: https://etherpad.openstack.org/p/stx-containerization If anyone would like to add an agenda topic please update the etherpad.
Frank

-----Original Appointment-----
From: Miller, Frank
Sent: Thursday, November 29, 2018 4:55 PM
To: starlingx-discuss at lists.starlingx.io
Subject: StarlingX Infrastructure Containerization
When: Occurs every Monday effective 12/3/2018 until 3/25/2019 from 11:00 AM to 11:30 AM (UTC-05:00) Eastern Time (US & Canada).
Where: https://zoom.us/j/342730236

For those contributing to or interested in the Containerization subproject a weekly meeting has been set up:

Timeslot: 11am EST / 8am PDT / 1600 UTC

Call details
* Zoom link: https://zoom.us/j/342730236
* Dialing in from phone:
  o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
  o Meeting ID: 342 730 236
  o International numbers available: https://zoom.us/u/ed95sU7aQ

Agenda and meeting minutes
Project notes are at https://etherpad.openstack.org/p/stx-containerization
Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From erich.cordoba.malibran at intel.com  Mon Jan  7 22:13:36 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Mon, 7 Jan 2019 22:13:36 +0000
Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security
In-Reply-To: 
References: 
Message-ID: <63bc811b29fb8298d8e9f57ef0f9212c44905aa3.camel@intel.com>

On Wed, 2019-01-02 at 14:40 -0600, Victor Rodriguez wrote:
> On Wed, Jan 2, 2019 at 10:35 AM Young, Ken wrote:
> >
> > Victor,
> >
> > Security work is never completed. There is always a long list of
> > inventive new vulnerabilities and a laundry list of hardening work
> > to be completed. The vulnerability work, considering the
> > severity, is generally urgent. Hardening work is not urgent but
> > important. In this case, we are dealing with a hardening
> > initiative that focuses on a small area of the code.

I don't entirely agree with the hardening urgency, as hardening can prevent vulnerabilities. The lack of reported vulnerabilities does not necessarily mean that software is secure; more likely it means that the software hasn't been used/tested enough. Prevention tends to be more cost-effective than mitigation. I'd prefer to treat the hardening as high priority.

> > The challenge is that these small changes proposed have larger
> > implications. As was pointed out on the gerrit reviews,
> > performance and / or functional testing is required. My concern is
> > that we affect the timing / behaviour of stx-ha and stx-metal such
> > that they do not work together in some scenarios. This will need
> > to be tested and is certainly larger than a sanity.
The concern you have about timing /behavior of stx-ha and > stx-metal is a key point that I would like to understand more, the > idea is to improve security without affecting functionality at all > > Also, I am wondering if there is a way to phase the effort. For > > example, is there a way to break up the flag changes such that the > > warnings are separated from the flags which change the compiled > > code? That way, we are not trying to jam everything through at > > once. > > We could came up with a V2 of the patches with just the warning flags > and the fixes to those warnings, is that ok? Agree with apply the warning patch first to have some progress. > > > > > > > > Hope this helps. Happy to discuss when you return from Holliday. > > Sure, thanks for the feedback ( I will be fully back Monday ) > > > > > > > > > Regards, > > > > Ken Y > > > > > > > > From: Victor Rodriguez > > Date: Friday, December 28, 2018 at 7:34 PM > > To: Curtis > > Cc: "starlingx-discuss at lists.starlingx.io" > .starlingx.io> > > Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag > > for security > > > > > > > > > > > > On Fri, Dec 21, 2018, 07:08 Curtis > > > > > > > > > > > On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez > m> wrote: > > > > Hi StarlingX community > > > > We can all agree that security is an important feature to be taken > > into consideration in any SW project. In the aim of improving the > > security of the StarlingX project, we have been taking the task to > > propose the use of some compiler flags that prevent and detect some > > security holes, especially by buffer overflow that could lead into > > ROP > > attacks. > > > > The list of flags that we are proposing are : > > > > Stack-based Buffer Overrun Detection: CFLAGS=”-fstack-protector- > > strong” > > > > Fortify source: CFLAGS="-O2 > > -D_FORTIFY_SOURCE=2" > > Format string vulnerabilities: CFLAGS="-Wformat -Wformat- > > security" > > Stack execution protection: LDFLAGS="-z noexecstack" > > Data relocation and protection (RELRO): LDLFAGS="-z relro -z now" > > > > > > These are being analyzed in the following Gerrit reviews (thanks a > > lot > > for all the good feedback) > > > > https://review.openstack.org/#/c/623608/ > > https://review.openstack.org/#/c/623603/ > > https://review.openstack.org/#/c/623601/ > > https://review.openstack.org/#/c/623599/ > > > > As requested in the Gerrit reviews, there is a proper need to first > > understand what these compiler flags do and what is the impact they > > have at the functional and performance area of the project. This is > > a > > preliminary report, we will be following up with a test plan for > > functional & performance test plans for the services as a next > > step. 
> > This report includes: > > > > * Detailed description of what the compiler flag does > > * Code example that shows how does it work to prevent attacks > > * If there is a change in the binary, we create a microbenchmark > > that > > shows us how the flag impact the performance > > > > https://github.com/VictorRodriguez/hobbies/tree/master/c_programing > > _exercises/cflags_security > > > > As a result of the microbenchmark, the performance impact is not > > relevant ( less than 1% ) using an Ubuntu x86 system ( GCC 5 ) > > (more > > details on the HW and SW specification upon requests) > > > > The areas of the code we are suggesting on the patches are: > > > > * stx-ha > > * stx-metal > > * stx-nfv > > * stx-fault > > > > We do take care that these flags are not breaking the following > > areas > > after being applied. > > > > * Build process of the image > > * Sanity test cases after the image is created > > (Ada can give more details on the sanity report of the image > > generated > > with these flags) > > > > If running the sanity tests are not enough to prove that a change > > in > > compiler flags do not affect functionality, please gave us the > > right > > path to follow. > > > > As mentioned before, this is a preliminary report, and that we will > > be > > following up with a test plan for functional & performance test > > plans > > for the services as a next step. > > > > Hope this email helps to clarify some questions related to the > > flags > > and start the follow-up discussion. > > > > > > > > Thanks for the context Victor, it's very helpful to me. > > > > > > > > Hi Curtis, glad it helps, it was fun to do the research > > > > > > > > One thing I want to mention is something the Kata Containers team > > was talking about at the Berlin OpenStack summit, which is when > > many small performance hits start to add up. They have to be > > careful to ensure they don't have a bunch of smallish looking > > changes that add up to a large performance hit over a longer period > > of time. > > > > > > > > You are right, it's a valid point that we need to take care too > > > > > > > > Overall I'm sure the StarlingX project would like to have some > > performance testing, if we don't already, though that can be > > challenging for an open source project. I had mentioned OPNFV's > > Functest and related projects on the TSC call, but now seeing which > > components are affected I'm not sure that would be directly > > helpful. I look forward to further discussions around this area. > > > > > > > > Thanks for let me know that, I will take a look at OPNFV's functest > > and other projects before the next TSC of 2019 > > > > > > > > I will do my best to came up with a proposal for a better > > performance testing. 
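To make the discussion above concrete, here is a minimal sketch of the kind of defect these flags catch. It is illustrative only (it is not taken from the patches under review), and it assumes a GCC 4.9+ toolchain with glibc:

    /* overflow.c - deliberate stack buffer overflow, for demonstration only.
     * Build with the flags proposed in this thread, e.g.:
     *   gcc -O2 -D_FORTIFY_SOURCE=2 -fstack-protector-strong \
     *       -Wformat -Wformat-security \
     *       -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now \
     *       overflow.c -o overflow
     */
    #include <string.h>

    int main(int argc, char **argv)
    {
        char buf[8];

        if (argc > 1)
            strcpy(buf, argv[1]);   /* overflows buf when the argument is long */
        return 0;
    }

Run with a long argument, the hardened binary aborts at runtime (glibc typically prints "*** buffer overflow detected ***" from the fortified strcpy, or the stack protector prints "*** stack smashing detected ***") instead of silently corrupting the stack. The link-time settings can also be checked on the resulting binary; for example, "readelf -lW overflow | grep GNU_STACK" should show the stack segment without the E (execute) flag when -z noexecstack took effect.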
> > Thanks
> >
> > Victor Rodriguez
> >
> > Thanks,
> >
> > Curtis
> >
> > Regards
> >
> > Victor Rodriguez
> >
> > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> >
> > --
> >
> > Blog: serverascode.com

> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From dtroyer at gmail.com  Mon Jan  7 22:41:12 2019
From: dtroyer at gmail.com (Dean Troyer)
Date: Mon, 7 Jan 2019 22:41:12 -0600
Subject: [Starlingx-discuss] CENGN Build Aging
In-Reply-To: <5733B9C8-4346-43D8-BFCF-0612B291B32D@windriver.com>
References: <5733B9C8-4346-43D8-BFCF-0612B291B32D@windriver.com>
Message-ID: 

On Fri, Jan 4, 2019 at 12:57 PM Young, Ken wrote:
> The build team has been discussing the frequency of the builds and how long they are maintained on the mirror. Our initial ideas are captured here:

I think that generally looks good, thanks!

One clarification I have is with the release point builds. Do I read that right that if we have a 2019.05.1 release and then do a .2, the .1 would go away immediately? As a user I would want to be able to get to my older install if I need to re-create a deployment. As a smart user I would have a local copy, but I'm not always a smart user :)

It is only because these are the releases we expect people to actually use and depend on that I think we need to be really conservative about removing them. Think about our experiences with binary RPMs being replaced but we still want a specific version. This feels like the same issue, only at a higher layer.

dt

--
Dean Troyer
dtroyer at gmail.com

From sgw at linux.intel.com  Mon Jan  7 23:48:04 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Mon, 7 Jan 2019 15:48:04 -0800
Subject: [Starlingx-discuss] discuss about initial value of TIS_PATCH_VER when upgrade packages
In-Reply-To: <36d5be50-348e-a5a5-6d63-c4be2543f4f4@windriver.com>
References: <9BAB5B7CAF57C3459E4636391F1071CE05290C42@shsmsx102.ccr.corp.intel.com>
 <6703202FD9FDFF4A8DA9ACF104AE129FBA40F62C@ALA-MBD.corp.ad.wrs.com>
 <8e27b76c-ce2a-039c-f51c-b00087dad813@linux.intel.com>
 <36d5be50-348e-a5a5-6d63-c4be2543f4f4@windriver.com>
Message-ID: 

On 1/7/19 7:28 AM, Scott Little wrote:
> I disagree. Our experience in the past is that putting a tis.0 on a
> package raises questions from both customers and designers: why are you
> compiling this at all if you aren't changing it?

I would have thought that the tis. extension would be enough to indicate this package had patches.

I also think we should really be switching to stx.0, but that's a different discussion I would guess.

> A little digging, and some wasted cycles, and the answer is: "Oh, we
> are changing it; we still have 3 patches against it. Sorry for the
> confusion."
>
> Now, as you point out, we might remove a patch in a non-rebase context.
> In this case we are compelled to increment, rather than decrement,
> TIS_PATCH_VER. In this case we have to live with the misleadingly high
> number until the next rebase. That's ok. No one has complained about
> that.

I guess I am concerned about the consistency of the meaning of the tis. suffix when it increments, such that starting at 0 and later incrementing means a change occurred, vs starting at N meaning a patch count and later incrementing and not really having a meaning any more; my OCD kind of kicks in.

> I should have been flagging this in earlier code reviews. I wasn't. My
> error. Had bigger fish to fry in the early months of going open source.

As I said, I had never heard this until now. I understand you're busy, but we did the whole 7.5 update without hearing about it.

> If the community wants to overrule, that's fine. I'm just trying to
> share my hard won experience as 'the rebase guy' for 4 years prior to
> open sourcing.

Do we need a proper Specification for the meaning of the package information? This is where we can change the tis/TIS to stx/STX!

Sau!

> Scott
>
> On 2019-01-04 4:52 p.m., Saul Wold wrote:
>>
>> I am not sure I agree with any of this. First off, just the fact that
>> we have an SRPM and the TIS_PATCH_VER indicates that it's been
>> patched; I really don't see the value in having the patch count
>> indicated as a "Version" item.
>>
>> It makes more sense to start from 0 (option a) and that way we can
>> track each subsequent change to that package with an increment.
>>
>> This issue did not come up at all in past updates, I am not sure why
>> it's becoming an issue now.
>>
>> See below for additional comments
>

From bruce.e.jones at intel.com  Tue Jan  8 00:19:14 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Tue, 8 Jan 2019 00:19:14 +0000
Subject: [Starlingx-discuss] No Multi-OS meeting today.
Message-ID: <9A85D2917C58154C960D95352B22818BB28C7807@fmsmsx121.amr.corp.intel.com>

Several of us have gotten caught up in another meeting and aren't able to join. We will pick up next week.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Ken.Young at windriver.com  Tue Jan  8 00:49:51 2019
From: Ken.Young at windriver.com (Young, Ken)
Date: Tue, 8 Jan 2019 00:49:51 +0000
Subject: [Starlingx-discuss] CENGN Build Aging
In-Reply-To: 
References: <5733B9C8-4346-43D8-BFCF-0612B291B32D@windriver.com>
Message-ID: <0097D305-6FC3-485C-B3C2-139D31F574BA@windriver.com>

See inline.

On 2019-01-07, 5:41 PM, "Dean Troyer" wrote:

    On Fri, Jan 4, 2019 at 12:57 PM Young, Ken wrote:
    > The build team has been discussing the frequency of the builds and how long they are maintained on the mirror. Our initial ideas are captured here:

    I think that generally looks good, thanks!

    One clarification I have is with the release point builds. Do I read that right that if we have a 2019.05.1 release and then do a .2, the .1 would go away immediately? As a user I would want to be able to get to my older install if I need to re-create a deployment. As a smart user I would have a local copy, but I'm not always a smart user :)

    It is only because these are the releases we expect people to actually use and depend on that I think we need to be really conservative about removing them. Think about our experiences with binary RPMs being replaced but we still want a specific version. This feels like the same issue, only at a higher layer.

That is not the intent. Maybe I can make that clearer in the slides. What I meant to say is that both loads are available at the same point in the timeline. I could expose that as two lines both sitting in the same position. Both loads are available until that *release* ages out.
    dt
    --
    Dean Troyer
    dtroyer at gmail.com

From dtroyer at gmail.com  Tue Jan  8 00:55:57 2019
From: dtroyer at gmail.com (Dean Troyer)
Date: Mon, 7 Jan 2019 18:55:57 -0600
Subject: [Starlingx-discuss] CENGN Build Aging
In-Reply-To: <0097D305-6FC3-485C-B3C2-139D31F574BA@windriver.com>
References: <5733B9C8-4346-43D8-BFCF-0612B291B32D@windriver.com>
 <0097D305-6FC3-485C-B3C2-139D31F574BA@windriver.com>
Message-ID: 

On Mon, Jan 7, 2019 at 6:49 PM Young, Ken wrote:
> That is not the intent. Maybe I can make that clearer in the slides. What I meant to say is that both loads are available at the same point in the timeline. I could expose that as two lines both sitting in the same position. Both loads are available until that *release* ages out.

Good deal, thanks for the clarification

dt

--
Dean Troyer
dtroyer at gmail.com

From changcheng.liu at intel.com  Tue Jan  8 01:25:49 2019
From: changcheng.liu at intel.com (Liu, Changcheng)
Date: Tue, 8 Jan 2019 01:25:49 +0000
Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code
Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F40961@SHSMSX103.ccr.corp.intel.com>

Hi Al & Bin,
Does sm_patch.py take effect when installing the ISO image, or does it apply all the patches when building the source code?
Specifically, what's the below patch used for? Could I remove it from the source code directly?
cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch
I'm removing ceph-rest-api from the source code and replacing it with another service. The above patch defines some services related to ceph-rest-api. I don't know whether I need to change that patch file.

B.R.
Changcheng

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com]
Sent: Monday, January 7, 2019 9:50 PM
To: Liu, Changcheng ; starlingx-discuss at lists.starlingx.io
Cc: Qian, Bin
Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code

I believe the contents of that folder are installed into a patches folder, and acted upon by the sm-patch utility
https://github.com/openstack/stx-ha/blob/master/service-mgmt-tools/sm-tools/sm_tools/sm_patch.py
It looks like they populate "V1" tables. Bin should be able to indicate if we still make use of the V1 tables or not.

Al

From: Liu, Changcheng [mailto:changcheng.liu at intel.com]
Sent: Sunday, January 06, 2019 10:57 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code

Hi all,
What's the function of the patches directory in StarlingX source code? For example:
cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch

Regards,
Changcheng

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From ildiko.vancsa at gmail.com Tue Jan 8 13:29:57 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 8 Jan 2019 14:29:57 +0100 Subject: [Starlingx-discuss] Upcoming CFP deadlines in January Message-ID: <08C20EE3-C374-4006-8AF8-FF5034335322@gmail.com> Hi, It is a friendly reminder to the CFP deadlines coming up over the next three weeks including the Open Infrastructure Summit: * January 21 - Open Networking Summit in San Jose April 3-5 - https://events.linuxfoundation.org/events/open-networking-summit-north-america-2019/program/cfp/ * January 23 - Open Infrastructure Summit in Denver April 29-May 1 - https://www.openstack.org/summit/denver-2019/ Please consider submitting proposals to ensure good representation of StarlingX and the great work the community is doing at industry events to get more visibility for the project. Please see the following Google sheet for further events this year: https://docs.google.com/spreadsheets/d/1A9HiMjnqVGxSCd9No7theW8oNu3V1rmK6R1xJOzfUEU/edit?usp=sharing Please let me know if you have any questions or need any help. Thanks and Best Regards, Ildikó From michel.thebeau at windriver.com Tue Jan 8 13:50:28 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Tue, 8 Jan 2019 08:50:28 -0500 Subject: [Starlingx-discuss] How to make controller-0 enabled and available In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C85F7B@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C85F7B@FMSMSX108.amr.corp.intel.com> Message-ID: <1546955428.2938.53.camel@windriver.com> Hi Juan Carlos, You had asked "Do you know how to make the controller-0 enabled and available again in order to perform the controller swact?". In this case I believe you would need to collect and provide logs since the condition described has the appearance of a defect - one does not expect the controller to go disabled/offline due to swact.  I did not observe an existing bug report, so you could double-check and create one. M On Thu, 2019-01-03 at 19:35 +0000, Alonso, Juan Carlos wrote: > Hello, >   > I performed a controller swact, controller-1 become the active, then > controller-0 went to disable and offline and I could not perform the > swact again. >   > I have this output from system host-list: > +----+--------------+-------------+----------------+-------------+--- > -----------+ > | id | hostname     | personality | administrative | operational | > availability | > +----+--------------+-------------+----------------+-------------+--- > -----------+ > | 1  | controller-0 | controller  | unlocked       | disabled    | > offline      | > | 2  | controller-1 | controller  | unlocked       | enabled     | > available    | > +----+--------------+-------------+----------------+-------------+--- > -----------+ >   > Do you know how to make the controller-0 enabled and available again > in order to perform the controller swact? >   > Regards. 
> Juan Carlos Alonso
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Al.Bailey at windriver.com  Tue Jan  8 14:08:22 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Tue, 8 Jan 2019 14:08:22 +0000
Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code
In-Reply-To: <0D7994A90DD70040A9F5E77C4D23C57D50F40961@SHSMSX103.ccr.corp.intel.com>
References: <0D7994A90DD70040A9F5E77C4D23C57D50F40961@SHSMSX103.ccr.corp.intel.com>
Message-ID: 

I don't think any of those files in that patches folder are needed anymore. They were added to the code back in 2015.
Bin will know for sure whether or not they can be removed.

Al

From: Liu, Changcheng [mailto:changcheng.liu at intel.com]
Sent: Monday, January 07, 2019 8:26 PM
To: Bailey, Henry Albert (Al); Qian, Bin
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code

Hi Al & Bin,
Does sm_patch.py take effect when installing the ISO image, or does it apply all the patches when building the source code?
Specifically, what's the below patch used for? Could I remove it from the source code directly?
cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch
I'm removing ceph-rest-api from the source code and replacing it with another service. The above patch defines some services related to ceph-rest-api. I don't know whether I need to change that patch file.

B.R.
Changcheng From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Tuesday, January 8, 2019 10:08 PM To: Liu, Changcheng ; Qian, Bin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code I don't think any of those files in that patches folder are needed anymore. They were added to the c ode back to 2015. Bin will know for sure, whether or not they can be removed. Al From: Liu, Changcheng [mailto:changcheng.liu at intel.com] Sent: Monday, January 07, 2019 8:26 PM To: Bailey, Henry Albert (Al); Qian, Bin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi AI & Bin, Does sm_patch.py make effect when installing the ISO image or it makes effect to apply all the patches when building the source code? Specifically, what's below patch used for? Could I remove it from source code directly? cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch I'm removing ceph-rest-api from source code and replace it with other service. The above patch define some service realted with ceph-rest-api. I don't know whether I need change that patch file. B.R. Changcheng From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Monday, January 7, 2019 9:50 PM To: Liu, Changcheng >; starlingx-discuss at lists.starlingx.io Cc: Qian, Bin > Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code I believe the contents of that folder are installed into a patches folder, and acted upon by the sm-patch utility https://github.com/openstack/stx-ha/blob/master/service-mgmt-tools/sm-tools/sm_tools/sm_patch.py It looks like they populate "V1" tables. Bin should be able to indicate if we still make use of the V1 tables or not. Al From: Liu, Changcheng [mailto:changcheng.liu at intel.com] Sent: Sunday, January 06, 2019 10:57 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi all, What's the function of patches directory in StartlingX source code? For example: cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch Regards, Changcheng -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From ada.cabrales at intel.com  Tue Jan  8 19:33:29 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Tue, 8 Jan 2019 19:33:29 +0000
Subject: [Starlingx-discuss] [ Test ] Meeting notes - 1/8/2019
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD5B802@FMSMSX114.amr.corp.intel.com>

Agenda for 1/8/2019

Attendees: Abraham, Cristopher, Fernando, JC, Numan, Jose, JP, Elio, Bruce, Ada, Richo

* To-do list for 2019 - Ada
  + Ada to provide structure to present this to the community meeting
  Coming from the TSC:
  - Performance testing
  - Stress and stability testing
  - Reduce test cycle time
  - Testing as a Service
  - Zuul unit tests for the Flock
  - Increase unit test coverage for the Flock
  - Zuul API tests for the Flock
  - DevStack enablement
  - Zuul CLI tests for the Flock
  Coming from Ada:
  - Code out
  - Lab setup
  - Automated lab deployment
  - Sanity on Bare metal
  - SDL
  - User documentation
  - Zuul enablement
  - Containers testing
  - Stein rebase - tests analysis
  - Test automation
  - Dashboard for ISO daily status
  - Lab as a 3rd party CI
  - Grow debug capabilities
  - Manual regression
  - Security testing for containers

* Test repo - Numan
  2004674: [Test] stx - Creation of Test Artifacts Repository - https://storyboard.openstack.org/#!/story/2004674
  - Repo has been requested - ask Dean about finishing this - Ada (1/8/2019)
  - A process has to be defined for code submissions
  - Need an owner

* Dashboard - Numan
  2004671: [Test] stx - Creation of Test Dashboard - https://storyboard.openstack.org/#!/story/2004671
  - Include 2 weeks of history on Sanity
  - Automated regression - have some weeks
  - Manual regression - have the current one and the previous
  - Proposal - http://reportportal.io/ - the estimate is this will take 3-4 working weeks to have it running - more time will be required for adjusting the reports we have currently
  - We need a place to host the portal - maybe CENGN? - check the resource availability - Numan and Ada

* Regression test plan - Numan
  2004672: [Test] stx - Creating Regression Test Suite for stx.2019.05 Release - https://storyboard.openstack.org/#!/story/2004672

* Meeting next week conflicting with the meetup
  Let's use the zoom slot for the community meeting on Tuesday. Try to attend the meetings.

* Opens
  - Fernando - an email with questions regarding security was sent during the Holidays, asking help from Numan - WIP
    Signed patches - infrastructure is required for signing them. Numan is checking this topic also with Ken. Different patches are created for testing; we need to create an infra similar to what Wind River has. This is tied to the build process.
  - Numan - Ken is looking for the public driver keys, so we can store the sanity logs on that. Cristopher to contact Ken.
  - Bruce - is the testing team ready for the discussion next week? Ada and Numan are working on that, to get together by the end of this week.

Regards
Ada

From chris.friesen at windriver.com  Tue Jan  8 22:29:27 2019
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 8 Jan 2019 16:29:27 -0600
Subject: [Starlingx-discuss] discuss about initial value of TIS_PATCH_VER when upgrade packages
In-Reply-To: 
References: <9BAB5B7CAF57C3459E4636391F1071CE05290C42@shsmsx102.ccr.corp.intel.com>
 <6703202FD9FDFF4A8DA9ACF104AE129FBA40F62C@ALA-MBD.corp.ad.wrs.com>
 <8e27b76c-ce2a-039c-f51c-b00087dad813@linux.intel.com>
 <36d5be50-348e-a5a5-6d63-c4be2543f4f4@windriver.com>
Message-ID: <90863383-6f2c-fbd0-8670-e1378eeed81c@windriver.com>

I agree that we'd eventually want to switch to "stx" instead of "tis".
The way that we did it previously was to consistently have the meaning of the ".x" be "the number of changes made to the upstream package". So the first time you make a change it'd be ".1", then you make another change and it'd be ".2", and then if you upgrade to a newer base package but keep both changes it'd have a new upstream base version but still be ".2" for the version suffix.

For what it's worth, CentOS and Debian do things a bit differently. When they move to a new upstream version of the package they switch back to "-1" regardless of the number of patches. So you'd have something like 0.14.0-1, then 0.14.0-2, then 0.15.2-1. OpenSUSE has a more complicated suffix like "-5.3.1"; I'm not sure what their rules for updating it are.

Given the above, I could see a rationale for reducing confusion by aligning with CentOS and switching back to ".1" when bumping upstream versions. But I still think there is value in the previous mechanism, as it gives a general idea of how much a given package differs from upstream.

Chris

On 1/7/2019 5:48 PM, Saul Wold wrote:
>
> I also think we should really be switching to stx.0, but that's a
> different discussion I would guess.
>
> I guess I am concerned about the consistency of the meaning of the tis. suffix when it
> increments, such that starting at 0 and later incrementing means
> a change occurred, vs starting at N meaning a patch count and later
> incrementing and not really having a meaning any more; my OCD kind of
> kicks in.

From luis.botello.ortega at intel.com  Tue Jan  8 23:44:14 2019
From: luis.botello.ortega at intel.com (Botello Ortega, Luis)
Date: Tue, 8 Jan 2019 23:44:14 +0000
Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security
In-Reply-To: <63bc811b29fb8298d8e9f57ef0f9212c44905aa3.camel@intel.com>
References: <63bc811b29fb8298d8e9f57ef0f9212c44905aa3.camel@intel.com>
Message-ID: 

Hi:

Thank you all for your feedback! I also agree to first add the "warnings as errors" flags, so I abandoned the old patches and proposed new ones:
https://review.openstack.org/#/c/629329/
https://review.openstack.org/#/c/629331/
https://review.openstack.org/#/c/629332/

Once they are merged, we may add the remaining security flags.

Best Regards
Luis Botello

-----Original Message-----
From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com]
Sent: Monday, January 7, 2019 4:14 PM
To: vm.rod25 at gmail.com; Ken.Young at windriver.com
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for security

On Wed, 2019-01-02 at 14:40 -0600, Victor Rodriguez wrote:
> On Wed, Jan 2, 2019 at 10:35 AM Young, Ken wrote:
> >
> > Victor,
> >
> > Security work is never completed. There is always a long list of
> > inventive new vulnerabilities and a laundry list of hardening work
> > to be completed. The vulnerability work, considering the
> > severity, is generally urgent. Hardening work is not urgent but
> > important. In this case, we are dealing with a hardening initiative
> > that focuses on a small area of the code.

I don't entirely agree with the hardening urgency, as hardening can prevent vulnerabilities. The lack of reported vulnerabilities does not necessarily mean that software is secure; more likely it means that the software hasn't been used/tested enough.

Prevention tends to be more cost-effective than mitigation. I'd prefer to treat the hardening as high priority.

> > The challenge is that these small changes proposed have larger implications.
As was pointed out on the gerrit reviews, performance > > and / or functional testing is required. My concern is that we > > affect the timing / behaviour of stx-ha and stx-metal such that they > > do not work together in some scenarios. This will need to be tested > > and is certainly larger than a sanity. > > I agree, Also I think that in order to improve the discussion and solution of these issues it would be helpful to understand the critical/specific use cases that people is worried about and the thresholds that we shouldn't cross. Then, find a way to measure them and start from there. Otherwise we will deal with a high level of ambiguity that could cause delays on solving these issues. > Agree, our concern on the last TSC meeting was to come up with a > proper framework to measure the performance impact of key changes in > the project ( such as new compiler flags or new functionality > options). The concern you have about timing /behavior of stx-ha and > stx-metal is a key point that I would like to understand more, the > idea is to improve security without affecting functionality at all > > Also, I am wondering if there is a way to phase the effort. For > > example, is there a way to break up the flag changes such that the > > warnings are separated from the flags which change the compiled > > code? That way, we are not trying to jam everything through at > > once. > > We could came up with a V2 of the patches with just the warning flags > and the fixes to those warnings, is that ok? Agree with apply the warning patch first to have some progress. > > > > > > > > Hope this helps. Happy to discuss when you return from Holliday. > > Sure, thanks for the feedback ( I will be fully back Monday ) > > > > > > > > > Regards, > > > > Ken Y > > > > > > > > From: Victor Rodriguez > > Date: Friday, December 28, 2018 at 7:34 PM > > To: Curtis > > Cc: "starlingx-discuss at lists.starlingx.io" > .starlingx.io> > > Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for > > security > > > > > > > > > > > > On Fri, Dec 21, 2018, 07:08 Curtis > > > > > > > > > > > On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez > m> wrote: > > > > Hi StarlingX community > > > > We can all agree that security is an important feature to be taken > > into consideration in any SW project. In the aim of improving the > > security of the StarlingX project, we have been taking the task to > > propose the use of some compiler flags that prevent and detect some > > security holes, especially by buffer overflow that could lead into > > ROP attacks. > > > > The list of flags that we are proposing are : > > > > Stack-based Buffer Overrun Detection: CFLAGS=”-fstack-protector- > > strong” > > > > Fortify source: CFLAGS="-O2 > > -D_FORTIFY_SOURCE=2" > > Format string vulnerabilities: CFLAGS="-Wformat -Wformat- > > security" > > Stack execution protection: LDFLAGS="-z noexecstack" > > Data relocation and protection (RELRO): LDLFAGS="-z relro -z now" > > > > > > These are being analyzed in the following Gerrit reviews (thanks a > > lot for all the good feedback) > > > > https://review.openstack.org/#/c/623608/ > > https://review.openstack.org/#/c/623603/ > > https://review.openstack.org/#/c/623601/ > > https://review.openstack.org/#/c/623599/ > > > > As requested in the Gerrit reviews, there is a proper need to first > > understand what these compiler flags do and what is the impact they > > have at the functional and performance area of the project. 
This is > > a preliminary report, we will be following up with a test plan for > > functional & performance test plans for the services as a next step. > > This report includes: > > > > * Detailed description of what the compiler flag does > > * Code example that shows how does it work to prevent attacks > > * If there is a change in the binary, we create a microbenchmark > > that shows us how the flag impact the performance > > > > https://github.com/VictorRodriguez/hobbies/tree/master/c_programing > > _exercises/cflags_security > > > > As a result of the microbenchmark, the performance impact is not > > relevant ( less than 1% ) using an Ubuntu x86 system ( GCC 5 ) (more > > details on the HW and SW specification upon requests) > > > > The areas of the code we are suggesting on the patches are: > > > > * stx-ha > > * stx-metal > > * stx-nfv > > * stx-fault > > > > We do take care that these flags are not breaking the following > > areas after being applied. > > > > * Build process of the image > > * Sanity test cases after the image is created (Ada can give more > > details on the sanity report of the image generated with these > > flags) > > > > If running the sanity tests are not enough to prove that a change in > > compiler flags do not affect functionality, please gave us the right > > path to follow. > > > > As mentioned before, this is a preliminary report, and that we will > > be following up with a test plan for functional & performance test > > plans for the services as a next step. > > > > Hope this email helps to clarify some questions related to the flags > > and start the follow-up discussion. > > > > > > > > Thanks for the context Victor, it's very helpful to me. > > > > > > > > Hi Curtis, glad it helps, it was fun to do the research > > > > > > > > One thing I want to mention is something the Kata Containers team > > was talking about at the Berlin OpenStack summit, which is when many > > small performance hits start to add up. They have to be careful to > > ensure they don't have a bunch of smallish looking changes that add > > up to a large performance hit over a longer period of time. > > > > > > > > You are right, it's a valid point that we need to take care too > > > > > > > > Overall I'm sure the StarlingX project would like to have some > > performance testing, if we don't already, though that can be > > challenging for an open source project. I had mentioned OPNFV's > > Functest and related projects on the TSC call, but now seeing which > > components are affected I'm not sure that would be directly helpful. > > I look forward to further discussions around this area. > > > > > > > > Thanks for let me know that, I will take a look at OPNFV's functest > > and other projects before the next TSC of 2019 > > > > > > > > I will do my best to came up with a proposal for a better > > performance testing. 
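Since the microbenchmark approach keeps coming up in this thread, here is a sketch of the kind of measurement being described. It is illustrative only, not the harness from the cflags_security repository linked above; POSIX clock_gettime and a GCC toolchain are assumed:

    /* bench.c - time a small string-handling hot loop.  Build it twice and
     * compare wall-clock times, e.g.:
     *   gcc -O2 bench.c -o bench-plain
     *   gcc -O2 -fstack-protector-strong -D_FORTIFY_SOURCE=2 bench.c -o bench-hard
     */
    #include <stdio.h>
    #include <string.h>
    #include <time.h>

    #define ITERS 10000000L

    /* Uses a stack buffer and fortified libc calls, so it is instrumented
     * by both -fstack-protector-strong and -D_FORTIFY_SOURCE=2. */
    static void work(char *dst, long salt)
    {
        char tmp[64];

        snprintf(tmp, sizeof(tmp), "starlingx-%ld", salt);
        memcpy(dst, tmp, sizeof(tmp));
    }

    int main(void)
    {
        static char out[64];
        struct timespec t0, t1;
        long i;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        for (i = 0; i < ITERS; i++)
            work(out, i);
        clock_gettime(CLOCK_MONOTONIC, &t1);

        printf("%s: %ld iterations in %.3f s\n", out, ITERS,
               (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9);
        return 0;
    }

Comparing the two binaries over several runs gives a rough per-call cost for the instrumentation; a difference on the order of 1% or less would match the Ubuntu/GCC 5 numbers quoted above.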
> > Thanks
> >
> > Victor Rodriguez
> >
> > Thanks,
> >
> > Curtis
> >
> > Regards
> >
> > Victor Rodriguez
> >
> > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> >
> > --
> >
> > Blog: serverascode.com
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From juan.carlos.alonso at intel.com  Tue Jan  8 23:45:52 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Tue, 8 Jan 2019 23:45:52 +0000
Subject: [Starlingx-discuss] Sanity Test - ISO 20190108
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C89AE2@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jan-08 (link)

Sanity Test is executed in a Virtual Environment

Status: YELLOW

Simplex
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         18 TCs [PASS]
TOTAL: [ 23 TCs PASS ]

Duplex
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

Multinode Controller Storage
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

Multinode Dedicated Storage
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         19 TCs [PASS]
TOTAL: [ 24 TCs PASS ]

------------------------------------------------------------------
When installing a new host (controller, compute or storage) the 'install_state' fields don't show values about the installation progress. They stay as 'None'. Check the attached thread for more details.
Launchpad: https://bugs.launchpad.net/starlingx/+bug/1810553

Regards.
Juan Carlos Alonso

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ada.cabrales at intel.com  Wed Jan  9 00:43:06 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Wed, 9 Jan 2019 00:43:06 +0000
Subject: [Starlingx-discuss] [ Test ] Repo creation
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD5BB7B@FMSMSX114.amr.corp.intel.com>

Hello,

We talked previously about having the testing repo within the OpenStack repositories, like all the rest of the StarlingX subprojects.
Can someone (Dean?) help us to create it? Shall I create a storyboard for this task?

Thanks a lot
Ada

From shuicheng.lin at intel.com  Wed Jan  9 14:31:27 2019
From: shuicheng.lin at intel.com (Lin, Shuicheng)
Date: Wed, 9 Jan 2019 14:31:27 +0000
Subject: [Starlingx-discuss] puppet related srpm be upgraded in CentOS 7.6
Message-ID: <9700A18779F35F49AF027300A49E7C765FE6A86B@SHSMSX101.ccr.corp.intel.com>

Hi all,

Here is the puppet related srpm we plan to upgrade in the CentOS 7.6 feature branch. Please help review whether the upgrade is needed or not. Thanks.
You can find them as tasks in the story below:
https://storyboard.openstack.org/#!/story/2004522

puppet-keystone
puppet-oslo
puppet-cinder
puppet-glance
puppet-nova
puppet-ceilometer
puppet-heat
puppet-ironic
puppet-magnum
puppet-murano
puppet-neutron
puppet-panko
puppet-swift
puppet-gnocchi

Best Regards
Shuicheng

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cindy.xie at intel.com  Wed Jan  9 14:43:02 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 9 Jan 2019 14:43:02 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 1/9
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E3D237@SHSMSX103.ccr.corp.intel.com>

Agenda for 1/9 meetings:

1. CentOS 7.6 upgrade status (Shuicheng/Martin)
   1.1 Kernel & out-of-tree kernel driver upgrade status:
   - Patch is ready for review. Need deploy test before code merge.
   - 10 modules upgraded: i40e/i40evf/ixgbe/ixgbevf/tpm/integrity/mellanox series/openvswitch.
   - For Mellanox adapter support: the driver itself is upgraded, but Mellanox support in DPDK is disabled, and the openvswitch/DPDK upgrade is postponed to wait for the new openvswitch release.
   - Agreed with the network team: the longer term plan is to re-enable Mellanox driver support in DPDK 18.11 when the latest OpenVSwitch version supports 18.11 DPDK.
   - AR: Shuicheng to create a story to track the Mellanox driver re-enable work in DPDK.
   1.2 srpm & rpm upgrade status:
   - Total 49 srpm: 17 merged. 10 under review. 5 abandoned (3 due to the srpm no longer being used: 2 replaced by RPM, 1 no longer needed; 1 due to the higher version package coming from the fedora repo.) Issue w/ Puppet-RabbitMQ, will not be upgraded.
   - 17 srpm still under work. 15 of them are openstack puppet related; they need to be upgraded and verified together.
   - AR: Shuicheng to send the list of Puppet related SRPM for OpenStack; it's very likely that those OpenStack related puppet sRPM will not require an upgrade because the containerization work is going on, and they are not required. Story created: https://storyboard.openstack.org/#!/story/2004743
   - rpm upgrade is on-going.

2. Ceph upgrade status (Vivian/Changcheng)
   - Ceph REST API has been removed from src. Two controllers can be unlocked now. Scripts have been written to enable the Ceph-mgr RESTful plug-in. One bug has been fixed and the patch submitted to the upstream Ceph community.
   - Upgrade of Ceph itself to 13.2.2 has been done; the remaining work is more related to the other components which interact w/ Ceph. The Puppet module and the service-mgr Python-ceph-client need to be updated to adapt to the new Ceph; effort estimation not available, no idea what else needs to be updated to adapt to it.
   - Frank: These 3 components shall be updated due to the removed REST API. Documentation available? Ovidiu is trying to help but the team is still tight w/ other commitments; trainings have been set up.
   - AR: Frank/Ovidiu to send the invitation to Cindy as well so that Cindy can see if there is somebody else who can join and help Changcheng.
   - Goal for the Ceph upgrade: the new Ceph 13.2.2 shall be up and running with StarlingX. Need to get this done earlier.

3. Python2to3 status, flocks and OS packages (Austin)
   - stx-config: last task to enable python3.5 unit testing WiP;
   - stx-integ: Python related patches need to be reviewed, will start after stx-config unit testing.
   - stx-distcloud: just uploaded the patches, under code review now.
   - stx-NFV: Ran is still working on this; right now her resource is occupied w/ the 7.6 upgrade; will shift to this task probably next week;
   - stx-distcloudclient: was assigned to Victor.
   - Other upstream RPMs Python3 transition status analysis: https://bugs.launchpad.net/starlingx/+bug/1808073. AR: Cindy to bring this up at the StarlingX planning meetup next week so that we can set goals for how to address them.
   - Goal: set a target for this release; need to do some work regarding upstream components.

4. Opens (all)
   None

-----Original Appointment-----
From: Xie, Cindy
Sent: Monday, November 5, 2018 2:27 PM
To: Xie, Cindy; Shang, Dehao; 'Rowsell, Brent'; Wold, Saul; Waheed, Numan; Sun, Austin; Jones, Bruce E; Liu, ZhipengS; starlingx-discuss at lists.starlingx.io; Troyer, Dean; Hu, Yong; 'Khalil, Ghada'; Zhu, Vivian; Lin, Shuicheng; Somerville, Jim
Cc: 'Young, Ken'; Hu, Wei W; Armstrong, Robert H; Martinez Monroy, Elio; 'Hellmann, Gil'; 'Chen, Jacky'; 'Eslimi, Dariush'; Lara, Cesar; Cobbley, David A; 'Waines, Greg'; Gomez, Juan P; Martinez Landa, Hayde; Arce Moreno, Abraham; Perez Rodriguez, Humberto I; Perez Carranza, Jose; 'Seiler, Glenn'
Subject: Weekly StarlingX non-OpenStack Distro meeting
When: Wednesday, January 9, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada).
Where: https://zoom.us/j/342730236

. Cadence and time slot:
  o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
  o Zoom link: https://zoom.us/j/342730236
  o Dialing in from phone:
    o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
    o Meeting ID: 342 730 236
    o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
  o https://etherpad.openstack.org/p/stx-distro-other

From Brent.Rowsell at windriver.com  Wed Jan  9 14:46:39 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Wed, 9 Jan 2019 14:46:39 +0000
Subject: [Starlingx-discuss] puppet related srpm be upgraded in CentOS 7.6
In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE6A86B@SHSMSX101.ccr.corp.intel.com>
References: <9700A18779F35F49AF027300A49E7C765FE6A86B@SHSMSX101.ccr.corp.intel.com>
Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB37B01F@ALA-MBD.corp.ad.wrs.com>

Shuicheng,

You can abandon the 7.6 upgrade for all of these.

Brent

From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com]
Sent: Wednesday, January 9, 2019 9:31 AM
To: Rowsell, Brent ; starlingx-discuss at lists.starlingx.io
Subject: puppet related srpm be upgraded in CentOS 7.6

Hi all,

Here is the puppet related srpm we plan to upgrade in the CentOS 7.6 feature branch. Please help review whether the upgrade is needed or not. Thanks.
You can find them as tasks in the story below:
https://storyboard.openstack.org/#!/story/2004522

puppet-keystone
puppet-oslo
puppet-cinder
puppet-glance
puppet-nova
puppet-ceilometer
puppet-heat
puppet-ironic
puppet-magnum
puppet-murano
puppet-neutron
puppet-panko
puppet-swift
puppet-gnocchi

Best Regards
Shuicheng

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vm.rod25 at gmail.com  Wed Jan  9 15:08:49 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Wed, 9 Jan 2019 09:08:49 -0600
Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security
In-Reply-To: 
References: <63bc811b29fb8298d8e9f57ef0f9212c44905aa3.camel@intel.com>
Message-ID: 

On Tue, Jan 8, 2019 at 5:44 PM Botello Ortega, Luis wrote:
>
> Hi:
>
> Thank you all for your feedback! I also agree to first add the "warnings as errors" flags, so I abandoned the old patches and proposed new ones:
> https://review.openstack.org/#/c/629329/
> https://review.openstack.org/#/c/629331/
> https://review.openstack.org/#/c/629332/
>

Thanks Luis, can you post the result of the sanity tests after these patches,
to check that image build and basic tests pass Regards > > Once them are merged, we may add the remaining security flags > > Best Regards > Luis Botello > > -----Original Message----- > From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com] > Sent: Monday, January 7, 2019 4:14 PM > To: vm.rod25 at gmail.com; Ken.Young at windriver.com > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for security > > On Wed, 2019-01-02 at 14:40 -0600, Victor Rodriguez wrote: > > > > > > > > On Wed, Jan 2, 2019 at 10:35 AM Young, Ken > > wrote: > > > > > > Victor, > > > > > > > > > > > > Security work is never completed. There is always a long list of > > > inventive new vulnerabilities and a laundry list of hardening work > > > to be completed. The vulnerability work, considering the > > > severity, is generally urgent. Hardening work is not urgent but > > > important. In this case, we are dealing with a hardening initiative > > > that focuses on a small area of the code. > > > > > I don't entirely agree with the hardening urgency, as hardening can prevent vulnerabilities. The lack of reported vulnerabilities not necessary means that software is secure, most probably is that the software hasn't been used/tested well enough. > > Prevention tends to be more cost-effective than mitigation. I'd prefer to treat the hardening as high priority. > > > > > > > The challenge is that these small change proposed have larger > > > implications. As was pointed out on the gerrit reviews, performance > > > and / or functional testing is required. My concern is that we > > > affect the timing / behaviour of stx-ha and stx-metal such that they > > > do not work together in some scenarios. This will need to be tested > > > and is certainly larger than a sanity. > > > > > I agree, Also I think that in order to improve the discussion and solution of these issues it would be helpful to understand the critical/specific use cases that people is worried about and the thresholds that we shouldn't cross. Then, find a way to measure them and start from there. > > Otherwise we will deal with a high level of ambiguity that could cause delays on solving these issues. > > > > Agree, our concern on the last TSC meeting was to come up with a > > proper framework to measure the performance impact of key changes in > > the project ( such as new compiler flags or new functionality > > options). The concern you have about timing /behavior of stx-ha and > > stx-metal is a key point that I would like to understand more, the > > idea is to improve security without affecting functionality at all > > > Also, I am wondering if there is a way to phase the effort. For > > > example, is there a way to break up the flag changes such that the > > > warnings are separated from the flags which change the compiled > > > code? That way, we are not trying to jam everything through at > > > once. > > > > We could came up with a V2 of the patches with just the warning flags > > and the fixes to those warnings, is that ok? > > Agree with apply the warning patch first to have some progress. > > > > > > > > > > > > > Hope this helps. Happy to discuss when you return from Holliday. 
> > > > Sure, thanks for the feedback ( I will be fully back Monday ) > > > > > > > > > > > > > > Regards, > > > > > > Ken Y > > > > > > > > > > > > From: Victor Rodriguez > > > Date: Friday, December 28, 2018 at 7:34 PM > > > To: Curtis > > > Cc: "starlingx-discuss at lists.starlingx.io" > > .starlingx.io> > > > Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for > > > security > > > > > > > > > > > > > > > > > > On Fri, Dec 21, 2018, 07:08 Curtis > > > > > > > > > > > > > > > > > On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez > > m> wrote: > > > > > > Hi StarlingX community > > > > > > We can all agree that security is an important feature to be taken > > > into consideration in any SW project. In the aim of improving the > > > security of the StarlingX project, we have been taking the task to > > > propose the use of some compiler flags that prevent and detect some > > > security holes, especially by buffer overflow that could lead into > > > ROP attacks. > > > > > > The list of flags that we are proposing are : > > > > > > Stack-based Buffer Overrun Detection: CFLAGS=”-fstack-protector- > > > strong” > > > > > > Fortify source: CFLAGS="-O2 > > > -D_FORTIFY_SOURCE=2" > > > Format string vulnerabilities: CFLAGS="-Wformat -Wformat- > > > security" > > > Stack execution protection: LDFLAGS="-z noexecstack" > > > Data relocation and protection (RELRO): LDLFAGS="-z relro -z now" > > > > > > > > > These are being analyzed in the following Gerrit reviews (thanks a > > > lot for all the good feedback) > > > > > > https://review.openstack.org/#/c/623608/ > > > https://review.openstack.org/#/c/623603/ > > > https://review.openstack.org/#/c/623601/ > > > https://review.openstack.org/#/c/623599/ > > > > > > As requested in the Gerrit reviews, there is a proper need to first > > > understand what these compiler flags do and what is the impact they > > > have at the functional and performance area of the project. This is > > > a preliminary report, we will be following up with a test plan for > > > functional & performance test plans for the services as a next step. > > > This report includes: > > > > > > * Detailed description of what the compiler flag does > > > * Code example that shows how does it work to prevent attacks > > > * If there is a change in the binary, we create a microbenchmark > > > that shows us how the flag impact the performance > > > > > > https://github.com/VictorRodriguez/hobbies/tree/master/c_programing > > > _exercises/cflags_security > > > > > > As a result of the microbenchmark, the performance impact is not > > > relevant ( less than 1% ) using an Ubuntu x86 system ( GCC 5 ) (more > > > details on the HW and SW specification upon requests) > > > > > > The areas of the code we are suggesting on the patches are: > > > > > > * stx-ha > > > * stx-metal > > > * stx-nfv > > > * stx-fault > > > > > > We do take care that these flags are not breaking the following > > > areas after being applied. > > > > > > * Build process of the image > > > * Sanity test cases after the image is created (Ada can give more > > > details on the sanity report of the image generated with these > > > flags) > > > > > > If running the sanity tests are not enough to prove that a change in > > > compiler flags do not affect functionality, please gave us the right > > > path to follow. > > > > > > As mentioned before, this is a preliminary report, and that we will > > > be following up with a test plan for functional & performance test > > > plans for the services as a next step. 
> > >
> > > Hope this email helps to clarify some questions related to the flags
> > > and start the follow-up discussion.
> > >
> > > Thanks for the context Victor, it's very helpful to me.
> > >
> > > Hi Curtis, glad it helps, it was fun to do the research.
> > >
> > > One thing I want to mention is something the Kata Containers team
> > > was talking about at the Berlin OpenStack summit, which is when many
> > > small performance hits start to add up. They have to be careful to
> > > ensure they don't have a bunch of smallish-looking changes that add
> > > up to a large performance hit over a longer period of time.
> > >
> > > You are right, it's a valid point that we need to take care of too.
> > >
> > > Overall I'm sure the StarlingX project would like to have some
> > > performance testing, if we don't already, though that can be
> > > challenging for an open source project. I had mentioned OPNFV's
> > > Functest and related projects on the TSC call, but now seeing which
> > > components are affected I'm not sure that would be directly helpful.
> > > I look forward to further discussions around this area.
> > >
> > > Thanks for letting me know; I will take a look at OPNFV's Functest
> > > and other projects before the next TSC call of 2019.
> > >
> > > I will do my best to come up with a proposal for better
> > > performance testing.
> > >
> > > Thanks
> > >
> > > Victor Rodriguez
> > >
> > > Thanks,
> > > Curtis
> > >
> > > Regards
> > > Victor Rodriguez
> > >
> > > _______________________________________________
> > > Starlingx-discuss mailing list
> > > Starlingx-discuss at lists.starlingx.io
> > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> > >
> > > --
> > > Blog: serverascode.com
> >
> > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From marcel at schaible-consulting.de Wed Jan 9 15:23:34 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Wed, 9 Jan 2019 16:23:34 +0100 (CET)
Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore
Message-ID: <1360000352.1296432.1547047414173@communicator.strato.com>

Hi,

I am trying to give the above-mentioned ISO image a try on our Artesyn/MaxCore box, without success.

After starting the installation I get the following messages:

[ 18.503656] localhost iscsid[643]: iSCSI daemon with pid=644 started!
[ 18.503816] localhost iscsid[643]: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
[ 18.503949] localhost iscsid[643]: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName.
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or [ 18.504121] localhost iscsid[643]: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi [ 18.504254] localhost iscsid[643]: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf [ 21.185105] localhost kernel: scsi 0:0:0:0: Direct-Access Generic Ultra HS-COMBO 1.98 PQ: 0 ANSI: 0 [ 21.204488] localhost kernel: sd 0:0:0:0: [sda] Attached SCSI removable disk [ 18.665925] localhost multipathd[636]: sda: add path (uevent) [ 18.667242] localhost multipathd[636]: sda: failed to get path uid [ 18.667399] localhost multipathd[636]: uevent trigger error [ 21.930592] localhost kernel: scsi 1:0:0:0: Direct-Access TrekStor TrekStor USB CS PQ: 0 ANSI: 0 CCS [ 21.940187] localhost kernel: scsi 1:0:0:0: alua: supports implicit and explicit TPGS [ 21.947210] localhost kernel: scsi 1:0:0:0: alua: No target port descriptors found [ 21.953921] localhost kernel: scsi 1:0:0:0: alua: not attached [ 21.959034] localhost kernel: sd 1:0:0:0: [sdb] 15257600 512-byte logical blocks: (7.81 GB/7.27 GiB) [ 21.968220] localhost kernel: sd 1:0:0:0: [sdb] Write Protect is off [ 21.973575] localhost kernel: sd 1:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 21.974314] localhost kernel: sd 1:0:0:0: [sdb] No Caching mode page found [ 21.980260] localhost kernel: sd 1:0:0:0: [sdb] Assuming drive cache: write through [ 21.990203] localhost kernel: sdb: sdb1 [ 21.994973] localhost kernel: sd 1:0:0:0: [sdb] Attached SCSI removable disk [ 19.533105] localhost multipathd[636]: sdb: add path (uevent) [ 63.961502] localhost kernel: random: crng init done [ 142.536373] localhost dracut-initqueue[646]: Warning: dracut-initqueue timeout - starting timeout scripts ... [ 203.386668] localhost dracut-initqueue[646]: Warning: Could not boot. [ 204.505706] localhost systemd[1]: Received SIGRTMIN+20 from PID 625 (plymouthd). [ 204.505980] localhost dracut-initqueue[646]: Warning: /dev/root does not exist [ 204.514883] localhost systemd[1]: Starting Dracut Emergency Shell... [ 204.532145] localhost systemd[1]: Received SIGRTMIN+21 from PID 625 (plymouthd). I suspect that the installer is not finding the disk, which is in our case nvme0n1 and nvme1n1 flash storage. Any idea or hint is welcome! Thanks Marcel From ildiko.vancsa at gmail.com Wed Jan 9 15:39:30 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 9 Jan 2019 16:39:30 +0100 Subject: [Starlingx-discuss] [test][docs] Zoom conflicts during the Contributor Meetup In-Reply-To: References: Message-ID: <28D010EB-1F4D-414F-851A-6346AD7A1883@gmail.com> Hi, As a reminder we’ve discussed this topic on the weekly call today and as there were no objections we decided to cancel the weekly calls colliding with team calls next week and use the Zoom account to provide remote access to the Contributor Meetup sessions. The dial-in information to that will be distributed in a separate e-mail and will also be added to the meetup etherpad. Thanks and Best Regards, Ildikó > On 2019. Jan 7., at 21:40, Ildiko Vancsa wrote: > > Hi StarlingX Community, > > The StarlingX Contributor Meetup[1] is approaching quickly and we are working on the last bits of logistics. > > We would like to provide the option to participate remotely for those of you who are not able to attend in person. 
An option is to use the Zoom account which we currently run the weekly project and team calls from which results in not being able to run the calls during the meetup hours on Tuesday and Wednesday next week. > > The affected calls are the Test and Documentation team calls. What would be the teams’ preference? Would you like to keep the weekly team call or cancel it in favor of the contributor meetup? > > Please respond to this thread and we will also discuss the topic on the project call this Wednesday. > > Please flag if I missed any other colliding calls. > > Thanks and Best Regards, > Ildikó > > [1] https://www.eventbrite.com/e/starlingx-contributor-meetup-january-2019-tickets-53250423450 > > From ildiko.vancsa at gmail.com Wed Jan 9 16:05:43 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 9 Jan 2019 17:05:43 +0100 Subject: [Starlingx-discuss] Contributor Meetup dial-in information Message-ID: <160A3823-0E6F-4815-9E67-4943A73663A7@gmail.com> Hi, We will be having the first StarlingX Contributor Meetup[1] next Tuesday-Wednesday. While it is a face to face workshop we will be providing a remote participation option for those of you who will not be able to join in person. Please see the Zoom link and additional dial-in information that we will use at the time of the meetup next week. All the colliding project team calls will be cancelled. Please let me know if you have any questions. Thanks and Best Regards, Ildikó Call-in details: • Join Zoom Meeting https://zoom.us/j/316228339 • One tap mobile • +16699006833,,316228339# US (San Jose) • +16468769923,,316228339# US (New York) • Dial by your location • +1 669 900 6833 US (San Jose) • +1 646 876 9923 US (New York) • Meeting ID: 316 228 339 • Find your local number: https://zoom.us/u/ddD6DnT3X [1] https://etherpad.openstack.org/p/stx-chandler-meetup From Don.Penney at windriver.com Wed Jan 9 16:08:20 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 9 Jan 2019 16:08:20 +0000 Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore In-Reply-To: <1360000352.1296432.1547047414173@communicator.strato.com> References: <1360000352.1296432.1547047414173@communicator.strato.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA411CB3@ALA-MBD.corp.ad.wrs.com> I just checked the latest CENGN ISO, and it's still using the unmodified installer image. See attached email thread. Scott, any update on this issue? -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Wednesday, January 09, 2019 10:24 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore Hi, I am trying to give the above mentioned ISO image a try on our ArteSyn/MaxCore box without success. After starting the installation I'll get the following message: [ 18.503656] localhost iscsid[643]: iSCSI daemon with pid=644 started! [ 18.503816] localhost iscsid[643]: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi [ 18.503949] localhost iscsid[643]: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or [ 18.504121] localhost iscsid[643]: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi [ 18.504254] localhost iscsid[643]: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf [ 21.185105] localhost kernel: scsi 0:0:0:0: Direct-Access Generic Ultra HS-COMBO 1.98 PQ: 0 ANSI: 0 [ 21.204488] localhost kernel: sd 0:0:0:0: [sda] Attached SCSI removable disk [ 18.665925] localhost multipathd[636]: sda: add path (uevent) [ 18.667242] localhost multipathd[636]: sda: failed to get path uid [ 18.667399] localhost multipathd[636]: uevent trigger error [ 21.930592] localhost kernel: scsi 1:0:0:0: Direct-Access TrekStor TrekStor USB CS PQ: 0 ANSI: 0 CCS [ 21.940187] localhost kernel: scsi 1:0:0:0: alua: supports implicit and explicit TPGS [ 21.947210] localhost kernel: scsi 1:0:0:0: alua: No target port descriptors found [ 21.953921] localhost kernel: scsi 1:0:0:0: alua: not attached [ 21.959034] localhost kernel: sd 1:0:0:0: [sdb] 15257600 512-byte logical blocks: (7.81 GB/7.27 GiB) [ 21.968220] localhost kernel: sd 1:0:0:0: [sdb] Write Protect is off [ 21.973575] localhost kernel: sd 1:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 21.974314] localhost kernel: sd 1:0:0:0: [sdb] No Caching mode page found [ 21.980260] localhost kernel: sd 1:0:0:0: [sdb] Assuming drive cache: write through [ 21.990203] localhost kernel: sdb: sdb1 [ 21.994973] localhost kernel: sd 1:0:0:0: [sdb] Attached SCSI removable disk [ 19.533105] localhost multipathd[636]: sdb: add path (uevent) [ 63.961502] localhost kernel: random: crng init done [ 142.536373] localhost dracut-initqueue[646]: Warning: dracut-initqueue timeout - starting timeout scripts ... [ 203.386668] localhost dracut-initqueue[646]: Warning: Could not boot. [ 204.505706] localhost systemd[1]: Received SIGRTMIN+20 from PID 625 (plymouthd). [ 204.505980] localhost dracut-initqueue[646]: Warning: /dev/root does not exist [ 204.514883] localhost systemd[1]: Starting Dracut Emergency Shell... [ 204.532145] localhost systemd[1]: Received SIGRTMIN+21 from PID 625 (plymouthd). I suspect that the installer is not finding the disk, which is in our case nvme0n1 and nvme1n1 flash storage. Any idea or hint is welcome! Thanks Marcel _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An embedded message was scrubbed... From: "Penney, Don" Subject: Re: [Starlingx-discuss] STX install_state status Date: Fri, 4 Jan 2019 20:01:27 +0000 Size: 30426 URL: From Bin.Qian at windriver.com Wed Jan 9 16:28:05 2019 From: Bin.Qian at windriver.com (Qian, Bin) Date: Wed, 9 Jan 2019 16:28:05 +0000 Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code In-Reply-To: <0D7994A90DD70040A9F5E77C4D23C57D50F40E7D@SHSMSX103.ccr.corp.intel.com> References: <0D7994A90DD70040A9F5E77C4D23C57D50F40961@SHSMSX103.ccr.corp.intel.com> , <0D7994A90DD70040A9F5E77C4D23C57D50F40E7D@SHSMSX103.ccr.corp.intel.com> Message-ID: The code is no longer used. I will setup a task to do some cleanup. 
Bin
________________________________
From: Liu, Changcheng [changcheng.liu at intel.com]
Sent: Tuesday, January 08, 2019 7:57 AM
To: Bailey, Henry Albert (Al); Qian, Bin
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code

Hi Al & Bin,
Please help submit a patch to remove them from the source code once we're sure they're not needed anymore.
B.R.
Changcheng

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com]
Sent: Tuesday, January 8, 2019 10:08 PM
To: Liu, Changcheng; Qian, Bin
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code

I don't think any of those files in that patches folder are needed anymore. They were added to the code back in 2015.
Bin will know for sure whether or not they can be removed.
Al

From: Liu, Changcheng [mailto:changcheng.liu at intel.com]
Sent: Monday, January 07, 2019 8:26 PM
To: Bailey, Henry Albert (Al); Qian, Bin
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code

Hi Al & Bin,
Does sm_patch.py take effect when installing the ISO image, or does it apply the patches when building the source code?
Specifically, what is the patch below used for? Could I remove it from the source code directly?
cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch
I'm removing ceph-rest-api from the source code and replacing it with another service. The above patch defines some services related to ceph-rest-api, and I don't know whether I need to change that patch file.
B.R.
Changcheng

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com]
Sent: Monday, January 7, 2019 9:50 PM
To: Liu, Changcheng; starlingx-discuss at lists.starlingx.io
Cc: Qian, Bin
Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code

I believe the contents of that folder are installed into a patches folder and acted upon by the sm-patch utility:
https://github.com/openstack/stx-ha/blob/master/service-mgmt-tools/sm-tools/sm_tools/sm_patch.py
It looks like they populate "V1" tables. Bin should be able to indicate whether we still make use of the V1 tables or not.
Al

From: Liu, Changcheng [mailto:changcheng.liu at intel.com]
Sent: Sunday, January 06, 2019 10:57 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code

Hi all,
What is the function of the patches directory in the StarlingX source code? For example:
cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch
Regards,
Changcheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Ghada.Khalil at windriver.com Wed Jan 9 17:26:34 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Wed, 9 Jan 2019 17:26:34 +0000
Subject: [Starlingx-discuss] [Release] Release Planning Prep for F2F meeting
Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4A1BDA@ALA-MBD.corp.ad.wrs.com>

As discussed in the community call, we would like to fill in the release plan as much as possible prior to the F2F meeting next week. The spreadsheet is available at:
https://docs.google.com/spreadsheets/d/1HUwbsaSerzFRuvXVB_qvoGdI0Chx1YiiA2WYHwvIoYI/edit#gid=405844719

1.
All: Review the Field Definition in the 3rd worksheet and provide any feedback Note: Do we need to track any other milestones per deliverable at the release level? 2. Project Leads: Fill in the plans for your items as much as possible before the F2F meeting. Come prepared to discuss status, challenges, etc. The goal of the first day in the F2F is to align on release content and milestones and start tracking to the plan. If you have any questions, please do not hesitate to contact me. Regards, Ghada From Tee.Ngo at windriver.com Wed Jan 9 18:53:26 2019 From: Tee.Ngo at windriver.com (Ngo, Tee) Date: Wed, 9 Jan 2019 18:53:26 +0000 Subject: [Starlingx-discuss] Deployment Improvements Proposal In-Reply-To: References: Message-ID: <80ED4CE81E3D8F4099306648E95DAFE44C9E44CA@ALA-MBD.corp.ad.wrs.com> Hello, The Ansible bootstrap deployment specification is now available for review. https://review.openstack.org/#/c/629581/ Your feedback is welcome and appreciated. Regards, Tee From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: December-13-18 2:11 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Deployment Improvements Proposal Hello, Attached are the slides I presented during the TSC call on Dec 13, 2018 for the proposed improvements to the StarlingX initial bootstrap and system inventory. As indicated on the call, a detailed stx-spec will follow, but wanted to share the high-level changes being proposed before the arrival of the spec to get some early feedback. Regards, Matt -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Wed Jan 9 19:24:07 2019 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Wed, 9 Jan 2019 19:24:07 +0000 Subject: [Starlingx-discuss] document outdated In-Reply-To: References: Message-ID: Hi Wang Guo, > https://docs.starlingx.io/developer_guide/index.html#setup-repository- > docker-container > > After executing "cd $HOME/stx-tools/centos-mirror-tools/", cannot find > Dockerfile at this directory; then execute "docker build --tag $USER:centos- > mirror-repository --file Dockerfile" failed. The required changes are being reviewed https://review.openstack.org/#/c/619043/ We will post them no later than tomorrow. For now please refer to http://git.openstack.org/cgit/openstack/stx-tools/tree/README.rst From scott.little at windriver.com Wed Jan 9 20:54:34 2019 From: scott.little at windriver.com (Scott Little) Date: Wed, 9 Jan 2019 15:54:34 -0500 Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA411CB3@ALA-MBD.corp.ad.wrs.com> References: <1360000352.1296432.1547047414173@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA411CB3@ALA-MBD.corp.ad.wrs.com> Message-ID: <6fd62fe6-12ce-9c3e-d64a-cdd11661c0ea@windriver.com> Fix as of this image ... http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190109T162801Z/outputs/iso/bootimage.iso On 2019-01-09 11:08 a.m., Penney, Don wrote: > I just checked the latest CENGN ISO, and it's still using the unmodified installer image. See attached email thread. > > Scott, any update on this issue? 
> > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, January 09, 2019 10:24 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore > > Hi, > > I am trying to give the above mentioned ISO image a try on our ArteSyn/MaxCore box without success. > > After starting the installation I'll get the following message: > > [ 18.503656] localhost iscsid[643]: iSCSI daemon with pid=644 started! > [ 18.503816] localhost iscsid[643]: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi > [ 18.503949] localhost iscsid[643]: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or > [ 18.504121] localhost iscsid[643]: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi > [ 18.504254] localhost iscsid[643]: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf > [ 21.185105] localhost kernel: scsi 0:0:0:0: Direct-Access Generic Ultra HS-COMBO 1.98 PQ: 0 ANSI: 0 > [ 21.204488] localhost kernel: sd 0:0:0:0: [sda] Attached SCSI removable disk > [ 18.665925] localhost multipathd[636]: sda: add path (uevent) > [ 18.667242] localhost multipathd[636]: sda: failed to get path uid > [ 18.667399] localhost multipathd[636]: uevent trigger error > [ 21.930592] localhost kernel: scsi 1:0:0:0: Direct-Access TrekStor TrekStor USB CS PQ: 0 ANSI: 0 CCS > [ 21.940187] localhost kernel: scsi 1:0:0:0: alua: supports implicit and explicit TPGS > [ 21.947210] localhost kernel: scsi 1:0:0:0: alua: No target port descriptors found > [ 21.953921] localhost kernel: scsi 1:0:0:0: alua: not attached > [ 21.959034] localhost kernel: sd 1:0:0:0: [sdb] 15257600 512-byte logical blocks: (7.81 GB/7.27 GiB) > [ 21.968220] localhost kernel: sd 1:0:0:0: [sdb] Write Protect is off > [ 21.973575] localhost kernel: sd 1:0:0:0: [sdb] Mode Sense: 43 00 00 00 > [ 21.974314] localhost kernel: sd 1:0:0:0: [sdb] No Caching mode page found > [ 21.980260] localhost kernel: sd 1:0:0:0: [sdb] Assuming drive cache: write through > [ 21.990203] localhost kernel: sdb: sdb1 > [ 21.994973] localhost kernel: sd 1:0:0:0: [sdb] Attached SCSI removable disk > [ 19.533105] localhost multipathd[636]: sdb: add path (uevent) > [ 63.961502] localhost kernel: random: crng init done > [ 142.536373] localhost dracut-initqueue[646]: Warning: dracut-initqueue timeout - starting timeout scripts > ... > [ 203.386668] localhost dracut-initqueue[646]: Warning: Could not boot. > [ 204.505706] localhost systemd[1]: Received SIGRTMIN+20 from PID 625 (plymouthd). > [ 204.505980] localhost dracut-initqueue[646]: Warning: /dev/root does not exist > [ 204.514883] localhost systemd[1]: Starting Dracut Emergency Shell... > [ 204.532145] localhost systemd[1]: Received SIGRTMIN+21 from PID 625 (plymouthd). > > I suspect that the installer is not finding the disk, which is in our case nvme0n1 and nvme1n1 flash storage. > > Any idea or hint is welcome! > > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Don.Penney at windriver.com Wed Jan 9 21:14:33 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 9 Jan 2019 21:14:33 +0000 Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore In-Reply-To: <6fd62fe6-12ce-9c3e-d64a-cdd11661c0ea@windriver.com> References: <1360000352.1296432.1547047414173@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA411CB3@ALA-MBD.corp.ad.wrs.com> <6fd62fe6-12ce-9c3e-d64a-cdd11661c0ea@windriver.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA411EFF@ALA-MBD.corp.ad.wrs.com> Thanks Scott. From: Little, Scott Sent: Wednesday, January 09, 2019 3:55 PM To: Penney, Don; Marcel Schaible; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore Fix as of this image ... http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190109T162801Z/outputs/iso/bootimage.iso On 2019-01-09 11:08 a.m., Penney, Don wrote: I just checked the latest CENGN ISO, and it's still using the unmodified installer image. See attached email thread. Scott, any update on this issue? -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Wednesday, January 09, 2019 10:24 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore Hi, I am trying to give the above mentioned ISO image a try on our ArteSyn/MaxCore box without success. After starting the installation I'll get the following message: [ 18.503656] localhost iscsid[643]: iSCSI daemon with pid=644 started! [ 18.503816] localhost iscsid[643]: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi [ 18.503949] localhost iscsid[643]: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or [ 18.504121] localhost iscsid[643]: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi [ 18.504254] localhost iscsid[643]: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf [ 21.185105] localhost kernel: scsi 0:0:0:0: Direct-Access Generic Ultra HS-COMBO 1.98 PQ: 0 ANSI: 0 [ 21.204488] localhost kernel: sd 0:0:0:0: [sda] Attached SCSI removable disk [ 18.665925] localhost multipathd[636]: sda: add path (uevent) [ 18.667242] localhost multipathd[636]: sda: failed to get path uid [ 18.667399] localhost multipathd[636]: uevent trigger error [ 21.930592] localhost kernel: scsi 1:0:0:0: Direct-Access TrekStor TrekStor USB CS PQ: 0 ANSI: 0 CCS [ 21.940187] localhost kernel: scsi 1:0:0:0: alua: supports implicit and explicit TPGS [ 21.947210] localhost kernel: scsi 1:0:0:0: alua: No target port descriptors found [ 21.953921] localhost kernel: scsi 1:0:0:0: alua: not attached [ 21.959034] localhost kernel: sd 1:0:0:0: [sdb] 15257600 512-byte logical blocks: (7.81 GB/7.27 GiB) [ 21.968220] localhost kernel: sd 1:0:0:0: [sdb] Write Protect is off [ 21.973575] localhost kernel: sd 1:0:0:0: [sdb] Mode Sense: 43 00 00 00 [ 21.974314] localhost kernel: sd 1:0:0:0: [sdb] No Caching mode page found [ 21.980260] localhost kernel: sd 1:0:0:0: [sdb] Assuming drive cache: write through [ 21.990203] localhost kernel: sdb: sdb1 [ 21.994973] localhost kernel: sd 1:0:0:0: [sdb] Attached SCSI removable disk [ 19.533105] localhost multipathd[636]: sdb: add path (uevent) [ 63.961502] localhost kernel: random: crng init done [ 142.536373] localhost dracut-initqueue[646]: Warning: dracut-initqueue timeout - starting timeout scripts ... [ 203.386668] localhost dracut-initqueue[646]: Warning: Could not boot. [ 204.505706] localhost systemd[1]: Received SIGRTMIN+20 from PID 625 (plymouthd). [ 204.505980] localhost dracut-initqueue[646]: Warning: /dev/root does not exist [ 204.514883] localhost systemd[1]: Starting Dracut Emergency Shell... [ 204.532145] localhost systemd[1]: Received SIGRTMIN+21 from PID 625 (plymouthd). I suspect that the installer is not finding the disk, which is in our case nvme0n1 and nvme1n1 flash storage. Any idea or hint is welcome! Thanks Marcel _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From cesar.lara at intel.com Wed Jan 9 21:27:25 2019 From: cesar.lara at intel.com (Lara, Cesar) Date: Wed, 9 Jan 2019 21:27:25 +0000 Subject: [Starlingx-discuss] [build][meetings] build team meeting Message-ID: <0B566C62EC792145B40E29EFEBF1AB47105B8595@fmsmsx104.amr.corp.intel.com> Build team meeting agenda for 1/10/2018 - Cengn build update - any help needed? - Build team priorities for next release - review tasks assigned from TSC - Chandler community meeting - Opens Regards Cesar Lara Software Engineering Manager OpenSource Technology Center -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From juan.carlos.alonso at intel.com Wed Jan 9 23:41:35 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Wed, 9 Jan 2019 23:41:35 +0000
Subject: [Starlingx-discuss] Sanity Test - ISO 20190109
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C89E48@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for the latest CENGN ISO: bootimage.iso from 2019-Jan-09 (link). The Sanity Test is executed in a Virtual Environment.

Status: GREEN

Simplex
  Setup         04 TCs [PASS]
  Provisioning  01 TCs [PASS]
  Sanity        18 TCs [PASS]
  TOTAL: [ 23 TCs PASS ]

Duplex
  Setup         04 TCs [PASS]
  Provisioning  01 TCs [PASS]
  Sanity        19 TCs [PASS]
  TOTAL: [ 24 TCs PASS ]

Multinode Controller Storage
  Setup         04 TCs [PASS]
  Provisioning  01 TCs [PASS]
  Sanity        19 TCs [PASS]
  TOTAL: [ 24 TCs PASS ]

Multinode Dedicated Storage
  Setup         04 TCs [PASS]
  Provisioning  01 TCs [PASS]
  Sanity        19 TCs [PASS]
  TOTAL: [ 24 TCs PASS ]

------------------------------------------------------------------
Launchpad: https://bugs.launchpad.net/starlingx/+bug/1810553 (Fixed)

Regards.
Juan Carlos Alonso
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From shuicheng.lin at intel.com Thu Jan 10 00:40:43 2019
From: shuicheng.lin at intel.com (Lin, Shuicheng)
Date: Thu, 10 Jan 2019 00:40:43 +0000
Subject: [Starlingx-discuss] puppet related srpm be upgraded in CentOS 7.6
In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB37B01F@ALA-MBD.corp.ad.wrs.com>
References: <9700A18779F35F49AF027300A49E7C765FE6A86B@SHSMSX101.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB37B01F@ALA-MBD.corp.ad.wrs.com>
Message-ID: <9700A18779F35F49AF027300A49E7C765FE6AAC1@SHSMSX101.ccr.corp.intel.com>

Got it. Thanks, Brent.

Best Regards
Shuicheng

From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com]
Sent: Wednesday, January 9, 2019 10:47 PM
To: Lin, Shuicheng; starlingx-discuss at lists.starlingx.io
Subject: RE: puppet related srpm be upgraded in CentOS 7.6

Shuicheng,
You can abandon the 7.6 upgrade for all of these.
Brent

From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com]
Sent: Wednesday, January 9, 2019 9:31 AM
To: Rowsell, Brent; starlingx-discuss at lists.starlingx.io
Subject: puppet related srpm be upgraded in CentOS 7.6

Hi all,
Here are the puppet-related SRPMs we plan to upgrade in the CentOS 7.6 feature branch. Please help review whether each upgrade is needed or not. Thanks. You can also find them as tasks in the story below:
https://storyboard.openstack.org/#!/story/2004522

puppet-keystone
puppet-oslo
puppet-cinder
puppet-glance
puppet-nova
puppet-ceilometer
puppet-heat
puppet-ironic
puppet-magnum
puppet-murano
puppet-neutron
puppet-panko
puppet-swift
puppet-gnocchi

Best Regards
Shuicheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From changcheng.liu at intel.com Thu Jan 10 01:31:38 2019
From: changcheng.liu at intel.com (Liu, Changcheng)
Date: Thu, 10 Jan 2019 01:31:38 +0000
Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code
In-Reply-To:
References: <0D7994A90DD70040A9F5E77C4D23C57D50F40961@SHSMSX103.ccr.corp.intel.com> <0D7994A90DD70040A9F5E77C4D23C57D50F40E7D@SHSMSX103.ccr.corp.intel.com>
Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F4184E@SHSMSX103.ccr.corp.intel.com>

Thanks Bin.
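For the cleanup Bin mentions, a rough sketch of what removing the stale sm-db patches could look like; the grep target and the spec edit are assumptions to be verified against the sm-db RPM spec before anything is deleted:

    $ cd cgcs-root/stx/stx-ha
    $ grep -rn "sm_db_ceph_install.patch" .   # confirm nothing in the build still references it
    $ git rm -r service-mgmt/sm-db-1.0.0/patches
    $ # then drop any matching Patch/%patch lines from the sm-db spec, rebuild, and retest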
From: Qian, Bin [mailto:Bin.Qian at windriver.com] Sent: Thursday, January 10, 2019 12:28 AM To: Liu, Changcheng ; Bailey, Henry Albert (Al) Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code The code is no longer used. I will setup a task to do some cleanup. Bin ________________________________ From: Liu, Changcheng [changcheng.liu at intel.com] Sent: Tuesday, January 08, 2019 7:57 AM To: Bailey, Henry Albert (Al); Qian, Bin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi AI & Bin, Please help submit patch to remove them from source code if we could make sure they're not needed anymore. B.R. Changcheng From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Tuesday, January 8, 2019 10:08 PM To: Liu, Changcheng >; Qian, Bin > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code I don't think any of those files in that patches folder are needed anymore. They were added to the c ode back to 2015. Bin will know for sure, whether or not they can be removed. Al From: Liu, Changcheng [mailto:changcheng.liu at intel.com] Sent: Monday, January 07, 2019 8:26 PM To: Bailey, Henry Albert (Al); Qian, Bin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi AI & Bin, Does sm_patch.py make effect when installing the ISO image or it makes effect to apply all the patches when building the source code? Specifically, what's below patch used for? Could I remove it from source code directly? cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch I'm removing ceph-rest-api from source code and replace it with other service. The above patch define some service realted with ceph-rest-api. I don't know whether I need change that patch file. B.R. Changcheng From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Monday, January 7, 2019 9:50 PM To: Liu, Changcheng >; starlingx-discuss at lists.starlingx.io Cc: Qian, Bin > Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code I believe the contents of that folder are installed into a patches folder, and acted upon by the sm-patch utility https://github.com/openstack/stx-ha/blob/master/service-mgmt-tools/sm-tools/sm_tools/sm_patch.py It looks like they populate "V1" tables. Bin should be able to indicate if we still make use of the V1 tables or not. Al From: Liu, Changcheng [mailto:changcheng.liu at intel.com] Sent: Sunday, January 06, 2019 10:57 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi all, What's the function of patches directory in StartlingX source code? For example: cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch Regards, Changcheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Thu Jan 10 02:56:53 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Thu, 10 Jan 2019 02:56:53 +0000 Subject: [Starlingx-discuss] RFE Discussion for patch 9f926a5 for StartlingX upstreaming Message-ID: Hi Allain, Miguel posted two more questions on "[RFE] Add l2pop support for floating IP resources". 
I don't know how to answer the question of "how to route an external provider network whose type is VXLAN to the outside":
https://bugs.launchpad.net/neutron/+bug/1803494
Because an external provider network is a mapping to the outside network, I tried to set up a VXLAN network environment (with OVS used as the VTEP) and added a VXLAN port on br-ex to route the traffic from OpenStack to the VXLAN environment, but it didn't work. Could you please give some advice on how to set up the environment?

Best Regards,
Xu, Chenjie
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ildiko.vancsa at gmail.com Thu Jan 10 13:45:10 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Thu, 10 Jan 2019 14:45:10 +0100
Subject: [Starlingx-discuss] Use cases mapping to MVP architectures - FEEDBACK NEEDED
Message-ID:

Hi,

We are reaching out to you about the use cases for edge cloud infrastructure that the Edge Computing Group is working to collect. They are recorded in our wiki [1], and they describe high-level scenarios in which an edge cloud infrastructure would be needed.

During the second Denver PTG discussions we drafted two MVP architectures that we could build from the current functionality of OpenStack with some slight modifications [2]. These are based on the work of James and his team from Oath. We differentiate between a distributed [3] and a centralized [4] control plane architecture scenario.

In one of the Berlin Forum sessions we were asked to map the MVP architecture scenarios to the use cases, so I made an initial mapping and now we are looking for feedback. This mapping only means that the listed use case can be implemented using the given MVP architecture scenarios. It should be noted that none of the MVP architecture scenarios provides a solution for edge cloud infrastructure upgrade or centralized management.

Please comment on the wiki or in a reply to this mail in case you have questions or disagree with the initial mapping we put together. Please let us know if you have any questions.
Here are the use cases and the mapped architecture scenarios:

Mobile service provider 5G/4G virtual RAN deployment and Edge Cloud B2B2X [5]
  Both distributed [3] and centralized [4]

Universal customer premise equipment (uCPE) for Enterprise Network Services [6]
  Both distributed [3] and centralized [4]

Unmanned Aircraft Systems (Drones) [7]
  None - assuming that this use case requires a Small Edge instance which can work in case of a network partitioning event

Cloud Storage Gateway - Storage at the Edge [8]
  None - assuming that this use case requires a Small Edge instance which can work in case of a network partitioning event

Open Caching - stream/store data at the edge [9]
  Both distributed [3] and centralized [4]

Smart City as Software-Defined closed-loop system [10]
  The use case is not complete enough to figure out

Augmented Reality -- Sony Gaming Network [11]
  None - assuming that this use case requires a Small Edge instance which can work in case of a network partitioning event

Analytics/control at the edge [12]
  The use case is not complete enough to figure out

Manage retail chains - chick-fil-a [13]
  The use case is not complete enough to figure out
  At this moment chick-fil-a uses a different Kubernetes cluster in every edge location and they manage them using Git [14]

Smart Home [15]
  None - assuming that this use case requires a Small Edge instance which can work in case of a network partitioning event

Data Collection - Smart cooler/cold chain tracking [16]
  None - assuming that this use case requires a Small Edge instance which can work in case of a network partitioning event

VPN Gateway Service Delivery [17]
  The use case is not complete enough to figure out

[1]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases
[2]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures
[3]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures#Distributed_Control_Plane_Scenario
[4]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Edge_Reference_Architectures#Centralized_Control_Plane_Scenario
[5]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Mobile_service_provider_5G.2F4G_virtual_RAN_deployment_and_Edge_Cloud_B2B2X.
[6]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Universal_customer_premise_equipment_.28uCPE.29_for_Enterprise_Network_Services
[7]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Unmanned_Aircraft_Systems_.28Drones.29
[8]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Cloud_Storage_Gateway_-_Storage_at_the_Edge
[9]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Open_Caching_-_stream.2Fstore_data_at_the_edge
[10]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Smart_City_as_Software-Defined_closed-loop_system
[11]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Augmented_Reality_--_Sony_Gaming_Network
[12]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Analytics.2Fcontrol_at_the_edge
[13]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Manage_retail_chains_-_chick-fil-a
[14]: https://schd.ws/hosted_files/kccna18/34/GitOps.pdf
[15]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Smart_Home
[16]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#Data_Collection_-_Smart_cooler.2Fcold_chain_tracking
[17]: https://wiki.openstack.org/wiki/Edge_Computing_Group/Use_Cases#VPN_Gateway_Service_Delivery

Thanks and Best Regards,
Gergely and Ildikó

From Allain.Legacy at windriver.com Thu Jan 10 14:09:46 2019
From: Allain.Legacy at windriver.com (Legacy, Allain)
Date: Thu, 10 Jan 2019 14:09:46 +0000
Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming
In-Reply-To:
References: <70A7408C6E1BFB41B192A929744D8523BAC380CE@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC5363A@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC543E0@ALA-MBD.corp.ad.wrs.com>
Message-ID: <70A7408C6E1BFB41B192A929744D8523BAC602BD@ALA-MBD.corp.ad.wrs.com>

I read through the meeting minutes. I am not familiar with resource queues or the oslo purge queue functionality, but it does sound like it might provide a more deterministic solution. I recommend that we abandon our stale rpc mechanism and work with the neutron developers to investigate the feasibility of implementing a different approach based on their recommendations.

Allain

Allain Legacy, Software Developer, Wind River
direct 613.270.2279 fax 613.492.7870 skype allain.legacy
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5

From: Qin, Kailun [mailto:kailun.qin at intel.com]
Sent: Friday, December 21, 2018 5:52 PM
To: Legacy, Allain; Peters, Matt
Cc: 'starlingx-discuss at lists.starlingx.io'
Subject: RE: Questions about patch fd6cfc upstreaming

Hi Allain, Matt,

I discussed this RFE in the Neutron driver meeting last night. It was a heated discussion and took up almost all the meeting time. However, the Neutron driver team thought the delay approach was not reliable and it won't perform predictably in all situations (no perfect setting for every deployment), along with some other concerns. They would prefer ways like purge_queue in rabbitmq/possibly in oslo_messaging (https://www.rabbitmq.com/rabbitmqctl.8.html#purge_queue) OR use a resource queue as the l3-agent does, if we do want the RFE to move forward. Please kindly see the MM for further details: http://eavesdrop.openstack.org/meetings/neutron_drivers/2018/neutron_drivers.2018-12-21-14.00.log.html.

What do you think or suggest? Thanks.
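For reference, the queue purge the drivers team pointed at is roughly the following; the vhost and the per-host queue name are assumptions, so list the queues first to see what a given deployment actually uses:

    $ rabbitmqctl list_queues -p / name messages | grep dhcp_agent
    $ rabbitmqctl purge_queue -p / dhcp_agent.compute-0

This drops every pending message on that agent's topic queue, which is a blunter tool than discarding only stale messages, but it avoids guessing at a safe startup delay.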
BR, Kailun From: Qin, Kailun Sent: Wednesday, December 19, 2018 10:12 AM To: Legacy, Allain >; Peters, Matt > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: Questions about patch fd6cfc upstreaming Allain, Thanks a lot for the feedbacks! Excuse me that I missed the "_wait_if_syncing" decorator somehow. Exactly, w/ this wrapper we should not have any problem for the case cited by the community. BR, Kailun From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Tuesday, December 18, 2018 9:23 PM To: Qin, Kailun >; Peters, Matt > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: Questions about patch fd6cfc upstreaming The RPC handlers (e.g., port_update_end) are all wrapped with "_wait_if_syncing" so they don't actually start processing until after sync has completed. We are only trying to prevent messages from being processed between the start of the process lifetime and the beginning of the initial sync. That window is what leads to the issues we have noted. To address yesterday's question about the initial delay being long, I don't think that it needs to be more than ~10 seconds. Any stale RPC messages would be consumed quickly since they are discarded without doing any real work in the agent. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Tuesday, December 18, 2018 6:42 AM To: Legacy, Allain; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming Hello Allain, The community responds w/ another question/case: 1. Agent starts and is doing full sync - so it gets list of ports and networks from server and starts configuring it one by one, right? 2. During this time, processing of incoming RPC messages is blocked, right? 3. Now (still during initial full sync) someone deleted ports so port-delete-end message is send to DHCP agent but this agent refuse to process this message, right? 4. Full sync is end and agent is still handling port which was deleted in 3. - am I right? Or will it be cleaned somehow? It sounds like a good question to me based on our current implementation. What do you think? BR, Kailun From: Qin, Kailun Sent: Tuesday, December 18, 2018 8:36 AM To: 'Legacy, Allain' >; Peters, Matt > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: Questions about patch fd6cfc upstreaming Allain, Thanks a lot for your comments. Make sense to me. Let's keep w/ the proposed agent delay approach and see how it goes with Neutron team. BR, Kailun From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Monday, December 17, 2018 11:11 PM To: Qin, Kailun >; Peters, Matt > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: Questions about patch fd6cfc upstreaming In my opinion, it does not matter how long the full sync takes. Processing any RPC messages, even ones that are not stale, before the initial full sync completes is not guaranteed to provide consistent results. For example, if a port-update-end arrives before that port is received as part of the initial sync it will unnecessarily result in a full resync on that port's network. 
Similarly, if a port-delete-end arrives before that port is received as part of the initial sync then it will be added to the "deleted_ports" list but that list is not referenced during the full sync so the information for that port will remain in the DHCP configuration for that network even though the port no longer exists. That will cause issues later when a new port is created and uses the IP address of that deleted port. If the core reviewers prefer using timestamps embedded within the RPC payload then we can explore that option, but that will come with backward compatibility constraints and additional complexity. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Monday, December 17, 2018 9:29 AM To: Legacy, Allain; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming Hi Allain, Matt, I followed up this RFE (https://bugs.launchpad.net/neutron/+bug/1795212) and received one feedback from a Neutron core reviewer. He thinks that the delay isn't good approach because how much time agent will need to do full sync after restart is unknown. He prefers something based on timestamps and discard messages which came before agent was started. I believe you've also considered the timestamp/sequence/lifetime number based approach so that stale messages can be discarded w/ more certainty. What's your opinion? Should we keep w/ the delay approach of DHCP agent and discuss further in the driver meeting to see more feedbacks, in the spirit of avoiding compatibility changes and changes that would impact running against an unmodified server? Or we change our investigation direction to the timestamp-based solution? Thanks! BR, Kailun From: Qin, Kailun Sent: Wednesday, November 21, 2018 11:01 AM To: Legacy, Allain >; Peters, Matt > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Questions about patch fd6cfc upstreaming Allain, Great thanks for the information! The scenario is reasonable and detailed enough for me. Let's feedback this along with some other follow-up answers to the Neutron team and see how it goes. BR, Kailun From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Wednesday, November 21, 2018 2:13 AM To: Qin, Kailun >; Peters, Matt > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Questions about patch fd6cfc upstreaming We only observed this type of issue in a large office configuration where the neutron-server is overloaded during a DOR test (dead office recovery) where all nodes are powered off and back on. In such a scenario the system is overloaded for an extended period and there is a long delay between when events occur and when notifications are received by subscribers. It is difficult to reproduce this on small systems where the time between event and notification is short. I don't remember the exact details of the entire scenario, but the high level issue was that we wanted to avoid agents receiving and processing RPC messages that were sent to them before they started up. That happens more frequently in a DOR test because the server has a stale view of the system state and can send RPC messages to nodes that are not enabled yet. That is, its agent DB table may show that all agents are healthy depending on how long it took for the DOR to recover the controller node. 
What we found was that it was possible for the server to think that the agent was up when it was actually down. During the window where the server sees the agent as up it can send it RPC messages. Those messages get queued up and delivered to the agent once it is finally up. The problem is since the agent was not actually up in the first place those messages were never really valid. Therefore we wanted the agent to discard any RPC requests until after it was able to resync to the server. This allowed the system to avoid unnecessary transitions based on old data. One of the specific problems that this was addressing was something like this: 1. A subnet had no remaining IP addresses to allocated 2. A DCHP agent (agent-X) received a stale message to "create network" so it reserved a DHCP port with an IP address (this used the last available IP address) 3. Meanwhile, the DHCP agent (agent-Y) that actually was assigned the network came up and was not able to reserve a DHCP port because there were no IP addresses available 4. The first agent (agent-X) was taken down because its node was rebooted by system maintenance 5. The second agent (agent-Y) never retries the DHCP port creation because the DHCP agent has no periodic audit so there was no DHCP server servicing the network Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Tuesday, November 20, 2018 2:02 AM To: Peters, Matt Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming Hi Matt, I'm working on the patch fd6cfc upstreaming, which tries to address the stale RPC message issue when DHCP agent restarting up. The patch was in good shape https://review.openstack.org/609463/ whereas the neutron community was questioning about the exact failure modes of this issue. The DHCP agent will have a full sync after the agent restarting up (https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L195), what kind of corner cases and negative behaviors could happen even w/ this full sync? Based on the commit message, I tried to reproduce this issue w/ the following steps: 1. schedule network1 to agent1. 2. turn down agent1 at almost the same time. 3. network1 is rescheduled to agent2 after finding that agent1 is dead. 4. turn up agent1, expecting stale RPC messages to be received by agent1 so that both agent1 and agent2 are servicing network1. However, I can only meet the described failure mode by sending another scheduling operation (network1->agent1) after step 2) is done. For the others, they seem to work as expected. Would you please kindly help provide more details about the failure pattern of this issue and/or the reproduction steps? Thanks a lot. BR, Kailun -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 1807 bytes Desc: image001.png URL: From kailun.qin at intel.com Thu Jan 10 14:14:30 2019 From: kailun.qin at intel.com (Qin, Kailun) Date: Thu, 10 Jan 2019 14:14:30 +0000 Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming In-Reply-To: <70A7408C6E1BFB41B192A929744D8523BAC602BD@ALA-MBD.corp.ad.wrs.com> References: <70A7408C6E1BFB41B192A929744D8523BAC380CE@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC5363A@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC543E0@ALA-MBD.corp.ad.wrs.com> <70A7408C6E1BFB41B192A929744D8523BAC602BD@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Allain, Thanks for your comments. I'll abandon our current patch and start investigating the feasibility of implementing the different approach as they recommended. BR, Kailun From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Thursday, January 10, 2019 10:10 PM To: Qin, Kailun ; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming I read through the meeting minutes. I am not familiar with resource queues or the oslo purge queue functionality but it does sound like it might provide a more deterministic solution. I recommend that we abandon our stale rpc mechanism and work with the neutron developers to investigate the feasibility of implementing a different approach based on their recommendations. Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Friday, December 21, 2018 5:52 PM To: Legacy, Allain; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming Hi Allain, Matt, I discussed this RFE in the Neutron driver meeting last night. It was a heated discussion and took up almost all the meeting time. However, the Neutron driver team thought the delay approach was not reliable and it won't perform predictably in all situations (no perfect setting for every deployment), along with some other concerns. They would prefer ways like purge_queue in rabbitmq/possibly in oslo_messaging (https://www.rabbitmq.com/rabbitmqctl.8.html#purge_queue) OR use a resource queue as the l3-agent does, if we do want the RFE to move forward. Please kindly see the MM for further details: http://eavesdrop.openstack.org/meetings/neutron_drivers/2018/neutron_drivers.2018-12-21-14.00.log.html. What do you think or suggest? Thanks. BR, Kailun From: Qin, Kailun Sent: Wednesday, December 19, 2018 10:12 AM To: Legacy, Allain >; Peters, Matt > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: Questions about patch fd6cfc upstreaming Allain, Thanks a lot for the feedbacks! Excuse me that I missed the "_wait_if_syncing" decorator somehow. Exactly, w/ this wrapper we should not have any problem for the case cited by the community. BR, Kailun From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Tuesday, December 18, 2018 9:23 PM To: Qin, Kailun >; Peters, Matt > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: Questions about patch fd6cfc upstreaming The RPC handlers (e.g., port_update_end) are all wrapped with "_wait_if_syncing" so they don't actually start processing until after sync has completed. We are only trying to prevent messages from being processed between the start of the process lifetime and the beginning of the initial sync. 
That window is what leads to the issues we have noted. To address yesterday's question about the initial delay being long, I don't think that it needs to be more than ~10 seconds. Any stale RPC messages would be consumed quickly since they are discarded without doing any real work in the agent. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Tuesday, December 18, 2018 6:42 AM To: Legacy, Allain; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming Hello Allain, The community responds w/ another question/case: 1. Agent starts and is doing full sync - so it gets list of ports and networks from server and starts configuring it one by one, right? 2. During this time, processing of incoming RPC messages is blocked, right? 3. Now (still during initial full sync) someone deleted ports so port-delete-end message is send to DHCP agent but this agent refuse to process this message, right? 4. Full sync is end and agent is still handling port which was deleted in 3. - am I right? Or will it be cleaned somehow? It sounds like a good question to me based on our current implementation. What do you think? BR, Kailun From: Qin, Kailun Sent: Tuesday, December 18, 2018 8:36 AM To: 'Legacy, Allain' >; Peters, Matt > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: Questions about patch fd6cfc upstreaming Allain, Thanks a lot for your comments. Make sense to me. Let's keep w/ the proposed agent delay approach and see how it goes with Neutron team. BR, Kailun From: Legacy, Allain [mailto:Allain.Legacy at windriver.com] Sent: Monday, December 17, 2018 11:11 PM To: Qin, Kailun >; Peters, Matt > Cc: 'starlingx-discuss at lists.starlingx.io' > Subject: RE: Questions about patch fd6cfc upstreaming In my opinion, it does not matter how long the full sync takes. Processing any RPC messages, even ones that are not stale, before the initial full sync completes is not guaranteed to provide consistent results. For example, if a port-update-end arrives before that port is received as part of the initial sync it will unnecessarily result in a full resync on that port's network. Similarly, if a port-delete-end arrives before that port is received as part of the initial sync then it will be added to the "deleted_ports" list but that list is not referenced during the full sync so the information for that port will remain in the DHCP configuration for that network even though the port no longer exists. That will cause issues later when a new port is created and uses the IP address of that deleted port. If the core reviewers prefer using timestamps embedded within the RPC payload then we can explore that option, but that will come with backward compatibility constraints and additional complexity. Regards, Allain Allain Legacy, Software Developer, Wind River direct 613.270.2279 fax 613.492.7870 skype allain.legacy 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 [WIND] From: Qin, Kailun [mailto:kailun.qin at intel.com] Sent: Monday, December 17, 2018 9:29 AM To: Legacy, Allain; Peters, Matt Cc: 'starlingx-discuss at lists.starlingx.io' Subject: RE: Questions about patch fd6cfc upstreaming Hi Allain, Matt, I followed up this RFE (https://bugs.launchpad.net/neutron/+bug/1795212) and received one feedback from a Neutron core reviewer. 
From: Qin, Kailun [mailto:kailun.qin at intel.com]
Sent: Tuesday, December 18, 2018 6:42 AM
To: Legacy, Allain; Peters, Matt
Cc: 'starlingx-discuss at lists.starlingx.io'
Subject: RE: Questions about patch fd6cfc upstreaming

Hello Allain,

The community responds with another question/case:
1. The agent starts and is doing a full sync - so it gets the list of ports and networks from the server and starts configuring them one by one, right?
2. During this time, processing of incoming RPC messages is blocked, right?
3. Now (still during the initial full sync) someone deletes ports, so a port-delete-end message is sent to the DHCP agent, but the agent refuses to process this message, right?
4. The full sync ends and the agent is still handling a port which was deleted in 3. - am I right? Or will it be cleaned somehow?

It sounds like a good question to me based on our current implementation. What do you think?

BR,
Kailun

From: Qin, Kailun
Sent: Tuesday, December 18, 2018 8:36 AM
To: 'Legacy, Allain'; Peters, Matt
Cc: 'starlingx-discuss at lists.starlingx.io'
Subject: RE: Questions about patch fd6cfc upstreaming

Allain,

Thanks a lot for your comments. Makes sense to me. Let's keep with the proposed agent delay approach and see how it goes with the Neutron team.

BR,
Kailun

From: Legacy, Allain [mailto:Allain.Legacy at windriver.com]
Sent: Monday, December 17, 2018 11:11 PM
To: Qin, Kailun; Peters, Matt
Cc: 'starlingx-discuss at lists.starlingx.io'
Subject: RE: Questions about patch fd6cfc upstreaming

In my opinion, it does not matter how long the full sync takes. Processing any RPC messages, even ones that are not stale, before the initial full sync completes is not guaranteed to provide consistent results. For example, if a port-update-end arrives before that port is received as part of the initial sync, it will unnecessarily result in a full resync of that port's network. Similarly, if a port-delete-end arrives before that port is received as part of the initial sync, then it will be added to the "deleted_ports" list, but that list is not referenced during the full sync, so the information for that port will remain in the DHCP configuration for that network even though the port no longer exists. That will cause issues later when a new port is created and uses the IP address of that deleted port.

If the core reviewers prefer using timestamps embedded within the RPC payload then we can explore that option, but that will come with backward compatibility constraints and additional complexity.

Regards,
Allain
Allain Legacy, Software Developer, Wind River
direct 613.270.2279 fax 613.492.7870 skype allain.legacy
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5

From: Qin, Kailun [mailto:kailun.qin at intel.com]
Sent: Monday, December 17, 2018 9:29 AM
To: Legacy, Allain; Peters, Matt
Cc: 'starlingx-discuss at lists.starlingx.io'
Subject: RE: Questions about patch fd6cfc upstreaming

Hi Allain, Matt,

I followed up on this RFE (https://bugs.launchpad.net/neutron/+bug/1795212) and received feedback from a Neutron core reviewer. He thinks that the delay isn't a good approach, because how much time the agent will need for a full sync after restart is unknown. He prefers something based on timestamps, discarding messages which came before the agent was started.

I believe you've also considered a timestamp/sequence/lifetime-number based approach so that stale messages can be discarded with more certainty. What's your opinion? Should we keep the delay approach for the DHCP agent and discuss further in the driver meeting to gather more feedback, in the spirit of avoiding compatibility changes and changes that would impact running against an unmodified server? Or should we change our investigation direction to the timestamp-based solution?

Thanks!

BR,
Kailun

From: Qin, Kailun
Sent: Wednesday, November 21, 2018 11:01 AM
To: Legacy, Allain; Peters, Matt
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: Questions about patch fd6cfc upstreaming

Allain,

Great thanks for the information! The scenario is reasonable and detailed enough for me. Let's feed this back, along with some other follow-up answers, to the Neutron team and see how it goes.

BR,
Kailun

From: Legacy, Allain [mailto:Allain.Legacy at windriver.com]
Sent: Wednesday, November 21, 2018 2:13 AM
To: Qin, Kailun; Peters, Matt
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: Questions about patch fd6cfc upstreaming

We only observed this type of issue in a large office configuration where the neutron-server is overloaded during a DOR test (dead office recovery), where all nodes are powered off and back on. In such a scenario the system is overloaded for an extended period and there is a long delay between when events occur and when notifications are received by subscribers. It is difficult to reproduce this on small systems where the time between event and notification is short.

I don't remember the exact details of the entire scenario, but the high-level issue was that we wanted to avoid agents receiving and processing RPC messages that were sent to them before they started up. That happens more frequently in a DOR test because the server has a stale view of the system state and can send RPC messages to nodes that are not enabled yet. That is, its agent DB table may show that all agents are healthy, depending on how long it took for the DOR to recover the controller node. What we found was that it was possible for the server to think that the agent was up when it was actually down. During the window where the server sees the agent as up, it can send it RPC messages. Those messages get queued up and delivered to the agent once it is finally up. The problem is, since the agent was not actually up in the first place, those messages were never really valid. Therefore we wanted the agent to discard any RPC requests until after it was able to resync to the server. This allowed the system to avoid unnecessary transitions based on old data.

One of the specific problems that this was addressing was something like this:
1. A subnet had no remaining IP addresses to allocate.
2. A DHCP agent (agent-X) received a stale message to "create network", so it reserved a DHCP port with an IP address (this used the last available IP address).
3. Meanwhile, the DHCP agent (agent-Y) that actually was assigned the network came up and was not able to reserve a DHCP port because there were no IP addresses available.
4. The first agent (agent-X) was taken down because its node was rebooted by system maintenance.
5. The second agent (agent-Y) never retries the DHCP port creation because the DHCP agent has no periodic audit, so there was no DHCP server servicing the network.

Regards,
Allain
Allain Legacy, Software Developer, Wind River
direct 613.270.2279 fax 613.492.7870 skype allain.legacy
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5

From: Qin, Kailun [mailto:kailun.qin at intel.com]
Sent: Tuesday, November 20, 2018 2:02 AM
To: Peters, Matt
Cc: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Questions about patch fd6cfc upstreaming

Hi Matt,

I'm working on upstreaming patch fd6cfc, which tries to address the stale RPC message issue seen when the DHCP agent restarts. The patch is in good shape (https://review.openstack.org/609463/), but the neutron community is questioning the exact failure modes of this issue. The DHCP agent performs a full sync after restarting (https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py#L195); what kind of corner cases and negative behaviors could happen even with this full sync?

Based on the commit message, I tried to reproduce this issue with the following steps:
1. Schedule network1 to agent1.
2. Turn down agent1 at almost the same time.
3. network1 is rescheduled to agent2 after finding that agent1 is dead.
4. Turn up agent1, expecting stale RPC messages to be received by agent1 so that both agent1 and agent2 are servicing network1.

However, I can only reproduce the described failure mode by sending another scheduling operation (network1->agent1) after step 2 is done. Otherwise, things seem to work as expected. Would you please kindly help provide more details about the failure pattern of this issue and/or the reproduction steps? Thanks a lot.

BR,
Kailun
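To make the other candidate fix in this thread concrete, here is a minimal sketch (not from the patch) of the timestamp-based filtering the core reviewer preferred: stamp each message on the server side and have the agent drop anything older than its own start time. The 'timestamp' field is an assumption; carrying it in the payload is exactly the backward-compatibility cost Allain points out above.

    # Hedged sketch of timestamp-based stale-message filtering.
    # Assumes every RPC payload carries a server-side 'timestamp' field,
    # which an unmodified Neutron server does not provide.
    import time

    AGENT_STARTED_AT = time.time()

    def handle_notification(payload):
        sent_at = payload.get('timestamp')
        if sent_at is None or sent_at < AGENT_STARTED_AT:
            # Queued before this agent process existed (or unstamped): discard.
            return False
        print('processing %s' % payload)
        return True

    handle_notification({'timestamp': time.time(), 'port_id': 'p1'})  # processed
    handle_notification({'timestamp': AGENT_STARTED_AT - 60.0})       # discarded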
From Ken.Young at windriver.com Thu Jan 10 15:34:29 2019
From: Ken.Young at windriver.com (Young, Ken)
Date: Thu, 10 Jan 2019 15:34:29 +0000
Subject: [Starlingx-discuss] CENGN Mirror Robustness
Message-ID:

Team,

We are still working through some integration issues with moving the mirror to the kubernetes cluster. We will not be switching over on Monday, Jan 14th. We are working on revising our timelines. I will update the community when I have better visibility.

Regards,
Ken Y

From: Ken Young
Date: Friday, January 4, 2019 at 1:49 PM
To: starlingx
Cc: Raymond Maika
Subject: [Starlingx-discuss] CENGN Mirror Robustness

All,

We are working on a plan to make the mirror more robust for the community as a whole. In co-operation with CENGN, we are moving the mirror from the current bare metal server to a kubernetes container with a CEPH backend. From a community perspective, we expect the transition to be seamless; we are sending this email for awareness. The current timeline we are targeting is:
* Have the environment ready by Jan 8th
* Perform testing in parallel with the existing and the new environment until Jan 11th
* Transition to the new implementation of the mirror on Jan 14th

We are planning to maintain the current environment in case we need to switch back to the existing implementation. We will keep you in the loop as we progress towards this transition.

Regards,
Ken Y
From vm.rod25 at gmail.com Thu Jan 10 16:09:35 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Thu, 10 Jan 2019 10:09:35 -0600
Subject: [Starlingx-discuss] Issues with build instructions
Message-ID:

Hi team

I am following the image build instructions from:

https://docs.starlingx.io/developer_guide/

If these are not the correct instructions please let me know. I am stuck at the point of:

$ docker build --tag $USER:centos-mirror-repository --file Dockerfile .

I am behind a proxy. I have already set up Docker for the proxy (I tested docker run hello-world and it works fine) and set up the Dockerfile as suggested:

ENV http_proxy "http://your.actual_http_proxy.com:your_port"
ENV https_proxy "https://your.actual_https_proxy.com:your_port"
ENV ftp_proxy "http://your.actual_ftp_proxy.com:your_port"
RUN echo "proxy=http://your-proxy.com:port" >> /etc/yum.conf

However, I am still facing this error:

Cannot find a valid baseurl for repo: base/7/x86_64
Could not retrieve mirrorlist http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container
error was 12: Timeout on http://mirrorlist.centos.org/?release=7&arch=x86_64&repo=os&infra=container: (28, 'Resolving timed out after 30541 milliseconds')
The command '/bin/sh -c yum install -y epel-release sudo vim-enhanced net-tools git /usr/bin/yumdownloader rpm-build rpm-sign deltarpm wget bind bind-utils && rm /etc/yum.repos.d/CentOS-Sources.repo /etc/yum.repos.d/epel.repo' returned a non-zero code: 1

Full log at: https://hastebin.com/omuzimenid.sql

Any help is more than welcome

Regards

Victor Rodriguez

From scott.little at windriver.com Thu Jan 10 17:16:08 2019
From: scott.little at windriver.com (Scott Little)
Date: Thu, 10 Jan 2019 12:16:08 -0500
Subject: [Starlingx-discuss] test e-mail, please ignore
Message-ID:

test

From build.starlingx at gmail.com Thu Jan 10 17:35:15 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 10 Jan 2019 12:35:15 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] email-test - Build # 29 - Still Failing!
In-Reply-To: <1164981661.76.1547141331171.JavaMail.javamailuser@localhost>
References: <1164981661.76.1547141331171.JavaMail.javamailuser@localhost>
Message-ID: <461021720.78.1547141716862.JavaMail.javamailuser@localhost>

Project: email-test
Build #: 29
Status: Still Failing
Timestamp: 20190110T173515Z

Check attached log for details.
--------------------------------------------------------------------------------
Parameters

P1: foo
P2: bar

From Matt.Peters at windriver.com Thu Jan 10 17:51:56 2019
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Thu, 10 Jan 2019 17:51:56 +0000
Subject: [Starlingx-discuss] Need input on patch 4ae5a58 & 88b7bc7 upstreaming
In-Reply-To:
References:
Message-ID: <8FD5BA0F-A0E0-4352-A329-07F441B05E43@windriver.com>

Hi Chenjie,
Thanks for sending this email. See my responses inline. I have also added the starlingx-discuss mailing list to this reply so that the community can add additional comments or considerations.

The reply to the RFE's can be something like the following:
This RFE is specific to the Neutron BGP-EVPN use-case which is currently not being pursued for upstreaming from the corresponding starlingx-staging projects (stx-neutron-dynamic-routing and stx-networking-bgpvpn).
Therefore, until this feature is required, and an attempt is made to have it accepted by the OpenStack community, this feature cannot be used as a justification for this RFE. This RFE is being abandoned and can be revived if the BGP-EVPN feature becomes a priority. From: "Xu, Chenjie" Date: Thursday, January 10, 2019 at 11:54 AM To: "Peters, Matt" Cc: Ghada Khalil , "Zhao, Forrest" Subject: Need input on patch 4ae5a58 & 88b7bc7 upstreaming Hi Matt, There are 3 RFEs related to BGPVPN and one of them has been abandoned by today’s meeting. The abandoned RFE’s (9f926a5) link is below (we can call it RFE floatingIP): https://bugs.launchpad.net/neutron/+bug/1803494 MP> As discussed, we should abandon this RFE. We also need to ensure that we respond to the launchpad with the reasoning for the abandonment. Another RFE (4ae5a58) https://bugs.launchpad.net/neutron/+bug/1793653 we can call it RFE l2pop. RFE l2pop has been discussed on Neutron Driver Meeting for several times, and they think RFE l2pop lacks approved use cases. There are two potential use cases: BGPVPN and RFE floatingIP. The community of BGPVPN is not active and RFE l2pop has been pending on Thomas Morin’s input for a long time (Thomas leads the networking-bgpvpn project). Because RFE floatingIP has been abandoned and BGPVPN got no input, could you please provide another use case in Neutron to help the community accept this RFE? What’s more, another concern of community is that it’s hard to test the feature related to BGPVPN and I don’t have a BGPVPN environment to test this RFE either. MP> I agree that this is related and is only valid within the BGP-EVPN use-case. It should be abandoned also with the same justification. The RFE (88b7bc7) https://bugs.launchpad.net/neutron/+bug/1806316 we can call it RFE BgpDrAgent. Though RFE BgpDrAgent has not been triaged by Neutron team, I think it will face the same problem as RFE l2pop. Neutron team may think this RFE lacks approved use case. This RFE only has one use case which is neutron-dynamic-routing. I asked Ryan Tidwell who leads the neutron-dynamic-routing project to review this RFE and for now pending on his input. Could you please provide another use case in Neutron to help the community accept this RFE? MP> At the moment we do not have an alternate use-case for this change. It should be abandoned also with the same justification. Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Thu Jan 10 18:14:24 2019 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Thu, 10 Jan 2019 18:14:24 +0000 Subject: [Starlingx-discuss] Issues with build instructions In-Reply-To: References: Message-ID: Victor, > I am following the image build instructions from : > > https://docs.starlingx.io/developer_guide/ > > If these are not the correct instructions please let me know. I am > stuck at the point of: > > $ docker build --tag $USER:centos-mirror-repository --file Dockerfile . Patch is in process [0] Please use README.rst from stx-tool repository for now [1] [0] https://review.openstack.org/#/c/619043 [1] http://git.openstack.org/cgit/openstack/stx-tools/tree/README.rst From build.starlingx at gmail.com Thu Jan 10 18:23:52 2019 From: build.starlingx at gmail.com (build starlingx) Date: Thu, 10 Jan 2019 13:23:52 -0500 Subject: [Starlingx-discuss] test email, please ignore Message-ID: test2 -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vm.rod25 at gmail.com Thu Jan 10 18:27:27 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Thu, 10 Jan 2019 12:27:27 -0600 Subject: [Starlingx-discuss] Issues with build instructions In-Reply-To: References: Message-ID: Thanks, I'll check it out. On Thu, Jan 10, 2019 at 12:14 PM Arce Moreno, Abraham wrote: > > Victor, > > > I am following the image build instructions from : > > > > https://docs.starlingx.io/developer_guide/ > > > > If these are not the correct instructions please let me know. I am > > stuck at the point of: > > > > $ docker build --tag $USER:centos-mirror-repository --file Dockerfile . > > Patch is in process [0] > Please use README.rst from stx-tool repository for now [1] > > [0] https://review.openstack.org/#/c/619043 > [1] http://git.openstack.org/cgit/openstack/stx-tools/tree/README.rst > From build.starlingx at gmail.com Thu Jan 10 18:42:01 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 10 Jan 2019 13:42:01 -0500 (EST) Subject: [Starlingx-discuss] [build-report] email-test - Build # 30 - Still Failing! In-Reply-To: <461021720.78.1547141716862.JavaMail.javamailuser@localhost> References: <461021720.78.1547141716862.JavaMail.javamailuser@localhost> Message-ID: <860851589.80.1547145722628.JavaMail.javamailuser@localhost> Project: email-test Build #: 30 Status: Still Failing Timestamp: 20190110T184201Z Check attached log for details. -------------------------------------------------------------------------------- Parameters P1: foo P2: bar -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 466 bytes Desc: not available URL: From build.starlingx at gmail.com Thu Jan 10 18:48:15 2019 From: build.starlingx at gmail.com (build starlingx) Date: Thu, 10 Jan 2019 13:48:15 -0500 Subject: [Starlingx-discuss] Fwd: test email, please ignore In-Reply-To: References: Message-ID: ---------- Forwarded message --------- From: build starlingx Date: Thu, 10 Jan 2019 at 13:23 Subject: test email, please ignore To: test2 -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Jan 10 18:52:59 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 10 Jan 2019 13:52:59 -0500 (EST) Subject: [Starlingx-discuss] [build-report] email-test - Build # 31 - Still Failing! In-Reply-To: <860851589.80.1547145722628.JavaMail.javamailuser@localhost> References: <860851589.80.1547145722628.JavaMail.javamailuser@localhost> Message-ID: <349780165.82.1547146380734.JavaMail.javamailuser@localhost> Project: email-test Build #: 31 Status: Still Failing Timestamp: 20190110T185259Z Check attached log for details. -------------------------------------------------------------------------------- Parameters P1: foo P2: bar -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 467 bytes Desc: not available URL: From build.starlingx at gmail.com Thu Jan 10 19:49:48 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 10 Jan 2019 14:49:48 -0500 (EST) Subject: [Starlingx-discuss] [build-report] email-test - Build # 32 - Still Failing! 
In-Reply-To: <349780165.82.1547146380734.JavaMail.javamailuser@localhost> References: <349780165.82.1547146380734.JavaMail.javamailuser@localhost> Message-ID: <1794735540.84.1547149790038.JavaMail.javamailuser@localhost> Project: email-test Build #: 32 Status: Still Failing Timestamp: 20190110T194948Z Check attached log for details. -------------------------------------------------------------------------------- Parameters P1: foo P2: bar -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 440 bytes Desc: not available URL: From scott.little at windriver.com Thu Jan 10 20:04:45 2019 From: scott.little at windriver.com (Scott Little) Date: Thu, 10 Jan 2019 15:04:45 -0500 Subject: [Starlingx-discuss] [build-report] email-test - Build # 32 - Still Failing! In-Reply-To: <1794735540.84.1547149790038.JavaMail.javamailuser@localhost> References: <349780165.82.1547146380734.JavaMail.javamailuser@localhost> <1794735540.84.1547149790038.JavaMail.javamailuser@localhost> Message-ID: <70f98898-7b00-ed9b-64f2-a55e2ab21f56@windriver.com> Apologies folks for the spam. That was the last one. I was just debugging the last link in the chain of getting build reports from CENGN to starlingx-discuss.  Turns out the only bug was to add my email to the receiver list.  Thunderbird happily merged the two e-mails onto one, using the direct mail's version of the Subject field (lacking [starlingx-discuss] prefix), thus tricking me into believing the messages weren't being accepted by the mailing list. Messages with *[build-report]* in the subject will now be from real build jobs.  Only build failures will be reported, plus the first successful build once the issue is corrected.  There will likely be two mails per failure, one from the master job, and one from the sub-job that had the failure. I encourage you to monitor these e-mails, particularly if you submitted code within 24 hours prior to a failure. Scott On 2019-01-10 2:49 p.m., build.starlingx at gmail.com wrote: > Project: email-test > Build #: 32 > Status: Still Failing > > Timestamp: 20190110T194948Z > > Check attached log for details. > -------------------------------------------------------------------------------- > Parameters > > P1: foo > P2: bar > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Thu Jan 10 20:19:28 2019 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 10 Jan 2019 20:19:28 +0000 Subject: [Starlingx-discuss] REMINDER about updating STX REST API DOCUMENTATION Message-ID: <4FAE7C47-63CB-46A3-9619-AA901C72EC90@windriver.com> ALL StarlingX-Contributors, This email is probably about 5 months too late ... apologies. ATTENTION back in ~ September 2018, the StarlingX Documentation Team converted all of our StarlingX REST API Documents from those ugly wadl and yaml files into .rst / restructuredText files; basically aligning with the way all the other services in OpenStack are documenting REST APIs. The new API DOCS (.rst files) are now located under the applicable repos (or sub-repos). It is the responsibility of the contributor who changes / updates / creates STX REST APIs to also update the STX REST API DOCUMENTATION. ( ... again, apologies for telling you this 5 months late ... 
) The HOWTO info on updating the new REST API DOCS can be found here: https://docs.starlingx.io/contributor/api_contribute_guide.html Greg. p.s. If you do know that you've changed an STX API in the last 5 months and know that you have NOT done the corresponding STX API DOC change, then minimally you need to raise a StarlingX Launchpad Bug on the issue with the details of the missing changes. -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Thu Jan 10 23:39:17 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Thu, 10 Jan 2019 23:39:17 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20190110 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8A0B8@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jan-10 (link) Sanity Test is executed in a Virtual Environment Status: GREEN Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 18 TCs [PASS] TOTAL: [ 23 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] ------------------------------------------------------------------ Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bin.Qian at windriver.com Fri Jan 11 02:11:49 2019 From: Bin.Qian at windriver.com (Qian, Bin) Date: Fri, 11 Jan 2019 02:11:49 +0000 Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code In-Reply-To: <0D7994A90DD70040A9F5E77C4D23C57D50F4184E@SHSMSX103.ccr.corp.intel.com> References: <0D7994A90DD70040A9F5E77C4D23C57D50F40961@SHSMSX103.ccr.corp.intel.com> , <0D7994A90DD70040A9F5E77C4D23C57D50F40E7D@SHSMSX103.ccr.corp.intel.com> , <0D7994A90DD70040A9F5E77C4D23C57D50F4184E@SHSMSX103.ccr.corp.intel.com> Message-ID: Hi, Changcheng I've created the user story https://storyboard.openstack.org/#!/story/20047. I am wondering if you can take this task? Thanks, Bin ________________________________ From: Liu, Changcheng [changcheng.liu at intel.com] Sent: Wednesday, January 09, 2019 5:31 PM To: Qian, Bin; Bailey, Henry Albert (Al) Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Thanks Bin. From: Qian, Bin [mailto:Bin.Qian at windriver.com] Sent: Thursday, January 10, 2019 12:28 AM To: Liu, Changcheng ; Bailey, Henry Albert (Al) Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code The code is no longer used. I will setup a task to do some cleanup. Bin ________________________________ From: Liu, Changcheng [changcheng.liu at intel.com] Sent: Tuesday, January 08, 2019 7:57 AM To: Bailey, Henry Albert (Al); Qian, Bin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi AI & Bin, Please help submit patch to remove them from source code if we could make sure they're not needed anymore. B.R. 
Changcheng From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Tuesday, January 8, 2019 10:08 PM To: Liu, Changcheng >; Qian, Bin > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code I don't think any of those files in that patches folder are needed anymore. They were added to the c ode back to 2015. Bin will know for sure, whether or not they can be removed. Al From: Liu, Changcheng [mailto:changcheng.liu at intel.com] Sent: Monday, January 07, 2019 8:26 PM To: Bailey, Henry Albert (Al); Qian, Bin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi AI & Bin, Does sm_patch.py make effect when installing the ISO image or it makes effect to apply all the patches when building the source code? Specifically, what's below patch used for? Could I remove it from source code directly? cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch I'm removing ceph-rest-api from source code and replace it with other service. The above patch define some service realted with ceph-rest-api. I don't know whether I need change that patch file. B.R. Changcheng From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Monday, January 7, 2019 9:50 PM To: Liu, Changcheng >; starlingx-discuss at lists.starlingx.io Cc: Qian, Bin > Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code I believe the contents of that folder are installed into a patches folder, and acted upon by the sm-patch utility https://github.com/openstack/stx-ha/blob/master/service-mgmt-tools/sm-tools/sm_tools/sm_patch.py It looks like they populate "V1" tables. Bin should be able to indicate if we still make use of the V1 tables or not. Al From: Liu, Changcheng [mailto:changcheng.liu at intel.com] Sent: Sunday, January 06, 2019 10:57 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi all, What's the function of patches directory in StartlingX source code? For example: cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch Regards, Changcheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From changcheng.liu at intel.com Fri Jan 11 04:58:19 2019 From: changcheng.liu at intel.com (Liu, Changcheng) Date: Fri, 11 Jan 2019 04:58:19 +0000 Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code In-Reply-To: References: <0D7994A90DD70040A9F5E77C4D23C57D50F40961@SHSMSX103.ccr.corp.intel.com> , <0D7994A90DD70040A9F5E77C4D23C57D50F40E7D@SHSMSX103.ccr.corp.intel.com> , <0D7994A90DD70040A9F5E77C4D23C57D50F4184E@SHSMSX103.ccr.corp.intel.com> Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F431CD@SHSMSX103.ccr.corp.intel.com> Hi Bin, Thanks for creating the story to track it. Sorry, I don't have time to take this task. Currently, I'm busy at changing other components to adapt to new ceph(v13.2.2). B.R. Changcheng From: Qian, Bin [mailto:Bin.Qian at windriver.com] Sent: Friday, January 11, 2019 10:12 AM To: Liu, Changcheng ; Bailey, Henry Albert (Al) Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi, Changcheng I've created the user story https://storyboard.openstack.org/#!/story/20047. 
I am wondering if you can take this task? Thanks, Bin ________________________________ From: Liu, Changcheng [changcheng.liu at intel.com] Sent: Wednesday, January 09, 2019 5:31 PM To: Qian, Bin; Bailey, Henry Albert (Al) Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Thanks Bin. From: Qian, Bin [mailto:Bin.Qian at windriver.com] Sent: Thursday, January 10, 2019 12:28 AM To: Liu, Changcheng >; Bailey, Henry Albert (Al) > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code The code is no longer used. I will setup a task to do some cleanup. Bin ________________________________ From: Liu, Changcheng [changcheng.liu at intel.com] Sent: Tuesday, January 08, 2019 7:57 AM To: Bailey, Henry Albert (Al); Qian, Bin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi AI & Bin, Please help submit patch to remove them from source code if we could make sure they're not needed anymore. B.R. Changcheng From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Tuesday, January 8, 2019 10:08 PM To: Liu, Changcheng >; Qian, Bin > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code I don't think any of those files in that patches folder are needed anymore. They were added to the c ode back to 2015. Bin will know for sure, whether or not they can be removed. Al From: Liu, Changcheng [mailto:changcheng.liu at intel.com] Sent: Monday, January 07, 2019 8:26 PM To: Bailey, Henry Albert (Al); Qian, Bin Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi AI & Bin, Does sm_patch.py make effect when installing the ISO image or it makes effect to apply all the patches when building the source code? Specifically, what's below patch used for? Could I remove it from source code directly? cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch I'm removing ceph-rest-api from source code and replace it with other service. The above patch define some service realted with ceph-rest-api. I don't know whether I need change that patch file. B.R. Changcheng From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Monday, January 7, 2019 9:50 PM To: Liu, Changcheng >; starlingx-discuss at lists.starlingx.io Cc: Qian, Bin > Subject: RE: [Starlingx-discuss] What is the function of patches directory in StarlingX source code I believe the contents of that folder are installed into a patches folder, and acted upon by the sm-patch utility https://github.com/openstack/stx-ha/blob/master/service-mgmt-tools/sm-tools/sm_tools/sm_patch.py It looks like they populate "V1" tables. Bin should be able to indicate if we still make use of the V1 tables or not. Al From: Liu, Changcheng [mailto:changcheng.liu at intel.com] Sent: Sunday, January 06, 2019 10:57 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] What is the function of patches directory in StarlingX source code Hi all, What's the function of patches directory in StartlingX source code? 
For example: cgcs-root/stx/stx-ha/service-mgmt/sm-db-1.0.0/patches/sm_db_ceph_install.patch Regards, Changcheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From changcheng.liu at intel.com Fri Jan 11 13:53:32 2019 From: changcheng.liu at intel.com (Liu, Changcheng) Date: Fri, 11 Jan 2019 13:53:32 +0000 Subject: [Starlingx-discuss] ceph.pp: puppet usage Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F4344C@SHSMSX103.ccr.corp.intel.com> Hi Ovidiu, What does "<| |>" stand for in below file? cgcs-root/stx/stx-config/puppet-manifests/src/modules/platform/manifests/ceph.pp I GOOGLED a lot for this symbol, but no one introduce it. --Thanks Changcheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From andy.ning at windriver.com Fri Jan 11 14:46:27 2019 From: andy.ning at windriver.com (Andy Ning) Date: Fri, 11 Jan 2019 09:46:27 -0500 Subject: [Starlingx-discuss] ceph.pp: puppet usage In-Reply-To: <0D7994A90DD70040A9F5E77C4D23C57D50F4344C@SHSMSX103.ccr.corp.intel.com> References: <0D7994A90DD70040A9F5E77C4D23C57D50F4344C@SHSMSX103.ccr.corp.intel.com> Message-ID: <94315e14-e8df-c8ca-aeb3-d97de9e5bd26@windriver.com> It's puppet collectors. https://puppet.com/docs/puppet/5.3/lang_collectors.html Andy On 2019-01-11 08:53 AM, Liu, Changcheng wrote: > > Hi Ovidiu, > > What does “<| |>” stand for in below file? > > cgcs-root/stx/stx-config/puppet-manifests/src/modules/platform/manifests/ceph.pp > > I GOOGLED a lot for this symbol, but no one introduce it. > > --Thanks > > Changcheng > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Jan 11 15:25:58 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 11 Jan 2019 15:25:58 +0000 Subject: [Starlingx-discuss] ceph.pp: puppet usage In-Reply-To: <94315e14-e8df-c8ca-aeb3-d97de9e5bd26@windriver.com> References: <0D7994A90DD70040A9F5E77C4D23C57D50F4344C@SHSMSX103.ccr.corp.intel.com> <94315e14-e8df-c8ca-aeb3-d97de9e5bd26@windriver.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E40025@SHSMSX103.ccr.corp.intel.com> Changcheng, Wang Yi in my team is good at puppet, you can consult him for the similar question to get real-time Q/A. Thx. - cindy From: Andy Ning [mailto:andy.ning at windriver.com] Sent: Friday, January 11, 2019 10:46 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] ceph.pp: puppet usage It's puppet collectors. https://puppet.com/docs/puppet/5.3/lang_collectors.html Andy On 2019-01-11 08:53 AM, Liu, Changcheng wrote: Hi Ovidiu, What does “<| |>” stand for in below file? cgcs-root/stx/stx-config/puppet-manifests/src/modules/platform/manifests/ceph.pp I GOOGLED a lot for this symbol, but no one introduce it. --Thanks Changcheng _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From bruce.e.jones at intel.com Fri Jan 11 21:22:12 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 11 Jan 2019 21:22:12 +0000 Subject: [Starlingx-discuss] No MultiOS meeting next week Message-ID: <9A85D2917C58154C960D95352B22818BB28CAB76@fmsmsx121.amr.corp.intel.com> We will not hold the MultiOS call next week. Many of the attendees will be on airplanes at the time. Amd most other StarlingX calls are also cancelled next week due to the community meetup in Chandler. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Fri Jan 11 22:37:16 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Fri, 11 Jan 2019 22:37:16 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20190111 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8A26B@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jan-11 (link) Sanity Test is executed in a Virtual Environment Status: RED Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 18 TCs [PASS] TOTAL: [ 23 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Controller Storage Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 19 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] Multinode Dedicated Storage Setup 03 TCs [PASS] | 01 TCs [FAIL] Provisioning 00 TCs [PASS] | 01 TCs [FAIL] Sanity 00 TCs [PASS] | 19 TCs [FAIL] TOTAL: [ 03 TCs PASS ] | [ 21 TCs FAIL ] ------------------------------------------------------------------ config_controller: Populating Initial System Inventory failed 04/08: Populating initial system inventory ... No handlers could be found for logger "cgtsclient.common.http" Launchpad: https://bugs.launchpad.net/starlingx/+bug/1811473 Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Sat Jan 12 00:03:04 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Sat, 12 Jan 2019 00:03:04 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20190111 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8A2D3@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jan-11 (link) Sanity Test is executed in a Virtual Environment Status: GREEN Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 18 TCs [PASS] TOTAL: [ 23 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 19 TCs [PASS] TOTAL: [ 24 TCs PASS ] ------------------------------------------------------------------ config_controller pass after increase VMs partition size. Launchpad: https://bugs.launchpad.net/starlingx/+bug/1811473 Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From changcheng.liu at intel.com Sat Jan 12 01:59:02 2019 From: changcheng.liu at intel.com (Liu, Changcheng) Date: Sat, 12 Jan 2019 01:59:02 +0000 Subject: [Starlingx-discuss] ceph.pp: puppet usage In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35E40025@SHSMSX103.ccr.corp.intel.com> References: <0D7994A90DD70040A9F5E77C4D23C57D50F4344C@SHSMSX103.ccr.corp.intel.com> <94315e14-e8df-c8ca-aeb3-d97de9e5bd26@windriver.com> <2FD5DDB5A04D264C80D42CA35194914F35E40025@SHSMSX103.ccr.corp.intel.com> Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F43927@SHSMSX103.ccr.corp.intel.com> Get it. Thanks Andy & Cindy. From: Xie, Cindy Sent: Friday, January 11, 2019 11:26 PM To: Andy Ning ; starlingx-discuss at lists.starlingx.io; Liu, Changcheng ; Wang, Yi C Subject: RE: [Starlingx-discuss] ceph.pp: puppet usage Changcheng, Wang Yi in my team is good at puppet, you can consult him for the similar question to get real-time Q/A. Thx. - cindy From: Andy Ning [mailto:andy.ning at windriver.com] Sent: Friday, January 11, 2019 10:46 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] ceph.pp: puppet usage It's puppet collectors. https://puppet.com/docs/puppet/5.3/lang_collectors.html Andy On 2019-01-11 08:53 AM, Liu, Changcheng wrote: Hi Ovidiu, What does “<| |>” stand for in below file? cgcs-root/stx/stx-config/puppet-manifests/src/modules/platform/manifests/ceph.pp I GOOGLED a lot for this symbol, but no one introduce it. --Thanks Changcheng _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Andy Ning Cube: 3071 Tel: 613-9631408 (int: 4408) Skype: andy.ning.wr -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Sun Jan 13 22:24:40 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Sun, 13 Jan 2019 22:24:40 +0000 Subject: [Starlingx-discuss] No containerization meeting Monday Message-ID: Similar to Bruce's note, we will not hold a containerization meeting on Monday Jan 14th as several attendees will be travelling to the community meetup in Chandler. Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Jan 14 02:28:26 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 14 Jan 2019 02:28:26 +0000 Subject: [Starlingx-discuss] RT kernel upgrade in CentOS 7.6 Message-ID: <9700A18779F35F49AF027300A49E7C765FE6B610@SHSMSX101.ccr.corp.intel.com> Hi all, For CentOS 7.6, the latest source rpm package we have for std and rt kernel are: Std kernel: kernel-3.10.0-957.1.3.el7.src.rpm Rt kernel: kernel-rt-3.10.0-957.rt56.910.el7.src.rpm You could find std kernel is 1 minor version ahead of rt kernel. And there is some discussion about it in patch [0]. There are some question I want to ask, and need your suggestion. Thanks in advance. 1st question: Is it acceptable to have minor different version for std and rt kernel? If not, why? 2nd question: If we decide to upgrade rt kernel to the same version as std. There are several way to do it: a) We generate the src rpm based on code in GIT manually. And save the rpm package to CENGN server. So mirror downloader script could get the new src rpm as previous. b) We switch to git code instead of src rpm package. We Add the rt kernel git to our manifest file, and download rt kernel code when do "repo sync". 
c) Keep current RT kernel first, and upgrade to new rt kernel when new src rpm is available. I don't know why it is not available yet. :) [0]: https://review.openstack.org/625773 Best Regards Shuicheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Mon Jan 14 02:43:19 2019 From: yong.hu at intel.com (Hu, Yong) Date: Mon, 14 Jan 2019 02:43:19 +0000 Subject: [Starlingx-discuss] RT kernel upgrade in CentOS 7.6 In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE6B610@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FE6B610@SHSMSX101.ccr.corp.intel.com> Message-ID: <67660205-4ACC-4689-9F2F-619EAE1CFAD1@intel.com> In my own opinion, as long as there are no explicit known issues, such as CVE issues, we could keep up with whatever the latest version of RT kernel CentOS provides. It means I vote #2.c. Of course, WR folks might have the best insight whether there are any potential issues on such a combined case that controller nodes run with standard kernel while a slight different RT kernel runs on the compute node. Regards, Yong From: "Lin, Shuicheng" Date: Monday, 14 January 2019 at 10:28 AM To: "Somerville, Jim" , "Little, Scott" , "Wold, Saul" , "Rowsell, Brent" , "Hu, Yong" , "Xie, Cindy" Cc: "starlingx-discuss at lists.starlingx.io" Subject: RT kernel upgrade in CentOS 7.6 Hi all, For CentOS 7.6, the latest source rpm package we have for std and rt kernel are: Std kernel: kernel-3.10.0-957.1.3.el7.src.rpm Rt kernel: kernel-rt-3.10.0-957.rt56.910.el7.src.rpm You could find std kernel is 1 minor version ahead of rt kernel. And there is some discussion about it in patch [0]. There are some question I want to ask, and need your suggestion. Thanks in advance. 1st question: Is it acceptable to have minor different version for std and rt kernel? If not, why? 2nd question: If we decide to upgrade rt kernel to the same version as std. There are several way to do it: 1. We generate the src rpm based on code in GIT manually. And save the rpm package to CENGN server. So mirror downloader script could get the new src rpm as previous. 2. We switch to git code instead of src rpm package. We Add the rt kernel git to our manifest file, and download rt kernel code when do “repo sync”. 3. Keep current RT kernel first, and upgrade to new rt kernel when new src rpm is available. I don’t know why it is not available yet. :) [0]: https://review.openstack.org/625773 Best Regards Shuicheng -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Brent.Rowsell at windriver.com Mon Jan 14 02:52:41 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Mon, 14 Jan 2019 02:52:41 +0000 Subject: [Starlingx-discuss] RT kernel upgrade in CentOS 7.6 In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE6B610@SHSMSX101.ccr.corp.intel.com> References: <9700A18779F35F49AF027300A49E7C765FE6B610@SHSMSX101.ccr.corp.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB385150@ALA-MBD.corp.ad.wrs.com> Please see inline Brent From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Sunday, January 13, 2019 9:28 PM To: Somerville, Jim ; Little, Scott ; Wold, Saul ; Rowsell, Brent ; Hu, Yong ; Xie, Cindy Cc: starlingx-discuss at lists.starlingx.io Subject: RT kernel upgrade in CentOS 7.6 Hi all, For CentOS 7.6, the latest source rpm package we have for std and rt kernel are: Std kernel: kernel-3.10.0-957.1.3.el7.src.rpm Rt kernel: kernel-rt-3.10.0-957.rt56.910.el7.src.rpm You could find std kernel is 1 minor version ahead of rt kernel. And there is some discussion about it in patch [0]. There are some question I want to ask, and need your suggestion. Thanks in advance. 1st question: Is it acceptable to have minor different version for std and rt kernel? If not, why? [BR] We want to align on a single version as we do not want to deal with two different kernel versions for changes (ex. CVE's) 2nd question: If we decide to upgrade rt kernel to the same version as std. There are several way to do it: a) We generate the src rpm based on code in GIT manually. And save the rpm package to CENGN server. So mirror downloader script could get the new src rpm as previous. b) We switch to git code instead of src rpm package. We Add the rt kernel git to our manifest file, and download rt kernel code when do "repo sync". [BR] I think this is the preferred path. c) Keep current RT kernel first, and upgrade to new rt kernel when new src rpm is available. I don't know why it is not available yet. :) [0]: https://review.openstack.org/625773 Best Regards Shuicheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Mon Jan 14 08:05:24 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Mon, 14 Jan 2019 08:05:24 +0000 Subject: [Starlingx-discuss] RT kernel upgrade in CentOS 7.6 In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB385150@ALA-MBD.corp.ad.wrs.com> References: <9700A18779F35F49AF027300A49E7C765FE6B610@SHSMSX101.ccr.corp.intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB385150@ALA-MBD.corp.ad.wrs.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FE6B7C9@SHSMSX101.ccr.corp.intel.com> Hi all, I just find 957.1 rt kernel is the 3rd party repo now. So I will upgrade the rt kernel to 957.1, the same as std kernel. Here is the link: http://linuxsoft.cern.ch/cern/centos/7.6/rt/Sources/SPackages/kernel-rt-3.10.0-957.1.3.rt56.913.el7.src.rpm Next time, when we have different version for std and rt kernel, I will try to follow 2.b method. Thanks. 
Best Regards Shuicheng From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Monday, January 14, 2019 10:53 AM To: Lin, Shuicheng ; Somerville, Jim ; Little, Scott ; Wold, Saul ; Hu, Yong ; Xie, Cindy Cc: starlingx-discuss at lists.starlingx.io Subject: RE: RT kernel upgrade in CentOS 7.6 Please see inline Brent From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: Sunday, January 13, 2019 9:28 PM To: Somerville, Jim >; Little, Scott >; Wold, Saul >; Rowsell, Brent >; Hu, Yong >; Xie, Cindy > Cc: starlingx-discuss at lists.starlingx.io Subject: RT kernel upgrade in CentOS 7.6 Hi all, For CentOS 7.6, the latest source rpm package we have for std and rt kernel are: Std kernel: kernel-3.10.0-957.1.3.el7.src.rpm Rt kernel: kernel-rt-3.10.0-957.rt56.910.el7.src.rpm You could find std kernel is 1 minor version ahead of rt kernel. And there is some discussion about it in patch [0]. There are some question I want to ask, and need your suggestion. Thanks in advance. 1st question: Is it acceptable to have minor different version for std and rt kernel? If not, why? [BR] We want to align on a single version as we do not want to deal with two different kernel versions for changes (ex. CVE's) 2nd question: If we decide to upgrade rt kernel to the same version as std. There are several way to do it: a) We generate the src rpm based on code in GIT manually. And save the rpm package to CENGN server. So mirror downloader script could get the new src rpm as previous. b) We switch to git code instead of src rpm package. We Add the rt kernel git to our manifest file, and download rt kernel code when do "repo sync". [BR] I think this is the preferred path. c) Keep current RT kernel first, and upgrade to new rt kernel when new src rpm is available. I don't know why it is not available yet. :) [0]: https://review.openstack.org/625773 Best Regards Shuicheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From quickconvey at gmail.com Mon Jan 14 08:55:38 2019 From: quickconvey at gmail.com (Quick Convey) Date: Mon, 14 Jan 2019 14:25:38 +0530 Subject: [Starlingx-discuss] Starlingx network requirement Message-ID: Dear All, I am planing to setup Starlingx in bare-metal (controller-storage deployment) https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage I have couple of questions *Q1)* What is the network requirements for this setup. All nodes should be in same network, that is the only requirement, right ? *Q2)* In "Hardware Requirements" section, I seen *"Data: n x 10GE Compute"*, what that means, is it number of physical interfaces needed for data ? what that* "n" *indicate ? is it number of compute nodes ? *Q3) *What is the number of physical interfaces needed in controller and compute bare-metal nodes ?. From the document I understand that only 2 physical interfaces are enough, right ? *Q4) *Is there any picture which shows *Management*, *OAM* and *Data* interface connections between controller and compute nodes ? *Thanks,* -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From marcel at schaible-consulting.de Mon Jan 14 14:59:36 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Mon, 14 Jan 2019 15:59:36 +0100 (CET)
Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore
In-Reply-To: <6fd62fe6-12ce-9c3e-d64a-cdd11661c0ea@windriver.com>
References: <1360000352.1296432.1547047414173@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA411CB3@ALA-MBD.corp.ad.wrs.com> <6fd62fe6-12ce-9c3e-d64a-cdd11661c0ea@windriver.com>
Message-ID: <300960740.1365669.1547477976869@communicator.strato.com>

Hi,

the image looks better. At least I am now getting the installer screen.

Now I am getting the following error:

<--- snipp --->
anaconda 21.48.22.121-1 for CentOS 7 started.
* installation log files are stored in /tmp during the installation
* shell is available on TTY2
* when reporting a bug add logs from /tmp as separate text/plain attachments
02:17:51 Running pre-installation scripts

There was an error running the kickstart script at line 256. This is a fatal error and installation will be aborted. The details of this error are:

Installation failed.

ERROR: Specified installation (sda) or boot (sda) device is a USB drive.
<--- snipp --->

Question: How can I configure the installation/root device? In my case I have 2 1TB nvme flash drives.

Thanks for your help in advance

Marcel

> Scott Little wrote on 9 January 2019 at 21:54:
>
>
> Fix as of this image ...
>
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190109T162801Z/outputs/iso/bootimage.iso
>
>
>
> On 2019-01-09 11:08 a.m., Penney, Don wrote:
> > I just checked the latest CENGN ISO, and it's still using the unmodified installer image. See attached email thread.
> >
> > Scott, any update on this issue?
> >
> > -----Original Message-----
> > From: Marcel Schaible [mailto:marcel at schaible-consulting.de]
> > Sent: Wednesday, January 09, 2019 10:24 AM
> > To: starlingx-discuss at lists.starlingx.io
> > Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore
> >
> > Hi,
> >
> > I am trying to give the above mentioned ISO image a try on our ArteSyn/MaxCore box without success.
> >
> > After starting the installation I'll get the following message:
> >
> > [ 18.503656] localhost iscsid[643]: iSCSI daemon with pid=644 started!
> > [ 18.503816] localhost iscsid[643]: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi
> > [ 18.503949] localhost iscsid[643]: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName.
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or > > [ 18.504121] localhost iscsid[643]: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi > > [ 18.504254] localhost iscsid[643]: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf > > [ 21.185105] localhost kernel: scsi 0:0:0:0: Direct-Access Generic Ultra HS-COMBO 1.98 PQ: 0 ANSI: 0 > > [ 21.204488] localhost kernel: sd 0:0:0:0: [sda] Attached SCSI removable disk > > [ 18.665925] localhost multipathd[636]: sda: add path (uevent) > > [ 18.667242] localhost multipathd[636]: sda: failed to get path uid > > [ 18.667399] localhost multipathd[636]: uevent trigger error > > [ 21.930592] localhost kernel: scsi 1:0:0:0: Direct-Access TrekStor TrekStor USB CS PQ: 0 ANSI: 0 CCS > > [ 21.940187] localhost kernel: scsi 1:0:0:0: alua: supports implicit and explicit TPGS > > [ 21.947210] localhost kernel: scsi 1:0:0:0: alua: No target port descriptors found > > [ 21.953921] localhost kernel: scsi 1:0:0:0: alua: not attached > > [ 21.959034] localhost kernel: sd 1:0:0:0: [sdb] 15257600 512-byte logical blocks: (7.81 GB/7.27 GiB) > > [ 21.968220] localhost kernel: sd 1:0:0:0: [sdb] Write Protect is off > > [ 21.973575] localhost kernel: sd 1:0:0:0: [sdb] Mode Sense: 43 00 00 00 > > [ 21.974314] localhost kernel: sd 1:0:0:0: [sdb] No Caching mode page found > > [ 21.980260] localhost kernel: sd 1:0:0:0: [sdb] Assuming drive cache: write through > > [ 21.990203] localhost kernel: sdb: sdb1 > > [ 21.994973] localhost kernel: sd 1:0:0:0: [sdb] Attached SCSI removable disk > > [ 19.533105] localhost multipathd[636]: sdb: add path (uevent) > > [ 63.961502] localhost kernel: random: crng init done > > [ 142.536373] localhost dracut-initqueue[646]: Warning: dracut-initqueue timeout - starting timeout scripts > > ... > > [ 203.386668] localhost dracut-initqueue[646]: Warning: Could not boot. > > [ 204.505706] localhost systemd[1]: Received SIGRTMIN+20 from PID 625 (plymouthd). > > [ 204.505980] localhost dracut-initqueue[646]: Warning: /dev/root does not exist > > [ 204.514883] localhost systemd[1]: Starting Dracut Emergency Shell... > > [ 204.532145] localhost systemd[1]: Received SIGRTMIN+21 from PID 625 (plymouthd). > > > > I suspect that the installer is not finding the disk, which is in our case nvme0n1 and nvme1n1 flash storage. > > > > Any idea or hint is welcome! 
> > > > Thanks > > > > Marcel > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > From Don.Penney at windriver.com Mon Jan 14 15:05:15 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Mon, 14 Jan 2019 15:05:15 +0000 Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore In-Reply-To: <300960740.1365669.1547477976869@communicator.strato.com> References: <1360000352.1296432.1547047414173@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA411CB3@ALA-MBD.corp.ad.wrs.com> <6fd62fe6-12ce-9c3e-d64a-cdd11661c0ea@windriver.com> <300960740.1365669.1547477976869@communicator.strato.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA4133BC@ALA-MBD.corp.ad.wrs.com> Hi Marcel, When you're at the boot option you want to choose from the installer menu, hit to edit the boot command-line and modify the boot_device and rootfs_device parameters from the default sda to the correct device name (ie. nvme0n1). Cheers, Don. -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Monday, January 14, 2019 10:00 AM To: Little, Scott; Penney, Don; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore Hi, the image lloks better. At least I am getting now the installer screen. Now I am getting the following error: <--- snipp ---> anaconda 21.48.22.121-1 for CentOS 7 started. * installation log files are stored in /tmp during the installation * shell is available on TTY2 * when reporting a bug add logs from /tmp as separate text/plain attachments 02:17:51 Running pre-installation scripts There was an error running the kickstart script at line 256. This is a fatal error and installation will be aborted. The details of this error are: Installation failed. ERROR: Specified installation (sda) or boot (sda) device is a USB drive. <--- snipp ---> Question: How can I confugure the installation/root device? I my case I have 2 1TB nvme flash drives. Thanks for your help in advance Marcel > Scott Little hat am 9. Januar 2019 um 21:54 geschrieben: > > > Fix as of this image ... > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190109T162801Z/outputs/iso/bootimage.iso > > > > On 2019-01-09 11:08 a.m., Penney, Don wrote: > > I just checked the latest CENGN ISO, and it's still using the unmodified installer image. See attached email thread. > > > > Scott, any update on this issue? > > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Wednesday, January 09, 2019 10:24 AM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore > > > > Hi, > > > > I am trying to give the above mentioned ISO image a try on our ArteSyn/MaxCore box without success. > > > > After starting the installation I'll get the following message: > > > > [ 18.503656] localhost iscsid[643]: iSCSI daemon with pid=644 started! > > [ 18.503816] localhost iscsid[643]: can't open InitiatorName configuration file /etc/iscsi/initiatorname.iscsi > > [ 18.503949] localhost iscsid[643]: Warning: InitiatorName file /etc/iscsi/initiatorname.iscsi does not exist or does not contain a properly formatted InitiatorName. 
If using software iscsi (iscsi_tcp or ib_iser) or partial offload (bnx2i or cxgbi iscsi), you may not be able to log into or > > [ 18.504121] localhost iscsid[643]: can't open InitiatorAlias configuration file /etc/iscsi/initiatorname.iscsi > > [ 18.504254] localhost iscsid[643]: can't open iscsid.safe_logout configuration file /etc/iscsi/iscsid.conf > > [ 21.185105] localhost kernel: scsi 0:0:0:0: Direct-Access Generic Ultra HS-COMBO 1.98 PQ: 0 ANSI: 0 > > [ 21.204488] localhost kernel: sd 0:0:0:0: [sda] Attached SCSI removable disk > > [ 18.665925] localhost multipathd[636]: sda: add path (uevent) > > [ 18.667242] localhost multipathd[636]: sda: failed to get path uid > > [ 18.667399] localhost multipathd[636]: uevent trigger error > > [ 21.930592] localhost kernel: scsi 1:0:0:0: Direct-Access TrekStor TrekStor USB CS PQ: 0 ANSI: 0 CCS > > [ 21.940187] localhost kernel: scsi 1:0:0:0: alua: supports implicit and explicit TPGS > > [ 21.947210] localhost kernel: scsi 1:0:0:0: alua: No target port descriptors found > > [ 21.953921] localhost kernel: scsi 1:0:0:0: alua: not attached > > [ 21.959034] localhost kernel: sd 1:0:0:0: [sdb] 15257600 512-byte logical blocks: (7.81 GB/7.27 GiB) > > [ 21.968220] localhost kernel: sd 1:0:0:0: [sdb] Write Protect is off > > [ 21.973575] localhost kernel: sd 1:0:0:0: [sdb] Mode Sense: 43 00 00 00 > > [ 21.974314] localhost kernel: sd 1:0:0:0: [sdb] No Caching mode page found > > [ 21.980260] localhost kernel: sd 1:0:0:0: [sdb] Assuming drive cache: write through > > [ 21.990203] localhost kernel: sdb: sdb1 > > [ 21.994973] localhost kernel: sd 1:0:0:0: [sdb] Attached SCSI removable disk > > [ 19.533105] localhost multipathd[636]: sdb: add path (uevent) > > [ 63.961502] localhost kernel: random: crng init done > > [ 142.536373] localhost dracut-initqueue[646]: Warning: dracut-initqueue timeout - starting timeout scripts > > ... > > [ 203.386668] localhost dracut-initqueue[646]: Warning: Could not boot. > > [ 204.505706] localhost systemd[1]: Received SIGRTMIN+20 from PID 625 (plymouthd). > > [ 204.505980] localhost dracut-initqueue[646]: Warning: /dev/root does not exist > > [ 204.514883] localhost systemd[1]: Starting Dracut Emergency Shell... > > [ 204.532145] localhost systemd[1]: Received SIGRTMIN+21 from PID 625 (plymouthd). > > > > I suspect that the installer is not finding the disk, which is in our case nvme0n1 and nvme1n1 flash storage. > > > > Any idea or hint is welcome! > > > > Thanks > > > > Marcel > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > From ildiko.vancsa at gmail.com Mon Jan 14 16:12:11 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 14 Jan 2019 09:12:11 -0700 Subject: [Starlingx-discuss] Contributor Meetup dial-in information Message-ID: Hi, As a reminder the first StarlingX Contributor Meetup is taking place in Chandler, Arizona on January 15-16. You can find further information and planned agenda on the this etherpad: https://etherpad.openstack.org/p/stx-chandler-meetup Those of you who cannot attend in person but would be interested in participating in the discussions we will have a Zoom meeting open for both days. You can find the dial-in information below as well as on the etherpad. It is also a friendly reminder that the colliding StarlingX team meetings will be cancelled this week in favor of providing the Contributor Meetup call. 
Please let me know if you have any questions.

Thanks and Best Regards,
Ildikó

Call details:
• Join Zoom Meeting https://zoom.us/j/316228339
• One tap mobile
• +16699006833,,316228339# US (San Jose)
• +16468769923,,316228339# US (New York)
• Dial by your location
• +1 669 900 6833 US (San Jose)
• +1 646 876 9923 US (New York)
• Meeting ID: 316 228 339
• Find your local number: https://zoom.us/u/ddD6DnT3X

From marcel at schaible-consulting.de Mon Jan 14 16:20:38 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Mon, 14 Jan 2019 17:20:38 +0100 (CET)
Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA4133BC@ALA-MBD.corp.ad.wrs.com>
References: <1360000352.1296432.1547047414173@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA411CB3@ALA-MBD.corp.ad.wrs.com> <6fd62fe6-12ce-9c3e-d64a-cdd11661c0ea@windriver.com> <300960740.1365669.1547477976869@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA4133BC@ALA-MBD.corp.ad.wrs.com>
Message-ID: <1537574546.1372014.1547482838631@communicator.strato.com>

Hi Don,

ok, it worked. Is it also possible to use a raid like /dev/md for installation and booting?

Thanks

Marcel

> "Penney, Don" wrote on 14 January 2019 at 16:05:
>
>
> Hi Marcel,
>
> When you're at the boot option you want to choose from the installer menu, hit TAB to edit the boot command-line and modify the boot_device and rootfs_device parameters from the default sda to the correct device name (i.e. nvme0n1).
>
> Cheers,
> Don.
>
>
> -----Original Message-----
> From: Marcel Schaible [mailto:marcel at schaible-consulting.de]
> Sent: Monday, January 14, 2019 10:00 AM
> To: Little, Scott; Penney, Don; starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore
>
> Hi,
>
> the image looks better. At least I am now getting the installer screen.
>
> Now I am getting the following error:
>
> <--- snipp --->
> anaconda 21.48.22.121-1 for CentOS 7 started.
> * installation log files are stored in /tmp during the installation
> * shell is available on TTY2
> * when reporting a bug add logs from /tmp as separate text/plain attachments
> 02:17:51 Running pre-installation scripts
>
> There was an error running the kickstart script at line 256. This is a fatal error and installation will be aborted. The details of this error are:
>
> Installation failed.
>
> ERROR: Specified installation (sda) or boot (sda) device is a USB drive.
> <--- snipp --->
>
>
> Question: How can I configure the installation/root device? In my case I have 2 x 1TB NVMe flash drives.
>
> Thanks for your help in advance
>
> Marcel
> > Scott Little wrote on 9 January 2019 at 21:54:
> >
> >
> > Fix as of this image ...
> >
> > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190109T162801Z/outputs/iso/bootimage.iso
> >
> >
> >
> > On 2019-01-09 11:08 a.m., Penney, Don wrote:
> > > I just checked the latest CENGN ISO, and it's still using the unmodified installer image. See attached email thread.
> > >
> > > Scott, any update on this issue?
> > >
> > > -----Original Message-----
> > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de]
> > > Sent: Wednesday, January 09, 2019 10:24 AM
> > > To: starlingx-discuss at lists.starlingx.io
> > > Subject: [Starlingx-discuss] StarlingX ISO Image from Cengn Mirror not working on Artesyn/MaxCore
> > >
> > > Hi,
> > >
> > > I am trying to give the above mentioned ISO image a try on our ArteSyn/MaxCore box without success.
> > >
> > > After starting the installation I'll get the following message:
> > > [...]
> > > I suspect that the installer is not finding the disk, which is in our case nvme0n1 and nvme1n1 flash storage.
> > >
> > > Any idea or hint is welcome!
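To make Don's suggestion from this thread concrete, the edited installer boot line carries the two parameters pointing at the NVMe device instead of the default sda. A sketch (the surrounding boot arguments are omitted and depend on the ISO):

    boot_device=nvme0n1 rootfs_device=nvme0n1

Both parameters default to sda, which appears to be what triggers the "device is a USB drive" kickstart error on boxes where sda is the installation USB stick.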
> > >
> > > Thanks
> > >
> > > Marcel
> > >
> > > _______________________________________________
> > > Starlingx-discuss mailing list
> > > Starlingx-discuss at lists.starlingx.io
> > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> > >

From Matt.Peters at windriver.com Mon Jan 14 17:19:34 2019
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Mon, 14 Jan 2019 17:19:34 +0000
Subject: [Starlingx-discuss] Starlingx network requirement
In-Reply-To:
References:
Message-ID: <9F21D86C-B952-4D15-B25C-50CD2484FB23@windriver.com>

See inline.

From: Quick Convey
Date: Monday, January 14, 2019 at 3:56 AM
To: "starlingx-discuss at lists.starlingx.io"
Subject: [Starlingx-discuss] Starlingx network requirement

Dear All,

I am planning to setup Starlingx in bare-metal (controller-storage deployment)
https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage

I have a couple of questions

Q1) What are the network requirements for this setup. All nodes should be in the same network, that is the only requirement, right?

The Management network is on all hosts and also serves as the PXEBoot network for booting other hosts from the Controller hosts.
The OAM network is required for controller hosts only.
The Data network is required for compute hosts only.

Q2) In the "Hardware Requirements" section, I saw "Data: n x 10GE Compute"; what does that mean, is it the number of physical interfaces needed for data? What does that "n" indicate? Is it the number of compute nodes?

The 'n' indicates you can have more than 1 port if required for your application deployment. The data networks are not used by the platform, so it is up to the application requirements to decide the required number of ports and network topology.

Q3) What is the number of physical interfaces needed in controller and compute bare-metal nodes? From the document I understand that only 2 physical interfaces are enough, right?

Controller: 1 Mgmt, 1 OAM
Compute: 1 Mgmt, N Data (where N>=1)

Q4) Is there any picture which shows Management, OAM and Data interface connections between controller and compute nodes?

I don't think there is a StarlingX document that shows the interconnection.

Thanks,
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From changcheng.liu at intel.com Tue Jan 15 09:17:46 2019
From: changcheng.liu at intel.com (Liu, Changcheng)
Date: Tue, 15 Jan 2019 09:17:46 +0000
Subject: [Starlingx-discuss] how to create feature branch remotely on StarlingX code repository
Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F4719D@SHSMSX103.ccr.corp.intel.com>

Hi all,

I need to create a remote feature branch for cooperation on the Ceph upgrade.

Does anyone know how to create a feature branch remotely on StarlingX?

B.R.
Changcheng

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From Ovidiu.Poncea at windriver.com Tue Jan 15 09:39:31 2019 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Tue, 15 Jan 2019 09:39:31 +0000 Subject: [Starlingx-discuss] how to create feature branch remotely on StarlingX code repository In-Reply-To: <0D7994A90DD70040A9F5E77C4D23C57D50F4719D@SHSMSX103.ccr.corp.intel.com> References: <0D7994A90DD70040A9F5E77C4D23C57D50F4719D@SHSMSX103.ccr.corp.intel.com> Message-ID: <4C60D9C5C8176C47874FFF36647AA19E9D6165A7@ALA-MBD.corp.ad.wrs.com> Hi, till someone with direct experience with this process can help, you may want to try this: https://hyperledger-fabric.readthedocs.io/en/release-1.3/Gerrit/best-practices.html#using-draft-branches * Using Draft Branches * Using Sandbox changes Haven't personally tried it as I use the alternate approach - topic branches. For this I test everything in my workspace and create multiple dependent commits with the same topic (so can't help you directly to create the feature branch). Difference is this is not draft code! Ovidiu ________________________________ From: Liu, Changcheng [changcheng.liu at intel.com] Sent: Tuesday, January 15, 2019 11:17 AM To: starlingx-discuss at lists.starlingx.io; Poncea, Ovidiu Subject: how to create feature branch remotely on StarlingX code repository Hi all, I need create remote feature branch for cooperation to upgrade Ceph. Does anyone know how to create feature branch remotely on StarlingX? B.R. Changcheng -------------- next part -------------- An HTML attachment was scrubbed... URL: From changcheng.liu at intel.com Tue Jan 15 10:20:28 2019 From: changcheng.liu at intel.com (Liu, Changcheng) Date: Tue, 15 Jan 2019 10:20:28 +0000 Subject: [Starlingx-discuss] how to create feature branch remotely on StarlingX code repository In-Reply-To: <4C60D9C5C8176C47874FFF36647AA19E9D6165A7@ALA-MBD.corp.ad.wrs.com> References: <0D7994A90DD70040A9F5E77C4D23C57D50F4719D@SHSMSX103.ccr.corp.intel.com> <4C60D9C5C8176C47874FFF36647AA19E9D6165A7@ALA-MBD.corp.ad.wrs.com> Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F471F6@SHSMSX103.ccr.corp.intel.com> Thanks Ovidiu. I'll try it later. From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Tuesday, January 15, 2019 5:40 PM To: Liu, Changcheng ; starlingx-discuss at lists.starlingx.io Subject: RE: how to create feature branch remotely on StarlingX code repository Hi, till someone with direct experience with this process can help, you may want to try this: https://hyperledger-fabric.readthedocs.io/en/release-1.3/Gerrit/best-practices.html#using-draft-branches * Using Draft Branches * Using Sandbox changes Haven't personally tried it as I use the alternate approach - topic branches. For this I test everything in my workspace and create multiple dependent commits with the same topic (so can't help you directly to create the feature branch). Difference is this is not draft code! Ovidiu ________________________________ From: Liu, Changcheng [changcheng.liu at intel.com] Sent: Tuesday, January 15, 2019 11:17 AM To: starlingx-discuss at lists.starlingx.io; Poncea, Ovidiu Subject: how to create feature branch remotely on StarlingX code repository Hi all, I need create remote feature branch for cooperation to upgrade Ceph. Does anyone know how to create feature branch remotely on StarlingX? B.R. Changcheng -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From marcel at schaible-consulting.de Tue Jan 15 13:41:08 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Tue, 15 Jan 2019 14:41:08 +0100 (CET)
Subject: [Starlingx-discuss] Installation on a specific raid configuration?
Message-ID: <57371165.1423631.1547559668744@communicator.strato.com>

Hi,

I am trying to install the 20190109 iso on a raid configuration with two NVMe disks.

The installer by default creates the following disk layout and does not obey an existing partitioning scheme:

#       Start          End    Size   Type             Name
1        2048       616447    300M   EFI System       EFI System Partition
2      616448      1640447    500M   Microsoft basic
3     1640448     42600447   19.5G   Microsoft basic
4    42600448    430573567    185G   Linux LVM

Questions:
What are partitions #2 and #3 used for?

For a fail-over scenario we would actually like to have the following layout:

#       Start          End    Size   Type             Name
1        2048       616447    300M   EFI System       EFI System Partition
2      616448       ......    950G   Raid             mirrored to second nvme1n1p2
3      .......      ......     32G   swap

Is there a way to tell the installer about this configuration?

Thanks

Marcel

From km.giuseppesannino at gmail.com Tue Jan 15 14:54:50 2019
From: km.giuseppesannino at gmail.com (Giuseppe Sannino)
Date: Tue, 15 Jan 2019 15:54:50 +0100
Subject: [Starlingx-discuss] Unable to create tenant network in case of a flat provider network
Message-ID:

Hi all,
As per the subject, it seems I can't create a tenant network in case the provider network has been configured as flat.

That's the way the system looks now:

[wrsroot at controller-0 ~(keystone_admin)]$ system host-if-list controller-0
+--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+---------------------------+-------------------+
| uuid                                 | name | class    | type     | vlan id | ports     | uses i/f | used by i/f | attributes                | provider networks |
+--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+---------------------------+-------------------+
| 38da435a-5fc5-44ac-b038-52af9f23d52d | lo   | platform | virtual  | None    | []        | []       | []          | MTU=1500                  | None              |
| 7f806264-9c19-45f4-b7d7-df1f90e9d540 | eno5 | platform | ethernet | None    | [u'eno5'] | []       | []          | MTU=1500                  | None              |
| 9f7365e8-bc9a-4c9c-8725-72d40f5a18ff | eno6 | data     | ethernet | None    | [u'eno6'] | []       | []          | MTU=1500,accelerated=True | public_flat       |
+--------------------------------------+------+----------+----------+---------+-----------+----------+-------------+---------------------------+-------------------+

[wrsroot at controller-0 ~(keystone_admin)]$ openstack providernet list
+--------------------------------------+-------------+------+------+--------+
| ID                                   | Name        | Type | MTU  | Ranges |
+--------------------------------------+-------------+------+------+--------+
| 197c33ba-6db0-4918-9a0e-e98b01aee1e8 | public_flat | flat | 1500 |        |
+--------------------------------------+-------------+------+------+--------+

[wrsroot at controller-0 ~(keystone_admin)]$ openstack providernet show public_flat
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| description      | None                                 |
| id               | 197c33ba-6db0-4918-9a0e-e98b01aee1e8 |
| mtu              | 1500                                 |
| name             | public_flat                          |
| ranges           |                                      |
| status           | ACTIVE                               |
| type             | flat                                 |
| vlan_transparent | False                                |
+------------------+--------------------------------------+

[wrsroot at controller-0 ~(keystone_admin)]$ openstack network list
+--------------------------------------+---------------+--------------------------------------+ | ID | Name | Subnets | +--------------------------------------+---------------+--------------------------------------+ | 9d50e31b-e533-4424-b459-53679d2d4eb5 | provider_flat | 2d591fde-0f11-446e-928f-1592ea4e42a7 | +--------------------------------------+---------------+--------------------------------------+ [wrsroot at controller-0 ~(keystone_admin)]$ openstack network show provider_flat +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | | availability_zone_hints | | | availability_zones | nova | | created_at | 2019-01-10T10:58:51Z | | description | | | dns_domain | None | | id | 9d50e31b-e533-4424-b459-53679d2d4eb5 | | ipv4_address_scope | None | | ipv6_address_scope | None | | is_default | False | | is_vlan_transparent | False | | mtu | 1500 | | name | provider_flat | | port_security_enabled | False | | project_id | dc1c43142a0d43578783bc64dc0650f7 | | provider:network_type | flat | | provider:physical_network | public_flat | | provider:segmentation_id | None | | qos_policy_id | None | | revision_number | 4 | | router:external | External | | segments | None | | shared | True | | status | ACTIVE | | subnets | 2d591fde-0f11-446e-928f-1592ea4e42a7 | | tags | | | updated_at | 2019-01-10T11:01:41Z | +---------------------------+--------------------------------------+ [wrsroot at controller-0 ~(keystone_admin)]$ openstack subnet list +--------------------------------------+------------------+--------------------------------------+-------------+-----------------------+-----------------+ | ID | Name | Network | Subnet | Allocation Pools | WRS-Net:VLAN ID | +--------------------------------------+------------------+--------------------------------------+-------------+-----------------------+-----------------+ | 2d591fde-0f11-446e-928f-1592ea4e42a7 | sn_provider_flat | 9d50e31b-e533-4424-b459-53679d2d4eb5 | 10.1.8.0/24 | 10.1.8.100-10.1.8.200 | | +--------------------------------------+------------------+--------------------------------------+-------------+-----------------------+-----------------+ [wrsroot at controller-0 ~(keystone_admin)]$ openstack subnet show sn_provider_flat +-------------------------+--------------------------------------+ | Field | Value | +-------------------------+--------------------------------------+ | allocation_pools | 10.1.8.100-10.1.8.200 | | cidr | 10.1.8.0/24 | | created_at | 2019-01-10T10:59:29Z | | description | | | dns_nameservers | | | enable_dhcp | True | | gateway_ip | 10.1.8.254 | | host_routes | | | id | 2d591fde-0f11-446e-928f-1592ea4e42a7 | | ip_version | 4 | | ipv6_address_mode | None | | ipv6_ra_mode | None | | name | sn_provider_flat | | network_id | 9d50e31b-e533-4424-b459-53679d2d4eb5 | | project_id | dc1c43142a0d43578783bc64dc0650f7 | | revision_number | 1 | | segment_id | None | | service_types | | | subnetpool_id | None | | tags | | | updated_at | 2019-01-10T11:01:41Z | | use_default_subnet_pool | None | | wrs_net_vlan_id | None | +-------------------------+--------------------------------------+ If I try to create a tenant network now, I get: [wrsroot at controller-0 ~(keystone_admin)]$ openstack network create new_nw Error while executing command: Unable to create the network. No tenant network is available for allocation. 
(HTTP 503) (Request-ID: req-14d0ad64-cdc4-47f4-9b43-4299a3aec8d5)

In case of a "vlan" provider network I have no limitation.

Any suggestion?

Many thanks
BR
/Giuseppe
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Don.Penney at windriver.com Tue Jan 15 15:21:37 2019
From: Don.Penney at windriver.com (Penney, Don)
Date: Tue, 15 Jan 2019 15:21:37 +0000
Subject: [Starlingx-discuss] Installation on a specific raid configuration?
In-Reply-To: <57371165.1423631.1547559668744@communicator.strato.com>
References: <57371165.1423631.1547559668744@communicator.strato.com>
Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA413C3E@ALA-MBD.corp.ad.wrs.com>

The initial primary disk filesystems are preconfigured by the kickstarts for each node. For the standard controller, for example, this is:
- 500M for a boot partition
- 20G for the rootfs (/)
- the remainder for the primary volume group (currently named cgts-vg), which is used for various system volumes.

In addition, you've got the EFI partition that's automatically created as 300M. For legacy boot systems, there is a smaller BIOS boot partition.

-----Original Message-----
From: Marcel Schaible [mailto:marcel at schaible-consulting.de]
Sent: Tuesday, January 15, 2019 8:41 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Installation on a specific raid configuration?

Hi,

I am trying to install the 20190109 iso on a raid configuration with two NVMe disks.

The installer by default creates the following disk layout and does not obey an existing partitioning scheme:

#       Start          End    Size   Type             Name
1        2048       616447    300M   EFI System       EFI System Partition
2      616448      1640447    500M   Microsoft basic
3     1640448     42600447   19.5G   Microsoft basic
4    42600448    430573567    185G   Linux LVM

Questions:
What are partitions #2 and #3 used for?

For a fail-over scenario we would actually like to have the following layout:

#       Start          End    Size   Type             Name
1        2048       616447    300M   EFI System       EFI System Partition
2      616448       ......    950G   Raid             mirrored to second nvme1n1p2
3      .......      ......     32G   swap

Is there a way to tell the installer about this configuration?

Thanks

Marcel

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From vm.rod25 at gmail.com Tue Jan 15 15:29:45 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Tue, 15 Jan 2019 08:29:45 -0700
Subject: [Starlingx-discuss] Issues with build instructions
In-Reply-To:
References:
Message-ID:

Abraham,

I am facing the following issues:

step #4: done successfully
IMPORTANT: The following 3 files are just bootstrap versions. Based
on them, the workable images for StarlingX could be generated by
running "update-pxe-network-installer" command after "build-iso"
- out/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img
- out/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz

Warning: Not all download steps succeeded. You are likely missing files.
[root at 16d0f71fd731 localdisk]# bash download_mirror.sh

Is it OK to proceed with the next step?

(BTW I am on Ubuntu 16)

Regards

Victor Rodriguez

On Thu, Jan 10, 2019 at 11:27 AM Victor Rodriguez wrote:
> > On Thu, Jan 10, 2019 at 12:14 PM Arce Moreno, Abraham > wrote: > > > > Victor, > > > > > I am following the image build instructions from : > > > > > > https://docs.starlingx.io/developer_guide/ > > > > > > If these are not the correct instructions please let me know. I am > > > stuck at the point of: > > > > > > $ docker build --tag $USER:centos-mirror-repository --file Dockerfile . > > > > Patch is in process [0] > > Please use README.rst from stx-tool repository for now [1] > > > > [0] https://review.openstack.org/#/c/619043 > > [1] http://git.openstack.org/cgit/openstack/stx-tools/tree/README.rst > > From marcel at schaible-consulting.de Tue Jan 15 16:04:10 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Tue, 15 Jan 2019 17:04:10 +0100 (CET) Subject: [Starlingx-discuss] Installation on a specific raid configuration? In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA413C3E@ALA-MBD.corp.ad.wrs.com> References: <57371165.1423631.1547559668744@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA413C3E@ALA-MBD.corp.ad.wrs.com> Message-ID: <1276855519.1435728.1547568250762@communicator.strato.com> Thanks Don for your quick response. So I guess that this partion scheme is fixed? Do you have any suggestions/documentation how to configure a mirrored system for fail-over? Marcel > "Penney, Don" hat am 15. Januar 2019 um 16:21 geschrieben: > > > The initial primary disk filesystems are preconfigured by the kickstarts for each node. For the standard controller, for example, this is: > - 500M for a boot partition > - 20G for the rootfs (/) > - the remainder for the primary volume group (currently named cgts-vg), which is used for various system volumes. > > In addition, you've got the EFI partition that's automatically created as 300M. For legacy boot systems, there is a smaller BIOS boot partition. > > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Tuesday, January 15, 2019 8:41 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Installation on a specific raid configuration? > > Hi, > > I am trying to install the 20190109 iso on a raid configuration with two nvme disks. > > The installer by default creates the following disk layout and does not obye an existing partioning scheme: > > # Start End Size Type Name > 1 2048 616447 300M EFI System EFI System Partition > 2 616448 1640447 500M Microsoft basic > 3 1640448 42600447 19.5G Microsoft basic > 4 42600448 430573567 185G Linux LVM > > Questions: > For waht are these partition #2 and #3 are used? > > Actually we would like to have for a fail over scenario the following layout: > > # Start End Size Type Name > 1 2048 616447 300M EFI System EFI System Partition > 2 616448 ...... 950G Raid mirrored to second nvme1n1p2 > 3 ....... ...... 32G swap > > Is there a way to tell the installer this configuartion? > > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From km.giuseppesannino at gmail.com Tue Jan 15 16:11:44 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Tue, 15 Jan 2019 17:11:44 +0100 Subject: [Starlingx-discuss] Unable to create tenant network in case of a flat provider network Message-ID: Hi all, it seems like my previous mail was not sent. I apologize for spamming in case. 
As per the subject, it seems I can't create a tenant network in case the provider network has been configured as flat.

If I try to create a tenant network now, I get:

[wrsroot at controller-0 ~(keystone_admin)]$ openstack network create new_nw
Error while executing command: Unable to create the network. No tenant network is available for allocation.
(HTTP 503) (Request-ID: req-14d0ad64-cdc4-47f4-9b43-4299a3aec8d5)

In case of a "vlan" provider network I have no limitation.

Any suggestion?

Many thanks
BR
/Giuseppe
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.friesen at windriver.com Tue Jan 15 17:27:28 2019
From: chris.friesen at windriver.com (Chris Friesen)
Date: Tue, 15 Jan 2019 10:27:28 -0700
Subject: [Starlingx-discuss] how to create feature branch remotely on StarlingX code repository
In-Reply-To: <0D7994A90DD70040A9F5E77C4D23C57D50F4719D@SHSMSX103.ccr.corp.intel.com>
References: <0D7994A90DD70040A9F5E77C4D23C57D50F4719D@SHSMSX103.ccr.corp.intel.com>
Message-ID: <33544067-45d3-2da8-b3c2-c7eed45c095f@windriver.com>

I don't think normal users have the permissions to create new branches on the master git repository.

Ovidiu's suggestions might work, I don't know if anyone has actually tried them before.

Chris

On 1/15/2019 2:17 AM, Liu, Changcheng wrote:
>
> Hi all,
>
> I need create remote feature branch for cooperation to upgrade Ceph.
>
> Does anyone know how to create feature branch remotely on StarlingX?
>
> B.R.
>
> Changcheng
>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From erich.cordoba.malibran at intel.com Tue Jan 15 18:33:07 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Tue, 15 Jan 2019 18:33:07 +0000
Subject: [Starlingx-discuss] Issues with build instructions
In-Reply-To:
References:
Message-ID: <04a951b1fdc127b2d3148561b0556b69e9fceb1c.camel@intel.com>

You have warnings, so your mirror is probably incomplete.

Try to check for missing files with:

  cat logs/*_missing_*.log

-Erich

On Tue, 2019-01-15 at 08:29 -0700, Victor Rodriguez wrote:
> Abraham,
>
> I am facing the following issues:
>
> step #4: done successfully
> IMPORTANT: The following 3 files are just bootstrap versions. Based
> on them, the workable images for StarlingX could be generated by
> running "update-pxe-network-installer" command after "build-iso"
> - out/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img
> - out/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img
> - out/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz
>
> Warning: Not all download steps succeeded. You are likely missing files.
> [root at 16d0f71fd731 localdisk]# bash download_mirror.sh
>
> Is it OK to proceed with the next step?
>
> (BTW I am on Ubuntu 16)
>
> Regards
>
> Victor Rodriguez
>
> On Thu, Jan 10, 2019 at 11:27 AM Victor Rodriguez wrote:
> >
> > Thanks, I'll check it out.
> >
> > On Thu, Jan 10, 2019 at 12:14 PM Arce Moreno, Abraham
> > wrote:
> > >
> > > Victor,
> > >
> > > > I am following the image build instructions from :
> > > >
> > > > https://docs.starlingx.io/developer_guide/
> > > >
> > > > If these are not the correct instructions please let me know. I am
> > > > stuck at the point of:
> > > >
> > > > $ docker build --tag $USER:centos-mirror-repository --file Dockerfile .
> > >
> > > Patch is in process [0]
> > > Please use README.rst from stx-tool repository for now [1]
> > >
> > > [0] https://review.openstack.org/#/c/619043
> > > [1] http://git.openstack.org/cgit/openstack/stx-tools/tree/README.rst
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Jason.McKenna at windriver.com Tue Jan 15 18:56:57 2019
From: Jason.McKenna at windriver.com (McKenna, Jason)
Date: Tue, 15 Jan 2019 18:56:57 +0000
Subject: [Starlingx-discuss] [build] go packages, version 2
Message-ID:

Hi build team,

At the previous build meeting I had identified an issue with the way some go based packages were being built (they required internet access), and promised I'd update the mailing list on a potential way forward that we were prototyping.

Some preliminary points:

- go usually attempts to resolve dependencies at build time, by going out to the internet and fetching stuff (like dependency source code) using the "go get" command
- Sometimes the stuff fetched by "go get" isn't appropriate (i.e. "go get" fetches the latest version, but deprecated APIs may have been removed, etc)
- Different versions of go packages may require different versions of dependencies
- We want builds to be reproducible without unexpected code changes (i.e. we want to know what we're compiling in)
- Some people build in environments where they don't have Internet access

The initial solution (which didn't take into account the Internet access problem) was to use "dep". "dep" is an external tool which was an "official experiment" of the go project.
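For readers unfamiliar with dep: it records the pin in a Gopkg.toml manifest checked into the package, roughly like this (the package name and revision below are placeholders, not taken from Jerry's reviews):

    # Gopkg.toml (sketch)
    [[constraint]]
      name = "github.com/example/somelib"
      revision = "0123456789abcdef0123456789abcdef01234567"

dep then resolves and vendors exactly that commit instead of whatever "go get" would fetch today.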
Rather than fetch the latest dependencies from the internet (like "go get"), it allowed specific revisions of dependencies to be captured. "dep" fetched those versions from the Internet. This solved the deprecated API issue, the reproducible build issue, and the issue of not using rpms for the dependencies. However, if someone was attempting to build in an internet-less context, the system would fail. Enter this second revision. The dependency packages are now downloaded at download-mirrors.sh time as tarballs. The tarballs are produced as of a specific commit for each dependency. This allows us to hit all our bullet points - code is snapshotted, reproducible, we don't end up having to create a bunch of new rpms with dependency source code and potential version conflicts, and it requires no internet access (other than at download-mirrors.sh time) Jerry has posted a preview code review showing his work using this mechanism. I've marked the reviews as workflow -1 to give the build team a chance to see the mechanism. https://review.openstack.org/#/c/631001/ https://review.openstack.org/#/c/631002/ -Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Tue Jan 15 22:01:05 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 15 Jan 2019 17:01:05 -0500 Subject: [Starlingx-discuss] [build] go packages, version 2 In-Reply-To: References: Message-ID: <8ab6cd0f-c6f5-6ee3-bfa0-2c9f74a8b993@windriver.com> We will need a wiki update on the procedure to use.  i.e. how do I develop a list of tarballs I need to add to the .lst? On 2019-01-15 1:56 p.m., McKenna, Jason wrote: > > Hi build team, > > At the previous build meeting I had identified an issue with the way > some go based packages were being built be built (they required > internet access), and promised I’d update the mailing list on a > potential way forward that we were prototyping. > > Some preliminary points: > > -go usually attempts to resolve dependencies at build time, by going > out to the internet and fetching stuff (like dependency source code) > using the “go get” command > > -Sometimes the stuff fetched by “go get” isn’t appropriate (i.e. “go > get” fetches the latest version, but deprecated APIs may have been > removed, etc) > > -Different versions of go packages may require different versions of > dependencies > > -We want builds to be reproducible without unexpected code changes > (i.e. we want to know what we’re compiling in) > > -Some people build in environments where they don’t have Internet access > > The initial solution (which didn’t take into account the Internet > access problem) was to use “dep”.  “dep” is an external tool which was > an “official experiment” of the go project.  Rather than fetch the > latest dependencies from the internet (like “go get”), it allowed > specific revisions of dependencies to be captured.  “dep” fetched > those versions from the Internet. This solved the deprecated API > issue, the reproducible build issue, and the issue of not using rpms > for the dependencies.  However, if someone was attempting to build in > an internet-less context, the system would fail. > > Enter this second revision. > > The dependency packages are now downloaded at download-mirrors.sh time > as tarballs. The tarballs are produced as of a specific commit for > each dependency.  
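A pinned dependency tarball of the kind described here can be produced from a specific upstream commit with plain git; a sketch (the repository and commit are placeholders):

    git clone https://github.com/example/somelib somelib
    cd somelib
    git archive --format=tar.gz --prefix=somelib/ \
        -o ../somelib-0123456.tar.gz 0123456789abcdef0123456789abcdef01234567

The resulting tarball is then listed for download-mirrors.sh to fetch, so the build itself never needs to reach the Internet.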
This allows us to hit all our bullet points – code > is snapshotted, reproducible, we don’t end up having to create a bunch > of new rpms with dependency source code and potential version > conflicts, and it requires no internet access (other than at > download-mirrors.sh time) > > Jerry has posted a preview code review showing his work using this > mechanism.  I’ve marked the reviews as workflow -1 to give the build > team a chance to see the mechanism. > > https://review.openstack.org/#/c/631001/ > > https://review.openstack.org/#/c/631002/ > > -Jason > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Tue Jan 15 23:07:48 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 15 Jan 2019 16:07:48 -0700 Subject: [Starlingx-discuss] Issues with build instructions In-Reply-To: <04a951b1fdc127b2d3148561b0556b69e9fceb1c.camel@intel.com> References: <04a951b1fdc127b2d3148561b0556b69e9fceb1c.camel@intel.com> Message-ID: On Tue, Jan 15, 2019 at 11:33 AM Cordoba Malibran, Erich wrote: > > You have warning, so it is probably that your mirror is incomplete. > > Try to check for missing files with: > cat logs/*_missing_*.log > [root at 70e3766d080d localdisk]# cat logs/*_missing_*.log regards > -Erich > > On Tue, 2019-01-15 at 08:29 -0700, Victor Rodriguez wrote: > > Abarham > > > > I am facing the following issues > > > > step #4: done successfully > > IMPORTANT: The following 3 files are just bootstrap versions. Based > > on them, the workable images for StarlingX could be generated by > > running "update-pxe-network-installer" command after "build-iso" > > - out/stx-r1/CentOS/pike/Binary/LiveOS/squashfs.img > > - out/stx-r1/CentOS/pike/Binary/images/pxeboot/initrd.img > > - out/stx-r1/CentOS/pike/Binary/images/pxeboot/vmlinuz > > > > Warning: Not all download steps succeeded. You are likely missing > > files. > > [root at 16d0f71fd731 localdisk]# bash download_mirror.sh > > > > Is it ok to follow with the next step? > > > > ( BTW I am Ubuntu 16 ) > > > > Regards > > > > Victor Rodriguez > > > > On Thu, Jan 10, 2019 at 11:27 AM Victor Rodriguez > > wrote: > > > > > > Thanks, I'll check it out. > > > > > > On Thu, Jan 10, 2019 at 12:14 PM Arce Moreno, Abraham > > > wrote: > > > > > > > > Victor, > > > > > > > > > I am following the image build instructions from : > > > > > > > > > > https://docs.starlingx.io/developer_guide/ > > > > > > > > > > If these are not the correct instructions please let me know. I > > > > > am > > > > > stuck at the point of: > > > > > > > > > > $ docker build --tag $USER:centos-mirror-repository --file > > > > > Dockerfile . 
> > > > > > > > Patch is in process [0] > > > > Please use README.rst from stx-tool repository for now [1] > > > > > > > > [0] https://review.openstack.org/#/c/619043 > > > > [1] http://git.openstack.org/cgit/openstack/stx-tools/tree/README > > > > .rst > > > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From juan.carlos.alonso at intel.com Tue Jan 15 23:20:01 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Tue, 15 Jan 2019 23:20:01 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20190115 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8AAEC@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jan-15 (link) Sanity Test is executed in a Virtual Environment Status: GREEN Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 25 TCs [PASS] TOTAL: [ 30 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 26 TCs [PASS] TOTAL: [ 31 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 26 TCs [PASS] TOTAL: [ 31 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 26 TCs [PASS] TOTAL: [ 31 TCs PASS ] ------------------------------------------------------------------ Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Tue Jan 15 23:59:30 2019 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 15 Jan 2019 15:59:30 -0800 Subject: [Starlingx-discuss] [TSC] Updated Multi-OS meta-specification available for review Message-ID: TSC Members: We have updated the Multi-OS overview to be more of a meta-specification giving the overview and direction for our approach to Multi-OS. Please take a look at: https://review.openstack.org/#/c/619801/13 Victor and I will be working on getting the "Source ReOrg" specification completed tonight ahead of the F2F session tomorrow. Sau! From erich.cm.lists at yandex.com Wed Jan 16 00:42:11 2019 From: erich.cm.lists at yandex.com (Erich Cordoba) Date: Tue, 15 Jan 2019 16:42:11 -0800 Subject: [Starlingx-discuss] [TSC] Updated Multi-OS meta-specification available for review In-Reply-To: References: Message-ID: <13370101547599331@sas1-2b3c3045b736.qloud-c.yandex.net> An HTML attachment was scrubbed... URL: From changcheng.liu at intel.com Wed Jan 16 00:52:15 2019 From: changcheng.liu at intel.com (Liu, Changcheng) Date: Wed, 16 Jan 2019 00:52:15 +0000 Subject: [Starlingx-discuss] how to create feature branch remotely on StarlingX code repository In-Reply-To: <33544067-45d3-2da8-b3c2-c7eed45c095f@windriver.com> References: <0D7994A90DD70040A9F5E77C4D23C57D50F4719D@SHSMSX103.ccr.corp.intel.com> <33544067-45d3-2da8-b3c2-c7eed45c095f@windriver.com> Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F4773A@SHSMSX103.ccr.corp.intel.com> Thanks Chris. I'll try to do it when I'm available. From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: Wednesday, January 16, 2019 1:27 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] how to create feature branch remotely on StarlingX code repository I don't think normal users have the permissions to create new branches on the master git repository. Ovidiu's suggestions might work, I don't know if anyone has actually tried them before. 
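For reference, the two workarounds Ovidiu suggested earlier in this thread map to ordinary pushes; a sketch, assuming a configured Gerrit remote (the remote and topic names here are made up):

    # a draft change, on the old Gerrit drafts workflow:
    git push gerrit HEAD:refs/drafts/master
    # or a chain of dependent commits sharing a topic:
    git review -t ceph-upgrade

Neither of these creates a real server-side branch, which is why the elevated permissions mentioned above would still be needed for a true feature branch.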
Chris

On 1/15/2019 2:17 AM, Liu, Changcheng wrote:

Hi all,

I need create remote feature branch for cooperation to upgrade Ceph.

Does anyone know how to create feature branch remotely on StarlingX?

B.R.

Changcheng

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vm.rod25 at gmail.com Wed Jan 16 01:53:38 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Tue, 15 Jan 2019 18:53:38 -0700
Subject: [Starlingx-discuss] [TSC] Updated Multi-OS meta-specification available for review
In-Reply-To: <13370101547599331@sas1-2b3c3045b736.qloud-c.yandex.net>
References: <13370101547599331@sas1-2b3c3045b736.qloud-c.yandex.net>
Message-ID:

On Tue, Jan 15, 2019, 17:43 Erich Cordoba wrote:

> I took a really quick read in the spec and I think that the build system
> approach is well covered but I'm wondering about software changes for multi
> OS support.
>
> For example, does patching need changes for rpm and non-rpm based distros
> (I think it should)? Also, do puppet recipes run well on different OSes?
>
> Should these other aspects (if they apply) be covered in this spec or do
> they need a different one?

This is a spec for build changes; the full specs for these parts will come in upcoming specifications.

Regards

> Thanks
>
> -Erich
>
> --
> Sent from Yandex.Mail for mobile
>
> 15.01.2019, 18:00, "Saul Wold":
>
> TSC Members:
>
> We have updated the Multi-OS overview to be more of a meta-specification
> giving the overview and direction for our approach to Multi-OS.
>
> Please take a look at: https://review.openstack.org/#/c/619801/13
>
> Victor and I will be working on getting the "Source ReOrg" specification
> completed tonight ahead of the F2F session tomorrow.
>
> Sau!
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From vm.rod25 at gmail.com Wed Jan 16 15:48:40 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Wed, 16 Jan 2019 08:48:40 -0700
Subject: [Starlingx-discuss] [build] go packages, version 2
In-Reply-To:
References:
Message-ID:

Hi Jason

On Tue, Jan 15, 2019 at 11:57 AM McKenna, Jason wrote:
> > Hi build team,
> >
> > At the previous build meeting I had identified an issue with the way some go based packages were being built (they required internet access), and promised I'd update the mailing list on a potential way forward that we were prototyping.
> >
> > Some preliminary points:
> >
> > - go usually attempts to resolve dependencies at build time, by going out to the internet and fetching stuff (like dependency source code) using the "go get" command
> >
> > - Sometimes the stuff fetched by "go get" isn't appropriate (i.e. "go get" fetches the latest version, but deprecated APIs may have been removed, etc)
> >
> > - Different versions of go packages may require different versions of dependencies
> >
> > - We want builds to be reproducible without unexpected code changes (i.e.
we want to know what we’re compiling in) > > - Some people build in environments where they don’t have Internet access > > > > The initial solution (which didn’t take into account the Internet access problem) was to use “dep”. “dep” is an external tool which was an “official experiment” of the go project. Rather than fetch the latest dependencies from the internet (like “go get”), it allowed specific revisions of dependencies to be captured. “dep” fetched those versions from the Internet. This solved the deprecated API issue, the reproducible build issue, and the issue of not using rpms for the dependencies. However, if someone was attempting to build in an internet-less context, the system would fail. > > > > Enter this second revision. > > > > The dependency packages are now downloaded at download-mirrors.sh time as tarballs. The tarballs are produced as of a specific commit for each dependency. I agree taht this is a good solution , in terms of make it work , my concern is that we are mantaining a lot of tar balls , and that scares me a bit This allows us to hit all our bullet points – code is snapshotted, reproducible, we don’t end up having to create a bunch of new rpms with dependency source code and potential version conflicts, and it requires no internet access (other than at download-mirrors.sh time) If it works for you is fine , my concern will be to mantain changes of broken links in the future ( that sure go dep will fix ) +1 from my part as long as it does not geneerate a lot of mantainance does not create a problem for ourselvs How hard to aloud internet during build ? Regards > > > > Jerry has posted a preview code review showing his work using this mechanism. I’ve marked the reviews as workflow -1 to give the build team a chance to see the mechanism. > > https://review.openstack.org/#/c/631001/ > > https://review.openstack.org/#/c/631002/ > > > > -Jason > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From vm.rod25 at gmail.com Wed Jan 16 17:46:35 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Wed, 16 Jan 2019 10:46:35 -0700 Subject: [Starlingx-discuss] [TSC] Updated Multi-OS meta-specification available for review In-Reply-To: References: Message-ID: On Tue, Jan 15, 2019 at 4:59 PM Saul Wold wrote: > > TSC Members: > > We have updated the Multi-OS overview to be more of a meta-specification > giving the overview and direction for our approach to Multi-OS. > > Please take a look at: https://review.openstack.org/#/c/619801/13 > > Victor and I will be working on getting the "Source ReOrg" specification > completed tonight ahead of the F2F session tomorrow. > Hi team, this is the specification of the flock's code reorg: https://review.openstack.org/#/c/631288/ Thanks a lot for your feedback > Sau! 
> > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Wed Jan 16 20:13:49 2019 From: sgw at linux.intel.com (Saul Wold) Date: Wed, 16 Jan 2019 12:13:49 -0800 Subject: [Starlingx-discuss] discuss about initial value of TIS_PATCH_VER when upgrade packages In-Reply-To: <266d1987-b300-97a3-e84e-147ff16058c8@windriver.com> References: <9BAB5B7CAF57C3459E4636391F1071CE05290C42@shsmsx102.ccr.corp.intel.com> <266d1987-b300-97a3-e84e-147ff16058c8@windriver.com> Message-ID: <64bb4214-baad-865e-fc53-7c49f51fa4b8@linux.intel.com> We had a quick discussion here at the F2F with some WRS folks on the phone. We came to an agreement that the value of TIS_PATCH_VER would be reset to 1 when a rebase occurs and incremented by 1 each additional change that is made to the specfile. As part of the future "re-branding" plan we will rename TIS_PATCH_VER to STX_PATCH_VER along with the %_tis_dist/%_tis_patch_ver variables. For the current rebase in flight, what's merged is merged and what's pending should be updated to match this new understanding. Thanks for your support! Sau! On 1/7/19 7:10 AM, Scott Little wrote: > 100% agree. > > Scott > > On 2019-01-04 9:46 a.m., Chris Friesen wrote: >> When we customize an upstream package for the first time, >> TIS_PATCH_VER gets set to 1, then generally gets incremented on each >> subsequent change.  Thus, prior to package upgrade TIS_PATCH_VER >> reflects the number of changes that were made to the upstream >> package.  This can be used to tell at a glance how customized a given >> package is. >> >> When upgrading, it's possible that some customizations are no longer >> applicable, while others are.  Thus, I think options "a" and "e" don't >> make sense as they remove the "how customized is this package" meaning. >> >> Of the options below, I think option "c" is probably the best since >> for an upgrade we might create a single meta-patch to add all the >> source patches. >> >> I think the most accurate value would probably be "number of source >> patches" plus "number of meta patches that don't add/remove source >> patches".  But we probably don't really need that level of accuracy. >> >> Chris >> >> On 1/4/2019 2:28 AM, An, Ran1 wrote: >>> Hi all >>>    I'm sending this to discuss about the rule of initial value of >>> TIS_PATCH_VER when srpm package is upgraded. >>> "TIS_PATCH_VER" is a counter to indicate change within a major >>> version of the package, on which we put patches. >>>    When I upgraded srpms(related to CentOS) from CentOS 7.5 to 7.6, >>> there are different voices about the initial value of >>> TIS_PATCH_VER(comments on [1][2][3][4]): >>>      a). reset it to 0 >>>      b). reset to the number of STX patches remaining (source patches >>> and meta_patches together) >>>      c). reset to the number of STX patches remaining (source patches >>> only) >>>      d). reset to the number of STX patches remaining (meta patches >>> only) >>>      e). case by case, better do not reset. >>> >>> It is not a technical issue, but we will face it each time we upgrade >>> packages, so which would you like to choose? 
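To make the agreed rule concrete: the counter lives in each package's centos/build_srpm.data file, so a freshly rebased package would carry something like this (an illustrative sketch, not any real package's file):

    # centos/build_srpm.data (other fields omitted)
    TIS_PATCH_VER=1   # reset to 1 at rebase; bump by 1 for each later change to the package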
>>> >>> [1] https://review.openstack.org/#/c/627760/ >>> [2] https://review.openstack.org/#/c/627750/ >>> [3] https://review.openstack.org/#/c/627156/ >>> [4] https://review.openstack.org/#/c/627770/ >>> >>> Thanks >>> Ran >> >> > From juan.carlos.alonso at intel.com Wed Jan 16 22:59:24 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Wed, 16 Jan 2019 22:59:24 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20190116 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8ACBA@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jan-16 (link) Sanity Test is executed in a Virtual Environment Status: GREEN Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 25 TCs [PASS] TOTAL: [ 30 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 26 TCs [PASS] TOTAL: [ 31 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 26 TCs [PASS] TOTAL: [ 31 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 26 TCs [PASS] TOTAL: [ 31 TCs PASS ] ------------------------------------------------------------------ Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From quickconvey at gmail.com Thu Jan 17 07:54:46 2019 From: quickconvey at gmail.com (Quick Convey) Date: Thu, 17 Jan 2019 13:24:46 +0530 Subject: [Starlingx-discuss] Starlingx network requirement In-Reply-To: <9F21D86C-B952-4D15-B25C-50CD2484FB23@windriver.com> References: <9F21D86C-B952-4D15-B25C-50CD2484FB23@windriver.com> Message-ID: Thanks Matt peters, Compute nodes communicate via this Data interface, right ? *VMs* in different compute nodes also use this data interface for communication, right ? (communication between VM in CP1 -to- VM in CP2) *OVS* also use this data interface to make tunnel between the compute nodes, right ? You have mentioned that application requirements decide the required number of ports and network topology. Applications will be running in the VM and doesn't aware about the physical topology, right ? Could you please explain it. Thanks, On Mon, Jan 14, 2019 at 10:50 PM Peters, Matt wrote: > See inline. > > > > *From: *Quick Convey > *Date: *Monday, January 14, 2019 at 3:56 AM > *To: *"starlingx-discuss at lists.starlingx.io" < > starlingx-discuss at lists.starlingx.io> > *Subject: *[Starlingx-discuss] Starlingx network requirement > > > > Dear All, > > > > I am planing to setup Starlingx in bare-metal (controller-storage > deployment) > > > https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage > > > > I have couple of questions > > > > *Q1)* What is the network requirements for this setup. All nodes should > be in same network, that is the only requirement, right ? > > The Management network is on all hosts and also serves as the PXEBoot > network for booting other hosts from the Controller hosts. > > The OAM network is required for controller hosts only. > > The Data network is required for compute hosts only. > > > > *Q2)* In "Hardware Requirements" section, I seen *"Data: n x 10GE > Compute"*, what that means, is it number of physical interfaces needed > for data ? what that* "n" *indicate ? is it number of compute nodes ? > > The ‘n’ indicates you can have more than 1 port if required for your > application deployment. 
The data networks are not used by the platform, so > it is up to the application requirements to decide the required number of > ports and network topology. > > > > *Q3)* What is the number of physical interfaces needed in controller and > compute bare-metal nodes? From the document I understand that only 2 > physical interfaces are enough, right ? > > Controller: 1 Mgmt, 1 OAM > > Compute: 1 Mgmt, N Data (where N>=1) > > > > *Q4)* Is there any picture which shows *Management*, *OAM* and *Data* > interface connections between controller and compute nodes ? > > I don't think there is a StarlingX document that shows the interconnection. > > > > *Thanks,* > > > > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From himanshugoyal500 at gmail.com Thu Jan 17 13:21:36 2019 From: himanshugoyal500 at gmail.com (Himanshu Goyal) Date: Thu, 17 Jan 2019 18:51:36 +0530 Subject: [Starlingx-discuss] Deployment Option In-Reply-To: References: Message-ID: Hi, I'm trying to install StarlingX Controller 0 on a physical machine, but it is failing in config_controller at the task *waiting for service activation* with the error: "*Configuration failed: Timeout waiting for service enable*" I'm using the ISO available at the path: http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/bootimage.iso Please suggest the procedure to debug this & how I can re-run config_controller again. Many Thanks, Himanshu Goyal On Tue, Jan 1, 2019 at 3:34 PM Himanshu Goyal wrote: > Hi, > > Can we deploy starlingX with 2 Machines 1 controller & 1 Compute Node > (Both nodes on different physical Machines). > > Many Thanks, > Himanshu Goyal > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcel at schaible-consulting.de Thu Jan 17 13:52:32 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Thu, 17 Jan 2019 14:52:32 +0100 Subject: [Starlingx-discuss] Suggestions for a All-In-One Setup on a RAID In-Reply-To: <1276855519.1435728.1547568250762@communicator.strato.com> References: <57371165.1423631.1547559668744@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA413C3E@ALA-MBD.corp.ad.wrs.com> <1276855519.1435728.1547568250762@communicator.strato.com> Message-ID: Hi, I want to set up an all-in-one StarlingX configuration on an NVMe disk, which should be mirrored to a second NVMe disk. I have learned that the various StarlingX installation modes each come with a predefined partition scheme. Any suggestions on how to adapt these in my case? Thanks Marcel From Matt.Peters at windriver.com Thu Jan 17 14:35:12 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Thu, 17 Jan 2019 14:35:12 +0000 Subject: [Starlingx-discuss] Starlingx network requirement In-Reply-To: References: <9F21D86C-B952-4D15-B25C-50CD2484FB23@windriver.com> Message-ID: <1214156E-FB27-488C-8829-827D90D90126@windriver.com> The data network provides the physical infrastructure for the OpenStack guest tenant networks, which is used for both inter-compute and external network access from Virtual Machines (VMs). The underlay configuration (VxLAN attributes, VLAN ranges, etc) is managed by the cloud administrator and is made available to the OpenStack tenants (applications). The application requirements drive the topology of this network since the cloud operator must be able to support whatever application is being deployed within the VMs.
The virtual switch, OVS in the case of StarlingX, implements the OpenStack tenant networks and acts as the bridge between the physical infrastructure and the virtual networks. For additional background information on OpenStack networking, please refer to the following: https://docs.openstack.org/neutron/rocky/admin/intro.html Hope that helps. Regards Matt From: Quick Convey Date: Thursday, January 17, 2019 at 2:55 AM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] Starlingx network requirement Thanks Matt peters, Compute nodes communicate via this Data interface, right ? VMs in different compute nodes also use this data interface for communication, right ? (communication between VM in CP1 -to- VM in CP2) OVS also use this data interface to make tunnel between the compute nodes, right ? You have mentioned that application requirements decide the required number of ports and network topology. Applications will be running in the VM and doesn't aware about the physical topology, right ? Could you please explain it. Thanks, On Mon, Jan 14, 2019 at 10:50 PM Peters, Matt > wrote: See inline. From: Quick Convey > Date: Monday, January 14, 2019 at 3:56 AM To: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Starlingx network requirement Dear All, I am planing to setup Starlingx in bare-metal (controller-storage deployment) https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage I have couple of questions Q1) What is the network requirements for this setup. All nodes should be in same network, that is the only requirement, right ? The Management network is on all hosts and also serves as the PXEBoot network for booting other hosts from the Controller hosts. The OAM network is required for controller hosts only. The Data network is required for compute hosts only. Q2) In "Hardware Requirements" section, I seen "Data: n x 10GE Compute", what that means, is it number of physical interfaces needed for data ? what that "n" indicate ? is it number of compute nodes ? The ‘n’ indicates you can have more than 1 port if required for your application deployment. The data networks are not used by the platform, so it is up to the application requirements to decide the required number of ports and network topology. Q3) What is the number of physical interfaces needed in controller and compute bare-metal nodes ?. From the document I understand that only 2 physical interfaces are enough, right ? Controller: 1 Mgmt, 1 OAM Compute: 1 Mgmt, N Data (where N>=1) Q4) Is there any picture which shows Management, OAM and Data interface connections between controller and compute nodes ? I don’t think there is a StarlingX document that shows the interconnection. Thanks, -------------- next part -------------- An HTML attachment was scrubbed... URL: From Volker.Hoesslin at swsn.de Thu Jan 17 14:50:39 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Thu, 17 Jan 2019 14:50:39 +0000 Subject: [Starlingx-discuss] cpu mode Message-ID: hi, my setup has two computes nodes, every node has a dual AMD EPYC 7601 CPU config. how can i bring all the CPU features (AES, SSSE3, ...) to the guest VMs. i have tryed with some flavor-metadata but nothing realy helps, the VMs getting just a little subset of cpu-features. some investigations to the kvm-settings hit me to the facts that my nova config has "cpu_model=none" !? how can i fix that and bring my AMD EPIC CPU to my nova-config?! 
here is the host /proc/cpuinfo processor : 127 vendor_id : AuthenticAMD cpu family : 23 model : 1 model name : AMD EPYC 7601 32-Core Processor stepping : 2 microcode : 0x8001227 cpu MHz : 1200.000 cache size : 512 KB physical id : 1 siblings : 64 core id : 31 cpu cores : 32 apicid : 127 initial apicid : 127 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid amd_dcm aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb hw_pstate retpoline_amd ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca bogomips : 4400.08 TLB size : 2560 4K pages clflush size : 64 cache_alignment : 64 address sizes : 48 bits physical, 48 bits virtual power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14] greez & thx, volker... -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Thu Jan 17 15:11:15 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Thu, 17 Jan 2019 15:11:15 +0000 Subject: [Starlingx-discuss] Deployment Option In-Reply-To: References: Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> Hi, On what step of config_controller it is failing? Can you provide the logs? Are you deploying manually or automatic? To apply the config_controller again I think you need to start over the installation process. Regards. Juan Carlos Alonso From: Himanshu Goyal [mailto:himanshugoyal500 at gmail.com] Sent: Thursday, January 17, 2019 7:22 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Deployment Option Hi, I'm trying to install StarlingX Controller 0 on Physical Machine, But it is failing in config_controller at task waiting for service activation...... with Error: "Configuration failed: Timeout waiting for service enable" I'm using ISO available at the path: http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/bootimage.iso Please suggest us the procedure to debug that & how i can re-run the config_controller again. Many Thanks, Himanshu Goyal On Tue, Jan 1, 2019 at 3:34 PM Himanshu Goyal > wrote: Hi, Can we deploy starlingX with 2 Machines 1 controller & 1 Compute Node (Both nodes on different physical Machines). Many Thanks, Himanshu Goyal -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jason.McKenna at windriver.com Thu Jan 17 15:13:29 2019 From: Jason.McKenna at windriver.com (McKenna, Jason) Date: Thu, 17 Jan 2019 15:13:29 +0000 Subject: [Starlingx-discuss] [build] go packages, version 2 In-Reply-To: References: Message-ID: Hi Victor, great feedback. Inline. 
> -----Original Message----- > From: Victor Rodriguez > Sent: January 16, 2019 10:49 AM > To: McKenna, Jason > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [build] go packages, version 2 > > Hi Jason > > > On Tue, Jan 15, 2019 at 11:57 AM McKenna, Jason > wrote: > > > > Hi build team, > > > > > > > > At the previous build meeting I had identified an issue with the way some > go based packages were being built be built (they required internet access), > and promised I’d update the mailing list on a potential way forward that we > were prototyping. > > > > > > > > Some preliminary points: > > > > - go usually attempts to resolve dependencies at build time, by going > out to the internet and fetching stuff (like dependency source code) using > the “go get” command > > > > - Sometimes the stuff fetched by “go get” isn’t appropriate (i.e. “go > get” fetches the latest version, but deprecated APIs may have been > removed, etc) > > > > - Different versions of go packages may require different versions of > dependencies > > > > - We want builds to be reproducible without unexpected code changes > (i.e. we want to know what we’re compiling in) > > > > - Some people build in environments where they don’t have Internet > access > > > > > > > > The initial solution (which didn’t take into account the Internet access > problem) was to use “dep”. “dep” is an external tool which was an “official > experiment” of the go project. Rather than fetch the latest dependencies > from the internet (like “go get”), it allowed specific revisions of > dependencies to be captured. “dep” fetched those versions from the > Internet. This solved the deprecated API issue, the reproducible build issue, > and the issue of not using rpms for the dependencies. However, if someone > was attempting to build in an internet-less context, the system would fail. > > > > > > > > Enter this second revision. > > > > > > > > The dependency packages are now downloaded at download-mirrors.sh > time as tarballs. The tarballs are produced as of a specific commit for each > dependency. > > I agree taht this is a good solution , in terms of make it work , my concern is > that we are mantaining a lot of tar balls , and that scares me a bit Agreed. This actually is an opportunity to leverage the work that Marcela is doing where she is refactoring the download tarballs from a single .lst file into a more manageable form. If we can separate the tarballs/rpms/srpms/etc from a single file to a per-repo file (or a per-package file for build-time artifacts...) then maintaining the tarball downloads becomes a lot cleaner. > > This allows us to hit all our bullet points – code is snapshotted, reproducible, > we don’t end up having to create a bunch of new rpms with dependency > source code and potential version conflicts, and it requires no internet access > (other than at download-mirrors.sh time) > > If it works for you is fine , my concern will be to mantain changes of broken > links in the future ( that sure go dep will fix ) > > +1 from my part as long as it does not geneerate a lot of mantainance > does not create a problem for ourselvs > > How hard to aloud internet during build ? Allowing Internet during build is easy if you have Internet access (edit the .cfg for your mock environment), but obviously troublesome if you're behind a firewall which blocks access. 
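For anyone who does have connectivity and wants to experiment, a minimal sketch of what that .cfg edit could look like, assuming the stock mock configuration options (an illustration, not a tested StarlingX build setup):

# in the mock chroot .cfg consumed by the build scripts
config_opts['rpmbuild_networking'] = True   # allow network access inside the build chroot
config_opts['use_host_resolv'] = True       # reuse the host's DNS resolver in the chroot

Note that this trades away exactly the clean-room and reproducibility guarantees discussed above, which is why the pinned-tarball approach remains the safer default.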
While this doesn't affect me personally, I am under the impression that you folks have several sites which are configured this way (I could be wrong, but I seem to recall early in the project a few packages breaking because reworked packages assumed Internet). Even if I'm mistaken about that point, I do know of several organizations which require clean-room builds for security and reproducibility reasons. Better to not limit potential adopters if we can help it :) > > Regards > > > > > > > > > Jerry has posted a preview code review showing his work using this > mechanism. I've marked the reviews as workflow -1 to give the build team a > chance to see the mechanism. > > > > https://review.openstack.org/#/c/631001/ > > > > https://review.openstack.org/#/c/631002/ > > > > > > > > -Jason > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Volker.Hoesslin at swsn.de Thu Jan 17 16:23:26 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Thu, 17 Jan 2019 16:23:26 +0000 Subject: [Starlingx-discuss] cpu mode In-Reply-To: <3k03ta01c8bua1mm@shdsegapp2> References: <3k03ta01c8bua1mm@shdsegapp2> Message-ID: Is it impossible to set EPYC (or any other AMD model) as guest CPU? $ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b-f0c7-48ca-8c8a-f78481e62cd4 Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake-Client, Skylake-Server. (HTTP 400) (Request-ID: req-2fda19cc-8e0e-4be8-a8ea-b58fc00358ce) Command Failed: One or more of the operations failed But my compute node seems to support EPYC CPUs: cat /usr/share/libvirt/cpu_map/x86_EPYC-IBRS.xml .... Any tips for me on how to handle this? volker... ________________________________ From: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Sent: Thursday, 17 January 2019 15:50 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] cpu mode Hi, my setup has two compute nodes; every node has a dual AMD EPYC 7601 CPU config. How can I bring all the CPU features (AES, SSSE3, ...) to the guest VMs? I have tried with some flavor-metadata but nothing really helps; the VMs get just a little subset of cpu-features. Some investigation of the kvm settings pointed me to the fact that my nova config has "cpu_model=none"!? How can I fix that and bring my AMD EPYC CPU to my nova-config?!
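One avenue suggested by the error message itself: "Passthrough" is in the accepted list, and assuming the StarlingX hw:cpu_model extra spec maps it to libvirt's host-passthrough mode (an assumption, not confirmed in this thread), it would expose the host EPYC feature flags such as aes and ssse3 to the guests. A sketch reusing the flavor UUID from this thread:

$ openstack flavor set --property hw:cpu_model=Passthrough 76609f7b-f0c7-48ca-8c8a-f78481e62cd4

The named-model list in the error is Intel-only, which would explain why EPYC-IBPB is rejected even though the libvirt cpu_map on the compute node knows about EPYC.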
here is the host /proc/cpuinfo processor : 127 vendor_id : AuthenticAMD cpu family : 23 model : 1 model name : AMD EPYC 7601 32-Core Processor stepping : 2 microcode : 0x8001227 cpu MHz : 1200.000 cache size : 512 KB physical id : 1 siblings : 64 core id : 31 cpu cores : 32 apicid : 127 initial apicid : 127 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid amd_dcm aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb hw_pstate retpoline_amd ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca bogomips : 4400.08 TLB size : 2560 4K pages clflush size : 64 cache_alignment : 64 address sizes : 48 bits physical, 48 bits virtual power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14] greez & thx, volker... -------------- next part -------------- An HTML attachment was scrubbed... URL: From mario.alfredo.c.arevalo at intel.com Thu Jan 17 16:40:03 2019 From: mario.alfredo.c.arevalo at intel.com (Arevalo, Mario Alfredo C) Date: Thu, 17 Jan 2019 16:40:03 +0000 Subject: [Starlingx-discuss] cpu mode In-Reply-To: References: <3k03ta01c8bua1mm@shdsegapp2>, Message-ID: <6594B51DBE477C48AAE23675314E6C4664560E40@fmsmsx107.amr.corp.intel.com> Hi Volker, Could you please send me the QEMU command line used to launch your VM, this is in order to check the QEMU arguments/flags, possibly it requires "-cpu host" argument. Thanks. Best regards. Mario. From: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Sent: Thursday, January 17, 2019 8:23 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] cpu mode it is impossible to set a EPIC (or any other AMD) as guest CPU? $ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b-f0c7-48ca-8c8a-f78481e62cd4 Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake-Client, Skylake-Server. (HTTP 400) (Request-ID: req-2fda19cc-8e0e-4be8-a8ea-b58fc00358ce) Command Failed: One or more of the operations failed but my compute node seems to support EPIC CPUs? cat /usr/share/libvirt/cpu_map/x86_EPYC-IBRS.xml .... some tips for me how to handle this? volker... Von: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Gesendet: Donnerstag, 17. Januar 2019 15:50 An: starlingx-discuss at lists.starlingx.io Betreff: [Starlingx-discuss] cpu mode hi, my setup has two computes nodes, every node has a dual AMD EPYC 7601 CPU config. how can i bring all the CPU features (AES, SSSE3, ...) to the guest VMs. i have tryed with some flavor-metadata but nothing realy helps, the VMs getting just a little subset of cpu-features. some investigations to the kvm-settings hit me to the facts that my nova config has "cpu_model=none" !? how can i fix that and bring my AMD EPIC CPU to my nova-config?! 
here is the host /proc/cpuinfo processor : 127 vendor_id : AuthenticAMD cpu family : 23 model : 1 model name : AMD EPYC 7601 32-Core Processor stepping : 2 microcode : 0x8001227 cpu MHz : 1200.000 cache size : 512 KB physical id : 1 siblings : 64 core id : 31 cpu cores : 32 apicid : 127 initial apicid : 127 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid amd_dcm aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb hw_pstate retpoline_amd ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca bogomips : 4400.08 TLB size : 2560 4K pages clflush size : 64 cache_alignment : 64 address sizes : 48 bits physical, 48 bits virtual power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14] greez & thx, volker... From bharath_ves at hotmail.com Thu Jan 17 17:28:38 2019 From: bharath_ves at hotmail.com (bharath thiruveedula) Date: Thu, 17 Jan 2019 17:28:38 +0000 Subject: [Starlingx-discuss] How to get access to Starlingx Message-ID: Hi, I am trying to install Starlingx, but due to resource constraints. couldn't achieve it. Can we install Starlingx on a 16GB machine? If not, is there any way to explore the features of starlingx like accessing public installation of Starlingx? Best Regards Bharath T -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Thu Jan 17 17:30:31 2019 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Thu, 17 Jan 2019 17:30:31 +0000 Subject: [Starlingx-discuss] Suggestions for a All-In-One Setup on a RAID In-Reply-To: References: <57371165.1423631.1547559668744@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA413C3E@ALA-MBD.corp.ad.wrs.com> <1276855519.1435728.1547568250762@communicator.strato.com> Message-ID: Marcel, > I wnat to setup an all-in-one starlingx configuration on a nmve disk, > which should be mirrored to a second nvme disk. > > I have learned, that the various StarlingX installation modes come each > with a predefined partition scheme. I found the following changes [0] and file [1] (see function create_controller_filesystems) to help how StarlingX partitions are defined. > Any suggestions how to adapt these in my case? Does your use case demands to have this mirrored system for fail-over strictly in a Simplex configuration? Duplex allows you to replicate your controller-0 with controller-1 via drdb, is there anything I am missing for your use case to avoid a Duplex configuration? 
[0] https://review.openstack.org/#/q/topic:bug/1791170+(status:open+OR+status:merged) [1] http://git.openstack.org/cgit/openstack/stx-config/tree/sysinv/sysinv/sysinv/sysinv/conductor/manager.py From hayde.martinez.landa at intel.com Thu Jan 17 17:52:14 2019 From: hayde.martinez.landa at intel.com (Martinez Landa, Hayde) Date: Thu, 17 Jan 2019 17:52:14 +0000 Subject: [Starlingx-discuss] How to get access to Starlingx Message-ID: <489F0921-AF2A-4D5E-91CE-A35E16D80834@intel.com> Bharath, >Hi, >I am trying to install Starlingx, but due to resource constraints. couldn't achieve it. Can we install Starlingx on a 16GB machine? Are you trying to install on Virtual or Bare Metal? If you are trying on bare metal, please be aware that you need at least 2 network interfaces, connected to a switch. The storage that the documentation suggests is just recommended for performance, but it will depend on the workloads of your project. >If not, is there any way to explore the features of starlingx like accessing public installation of Starlingx? We don't have something like this at the moment, but it is a great suggestion. I encourage you to test with your resources and please inform us of any interesting findings. >Best Regards >Bharath T From vm.rod25 at gmail.com Thu Jan 17 18:03:05 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Thu, 17 Jan 2019 12:03:05 -0600 Subject: [Starlingx-discuss] How to get access to Starlingx In-Reply-To: <489F0921-AF2A-4D5E-91CE-A35E16D80834@intel.com> References: <489F0921-AF2A-4D5E-91CE-A35E16D80834@intel.com> Message-ID: On Thu, Jan 17, 2019 at 11:52 AM Martinez Landa, Hayde wrote: > > Bharath, > > >Hi, > > >I am trying to install Starlingx, but due to resource constraints. couldn't achieve it. Can we install Starlingx on a 16GB machine? > Are you trying to install on Virtual or Bare Metal? > If you are trying on bare metal, please be aware that you need at least 2 network interfaces, connected to a switch. > The storage that the documentation suggests is just recommended for performance, but it will depend on the workloads of your project. > > >If not, is there any way to explore the features of starlingx like accessing public installation of Starlingx? > We don't have something like this at the moment, but it is a great suggestion. > Can we add this to the public documentation? Maybe the FAQ section could have some of these points. Thanks Hayde > I encourage you to test with your resources and please inform us of any interesting findings. > > >Best Regards > >Bharath T > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Tue Jan 8 00:28:44 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 8 Jan 2019 00:28:44 +0000 Subject: [Starlingx-discuss] discuss about initial value of TIS_PATCH_VER when upgrade packages In-Reply-To: References: <9BAB5B7CAF57C3459E4636391F1071CE05290C42@shsmsx102.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA40F62C@ALA-MBD.corp.ad.wrs.com> <8e27b76c-ce2a-039c-f51c-b00087dad813@linux.intel.com> <36d5be50-348e-a5a5-6d63-c4be2543f4f4@windriver.com> Message-ID: <9A85D2917C58154C960D95352B22818BB28C7820@fmsmsx121.amr.corp.intel.com> Saul wrote: > Do we need a proper Specification for the meaning of the package information, this is where we can change the tis/TIS to stx/STX! +1! brucej
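For reference, the versioning scheme agreed on above surfaces in each package's build metadata and spec file. A minimal sketch with a hypothetical package, assuming the usual StarlingX pattern of a TIS_PATCH_VER value in centos/build_srpm.data feeding a %{tis_patch_ver} macro in the Release tag:

# centos/build_srpm.data -- reset to 1 on a rebase, +1 for each later change
TIS_PATCH_VER=1

# centos/<package>.spec -- upstream Release decorated with the patch counter
Release: 2%{?_tis_dist}.%{tis_patch_ver}

so a freshly rebased package would build as e.g. 2.tis.1 and move to 2.tis.2 with the next local change.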
-----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Monday, January 7, 2019 3:48 PM To: Scott Little ; Penney, Don ; Friesen, Chris ; An, Ran1 ; Lin, Shuicheng ; Church, Robert ; Bailey, Henry Albert (Al) Cc: starlingx-discuss at lists.starlingx.io; Chen, Haochuan Z Subject: Re: [Starlingx-discuss] discuss about initial value of TIS_PATCH_VER when upgrade packages On 1/7/19 7:28 AM, Scott Little wrote: > I disagree. Our experience in the past, is that putting a tis.0 on a > package raises questions from both customers and designers. Why are > you compiling this at all if you aren't changing it? I would have > thought that the tis. extension would be enough to indicate this package had patches. I also think we should really be switching to stx.0, but that's a different discussion I would guess. > A little digging, and some wasted cycles, and the answer is. "Oh, we > are changing it. we still have 3 patches against it. sorry for the > confusion." > > Now as you point out. We might remove a patch in a non-rebase context. > In this case we are compelled to increment, rather than decrement, > TIS_PATCH_VER. In this case we have to live with the misleadingly > high number until the next rebase. That's ok. No one has complained > about that. > I guess I am concerned about the consistency of the meaning of tis. when it increments, such that starting at 0 and later incrementing means a change occurred vs starting at N meaning a patch count and later incrementing and not really having a meaning any more; my OCD kind of kicks in. > I should have been flagging this in earlier code reviews. I wasn't. > My error. Had bigger fish to fry in the early months of going open source. > As I said, I had never heard this until now. I understand you're busy, but we did the whole 7.5 update without hearing about it. > If the community wants to overrule, that's fine. I'm just trying to > share my hard won experience as 'the rebase guy' for 4 years prior to > open sourcing. > Do we need a proper Specification for the meaning of the package information, this is where we can change the tis/TIS to stx/STX! Sau! > Scott > > > > On 2019-01-04 4:52 p.m., Saul Wold wrote: >> >> I am not sure I agree with any of this, first off, just the fact that >> we have an SRPM and the TIS_PATCH_VER indicates that it's been >> patched, I really don't see the value in having the patch count >> indicated as a "Version" item. >> >> It makes more sense to start from 0 (option a) and that way we can >> track each subsequent change to that package with an increment. >> >> This issue did not come up at all in past updates, I am not sure why >> it's becoming an issue now. >> >> See below for additional comments > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Tue Jan 8 13:40:19 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 8 Jan 2019 13:40:19 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack Distro meeting, 1/9 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E3B959@SHSMSX103.ccr.corp.intel.com> Agenda proposed for 1/9 meetings:
1. CentOS 7.6 upgrade status (Shuicheng/Martin)
2. Ceph upgrade status (Vivian/Changcheng)
3. Python2to3 status, flocks and OS packages (Austin)
4. Opens (all)
@all, welcome back from holiday, please let me know if you have additional topics for Wed's call.
Th.x - cindy -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; Shang, Dehao; 'Rowsell, Brent'; Wold, Saul; Waheed, Numan; Sun, Austin; Jones, Bruce E; Liu, ZhipengS; starlingx-discuss at lists.starlingx.io; Troyer, Dean; Hu, Yong; 'Khalil, Ghada'; Zhu, Vivian; Lin, Shuicheng; Somerville, Jim Cc: 'Young, Ken'; Hu, Wei W; Armstrong, Robert H; Martinez Monroy, Elio; 'Hellmann, Gil'; 'Chen, Jacky'; 'Eslimi, Dariush'; Lara, Cesar; Cobbley, David A; 'Waines, Greg'; Gomez, Juan P; Martinez Landa, Hayde; Arce Moreno, Abraham; Perez Rodriguez, Humberto I; Perez Carranza, Jose; 'Seiler, Glenn' Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, January 9, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From cindy.xie at intel.com Wed Jan 9 15:25:14 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 9 Jan 2019 15:25:14 +0000 Subject: [Starlingx-discuss] Canceled: Weekly StarlingX non-OpenStack Distro meeting Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E3D47F@SHSMSX103.ccr.corp.intel.com> Cancel for this week due to the conflict with StarlingX meet-up. Thx. - cindy * Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 5598 bytes Desc: not available URL: From bruce.e.jones at intel.com Wed Jan 9 15:46:33 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 9 Jan 2019 15:46:33 +0000 Subject: [Starlingx-discuss] Community meeting notes Jan 9 2019 Message-ID: <9A85D2917C58154C960D95352B22818BB28C8FDA@fmsmsx121.amr.corp.intel.com> Agenda and notes - Jan 9th call * Denver PTG attendance - May 1 - 3 * Need to submit space / time requests for the PTG meeting by Jan 20th ? Edge WG to set a 1/2 day session to save time to work with other projects e.g. us * Expected topics ? Planning for the next release - 1 day? ? Cross project collaboration - 1/2 day? * Please register for the January community meeting so we can get a count for logistics (meals, etc...): https://starlingx_jan2019meetup.eventbrite.com * The draft agenda has changed: https://etherpad.openstack.org/p/stx-chandler-meetup * Expectations of the presenters - show your plan for * Release Planning Prep ? https://docs.google.com/spreadsheets/d/1HUwbsaSerzFRuvXVB_qvoGdI0Chx1YiiA2WYHwvIoYI/edit#gid=0 ? Request Project Leads to fill as much as possible of their plan before the F2F next week * Conflicts next week for the Meetup meeting and existing sub-project calls. 
We are cancelling all of these calls * Tuesday 1400 UTC Distro.openstack call * Tuesday 1700 UTC Test Team call * Wednesday 1400 UTC Distro.other call * Wednesday 1500 UTC Community call * Wednesday 2030 UTC Docs call * Monday and Thursday calls may be impacted due to travel as well ? Multi-OS call will be cancelled due to travel ? Container meeting will be cancelled due to travel ? Security Meeting on Monday is cancelled * Creation of a Test repo (stx-test) - Ada to work with Dean * Do we want a SB project for the test team too? * Creation of a SB project for specs? Dean to submit and thank you! * Banned C function policy * https://wiki.openstack.org/wiki/StarlingX/Security/Banned_C_Functions * Key work items in flight * Kernel upgrade ? Part of CentOS 7.6 upgrade. Patches submitted to upgrade patches for std and rt kernel. Patches for some out of tree drivers submitted. Plan is to upgrade 49 sRPMS, 17 have been merged to feature branch. Today we decided to abandon work to upgrade puppet modules not needed after containerizing OpenStack. We are disabling Melanox driver support for DPDK until OVS supports the newer version. * Ceph upgrade ? Training this week to help the team ramp on how to change StarlingX to adapt to the new Ceph interfaces. * Containerizer services ? We now have a wiki for how to bring up containers on simplex. Mingyuan has done this. We have images on docker hub. Still a lot of work left on integration issues and feature development. Looking for additional help, discussions on email happening now. * OpenStack patch reduction ? Continued good progress on Nova upstreaming with two changes merged recently ? Intel team (Cindy) to take on PCI Interrupt Affinity * Intel still working on outsourcing agreement for several Nova items, getting closer to vendor selection ? Progress tracked in a big spreadsheet * https://docs.google.com/spreadsheets/d/1udAtEpQljV2JZVs-525UhWyx-5ePOaSSkKD1CS27ohU/edit?usp=sharing -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Jan 10 16:50:53 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 10 Jan 2019 11:50:53 -0500 (EST) Subject: [Starlingx-discuss] [build-report] email-test - Build # 26 - Failure! Message-ID: <990113396.72.1547139055519.JavaMail.javamailuser@localhost> Project: email-test Build #: 26 Status: Failure Timestamp: 20190110T165053Z Check attached log for details. -------------------------------------------------------------------------------- Parameters P1: foo P2: bar -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 440 bytes Desc: not available URL: From build.starlingx at gmail.com Thu Jan 10 16:56:54 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 10 Jan 2019 11:56:54 -0500 (EST) Subject: [Starlingx-discuss] [build-report] email-test - Build # 27 - Still Failing! In-Reply-To: <990113396.72.1547139055519.JavaMail.javamailuser@localhost> References: <990113396.72.1547139055519.JavaMail.javamailuser@localhost> Message-ID: <689526641.74.1547139415920.JavaMail.javamailuser@localhost> Project: email-test Build #: 27 Status: Still Failing Timestamp: 20190110T165654Z Check attached log for details. -------------------------------------------------------------------------------- Parameters P1: foo P2: bar -------------- next part -------------- A non-text attachment was scrubbed... 
Name: build.log Type: application/octet-stream Size: 467 bytes Desc: not available URL: From build.starlingx at gmail.com Thu Jan 10 17:28:49 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 10 Jan 2019 12:28:49 -0500 (EST) Subject: [Starlingx-discuss] [build-report] email-test - Build # 28 - Still Failing! In-Reply-To: <689526641.74.1547139415920.JavaMail.javamailuser@localhost> References: <689526641.74.1547139415920.JavaMail.javamailuser@localhost> Message-ID: <1164981661.76.1547141331171.JavaMail.javamailuser@localhost> Project: email-test Build #: 28 Status: Still Failing Timestamp: 20190110T172849Z Check attached log for details. -------------------------------------------------------------------------------- Parameters P1: foo P2: bar -------------- next part -------------- A non-text attachment was scrubbed... Name: build.log Type: application/octet-stream Size: 467 bytes Desc: not available URL: From Ghada.Khalil at windriver.com Thu Jan 10 21:47:14 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 10 Jan 2019 21:47:14 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 01/10 Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4A232A@ALA-MBD.corp.ad.wrs.com> Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking Team Meeting Agenda/Notes - Jan 10/2019 * Patch upstreaming: https://docs.google.com/spreadsheets/d/1udAtEpQljV2JZVs-525UhWyx-5ePOaSSkKD1CS27ohU/edit?ts=5c354b69#gid=0 * Summary: ? Network Segment Range: 2 patches merged / 6 more to go -- making progress on the rest ? 0077-api-reject-routes-with-invalid-network-val.patch (6ed8fe6) -- merged ? 0040-rpc-removing-timeout-backoff-multiplier.patch (2e4ca00) -- ready to merge ? 0092-l3-add-l2pop-support-for-floatingip-resources.patch (9f926a5) -- Lots of discussion in the community; requiring lots of effort to follow up. Agreed to abandon l2pop RFE since the primary use-case is BGP EVPN which is not a priority for STX at this time. * Containerized OVS * SB: https://storyboard.openstack.org/#!/story/2004649 * Prime: Huifeng and Cheng * Already setup the standard openstack-helm w/ OVS successfully * Next working on setting up the STX containerized environment following the wiki * Then planning to look at the code changes in the STX repo's. * AR: Ghada to give Joseph a heads up regarding questions. * Code changes are expected in the override charts * OVS-DPDK firewall * SB: https://storyboard.openstack.org/#!/story/2002944 * Prime: Kailun * Setup STX environment with 1 controller and 2 computes. Also setup * Matt sent the * OVS Process Monitoring / Alarming * SB: https://storyboard.openstack.org/#!/story/2002947 * Prime: Intel TBD ? Not assigned yet as resources are not available. Will re-visit in 2 weeks. * Configurable vswitch memory - OVS-DPDK Jumbo Frame Support * SB: https://storyboard.openstack.org/#!/story/2004472 * Prime: Steve Webster * Making good progress; expect to have gerrit reviews out this week. Regards, Ghada -------------- next part -------------- An HTML attachment was scrubbed... URL: From gwanmax at gmail.com Mon Jan 14 03:40:22 2019 From: gwanmax at gmail.com (wang guo) Date: Mon, 14 Jan 2019 11:40:22 +0800 Subject: [Starlingx-discuss] document outdated In-Reply-To: References: Message-ID: Hi Abraham, I'm trying to build ISO by following the document you said. When command "build-pkgs" is executed in the builder container, some files seem not found. 
As bellows is the console output and the build log "build-std.log" is attached. ``` [ubuntu at ecc232b348f8 /]$ build-pkgs ... ... ... 03:28:27 b0: ===== Build SRPM for 'update-motd' ===== 03:28:27 b0: PKG_BASE=/localdisk/designer/ubuntu/starlingx/cgcs-root/stx/stx-integ/utilities/update-motd 03:28:27 b0: WORK_BASE=/localdisk/loadbuild/ubuntu/starlingx/std/inputs/stx/stx-integ/utilities/update-motd 03:28:27 b0: RPMBUILD_BASE=/localdisk/loadbuild/ubuntu/starlingx/std/inputs/stx/stx-integ/utilities/update-motd/rpmbuild 03:28:27 b0: Wrote: /localdisk/loadbuild/ubuntu/starlingx/std/rpmbuild/SOURCES/update-motd/srpm_input.md5 03:28:27 b0: SRPM build not required for '/localdisk/designer/ubuntu/starlingx/cgcs-root/stx/stx-integ/utilities/update-motd' 03:28:27 b0: ===== Build complete for 'update-motd' ===== 03:28:27 b0: 03:28:28 ERROR: reaper (1301): Failed to build src.rpm from source at 'b1' 03:28:28 b0: ===== Build SRPM for 'python' ===== 03:28:28 b0: PKG_BASE=/localdisk/designer/ubuntu/starlingx/cgcs-root/stx/stx-integ/python/python-2.7.5 03:28:28 b0: BUILD_DIR=python/rpmbuild 03:28:28 b0: SRPM_DIR=/localdisk/loadbuild/ubuntu/starlingx/std/srpm_assemble/python/rpmbuild/SRPMS 03:28:28 b0: Wrote: /localdisk/loadbuild/ubuntu/starlingx/std/rpmbuild/SOURCES/python/srpm_input.md5 03:28:28 b0: SRPM build not required for '/localdisk/designer/ubuntu/starlingx/cgcs-root/stx/stx-integ/python/python-2.7.5' 03:28:28 b0: ===== Build complete for 'python' ===== 03:28:28 b0: 03:28:28 ============ Build failed ============= 03:28:28 b1: ===== Build SRPM for 'qemu-kvm-ev' ===== 03:28:28 b1: PKG_BASE=/localdisk/designer/ubuntu/starlingx/cgcs-root/stx/stx-integ/virt/qemu 03:28:28 b1: WORK_BASE=/localdisk/loadbuild/ubuntu/starlingx/std/inputs/stx/stx-integ/virt/qemu 03:28:28 b1: RPMBUILD_BASE=/localdisk/loadbuild/ubuntu/starlingx/std/inputs/stx/stx-integ/virt/qemu/rpmbuild 03:28:28 b1: ERROR: md5sums_from_input_vars (107): readlink -f '/localdisk/designer/ubuntu/starlingx/cgcs-root/stx/downloads/kvm-unit-tests.git-4ea7633.tar.bz2 /localdisk/designer/ubuntu/starlingx/cgcs-root/stx/downloads/keycodemapdb-16e5b07.tar.gz centos/files/* /localdisk/designer/ubuntu/starlingx/cgcs-root/stx/stx-integ/virt/qemu/qemu/qemu_clean /localdisk/designer/ubuntu/starlingx/cgcs-root/stx/stx-integ/virt/qemu/qemu/qemu_clean.service /localdisk/designer/ubuntu/starlingx/cgcs-root/stx/stx-integ/virt/qemu/qemu/qemu-system-x86.conf' -type f 03:28:28 b1: ERROR: build_dir_spec (974): md5sums_from_input_vars 'spec' '/localdisk/designer/ubuntu/starlingx/cgcs-root/stx/stx-integ/virt/qemu/centos/qemu-kvm.spec' '/localdisk/loadbuild/ubuntu/starlingx/std/rpmbuild/SOURCES/qemu-kvm-ev' 03:28:28 ERROR: reaper (1303): Failed to build src.rpm from source at 'b1' 03:28:28 ######## Mon Jan 14 03:28:28 UTC 2019: build-srpm-parallel --std failed with rc=1 ubuntu at gw-test:~/stx-tools$ ll ~/starlingx/mirror/CentOS/pike/downloads/kvm-unit-tests.git-4ea7633.tar.bz2 -rw-r--r-- 1 ubuntu ubuntu 316542 Jan 14 02:01 /home/ubuntu/starlingx/mirror/CentOS/pike/downloads/kvm-unit-tests.git-4ea7633.tar.bz2 ``` Thanks a lot. - Wang Arce Moreno, Abraham 于2019年1月10日周四 上午3:24写道: > Hi Wang Guo, > > > https://docs.starlingx.io/developer_guide/index.html#setup-repository- > > docker-container > > > > After executing "cd $HOME/stx-tools/centos-mirror-tools/", cannot find > > Dockerfile at this directory; then execute "docker build --tag > $USER:centos- > > mirror-repository --file Dockerfile" failed. 
> > The required changes are being reviewed > https://review.openstack.org/#/c/619043/ > > We will post them no later than tomorrow. For now please refer to > http://git.openstack.org/cgit/openstack/stx-tools/tree/README.rst > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: build-std.log Type: text/x-log Size: 44936 bytes Desc: not available URL: From bruce.e.jones at intel.com Mon Jan 14 21:54:09 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 14 Jan 2019 21:54:09 +0000 Subject: [Starlingx-discuss] Community meeting - Bruce's slides Message-ID: <9A85D2917C58154C960D95352B22818BB28CC281@fmsmsx121.amr.corp.intel.com> Here are some slides for discussion on the topics for me on the agenda and in the release plan. brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Chandler F2F CVE Upgrades.pdf Type: application/pdf Size: 30601 bytes Desc: Chandler F2F CVE Upgrades.pdf URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Chandler F2F User Documentation.pdf Type: application/pdf Size: 51124 bytes Desc: Chandler F2F User Documentation.pdf URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Chandler F2F Compiler Flags for Security.pdf Type: application/pdf Size: 37671 bytes Desc: Chandler F2F Compiler Flags for Security.pdf URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Chandler F2F DistroOpenStack.pdf Type: application/pdf Size: 32882 bytes Desc: Chandler F2F DistroOpenStack.pdf URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: Chandler F2F MultiOS.pdf Type: application/pdf Size: 71570 bytes Desc: Chandler F2F MultiOS.pdf URL: From Matt.Peters at windriver.com Tue Jan 15 14:16:55 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Tue, 15 Jan 2019 14:16:55 +0000 Subject: [Starlingx-discuss] Approach for SB 2004710 support to access docker images via proxy? In-Reply-To: References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA4CD7B@ALA-MBD.corp.ad.wrs.com> Message-ID: Hello, I also wanted to bring to your attention the following spec and storyboard. These outline changes forthcoming for replacing config_controller. Therefore, any changes that are introduce for config_controller should consider the impacts to this development. https://review.openstack.org/#/c/629581/ https://storyboard.openstack.org/#!/story/2004695 From: "Qi, Mingyuan" Date: Monday, January 14, 2019 at 9:29 PM To: Barton Wensley , "Miller, Frank" , "Kung, John" Cc: Chris Friesen , "Church, Robert" , Brent Rowsell , "Xie, Cindy" , "Penney, Don" , "Peters, Matt" Subject: RE: Approach for SB 2004710 support to access docker images via proxy? Thanks Bart, I agree with the proxy part. As for alternate docker registry I have different thinking. The additional docker registry(or called registry mirror) is to accelerate docker image pulling from internet, not only for openstack-helm, but also for k8s/armada image. It’s slightly different from local docker registry (say controller_address:9001) while local registry is more like a cache for controller after pulling. If the registry mirror is needed for k8s/armada, it has to be set in config_controller as well. 
Meanwhile the registry mirror address may be in no_proxy if its host is within the same LAN as the controller. For the proxy SB, once the table to store the proxy info is confirmed, I will finish the code for review. Thanks, Mingyuan From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: Tuesday, January 15, 2019 4:58 To: Miller, Frank ; Qi, Mingyuan ; Kung, John Cc: Friesen, Chris ; Church, Robert ; Rowsell, Brent ; Xie, Cindy ; Penney, Don ; Peters, Matt Subject: RE: Approach for SB 2004710 support to access docker images via proxy? Mingyuan, As far as the input of the new configuration goes, we typically only add configuration questions to config_controller if the data is required for the execution of config_controller. In your case, I expect the proxy configuration would be required by config_controller (in order to download the kubernetes images), but the alternate docker registry wouldn't be required until the stx-openstack application was required. So for the proxy configuration, we would probably add a new set of questions, something like this:

Kubernetes Configuration:
-------------------------
Configure http proxy [y/N]: y
HTTP proxy URL: http://proxy.example.com:80
HTTPS proxy URL: https://proxy.example.com:443

I suspect that the NO_PROXY for the service config file can be calculated ourselves - I assume it will just have the IP of our internal docker registry? The http proxy config would be a system-wide configuration value, so it wouldn't belong in the host table. John Kung or Matt Peters can comment on which sysinv table you should use. The configuration of the http-proxy.conf file can be done in the docker.pp manifest as you suggested below. This needs to be done on both controllers. You need input from Bob Church for the configuration of the alternate docker registry. I think the registry is currently hardcoded in sysinv. I suspect we will want a new sysinv command to specify the new docker registry, but Bob or John should comment on that. Bart From: Miller, Frank Sent: January 14, 2019 2:36 PM To: Qi, Mingyuan Cc: Friesen, Chris; Wensley, Barton; Church, Robert; Rowsell, Brent; Xie, Cindy; Penney, Don Subject: RE: Approach for SB 2004710 support to access docker images via proxy? Thanks for the update Mingyuan. Bart are you able to provide your suggestions to Mingyuan on the best approach for this? Frank From: Qi, Mingyuan [mailto:mingyuan.qi at intel.com] Sent: Monday, January 14, 2019 2:38 AM To: Miller, Frank Cc: Friesen, Chris; Wensley, Barton; Church, Robert; Rowsell, Brent; Xie, Cindy Subject: RE: Approach for SB 2004710 support to access docker images via proxy? Frank, Thanks for your note, I was just wondering how widely I should send my thoughts for review. Last week, I tried 2 approaches to implement the proxy. The first one follows the complete mechanism of config_controller:
• collect proxy info in input_config() and treat it as host values.
• add 3 attributes (http_proxy, https_proxy, no_proxy) to ihost of sysinv/cgts-client.
• accordingly add 3 fields in the sysinv host api/object/db.
• add 3 params to the docker puppet class and update them during puppet platform yaml file generation.
• apply docker.pp to create http-proxy.conf with the user input proxy info.
One question about this approach: Is ihost the suitable table for proxy info? Most likely the first controller is the only node on which docker needs a proxy to access the internet, so is this info redundant for each host? Or is an alternative without adding proxy info to the sysinv db better?
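For context, the http-proxy.conf mentioned in the bullets above is the standard systemd drop-in for the docker daemon; a sketch of what docker.pp could render, borrowing the placeholder values from Barton's sample questions (the NO_PROXY entries are assumptions):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://proxy.example.com:80"
Environment="HTTPS_PROXY=https://proxy.example.com:443"
Environment="NO_PROXY=localhost,127.0.0.1,<internal registry IP>"

A systemctl daemon-reload plus a docker restart would then be needed for the drop-in to take effect.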
I did a trial that introduced a shortcut mechanism in config_controller:
• collect proxy info in input_config() as well
• a utility script to add proxy info to the host yaml after host puppet config creation finishes.
• the same as the previous approach to apply the docker puppet manifest
The docker registry mirror is in the same situation; it could be done with the same approach. Really appreciate your comments. Thanks, Mingyuan From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Monday, January 14, 2019 6:36 To: Qi, Mingyuan > Cc: Friesen, Chris >; Wensley, Barton >; Church, Robert >; Rowsell, Brent >; Xie, Cindy > Subject: Approach for SB 2004710 support to access docker images via proxy? Mingyuan: Thank-you for taking this on. Can you describe the code changes that you think are needed for adding support to access docker images via a proxy? I've cc'd a few senior designers who can help you if required and provide initial feedback to you before you get too far into any implementation and post a gerrit review. Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From liang.a.fang at intel.com Tue Jan 15 14:57:44 2019 From: liang.a.fang at intel.com (Fang, Liang A) Date: Tue, 15 Jan 2019 14:57:44 +0000 Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB25D46B@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5F8E8F@ALA-MBD.corp.ad.wrs.com> , <4C60D9C5C8176C47874FFF36647AA19E9D5FAA8C@ALA-MBD.corp.ad.wrs.com> <4C60D9C5C8176C47874FFF36647AA19E9D608275@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Ovidiu I have taken over this work from Lisa. I took a look at the code in stx-config; the main call stack may be as below, if I understand correctly: system -> sysinv-api -> sysinv-conductor -> sysinv-agent code to change: system: command parameter change, add parameter "cache_size" (name TBD) for commands "system storage-backend-add" and "system storage-backend-modify", e.g.
· system storage-backend-add ceph cache_size=10
· system storage-backend-modify ceph cache_size=10
sysinv-api: code change? Maybe; I need to look at the code more sysinv-conductor: edit puppet, and let the agent apply the puppet manifest to modify /etc/cinder/cinder.conf and restart the cinder service, etc sysinv-agent: may not need any code change I will try coding and debugging this week and send out the code review ASAP; thanks in advance for reviewing. Regards Liang From: Li, Xiaoyan Sent: Wednesday, December 19, 2018 2:28 PM To: Poncea, Ovidiu Cc: 'starlingx-discuss at lists.starlingx.io' ; Rowsell, Brent ; Fang, Liang A Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Ovidiu, Yesterday we discussed raw cache in the Distro.openstack Dec 18th meeting, and Brent agreed that we should replace the StarlingX raw cache with the Cinder image cache; we need to make the corresponding changes in stx-config to enable the Cinder image cache. We at Intel will do the work and you people will assist us. Could you show us where the Cinder service is configured and started in stx-config? The following is the TODO list copied from former emails. Summary of TODOs (assuming B. is chosen) before removing raw-caching (open for discussions & dependent on resolution to above issues):
· Enable caching per backend through sysinv system storage-backend-add/modify commands through a capabilities field (this seems the simplest solution)
· Add sysinv configuration option per storage backend to set cache size. [Clean up images in cache when size is decreased]
· When first enabling: create shadow tenant (no need to remove it when disabling cache)
· Support disabling cache for a backend (clean up residual images)
Best wishes Lisa
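For reference, the cinder.conf keys behind these TODOs are upstream Cinder's generic image-cache options; a sketch of what the puppet-rendered config might look like for a ceph backend (the values are illustrative, the shadow-tenant IDs come from whatever tenant gets created above, and 0 means unlimited, per the correction later in this thread):

[DEFAULT]
cinder_internal_tenant_project_id = <shadow tenant project uuid>
cinder_internal_tenant_user_id = <shadow tenant user uuid>

[ceph]
image_volume_cache_enabled = True
image_volume_cache_max_size_gb = 10
image_volume_cache_max_count = 0

This would pair with the "system storage-backend-modify ceph cache_size=10" command sketched earlier in the thread.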
[Clean up images in cache when size is decreased] · When first enabling: create the shadow tenant (no need to remove it when disabling the cache) · Support disabling the cache for a backend (clean up residual images) Best wishes Lisa From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Thursday, December 6, 2018 2:06 PM To: Poncea, Ovidiu >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Subject: Re: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Brent, Please give your suggestions. And thanks to Ovidiu for the detailed summary! One correction here: With the Cinder image cache, image_volume_cache_max_size_gb and image_volume_cache_max_count can be set to 0, which means unlimited for both cache capacity and number of cached images. Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Wednesday, December 5, 2018 3:42 PM To: Li, Xiaoyan >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Cc: Miller, Frank >; Church, Robert > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Li, Thanks for providing clarifications! So, for our use cases, the main problem is that glance’s raw caching is more controllable than cinder’s. If cinder’s is not enough we need to improve it; if we can live with it then at a minimum it needs to be enabled through sysinv configuration, and then we remove the raw-caching from glance. See inline comments plus the summary and proposal below; we need Brent’s input on this: I see two main solutions to the problem: A. Always enable the cache, for any backend, but only cache glance images that have a certain attribute – this needs a cinder upstream change. The cache limit has to be removed (another cinder upstream change). We may also need a way to kick-start the caching in cinder & clean up the cache (periodically and/or user triggered should be enough). B. Make enabling the cache storage-backend specific and configurable (through sysinv). Once cinder’s cache is enabled for a backend, cache everything. The size of the cache should be configurable. I would go for B. as it, most likely, doesn’t need upstream changes. [Li, Xiaoyan] Agree with B. But it doesn’t conflict with the requirement to set a property of an image like disable_cache; with this property Cinder won’t cache this image. I am wondering what kind of scenario/image it is suitable for? Summary of problems, TBD if we can live with them: · Images are not cached on creation – if we can’t live with it we may need a trigger to cinder on image creation or a way to manually kick-start the caching process. · Since first volume creation is slow for larger volumes, this may time out (keystone token expiration) – we had a customer using 200GB qcow2 windows images that would time out on conversion. I don’t see a workaround for it, other than asking him to manually do the conversion when importing very large images to glance. · We can’t provide a 100% guarantee that, once converted, successive creations won’t need to get converted again due to cache exhaustion. Can we live with it? Users may intermittently see slowdowns and wonder what’s going on. [Li, Xiaoyan] How about we add a property to this image/volume so that Cinder evicts that cached image last when the cache is exhausted. This needs a cinder upstream change to respect the property. · The cache will waste space: if the original images no longer exist, there is no automated way to remove them from the cache – the admin can clean up the cache manually if he so desires. We can either: 1.
Live with it – assume that the space allocated to the cache is for the cache only, or users can clean up the cache by themselves. 2. Clean up the cache through a cron job (although this is a cache, some caches are supposed to clean themselves up if the cached data is no longer present). 3. Implement another mechanism to clean the cache when an image is deleted rather than at a later time (this is way too complex to upstream). · What happens with images that users don’t want to cache? Should we add a filter (glance property)? [Li, Xiaoyan] Allow users to add a property to the image. This needs cinder upstream to respect the property. I vote for #2 as it does not seem too hard to implement. A once-a-day cron task can free up wasted space. [Li, Xiaoyan] This cron task probably can’t be included in Cinder. Is it OK? Summary of TODOs (assuming B. is chosen) before removing raw-caching (open for discussion & dependent on resolution of the above issues): · Enable caching per backend through the sysinv system storage-backend-add/modify commands through a capabilities field (this seems the simplest solution) · Add a sysinv configuration option per storage backend to set the cache size. [Clean up images in cache when size is decreased] · When first enabling: create the shadow tenant (no need to remove it when disabling the cache) · Support disabling the cache for a backend (clean up residual images) Regards, Ovidiu From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Tuesday, November 27, 2018 4:30 AM To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Cc: Miller, Frank Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Ovidiu, As far as I’m concerned, the Cinder image cache is a cache mechanism, so overall users don’t need to clean it manually. Currently, when the capacity for the cache is full, it removes cached image volumes with an LRU policy. For more detail please see the following comments. Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Monday, November 26, 2018 11:15 PM To: Li, Xiaoyan >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Cc: Miller, Frank > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Lisa, Yeah, even if we refactor raw caching, it's most likely going to be rejected by upstream due to replicating existing functionality in cinder. Yet, imho, we should have a working replacement before retiring raw caching and we should have some agreed mitigations in place for cinder's disadvantages (if we can't live with them, Brent please help here). See my questions below & inline. Also, please correct the text below if I made wrong assumptions, as you know cinder's caching better than me. Short comparison of the two: Raw caching Uses the --raw-cache cli option in Glance to trigger a background process that converts the image. Once cached, new volumes get created on Ceph instantly by leveraging Ceph's copy-on-write. Cache is allocated from the "images" RBD pool. Advantages: - user can select the images it wants to cache - user can monitor the progress and can check used space for each image (cli + dashboard). - on image delete the cache is also cleared if there is no volume using it. Else it is cleared with the last volume keeping the cache data in-use. - no wasted space - complete control by the user Disadvantages: - There is almost no way this is going to be accepted upstream.
Maybe, yet with small hopes, if we refactor everything as a 3rd party glance feature, but we may need to push some hooks upstream to make it work. - Ceph only Cinder's caching Uses a "shadow" tenant to store shadow volumes. The cache is created with the first volume from that image. The next volume will be created instantly by leveraging copy-on-write if the backend provides support for it (e.g. on Ceph). Space for the cache is allocated on one of the cinder backends and has a configurable threshold. Advantages: - already upstream - works with all backends - all cached images are displayed for the "admin" if he changes to the shadow tenant and lists volumes. - admin (not user, only admin) can free the cache by deleting volumes of the shadow tenant (need confirmation) Disadvantages: 1. it's either globally enabled or disabled => needs a sysinv configuration option 2. it caches every image. No way to select what image to cache nor with what backend (question below) => space waste 3. cached images are not removed. It needs to hit a space provision to do that, and it will remove the oldest image, although that image cache may be important. 4. less control: Images are cached on first use and are removed when provisioned space hits the threshold. This means that the user does not have control over what images are converted and what images are in the cache. So, sometimes volume creation works fast, other times it's slow. This can be a problem especially on parallel volume creation through helm charts as, if the image did not have a cache, then stack creation may time out. Another problem may be if the cache is small and images get rotated in the cache => we need alarms when the threshold is hit. 5. needs the shadow tenant created before use => puppet / helm chart update (for kubernetes) Mitigations of the disadvantages above - possible solutions and alternatives: #1: Customers may not want to enable it; we should allow customers to choose when to enable it (it can be added as a custom capabilities parameter to "system storage-backend-add/system storage-backend-modify") [Li, Xiaoyan] Currently the image cache can be enabled/disabled per backend storage. https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html I think it is enough. [Ovi] Nice, we need a configuration option per backend in sysinv to enable it. (most likely in the capabilities fields of the storage-backends table. See ‘system storage-backend-*’ commands). #2: No workaround comes to my mind - we can probably live with it #3: A simple solution would be to implement a cron job to clean the cache periodically, or a more elaborate solution would be to remove the cache with the last volume that used that image (this needs a cinder upstream feature). [Li, Xiaoyan] From the doc, currently Cinder removes cached images from least recently used to most recently used. Every time Cinder uses a cached image volume, it updates its last_used field. This is the normal policy for data eviction. As it is a cache and should be transparent to users, why do we need users to evict data? [Ovi] If we conclude that this is enough from the data usage perspective then we are ok with it. #4: Two options come to mind: 1. To get some control we should not limit the cache size, given that we do proper cleanup in #3. [Li, Xiaoyan] Even if we do cleanup, the limit can’t be removed. [Ovi] We may need to enhance this. 2.
If we limit the cache, we have to make the limit configurable and raise an alarm once the cache gets near full so that the admin takes preventive measures and either increases the provisioned space or cleans up the cache. #5: This is mandatory, otherwise cinder's caching won't work at all. [Li, Xiaoyan] It has to set cinder_internal_tenant_project_id and cinder_internal_tenant_user_id before enabling the image cache, as this user can manage these cached image volumes. Why can’t it work with Kubernetes? [Ovi] I did not say it won’t work with kubernetes ☺ What I said is that we need to provision the shadow tenant automatically when the feature is enabled. Questions (maybe if you get time to play with cinder's caching, to get a better understanding): 1. How does cinder's caching behave when multiple volumes are created in parallel from a newly created image? Will it wait for the cache to be created before creating the volumes or just start all volume creations in parallel? [Li, Xiaoyan] Inside a volume service, volume creation tasks run sequentially. But we have HA. For the image cache, it creates an entry in the cinder db first and then creates the volumes. The primary key is not image_id+backend_storage, so it is possible that several entries or volumes will be created in the same backend storage. [Ovi] So, only the first volume creation is going to be slow? If that’s the case then parallel volume creation will work ok, as only the first volume creation will be slow. 2. What is the cinder backend that stores the cache? If it is the one used by the volume, will this lead to multiple cached volumes of the same image? Can we choose the backend? [Li, Xiaoyan] We can set whether the cache is enabled per backend. If users create a volume in backend ceph from an image, a cached image volume will be created in Ceph if it is enabled. Next time, if users create a volume in IBM storage from the same image, it will create another cached image volume in the IBM storage if it is enabled there. [Ovi] Then we need to enable it and configure the cache size per backend, I guess. 3. How is cache space provisioned? Do we need to restart cinder-volume for changes to take effect? [Li, Xiaoyan] These settings are done in the config file, so the cinder volume services need to be restarted once the config is changed. [Ovi] So after we make the changes, we re-apply the manifests and restart the services (reload the helm charts for k8s deployments) 4. Is the admin able to clean up individual cached images in the shadow tenant? Maybe also the user? [Li, Xiaoyan] Admin and shadow tenants can both do cleanup. Ovidiu ________________________________ From: Li, Xiaoyan [xiaoyan.li at intel.com] Sent: Thursday, November 22, 2018 2:41 AM To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Brent and Ovidiu, As this email has a long history, I re-summarize the raw cache in StarlingX and the Cinder upstream image cache. Please vote on whether we can abandon the raw cache in StarlingX. StarlingX: Creates an image cache in ceph when Glance creates an image, and deletes the cached image in ceph when the original image is deleted in Glance. Cinder: When creating a volume from an image in a backend storage for the first time, Cinder creates a volume from this image and uses it as the image cache. So next time, if users create another volume from this image in the same backend storage, Cinder first finds the cached image volume and clones a new volume from it.
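For illustration, enabling the cache for one backend comes down to a handful of cinder.conf options (a minimal sketch; the backend section name and the placeholder IDs below are assumptions - see the admin guide linked above for the authoritative reference):

  [DEFAULT]
  cinder_internal_tenant_project_id = <project uuid of the shadow tenant>
  cinder_internal_tenant_user_id = <user uuid of the shadow tenant>

  [ceph]
  image_volume_cache_enabled = True
  image_volume_cache_max_size_gb = 200
  image_volume_cache_max_count = 50

Setting the last two options to 0 makes the cache unlimited in both size and entry count. In the stx-config plan above, sysinv/puppet would template these values per backend and restart cinder-volume when they change.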
Cinder allows capacity configuration for the cached images. If the space is used up, Cinder will evict cached image volumes. From my viewpoint, the Cinder image cache can achieve the same functionality as the Raw cache in StarlingX with more enhancements. It works for all Cinder-supported backend storage, not just for Ceph. Best wishes Lisa From: Li, Xiaoyan Sent: Monday, November 19, 2018 9:44 AM To: Poncea, Ovidiu >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Ovidiu, A cached image (a new volume from this image) is created on a storage backend when Cinder first creates a volume in the same backend storage from the image. All the information is stored in Cinder, including volume id, image id etc. https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1368 https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L82 A cached image is deleted when the configured space for the cache is used up. So currently Cinder doesn’t delete the cached image volumes even if the image is deleted, but this can be an enhancement of the current cinder image cache. https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L117 https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1351 Best wishes Lisa From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com] Sent: Friday, November 16, 2018 4:57 PM To: Li, Xiaoyan >; Rowsell, Brent >; 'starlingx-discuss at lists.starlingx.io' > Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi Li, Quick question: Is the cache going to be freed when an image is deleted from glance? It would be a waste to cache images that are no longer needed. Thanks, Ovidiu ________________________________ From: Li, Xiaoyan [xiaoyan.li at intel.com] Sent: Tuesday, November 13, 2018 9:19 AM To: Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi, About the raw cache function in StarlingX Cinder and Glance, I would like to remove it as Cinder has a similar function. Please see the following detail. And if I would like to remove the function in StarlingX, there are two methods: 1. Submit a patch to revert the changes in Glance and Cinder. 2. Ignore these patches during the upgrade of StarlingX/Cinder to a new Cinder release. Which way do we prefer? Best wishes Lisa From: Li, Xiaoyan Sent: Thursday, September 20, 2018 10:17 AM To: Rowsell, Brent >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi, Brent The following is the mechanism of the Cinder volume cache. Creation of a cached volume: Cinder creates a cached volume in the backend storage when creating a volume from an image for the first time. 1. Create_from_image: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L890 2. Return the image cache entry: If it does not exist, it creates a new entry. https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L746 3.
Create a new image-volume and a cache entry for it: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L872 Use a cached volume when creating a volume: https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L723-L735 Delete the cached volume: When the capacity and number of cache entries exceed the specified limits, it deletes cache entries (cached volumes). https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L164 Best wishes Lisa From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Thursday, September 6, 2018 10:02 AM To: Li, Xiaoyan >; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching We would need to review this feature to ensure it provides equivalent functionality first. If it does, great, we can look at reverting and enabling this cinder functionality. Brent From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com] Sent: Wednesday, September 5, 2018 9:59 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching Hi all, This email is about the Raw caching function in StarlingX. This feature caches an image in backend storage like Ceph when we first create a volume in this backend storage. In fact, Cinder upstream has already had a similar function since the Pike release. https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html So I want to revert the Raw caching function in StarlingX and use the Cinder generic image cache instead. The problem is that we need to update the Cinder config in StarlingX. Any comments? Best wishes Lisa -------------- next part -------------- An HTML attachment was scrubbed... URL: From Matt.Peters at windriver.com Wed Jan 16 14:08:08 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Wed, 16 Jan 2019 14:08:08 +0000 Subject: [Starlingx-discuss] Approach for SB 2004710 support to access docker images via proxy? In-Reply-To: References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA4CD7B@ALA-MBD.corp.ad.wrs.com> Message-ID: <024F54E1-268F-45B3-80B8-E5C2321F3B2C@windriver.com> Hi Mingyuan, No, the change is not planned to be completed by that time; in fact, it has a dependency on it being the default deployment model. I just wanted to make sure you were aware of the impending changes that can have an impact on your Story. Regards, Matt From: "Qi, Mingyuan" Date: Wednesday, January 16, 2019 at 3:50 AM To: "Peters, Matt" , Barton Wensley , "Miller, Frank" , "Kung, John" Cc: Chris Friesen , "Church, Robert" , Brent Rowsell , "Xie, Cindy" , "Penney, Don" , "starlingx-discuss at lists.starlingx.io" , "Ngo, Tee" Subject: RE: Approach for SB 2004710 support to access docker images via proxy? Matt, Is this change planned to be finished before container cutover? Thanks, Mingyuan From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Tuesday, January 15, 2019 22:17 To: Qi, Mingyuan ; Wensley, Barton ; Miller, Frank ; Kung, John Cc: Friesen, Chris ; Church, Robert ; Rowsell, Brent ; Xie, Cindy ; Penney, Don ; starlingx-discuss at lists.starlingx.io; Ngo, Tee Subject: Re: Approach for SB 2004710 support to access docker images via proxy? Hello, I also wanted to bring to your attention the following spec and storyboard. These outline changes forthcoming for replacing config_controller.
Therefore, any changes that are introduced for config_controller should consider the impacts to this development. https://review.openstack.org/#/c/629581/ https://storyboard.openstack.org/#!/story/2004695 From: "Qi, Mingyuan" > Date: Monday, January 14, 2019 at 9:29 PM To: Barton Wensley >, "Miller, Frank" >, "Kung, John" > Cc: Chris Friesen >, "Church, Robert" >, Brent Rowsell >, "Xie, Cindy" >, "Penney, Don" >, "Peters, Matt" > Subject: RE: Approach for SB 2004710 support to access docker images via proxy? Thanks Bart, I agree with the proxy part. As for the alternate docker registry I have different thinking. The additional docker registry (or call it a registry mirror) is to accelerate docker image pulling from the internet, not only for openstack-helm but also for the k8s/armada images. It’s slightly different from the local docker registry (say controller_address:9001), as the local registry is more like a cache for the controller after pulling. If the registry mirror is needed for k8s/armada, it has to be set in config_controller as well. Meanwhile the registry mirror address may be in no_proxy if its host is within the same LAN as the controller. For the proxy SB, once the table to store the proxy info is confirmed, I will finish the code for review. Thanks, Mingyuan From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: Tuesday, January 15, 2019 4:58 Subject: RE: Approach for SB 2004710 support to access docker images via proxy? [...] From: Miller, Frank Sent: January 14, 2019 2:36 PM Subject: RE: Approach for SB 2004710 support to access docker images via proxy? [...] From: Qi, Mingyuan [mailto:mingyuan.qi at intel.com] Sent: Monday, January 14, 2019 2:38 AM Subject: RE: Approach for SB 2004710 support to access docker images via proxy? [...] From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Monday, January 14, 2019 6:36 Subject: Approach for SB 2004710 support to access docker images via proxy? [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From gwanmax at gmail.com Wed Jan 16 06:24:10 2019 From: gwanmax at gmail.com (wang guo) Date: Wed, 16 Jan 2019 14:24:10 +0800 Subject: [Starlingx-discuss] Command "build-iso" is running without stopping when build ISO Message-ID: Hi all, When I execute the command "build-iso" in the builder container, the console keeps printing without stopping. The attached file "build-iso.log" is the captured console output. Can anyone help me with this issue? Thanks a lot. B.R. Wang -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: build-iso.log Type: text/x-log Size: 1303372 bytes Desc: not available URL: From mingyuan.qi at intel.com Wed Jan 16 08:50:02 2019 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Wed, 16 Jan 2019 08:50:02 +0000 Subject: [Starlingx-discuss] Approach for SB 2004710 support to access docker images via proxy? In-Reply-To: References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA4CD7B@ALA-MBD.corp.ad.wrs.com> Message-ID: Matt, Is this change planned to be finished before container cutover?
Thanks, Mingyuan From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Tuesday, January 15, 2019 22:17 To: Qi, Mingyuan ; Wensley, Barton ; Miller, Frank ; Kung, John Cc: Friesen, Chris ; Church, Robert ; Rowsell, Brent ; Xie, Cindy ; Penney, Don ; starlingx-discuss at lists.starlingx.io; Ngo, Tee Subject: Re: Approach for SB 2004710 support to access docker images via proxy? [...] From: "Qi, Mingyuan" Date: Monday, January 14, 2019 at 9:29 PM Subject: RE: Approach for SB 2004710 support to access docker images via proxy? [...] From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: Tuesday, January 15, 2019 4:58 Subject: RE: Approach for SB 2004710 support to access docker images via proxy? [...] From: Miller, Frank Sent: January 14, 2019 2:36 PM Subject: RE: Approach for SB 2004710 support to access docker images via proxy? [...] From: Qi, Mingyuan [mailto:mingyuan.qi at intel.com] Sent: Monday, January 14, 2019 2:38 AM Subject: RE: Approach for SB 2004710 support to access docker images via proxy? [...] From: Miller, Frank [mailto:Frank.Miller at windriver.com] Sent: Monday, January 14, 2019 6:36 Subject: Approach for SB 2004710 support to access docker images via proxy? [...] -------------- next part -------------- An HTML attachment was scrubbed... URL: From bharath_ves at hotmail.com Thu Jan 17 18:13:07 2019 From: bharath_ves at hotmail.com (bharath thiruveedula) Date: Thu, 17 Jan 2019 18:13:07 +0000 Subject: [Starlingx-discuss] How to get access to Starlingx In-Reply-To: <489F0921-AF2A-4D5E-91CE-A35E16D80834@intel.com> References: <489F0921-AF2A-4D5E-91CE-A35E16D80834@intel.com> Message-ID: Hi Martinez, I am planning to install on a virtual environment. >>I encourage you to test with your resources and please inform us any interesting findings. Can you please share some pointers on a minimal hardware footprint? Is there any recording which explains the internals of StarlingX?
Best Regards Bharath T ________________________________ From: Martinez Landa, Hayde Sent: Thursday, January 17, 2019 11:22 PM To: bharath thiruveedula; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to get access to Starlingx Bharath, >Hi, >I am trying to install StarlingX, but due to resource constraints I couldn't achieve it. Can we install StarlingX on a 16GB machine? Are you trying to install on Virtual or Bare Metal? If you are trying on bare metal please be aware that you need at least 2 network interfaces, connected to a switch. The storage that the documentation suggests is just for recommended performance; it will depend on the workloads of your project. >If not, is there any way to explore the features of StarlingX, like accessing a public installation of StarlingX? We don't have something like this at the moment, but it is a great suggestion. I encourage you to test with your resources and please inform us of any interesting findings. >Best Regards >Bharath T -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Thu Jan 17 18:45:26 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Thu, 17 Jan 2019 12:45:26 -0600 Subject: [Starlingx-discuss] Mailing list notes Message-ID: I just flushed the starlingx-discuss moderation queue; there were a number of messages from the last week or so stacked up along with a couple hundred spam emails (yay? we're well-known enough to attract spam? :) The top reasons most messages get caught in moderation are: 1) message too big - we have a limit of 60K on message size. I am aware that the majority of subscribers use a mail client that encourages top-posting, and that leads to not trimming messages; in this environment that makes conversations over multiple messages hard to follow. Trimming unnecessary quoting in messages makes for a better record of conversations, and will ensure that you are not delayed by the size limit. 2) too many recipients - The list is configured to set Reply-to to the original poster and not the list itself, so many people hit reply-all to reply to the list. Again, some mail readers handle this poorly and it is good practice to trim the recipients when replying to a mailing list. 3) not subscribed - For whatever reason mailman (the list processor) doesn't think your email is subscribed and refuses to deliver. The size of the spam backlog is a testament to why we keep this on. We generally also allow these through, as it is often new people to the community and they will get notified that they need to subscribe to the list. I am investigating a couple of reports that messages are not being properly forwarded and the user is notified that they are not subscribed when they believe they are. If you think you are also having this problem let me know and I'll check it out. One is an anomaly, two is a coincidence, three, well at three we may have a pattern :) Also, you can verify your subscription settings at http://lists.starlingx.io/cgi-bin/mailman/options/starlingx-discuss dt -- Dean Troyer dtroyer at gmail.com From Barton.Wensley at windriver.com Thu Jan 17 18:56:33 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Thu, 17 Jan 2019 18:56:33 +0000 Subject: [Starlingx-discuss] Mailing list notes In-Reply-To: References: Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA4DA3A@ALA-MBD.corp.ad.wrs.com> To play devil's advocate...
For #1 below (message too big) - it looks to me like, of the messages that were flushed, only one was due to a message body larger than 60K - most were due to attachments. Personally, I find it difficult to track conversations where the message has been "trimmed" (I am tempted to use other more descriptive words but I won't). I'd prefer to see the whole context of the conversation without having to track back through several messages in the history. Bart -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: January 17, 2019 1:45 PM To: starlingx Subject: [Starlingx-discuss] Mailing list notes [...] From vm.rod25 at gmail.com Thu Jan 17 18:57:31 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Thu, 17 Jan 2019 12:57:31 -0600 Subject: [Starlingx-discuss] [build] go packages, version 2 In-Reply-To: References: Message-ID: On Thu, Jan 17, 2019 at 9:14 AM McKenna, Jason wrote: > > Hi Victor, great feedback. Inline.
> > -----Original Message----- > > From: Victor Rodriguez > > Sent: January 16, 2019 10:49 AM > > To: McKenna, Jason > > Cc: starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] [build] go packages, version 2 > > > > Hi Jason > > > > On Tue, Jan 15, 2019 at 11:57 AM McKenna, Jason > > wrote: > > > > > > Hi build team, > > > > > > At the previous build meeting I had identified an issue with the way some > > go based packages were being built (they required internet access), and promised I’d > > update the mailing list on a potential way forward that we were prototyping. > > > > > > Some preliminary points: > > > - go usually attempts to resolve dependencies at build time, by going > > out to the internet and fetching stuff (like dependency source code) using > > the “go get” command > > > - Sometimes the stuff fetched by “go get” isn’t appropriate (i.e. “go > > get” fetches the latest version, but deprecated APIs may have been > > removed, etc) > > > - Different versions of go packages may require different versions of > > dependencies > > > - We want builds to be reproducible without unexpected code changes > > (i.e. we want to know what we’re compiling in) > > > - Some people build in environments where they don’t have Internet > > access > > > > > > The initial solution (which didn’t take into account the Internet access > > problem) was to use “dep”. “dep” is an external tool which was an “official > > experiment” of the go project. Rather than fetch the latest dependencies > > from the internet (like “go get”), it allowed specific revisions of > > dependencies to be captured. “dep” fetched those versions from the > > Internet. This solved the deprecated API issue, the reproducible build issue, > > and the issue of not using rpms for the dependencies. However, if someone > > was attempting to build in an internet-less context, the system would fail. > > > > > > Enter this second revision. > > > > > > The dependency packages are now downloaded at download-mirrors.sh > > time as tarballs. The tarballs are produced as of a specific commit for each > > dependency. > > > > I agree that this is a good solution; in terms of making it work, my concern is > > that we are maintaining a lot of tarballs, and that scares me a bit > > Agreed. This actually is an opportunity to leverage the work that Marcela is doing where she is refactoring the download tarballs from a single .lst file into a more manageable form. If we can separate the tarballs/rpms/srpms/etc from a single file to a per-repo file (or a per-package file for build-time artifacts...) then maintaining the tarball downloads becomes a lot cleaner. Ok, I think this is a good opportunity for the work that Marcela is doing > > > > This allows us to hit all our bullet points – code is snapshotted, reproducible, > > we don’t end up having to create a bunch of new rpms with dependency > > source code and potential version conflicts, and it requires no internet access > > (other than at download-mirrors.sh time) > > > > If it works for you that's fine; my concern would be maintaining fixes for broken > > links in the future (which go dep would surely handle) > > > > +1 from my part, as long as it does not generate a lot of maintenance and > > does not create a problem for ourselves > > > > How hard would it be to allow internet access during the build?
> > Allowing Internet during build is easy if you have Internet access (edit the .cfg for your mock environment), but obviously troublesome if you're behind a firewall which blocks access. While this doesn't affect me personally, I am under the impression that you folks have several sites which are configured this way (I could be wrong, but I seem to recall early in the project a few packages breaking because reworked packages assumed Internet). Even if I'm mistaken about that point, I do know of several organizations which require clean-room builds for security and reproducibility reasons. Better to not limit potential adopters if we can help it :) Ok, I see your point; even behind a firewall there would usually be some proxy solution, but yes, let's assume that we have a user who does not have internet access. My only concern is that I am not a fan of having many tarballs/patches or changes that we have to maintain/host. If we are going to host them, do we make a copy of them in the official CENGN repo and point our scripts there, to be 100% sure that the download will work all the time? regards > > > Regards > > > > > Jerry has posted a preview code review showing his work using this > > mechanism. I’ve marked the reviews as workflow -1 to give the build team a > > chance to see the mechanism. > > > https://review.openstack.org/#/c/631001/ > > > https://review.openstack.org/#/c/631002/ > > > > > > -Jason From Don.Penney at windriver.com Thu Jan 17 19:35:32 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 17 Jan 2019 19:35:32 +0000 Subject: [Starlingx-discuss] Mailing list notes In-Reply-To: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA4DA3A@ALA-MBD.corp.ad.wrs.com> References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA4DA3A@ALA-MBD.corp.ad.wrs.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA4152BA@ALA-MBD.corp.ad.wrs.com> I'd agree with Bart. There have been many cases (not necessarily on this discussion list) where folks reply and truncate threads and remove all history and context from it, leaving an unintelligible message that can only make sense if you then go and read a dozen other truncated messages, in the correct order. I don't see how this leads to a "better record" -----Original Message----- From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: Thursday, January 17, 2019 1:57 PM To: Dean Troyer; starlingx Subject: Re: [Starlingx-discuss] Mailing list notes [...] -----Original Message----- From: Dean Troyer [mailto:dtroyer at gmail.com] Sent: January 17, 2019 1:45 PM To: starlingx Subject: [Starlingx-discuss] Mailing list notes [...] From build.starlingx at gmail.com Thu Jan 17 19:47:57 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 17 Jan 2019 14:47:57 -0500 (EST) Subject: [Starlingx-discuss] [build-report] email-test - Build # 36 - Still Failing! Message-ID: <442585755.211.1547754481231.JavaMail.javamailuser@localhost> Project: email-test Build #: 36 Status: Still Failing Timestamp: 20190117T194757Z Check attached log for details. -------------------------------------------------------------------------------- Parameters P1: foo P2: bar PUBLISH_LOGS_BASE: /tmp/logs From michel.thebeau at windriver.com Thu Jan 17 19:58:40 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Thu, 17 Jan 2019 14:58:40 -0500 Subject: [Starlingx-discuss] How to get access to Starlingx In-Reply-To: References: <489F0921-AF2A-4D5E-91CE-A35E16D80834@intel.com> Message-ID: <1547755120.3455.137.camel@windriver.com> On Thu, 2019-01-17 at 18:13 +0000, bharath thiruveedula wrote: > Hi Martinez, > > I am planning to install on a virtual environment. > >>I encourage you to test with your resources and please inform us > any interesting findings. > > Can you please share some pointers on a minimal hardware footprint? > If you are reviewing the virtualization methods from the Installation guide here: https://docs.starlingx.io/installation_guide/index.html you will find that the requirements are coded into the scripts and xml for the virtual environment under libvirt/virsh.
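As a rough illustration of what those scripts define (a sketch only - the VM name, sizes and network wiring below are placeholders, not the exact values from the stx-tools scripts), a controller VM in that libvirt environment boils down to something like:

  virt-install --name controller-0 \
    --memory 16384 --vcpus 6 \
    --cpu host-passthrough \
    --disk size=240 \
    --cdrom bootimage.iso \
    --network network=default \
    --network network=default \
    --os-variant centos7.0

Two NICs per node and a double-digit-gigabyte memory allocation per controller is why the host totals add up quickly.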
This guide lists 32G RAM minimum for the virtual environment. Martinez Landa, Hayde wrote: > I encourage you to test with your resources and please inform > us any interesting findings. I would suggest the same.  If you want to respond to me offline I'm willing to share what I know about the implementation of the virtual environment.  But this caveat: a host with 16G will not allow StarlingX to do much if any work, even if you can get it configured. M > Is there any recording which explains the internals of StarlingX? > > Best Regards > Bharath T > > From: Martinez Landa, Hayde > Sent: Thursday, January 17, 2019 11:22 PM > To: bharath thiruveedula; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] How to get access to Starlingx > [...] From michael.l.tullis at intel.com Thu Jan 17 20:28:32 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Thu, 17 Jan 2019 20:28:32 +0000 Subject: [Starlingx-discuss] [docs] Invitation to contribute and participate In-Reply-To: <3808363B39586544A6839C76CF81445EA1A3BB4C@FMSMSX151.amr.corp.intel.com> References: <3808363B39586544A6839C76CF81445EA1A3BB4C@FMSMSX151.amr.corp.intel.com> Message-ID: <3808363B39586544A6839C76CF81445EA1A9BC1C@ORSMSX104.amr.corp.intel.com> All, As you have time and interest, please jump in with our very small docs team to continue building out https://docs.starlingx.io/. If you know of code changes or additions that might impact existing docs, please submit PRs directly using the contributor guides at https://docs.starlingx.io/contributor/index.html, or submit your feedback to this mailing list with [docs] in the subject line. We meet weekly on Wednesdays as called out in the wiki: https://wiki.openstack.org/wiki/Starlingx/Meetings. Our team meeting etherpad is https://etherpad.openstack.org/p/stx-documentation. Thanks, -- Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcel at schaible-consulting.de Thu Jan 17 21:01:34 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Thu, 17 Jan 2019 22:01:34 +0100 Subject: [Starlingx-discuss] Suggestions for a All-In-One Setup on a RAID Message-ID: <1b4a8424-c7d9-e0bf-fd5d-51320a71cb62@schaible-consulting.de> Hi Abraham, > Does your use case demand to have this mirrored system for fail-over > strictly in a Simplex configuration? > Duplex allows you to replicate your controller-0 with controller-1 via > drbd, is there anything I am missing for your use case to avoid a Duplex > configuration? good point. Since I am pretty new to StarlingX I was not aware of this option. I'll give it a try. Thanks Marcel From cboylan at sapwetik.org Thu Jan 17 21:03:01 2019 From: cboylan at sapwetik.org (Clark Boylan) Date: Thu, 17 Jan 2019 13:03:01 -0800 Subject: [Starlingx-discuss] Infra team upgrading review.openstack.org Gerrit from 2.13.9 to 2.13.12 January 18 at about 1700UTC Message-ID: <1547758981.2237058.1637449632.37C152F2@webmail.messagingengine.com> We will be performing a minor Gerrit upgrade to version 2.13.12 tomorrow (January 18, 2019) at 1700UTC. We've tested this upgrade on our dev server, https://review-dev.openstack.org, and expect it to be a quick upgrade. Any outage shouldn't last more than 10 minutes. We will let our configuration management tooling manage the upgrade so we won't have an exact time, but will try to get it as close to 1700UTC as possible. Feel free to test out the new version on the dev server. We are happy to answer any questions you might have as well. Sorry for the short notice and the cross post, Clark From bruce.e.jones at intel.com Thu Jan 17 21:09:24 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 17 Jan 2019 21:09:24 +0000 Subject: [Starlingx-discuss] Spec exception request Message-ID: <9A85D2917C58154C960D95352B22818BB28CDF36@fmsmsx121.amr.corp.intel.com> This week is the spec cut-off. I would like to request an extension for a Documentation spec that I plan to write and post for comments asap. It will not require code changes, only stx.docs changes. I might need 2 weeks. OK? brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Thu Jan 17 21:15:05 2019 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Thu, 17 Jan 2019 21:15:05 +0000 Subject: [Starlingx-discuss] Suggestions for a All-In-One Setup on a RAID In-Reply-To: <1b4a8424-c7d9-e0bf-fd5d-51320a71cb62@schaible-consulting.de> References: <1b4a8424-c7d9-e0bf-fd5d-51320a71cb62@schaible-consulting.de> Message-ID: > > Does your use case demand to have this mirrored system for fail-over > > strictly in a Simplex configuration? > > Duplex allows you to replicate your controller-0 with controller-1 via > > drbd, is there anything I am missing for your use case to avoid a Duplex > > configuration? > > good point. Since I am pretty new to StarlingX I was not aware of this > option. I'll give it a try. Awesome Marcel! Please let us know how it goes :) From vm.rod25 at gmail.com Thu Jan 17 22:34:07 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Thu, 17 Jan 2019 16:34:07 -0600 Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security In-Reply-To: References: Message-ID: On Wed, Jan 2, 2019 at 10:35 AM Young, Ken wrote: > > Victor, > > Security work is never completed. There is always a long list of inventive new vulnerabilities and a laundry list of hardening work to be completed. The vulnerability work, considering the severity, is generally urgent. Hardening work is not urgent but important. In this case, we are dealing with a hardening initiative that focuses on a small area of the code. > > The challenge is that these small changes proposed have larger implications. As was pointed out on the gerrit reviews, performance and / or functional testing is required.
This will need to be tested and is certainly larger than a sanity.

Could you please help describe, in human words (I can do the scripting), how a good test to probe this would look? If you provide me with a basic description of the security test, I could help write the first draft of a code test that helps us prove whether the flags break the functionality. As a starting point, I put a toy sketch below my signature.

thanks

Victor R
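For instance, I would start from a toy in this spirit — purely an illustration of what -fstack-protector-strong changes, not the real service-level test, and not code from any stx repo:

/* stack_smash_demo.c
 * Deliberate stack buffer overflow, to show that
 * -fstack-protector-strong turns silent corruption into an abort.
 *
 * Build it twice and compare:
 *   gcc -O2 -fno-stack-protector     stack_smash_demo.c -o unprotected
 *   gcc -O2 -fstack-protector-strong stack_smash_demo.c -o protected
 *
 * Run each with an argument longer than 15 characters: the protected
 * build should die with "*** stack smashing detected ***" when
 * copy_arg() returns, while the unprotected one may appear to "work".
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

static void copy_arg(const char *src)
{
    char buf[16];
    strcpy(buf, src);   /* intentional overflow when strlen(src) > 15 */
    printf("copied: %s\n", buf);
}

int main(int argc, char **argv)
{
    if (argc < 2) {
        fprintf(stderr, "usage: %s <string>\n", argv[0]);
        return EXIT_FAILURE;
    }
    copy_arg(argv[1]);
    return EXIT_SUCCESS;
}

The real check for stx-ha / stx-metal would of course be at the service level; the toy only demonstrates the mechanism the flag adds.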
> Also, I am wondering if there is a way to phase the effort. For
> example, is there a way to break up the flag changes such that the
> warnings are separated from the flags which change the compiled code?
> That way, we are not trying to jam everything through at once.
>
> Hope this helps. Happy to discuss when you return from holiday.
>
> Regards,
> Ken Y
>
> From: Victor Rodriguez
> Date: Friday, December 28, 2018 at 7:34 PM
> To: Curtis
> Cc: "starlingx-discuss at lists.starlingx.io"
> Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for security
>
> On Fri, Dec 21, 2018, 07:08 Curtis
>
> On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez wrote:
>
> Hi StarlingX community
>
> We can all agree that security is an important feature to be taken
> into consideration in any SW project. With the aim of improving the
> security of the StarlingX project, we have taken on the task of
> proposing the use of some compiler flags that prevent and detect some
> security holes, especially buffer overflows that could lead to ROP
> attacks.
>
> The list of flags that we are proposing is:
>
> Stack-based Buffer Overrun Detection: CFLAGS="-fstack-protector-strong"
> Fortify source: CFLAGS="-O2 -D_FORTIFY_SOURCE=2"
> Format string vulnerabilities: CFLAGS="-Wformat -Wformat-security"
> Stack execution protection: LDFLAGS="-z noexecstack"
> Data relocation and protection (RELRO): LDFLAGS="-z relro -z now"
>
> These are being analyzed in the following Gerrit reviews (thanks a lot
> for all the good feedback):
>
> https://review.openstack.org/#/c/623608/
> https://review.openstack.org/#/c/623603/
> https://review.openstack.org/#/c/623601/
> https://review.openstack.org/#/c/623599/
>
> As requested in the Gerrit reviews, there is a need to first
> understand what these compiler flags do and what impact they have on
> the functional and performance areas of the project. This is a
> preliminary report; we will be following up with functional &
> performance test plans for the services as a next step.
> This report includes:
>
> * A detailed description of what the compiler flag does
> * A code example that shows how it works to prevent attacks
> * If there is a change in the binary, a microbenchmark that shows us
> how the flag impacts performance
>
> https://github.com/VictorRodriguez/hobbies/tree/master/c_programing_exercises/cflags_security
>
> As a result of the microbenchmark, the performance impact is not
> relevant (less than 1%) using an Ubuntu x86 system (GCC 5) (more
> details on the HW and SW specification upon request).
>
> The areas of the code we are suggesting in the patches are:
>
> * stx-ha
> * stx-metal
> * stx-nfv
> * stx-fault
>
> We do take care that these flags do not break the following areas
> after being applied:
>
> * Build process of the image
> * Sanity test cases after the image is created
> (Ada can give more details on the sanity report of the image generated
> with these flags)
>
> If running the sanity tests is not enough to prove that a change in
> compiler flags does not affect functionality, please give us the right
> path to follow.
>
> As mentioned before, this is a preliminary report, and we will be
> following up with functional & performance test plans for the services
> as a next step.
>
> Hope this email helps to clarify some questions related to the flags
> and start the follow-up discussion.
>
> Thanks for the context Victor, it's very helpful to me.
>
> Hi Curtis, glad it helps, it was fun to do the research.
>
> One thing I want to mention is something the Kata Containers team was
> talking about at the Berlin OpenStack summit, which is when many small
> performance hits start to add up. They have to be careful to ensure
> they don't have a bunch of smallish looking changes that add up to a
> large performance hit over a longer period of time.
>
> You are right, it's a valid point that we need to take care of too.
>
> Overall I'm sure the StarlingX project would like to have some
> performance testing, if we don't already, though that can be
> challenging for an open source project. I had mentioned OPNFV's
> Functest and related projects on the TSC call, but now seeing which
> components are affected I'm not sure that would be directly helpful.
> I look forward to further discussions around this area.
>
> Thanks for letting me know, I will take a look at OPNFV's Functest and
> other projects before the next TSC of 2019.
>
> I will do my best to come up with a proposal for better performance
> testing.
>
> Thanks
>
> Victor Rodriguez
>
> Thanks,
> Curtis
>
> Regards
>
> Victor Rodriguez
>
> --
> Blog: serverascode.com

From juan.carlos.alonso at intel.com Thu Jan 17 23:46:13 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Thu, 17 Jan 2019 23:46:13 +0000
Subject: [Starlingx-discuss] Sanity Test - ISO 20190117
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8AFE9@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jan-17 (link)

Sanity Test is executed in a Virtual Environment

Status: GREEN

Simplex
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         25 TCs [PASS]
TOTAL: [ 30 TCs PASS ]

Duplex
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         26 TCs [PASS]
TOTAL: [ 31 TCs PASS ]

Multinode Controller Storage
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         26 TCs [PASS]
TOTAL: [ 31 TCs PASS ]

Multinode Dedicated Storage
Setup          04 TCs [PASS]
Provisioning   01 TCs [PASS]
Sanity         26 TCs [PASS]
TOTAL: [ 31 TCs PASS ]

------------------------------------------------------------------

Regards.
Juan Carlos Alonso
From sgw at linux.intel.com Thu Jan 17 23:50:37 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Thu, 17 Jan 2019 15:50:37 -0800
Subject: [Starlingx-discuss] [TSC] Updated Multi-OS meta-specification available for review
In-Reply-To: 
References: 
Message-ID: 

On 1/17/19 2:29 PM, Victor Rodriguez wrote:
> Hello Everyone at STX community
>
> After a quick sync with Ken, we agreed to work on the meta
> specification together; for that we will share the specifications in
> a public google doc. In this way, anyone can help to create the
> documents, and we avoid miscommunication or paragraphs that are not
> delivering the same message.
>

Since I am also actively involved with this effort and will be traveling this next week (visiting with Victor!), I would further suggest that we time box this activity to finish by next Friday so that this can get back into gerrit review.

> These are the links for the current 2 specifications we have:
>
> https://docs.google.com/document/d/188nuV98w90xG0WWRFKItoxcsCzH5xB95dy-MD1MDR5c/edit?usp=sharing
>

Regarding the first specification, please remember that this is a "Meta-Specification"; the idea here is to give a high level view of what is being planned. It is an attempt to identify additional specifications that will detail the work that has to occur. We are not trying to solve everything (or even define everything) in the specification; we are trying to lay down the groundwork and direction for a MultiOS solution for StarlingX, not just the build system.

Thanks for everyone's support and understanding.

Sau!

> https://docs.google.com/document/d/1jjdvjJDvu9_KcGiTal9BxYI__dWZ2a3MaYjkJEQSR24/edit?usp=sharing
>
> Feel free to comment and suggest as you prefer (many of the
> suggestions made on the Gerrit review are fixed, some others need
> better writing).
>
> The link has writing permissions; please be careful not to erase a
> section unless it was previously agreed.
>
> Thanks a lot for your feedback
>
> Victor Rodriguez
>
> On Wed, Jan 16, 2019 at 11:46 AM Victor Rodriguez wrote:
>>
>> On Tue, Jan 15, 2019 at 4:59 PM Saul Wold wrote:
>>>
>>> TSC Members:
>>>
>>> We have updated the Multi-OS overview to be more of a meta-specification
>>> giving the overview and direction for our approach to Multi-OS.
>>>
>>> Please take a look at: https://review.openstack.org/#/c/619801/13
>>>
>>> Victor and I will be working on getting the "Source ReOrg" specification
>>> completed tonight ahead of the F2F session tomorrow.
>>>
>>
>> Hi team, this is the specification of the flock's code reorg:
>>
>> https://review.openstack.org/#/c/631288/
>>
>> Thanks a lot for your feedback

From yong.hu at intel.com Fri Jan 18 01:48:01 2019
From: yong.hu at intel.com (Hu, Yong)
Date: Fri, 18 Jan 2019 01:48:01 +0000
Subject: [Starlingx-discuss] Command "build-iso" is running without stopping when build ISO
In-Reply-To: 
References: 
Message-ID: 

Likely your build system ran out of disk space. You can check the free space of the build workspace before re-running build-iso, for example with df (the exact paths below are my assumption — use your own workspace):
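$ df -h /localdisk         # the build volume in the usual build-container layout
$ df -h $MY_WORKSPACE      # assuming MY_WORKSPACE points at your loadbuild directory

If "Avail" is at or near zero, free up old build artifacts and re-run build-iso.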
The attached file "build-iso.log" is the captured console output. Does anyone help me with this issue? Thanks a lot. B.R. Wang -------------- next part -------------- An HTML attachment was scrubbed... URL: From himanshugoyal500 at gmail.com Fri Jan 18 05:15:58 2019 From: himanshugoyal500 at gmail.com (Himanshu Goyal) Date: Fri, 18 Jan 2019 10:45:58 +0530 Subject: [Starlingx-discuss] Deployment Option In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> Message-ID: Thanks Juan, Error is resolved after again boot the server. The reason for that my management interface was not up. I have some Questions: 1) As I'm deploying starlingX with only 1 controller & 1 compute Machine.while executing "sudo config_controller" it is taking the IP Addresses of controller 0, controller 1 & floating IP Address. As i have only one controller, So how can i avoid those type of configurations..? 2) Is there any specific installation guide for that(1 controller & 1 compute) type installation currently I'm following the guide: https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage. Please suggest me the need full changes i have to done for that. 3) my controller machine have one Disk of around 3725GB, But when I'm trying to configure cinder on same disk host-disk-list command shows me available disk is as 0 GB. please suggest me if there anyway to use that same disk with cinder. *snapshot of host-disk-list:* [image: image.png] Regards, Himanshu Goyal On Thu, Jan 17, 2019 at 8:41 PM Alonso, Juan Carlos < juan.carlos.alonso at intel.com> wrote: > Hi, > > > > On what step of config_controller it is failing? Can you provide the logs? > > Are you deploying manually or automatic? > > To apply the config_controller again I think you need to start over the > installation process. > > > > Regards. > > Juan Carlos Alonso > > > > *From:* Himanshu Goyal [mailto:himanshugoyal500 at gmail.com] > *Sent:* Thursday, January 17, 2019 7:22 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Deployment Option > > > > Hi, > > > > I'm trying to install StarlingX Controller 0 on Physical Machine, But it > is failing in config_controller at task* waiting for service activation* > ...... > > with Error: "*Configuration failed: Timeout waiting for service enable*" > > > > I'm using ISO available at the path: > http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/bootimage.iso > > > > Please suggest us the procedure to debug that & how i can re-run the > config_controller again. > > Many Thanks, > > Himanshu Goyal > > > > On Tue, Jan 1, 2019 at 3:34 PM Himanshu Goyal > wrote: > > Hi, > > > > Can we deploy starlingX with 2 Machines 1 controller & 1 Compute Node > (Both nodes on different physical Machines). > > > > Many Thanks, > > Himanshu Goyal > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image.png Type: image/png Size: 18669 bytes Desc: not available URL: From Volker.Hoesslin at swsn.de Fri Jan 18 09:30:15 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Fri, 18 Jan 2019 09:30:15 +0000 Subject: [Starlingx-discuss] cpu mode In-Reply-To: <6594B51DBE477C48AAE23675314E6C4664560E40@fmsmsx107.amr.corp.intel.com> References: <3k03ta01c8bua1mm@shdsegapp2>, , <6594B51DBE477C48AAE23675314E6C4664560E40@fmsmsx107.amr.corp.intel.com> Message-ID: hi, this is an "ps aux | grep qemu" output (for better reading, a little bit formated by me): /usr/libexec/qemu-kvm -name guest=instance-0000000a,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-13-instance-0000000a/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -m 8192 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/mnt/huge-2048kB/libvirt/qemu/13-instance-0000000a,share=yes,size=8589934592,host-nodes=0,policy=bind -numa node,nodeid=0,cpus=0-3,memdev=ram-node0 -uuid 3bce281c-db91-4b6f-aa48-832c27c2338f -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=16.0.2-1.tis.11,serial=f88367ca-9cf6-4678-bc69-69ed97297bb5,uuid=3bce281c-db91-4b6f-aa48-832c27c2338f,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-13-instance-0000000a/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot reboot-timeout=5000,strict=on -global i440FX-pcihost.pci-hole64-size=67108864K -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=rbd:cinder-volumes/volume-ef431cc7-2964-4335-8138-2d2c6642be6c:auth_supported=none:mon_host=192.168.204.3\:6789\;192.168.204.4\:6789\;192.168.204.112\:6789,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on,serial=ef431cc7-2964-4335-8138-2d2c6642be6c -chardev socket,id=charnet0,path=/var/run/openvswitch/vhu74250cb0-40,server -netdev vhost-user,chardev=charnet0,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ff:cc:ac,bus=pci.0,addr=0x3 -add-fd set=0,fd=81 -chardev pty,id=charserial0,logfile=/dev/fdset/0,logappend=on -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on => btw, how can this cmd-line handle the spaces in argument "-smbios" or just "ps aux" remove the ' " ' ? this is an running VM without any changes, created and started on top starlingx (2x controller, 2x compute, 3x storage). and yes, there is no "-cpu host" argument !? 
this ends in something like this in the guest view:

$ lscpu
Architecture:          x86_64
CPU op-mode(s):        32-bit, 64-bit
Byte Order:            Little Endian
CPU(s):                4
On-line CPU(s) list:   0-3
Thread(s) per core:    1
Core(s) per socket:    1
Socket(s):             4
NUMA node(s):          1
Vendor ID:             AuthenticAMD
CPU family:            6
Model:                 13
Model name:            QEMU Virtual CPU version 2.5+
Stepping:              3
CPU MHz:               2199.996
BogoMIPS:              4399.99
Hypervisor vendor:     KVM
Virtualization type:   full
L1d cache:             64K
L1i cache:             64K
L2 cache:              512K
L3 cache:              16384K
NUMA node0 CPU(s):     0-3
Flags:                 fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx fxsr_opt pdpe1gb lm nopl cpuid pni cx16 x2apic popcnt hypervisor lahf_lm 3dnowprefetch vmmcall

here are some outputs from my flavor metadata tries:

hw:cpu_model = SandyBridge
=> No valid host was found. There are not enough hosts available.
compute-0: (VCpuModelFilter) Host VCPU model EPYC-IBPB required SandyBridge,
compute-1: (VCpuModelFilter) Host VCPU model EPYC-IBPB required SandyBridge

of course, SandyBridge is an Intel CPU, but all of the allowed CPU architectures for the extra spec "hw:cpu_model" are Intel devices — and I need AMD! do you remember:

$ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b-f0c7-48ca-8c8a-f78481e62cd4
Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake-Client, Skylake-Server.

how can I edit this allowed CPU model list?! the nova.conf settings define no CPU config, so the hypervisor has to handle this:

# cat /etc/nova/nova.conf | grep cpu_mode
libvirt_cpu_mode = none
cpu_mode=none

so this is a deadlock for me, I do not know how to fix this :(

please help,
volker...

________________________________________
From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com]
Sent: Thursday, 17 January 2019 17:40
To: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io
Subject: RE: cpu mode

Hi Volker,

Could you please send me the QEMU command line used to launch your VM? This is in order to check the QEMU arguments/flags; possibly it requires the "-cpu host" argument.

Thanks.

Best regards.
Mario.

From: von Hoesslin, Volker [Volker.Hoesslin at swsn.de]
Sent: Thursday, January 17, 2019 8:23 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] cpu mode

is it impossible to set an EPYC (or any other AMD model) as guest CPU?

$ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b-f0c7-48ca-8c8a-f78481e62cd4
Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake-Client, Skylake-Server. (HTTP 400) (Request-ID: req-2fda19cc-8e0e-4be8-a8ea-b58fc00358ce)
Command Failed: One or more of the operations failed

but my compute node seems to support EPYC CPUs?

cat /usr/share/libvirt/cpu_map/x86_EPYC-IBRS.xml
....

some tips for me how to handle this?

volker...

From: von Hoesslin, Volker [Volker.Hoesslin at swsn.de]
Sent: Thursday, 17 January 2019 15:50
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] cpu mode

hi,
my setup has two compute nodes; every node has a dual AMD EPYC 7601 CPU config. how can I bring all the CPU features (AES, SSSE3, ...) to the guest VMs? I have tried some flavor metadata, but nothing really helps; the VMs get just a little subset of CPU features. some investigations into the KVM settings pointed me to the fact that my nova config has "cpu_model=none" !? how can I fix that and bring my AMD EPYC CPU into my nova config?!
some investigations to the kvm-settings hit me to the facts that my nova config has "cpu_model=none" !? how can i fix that and bring my AMD EPIC CPU to my nova-config?! here is the host /proc/cpuinfo processor : 127 vendor_id : AuthenticAMD cpu family : 23 model : 1 model name : AMD EPYC 7601 32-Core Processor stepping : 2 microcode : 0x8001227 cpu MHz : 1200.000 cache size : 512 KB physical id : 1 siblings : 64 core id : 31 cpu cores : 32 apicid : 127 initial apicid : 127 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid amd_dcm aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb hw_pstate retpoline_amd ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca bogomips : 4400.08 TLB size : 2560 4K pages clflush size : 64 cache_alignment : 64 address sizes : 48 bits physical, 48 bits virtual power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14] greez & thx, volker... From Volker.Hoesslin at swsn.de Fri Jan 18 12:57:58 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Fri, 18 Jan 2019 12:57:58 +0000 Subject: [Starlingx-discuss] cpu mode In-Reply-To: <3a32af01c5lgh84j@shdsegapp1> References: <3k03ta01c8bua1mm@shdsegapp2>, , <6594B51DBE477C48AAE23675314E6C4664560E40@fmsmsx107.amr.corp.intel.com>, <3a32af01c5lgh84j@shdsegapp1> Message-ID: i do not know if this helps, but here is a little more input: # virsh capabilities 00000000-0000-0000-0000-ac1f6b647302 x86_64 EPYC-IBPB AMD tcp rdma 134119260 2891223 59329 1 134213632 1239040 62602 1 none 0 dac 0 +0:+0 +0:+0 hvm 32 /usr/bin/qemu-system-x86_64 pc-i440fx-rhel7.4.0 pc pc-i440fx-rhel7.0.0 pc-i440fx-2.4 rhel6.3.0 rhel6.4.0 rhel6.0.0 pc-i440fx-2.8 pc-i440fx-2.7 pc-i440fx-2.10 pc pc-i440fx-rhel7.1.0 pc-i440fx-2.3 pc-i440fx-rhel7.2.0 pc-i440fx-2.2 pc-q35-rhel7.3.0 q35 rhel6.5.0 rhel6.6.0 rhel6.1.0 pc-i440fx-2.6 rhel6.2.0 pc-i440fx-2.5 pc-i440fx-rhel7.3.0 pc-i440fx-2.9 /usr/libexec/qemu-kvm pc-i440fx-rhel7.4.0 pc pc-i440fx-rhel7.0.0 pc-i440fx-2.4 rhel6.3.0 rhel6.4.0 rhel6.0.0 pc-i440fx-2.8 pc-i440fx-2.7 pc-i440fx-2.10 pc pc-i440fx-rhel7.1.0 pc-i440fx-2.3 pc-i440fx-rhel7.2.0 pc-i440fx-2.2 pc-q35-rhel7.3.0 q35 rhel6.5.0 rhel6.6.0 rhel6.1.0 pc-i440fx-2.6 rhel6.2.0 pc-i440fx-2.5 pc-i440fx-rhel7.3.0 pc-i440fx-2.9 hvm 64 /usr/bin/qemu-system-x86_64 pc-i440fx-rhel7.4.0 pc pc-i440fx-rhel7.0.0 pc-i440fx-2.4 rhel6.3.0 rhel6.4.0 rhel6.0.0 pc-i440fx-2.8 pc-i440fx-2.7 pc-i440fx-2.10 pc pc-i440fx-rhel7.1.0 pc-i440fx-2.3 pc-i440fx-rhel7.2.0 pc-i440fx-2.2 pc-q35-rhel7.3.0 q35 rhel6.5.0 rhel6.6.0 rhel6.1.0 pc-i440fx-2.6 rhel6.2.0 pc-i440fx-2.5 pc-i440fx-rhel7.3.0 pc-i440fx-2.9 /usr/libexec/qemu-kvm pc-i440fx-rhel7.4.0 pc pc-i440fx-rhel7.0.0 pc-i440fx-2.4 rhel6.3.0 rhel6.4.0 rhel6.0.0 pc-i440fx-2.8 pc-i440fx-2.7 pc-i440fx-2.10 pc pc-i440fx-rhel7.1.0 pc-i440fx-2.3 pc-i440fx-rhel7.2.0 pc-i440fx-2.2 pc-q35-rhel7.3.0 q35 rhel6.5.0 rhel6.6.0 rhel6.1.0 pc-i440fx-2.6 rhel6.2.0 pc-i440fx-2.5 
pc-i440fx-rhel7.3.0 pc-i440fx-2.9 ________________________________________ Von: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Gesendet: Freitag, 18. Januar 2019 10:30 An: Arevalo, Mario Alfredo C; starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] cpu mode hi, this is an "ps aux | grep qemu" output (for better reading, a little bit formated by me): /usr/libexec/qemu-kvm -name guest=instance-0000000a,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-13-instance-0000000a/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -m 8192 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/mnt/huge-2048kB/libvirt/qemu/13-instance-0000000a,share=yes,size=8589934592,host-nodes=0,policy=bind -numa node,nodeid=0,cpus=0-3,memdev=ram-node0 -uuid 3bce281c-db91-4b6f-aa48-832c27c2338f -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=16.0.2-1.tis.11,serial=f88367ca-9cf6-4678-bc69-69ed97297bb5,uuid=3bce281c-db91-4b6f-aa48-832c27c2338f,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-13-instance-0000000a/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot reboot-timeout=5000,strict=on -global i440FX-pcihost.pci-hole64-size=67108864K -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=rbd:cinder-volumes/volume-ef431cc7-2964-4335-8138-2d2c6642be6c:auth_supported=none:mon_host=192.168.204.3\:6789\;192.168.204.4\:6789\;192.168.204.112\:6789,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on,serial=ef431cc7-2964-4335-8138-2d2c6642be6c -chardev socket,id=charnet0,path=/var/run/openvswitch/vhu74250cb0-40,server -netdev vhost-user,chardev=charnet0,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ff:cc:ac,bus=pci.0,addr=0x3 -add-fd set=0,fd=81 -chardev pty,id=charserial0,logfile=/dev/fdset/0,logappend=on -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on => btw, how can this cmd-line handle the spaces in argument "-smbios" or just "ps aux" remove the ' " ' ? this is an running VM without any changes, created and started on top starlingx (2x controller, 2x compute, 3x storage). and yes, there is no "-cpu host" argument !? 
this ends in something like this in guest view: $ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 4 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 6 Model: 13 Model name: QEMU Virtual CPU version 2.5+ Stepping: 3 CPU MHz: 2199.996 BogoMIPS: 4399.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 64K L1i cache: 64K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-3 Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx fxsr_opt pdpe1gb lm nopl cpuid pni cx16 x2apic popcnt hypervisor lahf_lm 3dnowprefetch vmmcall here are some outputs from my flavor metadata trys: hw:cpu_model = SandyBridge => No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Host VCPU model EPYC-IBPB required SandyBridge, compute-1: (VCpuModelFilter) Host VCPU model EPYC-IBPB required SandyBridge of course, SandyBridge is an intel CPU, but all of allowed CPU architectures for meta extra specs "hw:cpu_model" are Intel devices, so i need AMD! do you remember: $ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b-f0c7-48ca-8c8a-f78481e62cd4 Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake-Client, Skylake-Server. how can i edit this allowed CPU-Model-List?! the nova.conf settings define no CPU config, so the hyperviser has to handle this: # cat /etc/nova/nova.conf | grep cpu_mode libvirt_cpu_mode = none cpu_mode=none so this is an deadlock for me, i do not know how to fix this :( plz help, volker... ________________________________________ Von: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Gesendet: Donnerstag, 17. Januar 2019 17:40 An: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io Betreff: RE: cpu mode Hi Volker, Could you please send me the QEMU command line used to launch your VM, this is in order to check the QEMU arguments/flags, possibly it requires "-cpu host" argument. Thanks. Best regards. Mario. From: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Sent: Thursday, January 17, 2019 8:23 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] cpu mode it is impossible to set a EPIC (or any other AMD) as guest CPU? $ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b-f0c7-48ca-8c8a-f78481e62cd4 Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake-Client, Skylake-Server. (HTTP 400) (Request-ID: req-2fda19cc-8e0e-4be8-a8ea-b58fc00358ce) Command Failed: One or more of the operations failed but my compute node seems to support EPIC CPUs? cat /usr/share/libvirt/cpu_map/x86_EPYC-IBRS.xml .... some tips for me how to handle this? volker... Von: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Gesendet: Donnerstag, 17. Januar 2019 15:50 An: starlingx-discuss at lists.starlingx.io Betreff: [Starlingx-discuss] cpu mode hi, my setup has two computes nodes, every node has a dual AMD EPYC 7601 CPU config. how can i bring all the CPU features (AES, SSSE3, ...) to the guest VMs. i have tryed with some flavor-metadata but nothing realy helps, the VMs getting just a little subset of cpu-features. 
some investigations to the kvm-settings hit me to the facts that my nova config has "cpu_model=none" !? how can i fix that and bring my AMD EPIC CPU to my nova-config?! here is the host /proc/cpuinfo processor : 127 vendor_id : AuthenticAMD cpu family : 23 model : 1 model name : AMD EPYC 7601 32-Core Processor stepping : 2 microcode : 0x8001227 cpu MHz : 1200.000 cache size : 512 KB physical id : 1 siblings : 64 core id : 31 cpu cores : 32 apicid : 127 initial apicid : 127 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid amd_dcm aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb hw_pstate retpoline_amd ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca bogomips : 4400.08 TLB size : 2560 4K pages clflush size : 64 cache_alignment : 64 address sizes : 48 bits physical, 48 bits virtual power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14] greez & thx, volker... _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ken.Young at windriver.com Fri Jan 18 15:34:24 2019 From: Ken.Young at windriver.com (Young, Ken) Date: Fri, 18 Jan 2019 15:34:24 +0000 Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security In-Reply-To: References: Message-ID: <7DF6804B-15E9-4998-B132-DB38969CFFD2@windriver.com> See inline. On 2019-01-17, 5:34 PM, "Victor Rodriguez" wrote: On Wed, Jan 2, 2019 at 10:35 AM Young, Ken wrote: > > Victor, > > > > Security work is never completed. There is always a long list of inventive new vulnerabilities and a laundry list of hardening work to be completed. The vulnerability work, considering the severity, is generally urgent. Hardening work is not urgent but important. In this case, we are dealing with a hardening initiative that focuses on a small area of the code. > > > > The challenge is that these small change proposed have larger implications. As was pointed out on the gerrit reviews, performance and / or functional testing is required. Hi Ken Just to follow the idea of this mail after hollliday break, you mention that: My concern is that we affect the timing / behaviour of stx-ha and stx-metal such that they do not work together in some scenarios. This will need to be tested and is certainly larger than a sanity. Could you please help to describe n human words, ( I can do the script ) how a good test to probe this would look like? If you provide me with a basic description of the security test I could help writing the first draft of a code test that help us to prove if the flags break the functionality Victor, At a high level, we need to regress the behaviour of stx-ha and stx-metal to ensure that there is functional issues introduced by the change to the compiler. 
As well, we need to look at the system behaviour of ha and metal to ensure no changes have been introduced which affect has behaviour: - SWACT detection and time - Multinode failure avoidance - Heartbeat loss - lock / unlock - etc I believe that Ada has the test for ha and metal. Please review. Regards, Ken Y thanks Victor R > > > > Also, I am wondering if there is a way to phase the effort. For example, is there a way to break up the flag changes such that the warnings are separated from the flags which change the compiled code? That way, we are not trying to jam everything through at once. > > > > Hope this helps. Happy to discuss when you return from Holliday. > > > > Regards, > > Ken Y > > > > From: Victor Rodriguez > Date: Friday, December 28, 2018 at 7:34 PM > To: Curtis > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for security > > > > > > On Fri, Dec 21, 2018, 07:08 Curtis > > > > > On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez wrote: > > Hi StarlingX community > > We can all agree that security is an important feature to be taken > into consideration in any SW project. In the aim of improving the > security of the StarlingX project, we have been taking the task to > propose the use of some compiler flags that prevent and detect some > security holes, especially by buffer overflow that could lead into ROP > attacks. > > The list of flags that we are proposing are : > > Stack-based Buffer Overrun Detection: CFLAGS=”-fstack-protector-strong” > > Fortify source: CFLAGS="-O2 -D_FORTIFY_SOURCE=2" > Format string vulnerabilities: CFLAGS="-Wformat -Wformat-security" > Stack execution protection: LDFLAGS="-z noexecstack" > Data relocation and protection (RELRO): LDLFAGS="-z relro -z now" > > > These are being analyzed in the following Gerrit reviews (thanks a lot > for all the good feedback) > > https://review.openstack.org/#/c/623608/ > https://review.openstack.org/#/c/623603/ > https://review.openstack.org/#/c/623601/ > https://review.openstack.org/#/c/623599/ > > As requested in the Gerrit reviews, there is a proper need to first > understand what these compiler flags do and what is the impact they > have at the functional and performance area of the project. This is a > preliminary report, we will be following up with a test plan for > functional & performance test plans for the services as a next step. > This report includes: > > * Detailed description of what the compiler flag does > * Code example that shows how does it work to prevent attacks > * If there is a change in the binary, we create a microbenchmark that > shows us how the flag impact the performance > > https://github.com/VictorRodriguez/hobbies/tree/master/c_programing_exercises/cflags_security > > As a result of the microbenchmark, the performance impact is not > relevant ( less than 1% ) using an Ubuntu x86 system ( GCC 5 ) (more > details on the HW and SW specification upon requests) > > The areas of the code we are suggesting on the patches are: > > * stx-ha > * stx-metal > * stx-nfv > * stx-fault > > We do take care that these flags are not breaking the following areas > after being applied. > > * Build process of the image > * Sanity test cases after the image is created > (Ada can give more details on the sanity report of the image generated > with these flags) > > If running the sanity tests are not enough to prove that a change in > compiler flags do not affect functionality, please gave us the right > path to follow. 
> > As mentioned before, this is a preliminary report, and that we will be > following up with a test plan for functional & performance test plans > for the services as a next step. > > Hope this email helps to clarify some questions related to the flags > and start the follow-up discussion. > > > > Thanks for the context Victor, it's very helpful to me. > > > > Hi Curtis, glad it helps, it was fun to do the research > > > > One thing I want to mention is something the Kata Containers team was talking about at the Berlin OpenStack summit, which is when many small performance hits start to add up. They have to be careful to ensure they don't have a bunch of smallish looking changes that add up to a large performance hit over a longer period of time. > > > > You are right, it's a valid point that we need to take care too > > > > Overall I'm sure the StarlingX project would like to have some performance testing, if we don't already, though that can be challenging for an open source project. I had mentioned OPNFV's Functest and related projects on the TSC call, but now seeing which components are affected I'm not sure that would be directly helpful. I look forward to further discussions around this area. > > > > Thanks for let me know that, I will take a look at OPNFV's functest and other projects before the next TSC of 2019 > > > > I will do my best to came up with a proposal for a better performance testing. > > > > Thanks > > > > Victor Rodriguez > > > > Thanks, > > Curtis > > > > > Regards > > Victor Rodriguez > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > -- > > Blog: serverascode.com From vm.rod25 at gmail.com Thu Jan 17 22:29:09 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Thu, 17 Jan 2019 16:29:09 -0600 Subject: [Starlingx-discuss] [TSC] Updated Multi-OS meta-specification available for review In-Reply-To: References: Message-ID: Hello Everyone at STX community After quick sync with Ken, we agree to work on the meta specification together, for that we will share the specifications in a google doc public In this way, anyone can help to create the documents and avoid miscommunications or avoid having paragraphs that are not delivering the same message. These are the links for the current 2 specifications we have: https://docs.google.com/document/d/188nuV98w90xG0WWRFKItoxcsCzH5xB95dy-MD1MDR5c/edit?usp=sharing https://docs.google.com/document/d/1jjdvjJDvu9_KcGiTal9BxYI__dWZ2a3MaYjkJEQSR24/edit?usp=sharing Feel free to comment and suggest as you prefer, ( many of the suggestions made on the Gerrit review are fixed, some others need better writing ) The link has writing permissions, please be careful to do not erase a section until is previously agreed Thanks a lot for your feedback Victor Rodriguez On Wed, Jan 16, 2019 at 11:46 AM Victor Rodriguez wrote: > > On Tue, Jan 15, 2019 at 4:59 PM Saul Wold wrote: > > > > TSC Members: > > > > We have updated the Multi-OS overview to be more of a meta-specification > > giving the overview and direction for our approach to Multi-OS. > > > > Please take a look at: https://review.openstack.org/#/c/619801/13 > > > > Victor and I will be working on getting the "Source ReOrg" specification > > completed tonight ahead of the F2F session tomorrow. 
> > > > Hi team, this is the specification of the flock's code reorg: > > https://review.openstack.org/#/c/631288/ > > Thanks a lot for your feedback > > > > Sau! > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Thu Jan 17 23:46:37 2019 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 17 Jan 2019 15:46:37 -0800 Subject: [Starlingx-discuss] [TSC] Updated Multi-OS meta-specification available for review In-Reply-To: References: Message-ID: <818853a0-39fe-2045-4bc4-df16bb24e908@linux.intel.com> On 1/17/19 2:29 PM, Victor Rodriguez wrote: > Hello Everyone at STX community > > After quick sync with Ken, we agree to work on the meta specification > together, for that we will share the specifications in a google doc > public > In this way, anyone can help to create the documents and avoid > miscommunications or avoid having paragraphs that are not delivering > the same message. > Since I am also actively involved with this effort and will be traveling this next week (visiting with Victor!), I would further suggest that we time box this activity to the finish next Friday so that this can get back into gerrit review. > These are the links for the current 2 specifications we have: > > https://docs.google.com/document/d/188nuV98w90xG0WWRFKItoxcsCzH5xB95dy-MD1MDR5c/edit?usp=sharing > Regarding the first specification, please remember that this is a "Meta-Specification", the idea here is to give a high level view of what is being planned. It is an attempt to identify additional specifications that will detail the work that has to occur. We are not trying to solve everthing (or even define everything) in the specification, we are trying to lay down the groundwork and direction for a MultiOS solution for StarlingX, not just the Build system. Thanks for everyone's support and understanding. Sau! > https://docs.google.com/document/d/1jjdvjJDvu9_KcGiTal9BxYI__dWZ2a3MaYjkJEQSR24/edit?usp=sharing > > Feel free to comment and suggest as you prefer, ( many of the > suggestions made on the Gerrit review are fixed, some others need > better writing ) > > The link has writing permissions, please be careful to do not erase a > section until is previously agreed > > Thanks a lot for your feedback > > Victor Rodriguez > > On Wed, Jan 16, 2019 at 11:46 AM Victor Rodriguez wrote: >> >> On Tue, Jan 15, 2019 at 4:59 PM Saul Wold wrote: >>> >>> TSC Members: >>> >>> We have updated the Multi-OS overview to be more of a meta-specification >>> giving the overview and direction for our approach to Multi-OS. >>> >>> Please take a look at: https://review.openstack.org/#/c/619801/13 >>> >>> Victor and I will be working on getting the "Source ReOrg" specification >>> completed tonight ahead of the F2F session tomorrow. >>> >> >> Hi team, this is the specification of the flock's code reorg: >> >> https://review.openstack.org/#/c/631288/ >> >> Thanks a lot for your feedback >> >> >>> Sau! 
>>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Volker.Hoesslin at swsn.de Fri Jan 18 15:44:40 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Fri, 18 Jan 2019 15:44:40 +0000 Subject: [Starlingx-discuss] cpu mode In-Reply-To: <3a32af01c5lgh877@shdsegapp1> References: <3k03ta01c8bua1mm@shdsegapp2>, , <6594B51DBE477C48AAE23675314E6C4664560E40@fmsmsx107.amr.corp.intel.com>, <3a32af01c5lgh84j@shdsegapp1>,<3a32af01c5lgh877@shdsegapp1> Message-ID: i know, i know, i shouldnt spam this list ;) but after some tests i have tried to change the nova-conf (/etc/nova/nova.conf) and edit some values: [default] libvirt_cpu_mode = "custom" // none -> costom libvirt_cpu_model = "EPYC-IBRS" // insert this line [libvirt] cpu_mode = "custom" // none -> custom cpu_model = "EPYC-IBRS" // insert this line but after reboot my compute-node, some auto-config logic is reconfigure this config-file and "[libvirt]" part the option "cpu_mode" is back to "none" and "cpu_model" is deleted completly :( is there any way to prevent or configure this auto-config? volker... ________________________________________ Von: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Gesendet: Freitag, 18. Januar 2019 13:57 An: Arevalo, Mario Alfredo C; starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] cpu mode i do not know if this helps, but here is a little more input: # virsh capabilities 00000000-0000-0000-0000-ac1f6b647302 x86_64 EPYC-IBPB AMD tcp rdma 134119260 2891223 59329 1 134213632 1239040 62602 1 none 0 dac 0 +0:+0 +0:+0 hvm 32 /usr/bin/qemu-system-x86_64 pc-i440fx-rhel7.4.0 pc pc-i440fx-rhel7.0.0 pc-i440fx-2.4 rhel6.3.0 rhel6.4.0 rhel6.0.0 pc-i440fx-2.8 pc-i440fx-2.7 pc-i440fx-2.10 pc pc-i440fx-rhel7.1.0 pc-i440fx-2.3 pc-i440fx-rhel7.2.0 pc-i440fx-2.2 pc-q35-rhel7.3.0 q35 rhel6.5.0 rhel6.6.0 rhel6.1.0 pc-i440fx-2.6 rhel6.2.0 pc-i440fx-2.5 pc-i440fx-rhel7.3.0 pc-i440fx-2.9 /usr/libexec/qemu-kvm pc-i440fx-rhel7.4.0 pc pc-i440fx-rhel7.0.0 pc-i440fx-2.4 rhel6.3.0 rhel6.4.0 rhel6.0.0 pc-i440fx-2.8 pc-i440fx-2.7 pc-i440fx-2.10 pc pc-i440fx-rhel7.1.0 pc-i440fx-2.3 pc-i440fx-rhel7.2.0 pc-i440fx-2.2 pc-q35-rhel7.3.0 q35 rhel6.5.0 rhel6.6.0 rhel6.1.0 pc-i440fx-2.6 rhel6.2.0 pc-i440fx-2.5 pc-i440fx-rhel7.3.0 pc-i440fx-2.9 hvm 64 /usr/bin/qemu-system-x86_64 pc-i440fx-rhel7.4.0 pc pc-i440fx-rhel7.0.0 pc-i440fx-2.4 rhel6.3.0 rhel6.4.0 rhel6.0.0 pc-i440fx-2.8 pc-i440fx-2.7 pc-i440fx-2.10 pc pc-i440fx-rhel7.1.0 pc-i440fx-2.3 pc-i440fx-rhel7.2.0 pc-i440fx-2.2 pc-q35-rhel7.3.0 q35 rhel6.5.0 rhel6.6.0 rhel6.1.0 pc-i440fx-2.6 rhel6.2.0 pc-i440fx-2.5 pc-i440fx-rhel7.3.0 pc-i440fx-2.9 /usr/libexec/qemu-kvm pc-i440fx-rhel7.4.0 pc pc-i440fx-rhel7.0.0 pc-i440fx-2.4 rhel6.3.0 rhel6.4.0 rhel6.0.0 pc-i440fx-2.8 pc-i440fx-2.7 pc-i440fx-2.10 pc pc-i440fx-rhel7.1.0 pc-i440fx-2.3 pc-i440fx-rhel7.2.0 pc-i440fx-2.2 pc-q35-rhel7.3.0 q35 rhel6.5.0 rhel6.6.0 rhel6.1.0 pc-i440fx-2.6 rhel6.2.0 pc-i440fx-2.5 pc-i440fx-rhel7.3.0 pc-i440fx-2.9 ________________________________________ Von: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Gesendet: Freitag, 18. 
Januar 2019 10:30 An: Arevalo, Mario Alfredo C; starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] cpu mode hi, this is an "ps aux | grep qemu" output (for better reading, a little bit formated by me): /usr/libexec/qemu-kvm -name guest=instance-0000000a,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-13-instance-0000000a/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -m 8192 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/mnt/huge-2048kB/libvirt/qemu/13-instance-0000000a,share=yes,size=8589934592,host-nodes=0,policy=bind -numa node,nodeid=0,cpus=0-3,memdev=ram-node0 -uuid 3bce281c-db91-4b6f-aa48-832c27c2338f -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=16.0.2-1.tis.11,serial=f88367ca-9cf6-4678-bc69-69ed97297bb5,uuid=3bce281c-db91-4b6f-aa48-832c27c2338f,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-13-instance-0000000a/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot reboot-timeout=5000,strict=on -global i440FX-pcihost.pci-hole64-size=67108864K -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=rbd:cinder-volumes/volume-ef431cc7-2964-4335-8138-2d2c6642be6c:auth_supported=none:mon_host=192.168.204.3\:6789\;192.168.204.4\:6789\;192.168.204.112\:6789,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on,serial=ef431cc7-2964-4335-8138-2d2c6642be6c -chardev socket,id=charnet0,path=/var/run/openvswitch/vhu74250cb0-40,server -netdev vhost-user,chardev=charnet0,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ff:cc:ac,bus=pci.0,addr=0x3 -add-fd set=0,fd=81 -chardev pty,id=charserial0,logfile=/dev/fdset/0,logappend=on -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on => btw, how can this cmd-line handle the spaces in argument "-smbios" or just "ps aux" remove the ' " ' ? this is an running VM without any changes, created and started on top starlingx (2x controller, 2x compute, 3x storage). and yes, there is no "-cpu host" argument !? this ends in something like this in guest view: $ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 4 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 6 Model: 13 Model name: QEMU Virtual CPU version 2.5+ Stepping: 3 CPU MHz: 2199.996 BogoMIPS: 4399.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 64K L1i cache: 64K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-3 Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx fxsr_opt pdpe1gb lm nopl cpuid pni cx16 x2apic popcnt hypervisor lahf_lm 3dnowprefetch vmmcall here are some outputs from my flavor metadata trys: hw:cpu_model = SandyBridge => No valid host was found. There are not enough hosts available. 
compute-0: (VCpuModelFilter) Host VCPU model EPYC-IBPB required SandyBridge, compute-1: (VCpuModelFilter) Host VCPU model EPYC-IBPB required SandyBridge of course, SandyBridge is an intel CPU, but all of allowed CPU architectures for meta extra specs "hw:cpu_model" are Intel devices, so i need AMD! do you remember: $ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b-f0c7-48ca-8c8a-f78481e62cd4 Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake-Client, Skylake-Server. how can i edit this allowed CPU-Model-List?! the nova.conf settings define no CPU config, so the hyperviser has to handle this: # cat /etc/nova/nova.conf | grep cpu_mode libvirt_cpu_mode = none cpu_mode=none so this is an deadlock for me, i do not know how to fix this :( plz help, volker... ________________________________________ Von: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Gesendet: Donnerstag, 17. Januar 2019 17:40 An: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io Betreff: RE: cpu mode Hi Volker, Could you please send me the QEMU command line used to launch your VM, this is in order to check the QEMU arguments/flags, possibly it requires "-cpu host" argument. Thanks. Best regards. Mario. From: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Sent: Thursday, January 17, 2019 8:23 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] cpu mode it is impossible to set a EPIC (or any other AMD) as guest CPU? $ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b-f0c7-48ca-8c8a-f78481e62cd4 Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake-Client, Skylake-Server. (HTTP 400) (Request-ID: req-2fda19cc-8e0e-4be8-a8ea-b58fc00358ce) Command Failed: One or more of the operations failed but my compute node seems to support EPIC CPUs? cat /usr/share/libvirt/cpu_map/x86_EPYC-IBRS.xml .... some tips for me how to handle this? volker... Von: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Gesendet: Donnerstag, 17. Januar 2019 15:50 An: starlingx-discuss at lists.starlingx.io Betreff: [Starlingx-discuss] cpu mode hi, my setup has two computes nodes, every node has a dual AMD EPYC 7601 CPU config. how can i bring all the CPU features (AES, SSSE3, ...) to the guest VMs. i have tryed with some flavor-metadata but nothing realy helps, the VMs getting just a little subset of cpu-features. some investigations to the kvm-settings hit me to the facts that my nova config has "cpu_model=none" !? how can i fix that and bring my AMD EPIC CPU to my nova-config?! 
here is the host /proc/cpuinfo processor : 127 vendor_id : AuthenticAMD cpu family : 23 model : 1 model name : AMD EPYC 7601 32-Core Processor stepping : 2 microcode : 0x8001227 cpu MHz : 1200.000 cache size : 512 KB physical id : 1 siblings : 64 core id : 31 cpu cores : 32 apicid : 127 initial apicid : 127 fpu : yes fpu_exception : yes cpuid level : 13 wp : yes flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid amd_dcm aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb hw_pstate retpoline_amd ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca bogomips : 4400.08 TLB size : 2560 4K pages clflush size : 64 cache_alignment : 64 address sizes : 48 bits physical, 48 bits virtual power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14] greez & thx, volker... _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From mario.alfredo.c.arevalo at intel.com Fri Jan 18 15:47:23 2019 From: mario.alfredo.c.arevalo at intel.com (Arevalo, Mario Alfredo C) Date: Fri, 18 Jan 2019 15:47:23 +0000 Subject: [Starlingx-discuss] cpu mode In-Reply-To: References: <3k03ta01c8bua1mm@shdsegapp2>, , <6594B51DBE477C48AAE23675314E6C4664560E40@fmsmsx107.amr.corp.intel.com>, Message-ID: <6594B51DBE477C48AAE23675314E6C4664561651@fmsmsx107.amr.corp.intel.com> Hi Volker, Thanks, yeah it does not include cpu_host flag, I was searching about that and I found this: https://wiki.openstack.org/wiki/LibvirtXMLCPUModel It seems that you need to change libvirt_cpu_mode option in your /etc/nova/nova.conf file by: libvirt_cpu_mode = host-passthrough Best regards. Mario. 
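PS: spelled out, based on that wiki page (this is a sketch — I have not verified it on a StarlingX deployment), the relevant section of /etc/nova/nova.conf on the compute node would look like:

[libvirt]
cpu_mode = host-passthrough

(or the legacy spelling "libvirt_cpu_mode = host-passthrough" under [DEFAULT]). With host-passthrough, libvirt should emit "-cpu host" on the QEMU command line instead of omitting the -cpu argument, so the guest would see the host EPYC features directly.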
________________________________________ From: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Sent: Friday, January 18, 2019 1:30 AM To: Arevalo, Mario Alfredo C; starlingx-discuss at lists.starlingx.io Subject: AW: cpu mode hi, this is an "ps aux | grep qemu" output (for better reading, a little bit formated by me): /usr/libexec/qemu-kvm -name guest=instance-0000000a,debug-threads=on -S -object secret,id=masterKey0,format=raw,file=/var/lib/libvirt/qemu/domain-13-instance-0000000a/master-key.aes -machine pc-i440fx-rhel7.4.0,accel=kvm,usb=off,dump-guest-core=off -m 8192 -realtime mlock=off -smp 4,sockets=4,cores=1,threads=1 -object memory-backend-file,id=ram-node0,prealloc=yes,mem-path=/mnt/huge-2048kB/libvirt/qemu/13-instance-0000000a,share=yes,size=8589934592,host-nodes=0,policy=bind -numa node,nodeid=0,cpus=0-3,memdev=ram-node0 -uuid 3bce281c-db91-4b6f-aa48-832c27c2338f -smbios type=1,manufacturer=Fedora Project,product=OpenStack Nova,version=16.0.2-1.tis.11,serial=f88367ca-9cf6-4678-bc69-69ed97297bb5,uuid=3bce281c-db91-4b6f-aa48-832c27c2338f,family=Virtual Machine -no-user-config -nodefaults -chardev socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-13-instance-0000000a/monitor.sock,server,nowait -mon chardev=charmonitor,id=monitor,mode=control -rtc base=utc,driftfix=slew -global kvm-pit.lost_tick_policy=delay -no-hpet -no-shutdown -boot reboot-timeout=5000,strict=on -global i440FX-pcihost.pci-hole64-size=67108864K -device piix3-usb-uhci,id=usb,bus=pci.0,addr=0x1.0x2 -drive file=rbd:cinder-volumes/volume-ef431cc7-2964-4335-8138-2d2c6642be6c:auth_supported=none:mon_host=192.168.204.3\:6789\;192.168.204.4\:6789\;192.168.204.112\:6789,format=raw,if=none,id=drive-virtio-disk0,cache=none,discard=unmap -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x4,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=1,write-cache=on,serial=ef431cc7-2964-4335-8138-2d2c6642be6c -chardev socket,id=charnet0,path=/var/run/openvswitch/vhu74250cb0-40,server -netdev vhost-user,chardev=charnet0,id=hostnet0 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=fa:16:3e:ff:cc:ac,bus=pci.0,addr=0x3 -add-fd set=0,fd=81 -chardev pty,id=charserial0,logfile=/dev/fdset/0,logappend=on -device isa-serial,chardev=charserial0,id=serial0 -device usb-tablet,id=input0,bus=usb.0,port=1 -vnc 0.0.0.0:1 -k en-us -device cirrus-vga,id=video0,bus=pci.0,addr=0x2 -incoming defer -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x5 -msg timestamp=on => btw, how can this cmd-line handle the spaces in argument "-smbios" or just "ps aux" remove the ' " ' ? this is an running VM without any changes, created and started on top starlingx (2x controller, 2x compute, 3x storage). and yes, there is no "-cpu host" argument !? this ends in something like this in guest view: $ lscpu Architecture: x86_64 CPU op-mode(s): 32-bit, 64-bit Byte Order: Little Endian CPU(s): 4 On-line CPU(s) list: 0-3 Thread(s) per core: 1 Core(s) per socket: 1 Socket(s): 4 NUMA node(s): 1 Vendor ID: AuthenticAMD CPU family: 6 Model: 13 Model name: QEMU Virtual CPU version 2.5+ Stepping: 3 CPU MHz: 2199.996 BogoMIPS: 4399.99 Hypervisor vendor: KVM Virtualization type: full L1d cache: 64K L1i cache: 64K L2 cache: 512K L3 cache: 16384K NUMA node0 CPU(s): 0-3 Flags: fpu de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 syscall nx fxsr_opt pdpe1gb lm nopl cpuid pni cx16 x2apic popcnt hypervisor lahf_lm 3dnowprefetch vmmcall here are some outputs from my flavor metadata trys: hw:cpu_model = SandyBridge => No valid host was found. 
There are not enough hosts available. compute-0: (VCpuModelFilter) Host VCPU model EPYC-IBPB required SandyBridge, compute-1: (VCpuModelFilter) Host VCPU model EPYC-IBPB required SandyBridge

of course, SandyBridge is an Intel CPU, but all of the allowed CPU architectures for the meta extra spec "hw:cpu_model" are Intel devices -- and I need AMD! do you remember:

$ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b-f0c7-48ca-8c8a-f78481e62cd4
Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake-Client, Skylake-Server.

how can I edit this allowed CPU model list?! the nova.conf settings define no CPU config, so the hypervisor has to handle this:

# cat /etc/nova/nova.conf | grep cpu_mode
libvirt_cpu_mode = none
cpu_mode=none

so this is a deadlock for me, I do not know how to fix this :( plz help, volker...

________________________________________ From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Sent: Thursday, 17 January 2019 17:40 To: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io Subject: RE: cpu mode

Hi Volker, Could you please send me the QEMU command line used to launch your VM? This is in order to check the QEMU arguments/flags; possibly it requires the "-cpu host" argument. Thanks. Best regards. Mario.

From: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Sent: Thursday, January 17, 2019 8:23 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] cpu mode

is it impossible to set an EPYC (or any other AMD) model as the guest CPU?

$ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b-f0c7-48ca-8c8a-f78481e62cd4
Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake-Client, Skylake-Server. (HTTP 400) (Request-ID: req-2fda19cc-8e0e-4be8-a8ea-b58fc00358ce)
Command Failed: One or more of the operations failed

but my compute node seems to support EPYC CPUs?

cat /usr/share/libvirt/cpu_map/x86_EPYC-IBRS.xml
[the XML contents of the file were stripped by the list archiver]

some tips for me on how to handle this? volker...

From: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Sent: Thursday, 17 January 2019 15:50 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] cpu mode

hi, my setup has two compute nodes, every node has a dual AMD EPYC 7601 CPU config. how can I bring all the CPU features (AES, SSSE3, ...) to the guest VMs? I have tried some flavor metadata but nothing really helps, the VMs get just a small subset of CPU features. some investigation of the KVM settings pointed me to the fact that my nova config has "cpu_model=none"!? how can I fix that and bring my AMD EPYC CPU into my nova config?!
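Of the accepted values in the error above, Passthrough is the only entry that is not a specific Intel model, and both Michel and Chris point to it further down this thread. A minimal sketch, reusing the flavor UUID from the failed command above (any flavor ID would do); note that later in the thread this still trips an Intel-only KVM check in the scheduler's VCpuModelFilter on AMD hosts, which Chris then suggests patching:

$ openstack flavor set --property hw:cpu_model=Passthrough 76609f7b-f0c7-48ca-8c8a-f78481e62cd4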
here is the host /proc/cpuinfo:

processor       : 127
vendor_id       : AuthenticAMD
cpu family      : 23
model           : 1
model name      : AMD EPYC 7601 32-Core Processor
stepping        : 2
microcode       : 0x8001227
cpu MHz         : 1200.000
cache size      : 512 KB
physical id     : 1
siblings        : 64
core id         : 31
cpu cores       : 32
apicid          : 127
initial apicid  : 127
fpu             : yes
fpu_exception   : yes
cpuid level     : 13
wp              : yes
flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc extd_apicid amd_dcm aperfmperf eagerfpu pni pclmulqdq monitor ssse3 fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 cpb hw_pstate retpoline_amd ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic v_vmsave_vmload vgif overflow_recov succor smca
bogomips        : 4400.08
TLB size        : 2560 4K pages
clflush size    : 64
cache_alignment : 64
address sizes   : 48 bits physical, 48 bits virtual
power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14]

greez & thx, volker...

From michel.thebeau at windriver.com Fri Jan 18 16:07:45 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Fri, 18 Jan 2019 11:07:45 -0500 Subject: [Starlingx-discuss] cpu mode In-Reply-To: References: <3k03ta01c8bua1mm@shdsegapp2> Message-ID: <1547827665.3455.168.camel@windriver.com>

Hi Volker, The output you listed here shows the values accepted by the openstack flavor set command. Did you try "Passthrough"? I'm asking internally about cpu model support and I'll respond if I hear anything interesting. M

On Thu, 2019-01-17 at 16:23 +0000, von Hoesslin, Volker wrote:
> is it impossible to set an EPYC (or any other AMD) model as the guest CPU?
>
> $ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b-f0c7-48ca-8c8a-f78481e62cd4
> Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake-Client, Skylake-Server. (HTTP 400) (Request-ID: req-2fda19cc-8e0e-4be8-a8ea-b58fc00358ce)
> Command Failed: One or more of the operations failed
>
> but my compute node seems to support EPYC CPUs?
>
> cat /usr/share/libvirt/cpu_map/x86_EPYC-IBRS.xml
> [the XML contents of the file were stripped by the list archiver]
>
> some tips for me on how to handle this? volker...
>
> From: von Hoesslin, Volker [Volker.Hoesslin at swsn.de]
> Sent: Thursday, 17 January 2019 15:50
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] cpu mode
>
> hi, my setup has two compute nodes, every node has a dual AMD EPYC 7601 CPU config. how can I bring all the CPU features (AES, SSSE3, ...) to the guest VMs? I have tried some flavor metadata but nothing really helps, the VMs get just a small subset of CPU features. some investigation of the KVM settings pointed me to the fact that my nova config has "cpu_model=none"!? how can I fix that and bring my AMD EPYC CPU into my nova config?!
> > here is the host /proc/cpuinfo > > processor       : 127 > vendor_id       : AuthenticAMD > cpu family      : 23 > model           : 1 > model name      : AMD EPYC 7601 32-Core Processor > stepping        : 2 > microcode       : 0x8001227 > cpu MHz         : 1200.000 > cache size      : 512 KB > physical id     : 1 > siblings        : 64 > core id         : 31 > cpu cores       : 32 > apicid          : 127 > initial apicid  : 127 > fpu             : yes > fpu_exception   : yes > cpuid level     : 13 > wp              : yes > flags           : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr > pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext > fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc > extd_apicid amd_dcm aperfmperf eagerfpu pni pclmulqdq monitor ssse3 > fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm > cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch > osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 > cpb hw_pstate retpoline_amd ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep > bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero > irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale > vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic > v_vmsave_vmload vgif overflow_recov succor smca > bogomips        : 4400.08 > TLB size        : 2560 4K pages > clflush size    : 64 > cache_alignment : 64 > address sizes   : 48 bits physical, 48 bits virtual > power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14] > > greez & thx, > volker... > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From chris.friesen at windriver.com Fri Jan 18 16:09:42 2019 From: chris.friesen at windriver.com (Chris Friesen) Date: Fri, 18 Jan 2019 09:09:42 -0700 Subject: [Starlingx-discuss] cpu mode In-Reply-To: References: <3k03ta01c8bua1mm@shdsegapp2> <6594B51DBE477C48AAE23675314E6C4664560E40@fmsmsx107.amr.corp.intel.com> <3a32af01c5lgh84j@shdsegapp1> <3a32af01c5lgh877@shdsegapp1> Message-ID: <275a1198-3de7-7087-ea96-362ea287cb6d@windriver.com> There is no way to prevent the rewrite of the nova.conf file as it is intended to be managed by the system. Chris On 1/18/2019 8:44 AM, von Hoesslin, Volker wrote: > i know, i know, i shouldnt spam this list ;) > but after some tests i have tried to change the nova-conf (/etc/nova/nova.conf) and edit some values: > > [default] > libvirt_cpu_mode = "custom" // none -> costom > libvirt_cpu_model = "EPYC-IBRS" // insert this line > > [libvirt] > cpu_mode = "custom" // none -> custom > cpu_model = "EPYC-IBRS" // insert this line > > but after reboot my compute-node, some auto-config logic is reconfigure this config-file and "[libvirt]" part the option "cpu_mode" is back to "none" and "cpu_model" is deleted completly :( > > is there any way to prevent or configure this auto-config? > > volker... > ________________________________________ From juan.carlos.alonso at intel.com Fri Jan 18 16:31:30 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Fri, 18 Jan 2019 16:31:30 +0000 Subject: [Starlingx-discuss] Deployment Option In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8B127@FMSMSX108.amr.corp.intel.com> Hi, Good that you could resolve config_controller issue. 
I am not sure if you can avoid/skip extra configuration for hosts you don’t want during config_controller. I think you need to follow the installation steps normally but provision only the host you want to use. Yesterday I asked to my team and it is possible to deploy 1 controller and 1 compute only, please refer to https://github.com/xe1gyq/starlingx/blob/master/ControllerStorage.md To configure cinder on controller-0 you need to have a partition with space available to be added to cinder-volume. By default should be two partitions on each host, one with available space. Regards. Juan Carlos Alonso From: Himanshu Goyal [mailto:himanshugoyal500 at gmail.com] Sent: Thursday, January 17, 2019 11:16 PM To: Alonso, Juan Carlos Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Deployment Option Thanks Juan, Error is resolved after again boot the server. The reason for that my management interface was not up. I have some Questions: 1) As I'm deploying starlingX with only 1 controller & 1 compute Machine.while executing "sudo config_controller" it is taking the IP Addresses of controller 0, controller 1 & floating IP Address. As i have only one controller, So how can i avoid those type of configurations..? 2) Is there any specific installation guide for that(1 controller & 1 compute) type installation currently I'm following the guide: https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage. Please suggest me the need full changes i have to done for that. 3) my controller machine have one Disk of around 3725GB, But when I'm trying to configure cinder on same disk host-disk-list command shows me available disk is as 0 GB. please suggest me if there anyway to use that same disk with cinder. snapshot of host-disk-list: [image.png] Regards, Himanshu Goyal On Thu, Jan 17, 2019 at 8:41 PM Alonso, Juan Carlos > wrote: Hi, On what step of config_controller it is failing? Can you provide the logs? Are you deploying manually or automatic? To apply the config_controller again I think you need to start over the installation process. Regards. Juan Carlos Alonso From: Himanshu Goyal [mailto:himanshugoyal500 at gmail.com] Sent: Thursday, January 17, 2019 7:22 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Deployment Option Hi, I'm trying to install StarlingX Controller 0 on Physical Machine, But it is failing in config_controller at task waiting for service activation...... with Error: "Configuration failed: Timeout waiting for service enable" I'm using ISO available at the path: http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/bootimage.iso Please suggest us the procedure to debug that & how i can re-run the config_controller again. Many Thanks, Himanshu Goyal On Tue, Jan 1, 2019 at 3:34 PM Himanshu Goyal > wrote: Hi, Can we deploy starlingX with 2 Machines 1 controller & 1 Compute Node (Both nodes on different physical Machines). Many Thanks, Himanshu Goyal -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 18669 bytes Desc: image001.png URL: From chris.friesen at windriver.com Fri Jan 18 16:51:58 2019 From: chris.friesen at windriver.com (Chris Friesen) Date: Fri, 18 Jan 2019 09:51:58 -0700 Subject: [Starlingx-discuss] Mailing list notes In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA4152BA@ALA-MBD.corp.ad.wrs.com> References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA4DA3A@ALA-MBD.corp.ad.wrs.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA4152BA@ALA-MBD.corp.ad.wrs.com> Message-ID: <717f3e31-d757-2215-8845-31367a341c45@windriver.com> Arguably trimming too much information is due to the people in question not showing good judgement about what bits to remove, rather than a problem with the technique in the first place. I personally would rather see something like *this* message where the not-directly-relevant bits have been dropped so that it's easier to see the flow of the discussion. (Although I'd argue that writing below the quoted bits makes it easier to catch up on a thread that you haven't been following--but it's also less convenient for people on mobile devices that have to scan through a long message.) While we're on the topic of emails, Bart/Don it looks like your email clients are not indenting or otherwise indicating which sections are quoted.  I suspect this is default Outlook behaviour (it can be changed), but it does make it a lot more difficult to break up a message and reply to it in sections without resorting to things like prefixing with initials or using different colours for different people.  On the other hand, multiple levels of indentation become problematic on mobile devices with limited screen size.  The choice of tool influences which writing style is easiest to use and most clear to the recipient. Chris On 1/17/2019 12:35 PM, Penney, Don wrote: > I'd agree with Bart. There have been many cases (not necessarily on this discussion list) where folks reply and truncate threads and remove all history and context from it, leaving an unintelligible message that can only make sense if you then go and read a dozen other truncated messages, in the correct order. I don't see how this leads to a "better record" > > -----Original Message----- > From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] > > Personally, I find it difficult to track conversations where the message has been "trimmed" (I am tempted to use other more descriptive words but I won't). I'd prefer to see the whole context of the conversation without having to track back through several messages in the history. > > > -----Original Message----- > From: Dean Troyer [mailto:dtroyer at gmail.com] > > The top reasons most messages get caught in moderation are: > > 1) message too big - we have a limit of 60K on message size. I am > aware that the majority of subscribers use a mail client that > encourages top-posting and that leads to not trimming messages, in > this environment that makes conversations over multiple messages hard > to follow. Trimming unnecessary quoting in messages makes for a > better record of conversations, and will ensure that you are not > delayed by the size limit. 
From ildiko.vancsa at gmail.com Fri Jan 18 18:30:50 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Fri, 18 Jan 2019 13:30:50 -0500 Subject: [Starlingx-discuss] Marketing planning call series starting in first week of February Message-ID: <207C5FE0-F858-4190-AA54-05F67AC9BD6A@gmail.com> Hi StarlingX Community, We had a planning meeting at the end of December where we were talking about community priorities and planning marketing and outreach activities. To pick up the discussion and keep the ball rolling we agreed to have the call as a series to make sure we are on the right track with both planning and execution. The next call is planned for __February 6 at 8am Pacific__. If you are planning to attend this meeting and are absolutely unable to make this slot please let me know. As finding a slot that works for everyone is nearly impossible please be flexible as much as possible. As a reminder here is the etherpad we used for the last call: https://etherpad.openstack.org/p/2019_StarlingX_Marketing_Plans Thanks and Best Regards, Ildikó From ildiko.vancsa at gmail.com Fri Jan 18 19:25:32 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Fri, 18 Jan 2019 14:25:32 -0500 Subject: [Starlingx-discuss] [tsc][election] Election process for StarlingX Message-ID: Hi, As the first TSC election is scheduled for Q2 this year it is time to think about the details of the process. The OpenStack Foundation will provide help with the first elections with the end goal to reach a 100% community-owned election process. I would like to draw your attention to the wiki page describing the OpenStack election process which we can use as a basis for StarlingX as well: https://governance.openstack.org/election/ Please be aware of differences, such as StarlingX not requiring a Foundation membership to participate in the elections which needs to be reflected on the tooling side as well: https://docs.starlingx.io/governance/reference/tsc/stx_charter.html#elections I created an etherpad in case the community would like to brainstorm before being ready to submit a patch in the governance repository to describe the process to use: https://etherpad.openstack.org/p/StarlingX_Election_Process Besides the etherpad we can use the TSC calls to discuss the topic further. Please let me know if you have any questions. Thanks and Best Regards, Ildikó From build.starlingx at gmail.com Fri Jan 18 19:35:43 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 18 Jan 2019 14:35:43 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_publish - Build # 63 - Failure! 
Message-ID: <1814552474.232.1547840146223.JavaMail.javamailuser@localhost> Project: STX_publish Build #: 63 Status: Failure Timestamp: 20190118T193431Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein//20190118T172840Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190118T172840Z OS: centos PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein//20190118T172840Z/logs TIMESTAMP: 20190118T172840Z PUBLISH_INPUTS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/inputs PUBLISH_LOGS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/logs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/outputs MY_REPO_ROOT: /localdisk/designer/jenkins/f-stein PUBLISH_DISTRO_BASE: /export/mirror/starlingx/f/stein/ From build.starlingx at gmail.com Fri Jan 18 19:35:49 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 18 Jan 2019 14:35:49 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 18 - Still Failing! In-Reply-To: <1687957814.225.1547802005462.JavaMail.javamailuser@localhost> References: <1687957814.225.1547802005462.JavaMail.javamailuser@localhost> Message-ID: <1648814556.235.1547840150110.JavaMail.javamailuser@localhost> Project: STX_build_stein_master Build #: 18 Status: Still Failing Timestamp: 20190118T172840Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein//20190118T172840Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS: true From build.starlingx at gmail.com Fri Jan 18 20:14:22 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 18 Jan 2019 15:14:22 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 26 - Failure! Message-ID: <1288957170.239.1547842463652.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 26 Status: Failure Timestamp: 20190118T201420Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein//20190118T172840Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: f/stein MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190118T172840Z OS: centos MUNGED_BRANCH: f-stein MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein//20190118T172840Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/f-stein DOCKER_BUILD_ID: jenkins-f-stein-20190118T172840Z-builder OPENSTACK_RELEASE: master TIMESTAMP: 20190118T172840Z OS_VERSION: 7.5.1804 PUBLISH_INPUTS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/outputs From scott.little at windriver.com Fri Jan 18 20:17:47 2019 From: scott.little at windriver.com (Scott Little) Date: Fri, 18 Jan 2019 15:17:47 -0500 Subject: [Starlingx-discuss] [build-report] STX_publish - Build # 63 - Failure! In-Reply-To: <1814552474.232.1547840146223.JavaMail.javamailuser@localhost> References: <1814552474.232.1547840146223.JavaMail.javamailuser@localhost> Message-ID: The problem resides with the CENGN build scripts, and has been corrected. 
Scott On 2019-01-18 2:35 p.m., build.starlingx at gmail.com wrote: > Project: STX_publish > Build #: 63 > Status: Failure > Timestamp: 20190118T193431Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein//20190118T172840Z/logs > -------------------------------------------------------------------------------- > Parameters > > MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190118T172840Z > OS: centos > PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein//20190118T172840Z/logs > TIMESTAMP: 20190118T172840Z > PUBLISH_INPUTS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/inputs > PUBLISH_LOGS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/logs > PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/outputs > MY_REPO_ROOT: /localdisk/designer/jenkins/f-stein > PUBLISH_DISTRO_BASE: /export/mirror/starlingx/f/stein/ > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Fri Jan 18 20:19:30 2019 From: scott.little at windriver.com (Scott Little) Date: Fri, 18 Jan 2019 15:19:30 -0500 Subject: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 18 - Still Failing! In-Reply-To: <1648814556.235.1547840150110.JavaMail.javamailuser@localhost> References: <1687957814.225.1547802005462.JavaMail.javamailuser@localhost> <1648814556.235.1547840150110.JavaMail.javamailuser@localhost> Message-ID: Root cause was the failure of job 'STX_publish - Build # 63'.   This has been corrected. Scott On 2019-01-18 2:35 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_stein_master > Build #: 18 > Status: Still Failing > Timestamp: 20190118T172840Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein//20190118T172840Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS: true > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Fri Jan 18 20:32:05 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Fri, 18 Jan 2019 15:32:05 -0500 Subject: [Starlingx-discuss] Denver Summit discount codes and CFP reminder Message-ID: Hi, In preparation to the upcoming Denver Open Infrastructure Summit, if you’ve had code merged into the StarlingX repos between July 11, 2018 through Jan 11, 2019, or serving in a leadership position (TSC, PL, TL) - then you should have received an email today with a discount code to register to attend the Summit and PTG. However, if you have already or are planning to (which I really hope you are) submit a talk to the Summit CFP, then please wait on registering until later in February when the accepted speaker notifications are sent out. If your talk is accepted, at that time you will receive a second discount code to apply with further instructions. Thanks to everyone who has contributed to StarlingX so far! We hope you will attend the Summit in Denver and make the most of the opportunity for open collaboration there! 
As a reminder to submit a talk to the CFP, the deadline is next Wednesday, Jan 23: https://www.openstack.org/summit/denver-2019/ Please let me know if you have any questions. Thanks and Best Regards, Ildikó From scott.little at windriver.com Fri Jan 18 20:32:22 2019 From: scott.little at windriver.com (Scott Little) Date: Fri, 18 Jan 2019 15:32:22 -0500 Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 26 - Failure! In-Reply-To: <1288957170.239.1547842463652.JavaMail.javamailuser@localhost> References: <1288957170.239.1547842463652.JavaMail.javamailuser@localhost> Message-ID: <3b243435-d728-c225-c682-990c9338605b@windriver.com> Ok jenkins, now your just being inconsistent with the scope of your environment variables. Fixing ..... I apologize for the noise as we debug the f/stein build.  We are trying to use some common sub-tasks and the odd bug is popping out. Scott On 2019-01-18 3:14 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_docker_images > Build #: 26 > Status: Failure > Timestamp: 20190118T201420Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein//20190118T172840Z/logs > -------------------------------------------------------------------------------- > Parameters > > BRANCH: f/stein > MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190118T172840Z > OS: centos > MUNGED_BRANCH: f-stein > MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root > PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein//20190118T172840Z/logs > PUBLISH_LOGS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/logs > MY_REPO_ROOT: /localdisk/designer/jenkins/f-stein > DOCKER_BUILD_ID: jenkins-f-stein-20190118T172840Z-builder > OPENSTACK_RELEASE: master > TIMESTAMP: 20190118T172840Z > OS_VERSION: 7.5.1804 > PUBLISH_INPUTS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/inputs > PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/f/stein//20190118T172840Z/outputs > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Jan 18 21:29:19 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 18 Jan 2019 21:29:19 +0000 Subject: [Starlingx-discuss] Bulk update of launchpad tags from stx.2019.03 to stx.2019.05 Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4A6B41@ALA-MBD.corp.ad.wrs.com> Hello all, The next StarlingX release will be called stx.2019.05 (as confirmed in the F2F Chandler meeting earlier this week). The launchpad tags have been updated from stx.2019.03 to stx.2019.05. If you are subscribed to a launchpad, you'll likely get an email with the change (sorry about the spam). Moving forward, any issues gating the next StarlingX release will be tagged with stx.2019.05. Regards, Ghada PS: The same update will be done for StoryBoard shortly. From Numan.Waheed at windriver.com Fri Jan 18 21:43:54 2019 From: Numan.Waheed at windriver.com (Waheed, Numan) Date: Fri, 18 Jan 2019 21:43:54 +0000 Subject: [Starlingx-discuss] [Test] Minutes of Meeting - Test Breakout Session - Jan 16, 2019 Message-ID: <3CAA827B7A79BA46B15B280EC82088FE48257539@ALA-MBD.corp.ad.wrs.com> Following are the notes from Test Breakout Session in Phoenix, AZ. 
Date: Jan 16, 2019
Participants: Numan, Ada, Victor, Frank

* Test Repository
  o The Test Repository will be set up before the 2019.05 release. It will hold all the content, but it may not all be automated.
  o Jose from Ada's team will be the lead in setting up the repository.
  o There will be at least 2 cores for each sub-repo.
  o The same person can be nominated as a core for more than one sub-repo.
  o The list of cores will be provided by Jan 31, 2019.
  o The structure of the Test Repository was discussed at length, and the repository will be set up according to the discussed structure.
* Test Dashboard
  o Christopher from Ada's team is working on setting up the test dashboard.
  o The first step is to estimate the space required to host the dashboard.
  o Once the estimated space requirement is available, we will request that CENGN host this dashboard.
  o Christopher will be maintaining the dashboard.
  o Linking the dashboard with an automation FW to upload test results will be the responsibility of the team using that FW.
* Publicly Available Test Environment
  o Dean indicated that there is a publicly accessible environment made available for other projects by Intel. The proposal is to add the required number of servers to the existing pool so that new permissions are not needed to access these servers.
  o Ada is going to follow up with Dean and Bruce about it.
  o There will be 4 configurations provided for public access (AIO-Simplex, AIO-Duplex, Multi-Node, Multi-Node with Dedicated Storage).
  o Setting up a scheduler and how permissions will be granted is not decided at this point. It will be decided once the servers are available.
* Sanity
  o Sanity test cases will change due to patch elimination and containerization.
  o Numan will provide the procedure for sanity test cases to the community.
  o Ada's team is going to automate these test cases and provide the automated tests to the community.
* Regression (Manual / Automated)
  o Some features will not be available after Patch Elimination.
  o Test case procedures will change for some features that are available in the Stein release.
  o Numan's and Ada's teams will collaborate to go through these test suites and find out which test cases are obsolete and which need changing.
  o These changes will be shared with the community and the test cases will be uploaded to the Test Repository.
* Virtual Environment Installer
  o Numan's and Ada's teams will work together to create a Virtual Environment Installer for the community.
* Updating Test Wiki
  o As the test cases are changing due to Patch Elimination and Containerization, the Test Wiki needs updating.
  o Numan and Ada will work together to update the wiki.
  o The wiki will be updated with information about the Automation Framework.
  o Once the Test Repository is available, a new section will be added to the Test Wiki regarding it.
  o After creation of the Test Dashboard, information regarding it will be added to the Test Wiki.
* Performance Testing
  o Victor will be leading the performance testing.
  o What needs to be collected, and the different categories of performance testing suitable for STX, were discussed at length at this meeting.
  o Victor will send his proposal to the community for feedback: what he plans to collect, the performance test cases, which tools will be used, and how the data will be used.

Numan.
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From quickconvey at gmail.com Mon Jan 21 11:59:08 2019 From: quickconvey at gmail.com (Quick Convey) Date: Mon, 21 Jan 2019 17:29:08 +0530 Subject: [Starlingx-discuss] Starlingx network requirement In-Reply-To: <1214156E-FB27-488C-8829-827D90D90126@windriver.com> References: <9F21D86C-B952-4D15-B25C-50CD2484FB23@windriver.com> <1214156E-FB27-488C-8829-827D90D90126@windriver.com> Message-ID: Thanks Matt peters, Why we don't need data interface in controllers ?. I think neutron dhcp and router namespaces will be in controller node, So packets should go to controller to route it as per old OpenStack release. I don't know how its working in Stralingx , I think as per new OpenStack release packets don't go to controller. Does Starlingx create dhcp and router namespaces in compute nodes ?. Thanks, On Thu, Jan 17, 2019 at 8:06 PM Peters, Matt wrote: > The data network provides the physical infrastructure for the OpenStack > guest tenant networks, which is used for both inter-compute and external > network access from Virtual Machines (VMs). The underlay configuration > (VxLAN attributes, VLAN ranges, etc) is managed by the cloud administrator > and is made available to the OpenStack tenants (applications). The > application requirements drive the topology of this network since the cloud > operator must be able to support whatever application is being deployed > within the VMs. > > > > The virtual switch, OVS in the case of StarlingX, implements the OpenStack > tenant networks and acts as the bridge between the physical infrastructure > and the virtual networks. > > > > For additional background information on OpenStack networking, please > refer to the following: > > https://docs.openstack.org/neutron/rocky/admin/intro.html > > > > Hope that helps. > > > > Regards Matt > > > > *From: *Quick Convey > *Date: *Thursday, January 17, 2019 at 2:55 AM > *To: *"Peters, Matt" > *Cc: *"starlingx-discuss at lists.starlingx.io" < > starlingx-discuss at lists.starlingx.io> > *Subject: *Re: [Starlingx-discuss] Starlingx network requirement > > > > Thanks Matt peters, > > > > Compute nodes communicate via this Data interface, right ? > > *VMs* in different compute nodes also use this data interface for > communication, right ? (communication between VM in CP1 -to- VM in CP2) > > *OVS* also use this data interface to make tunnel between the compute > nodes, right ? > > > > You have mentioned that application requirements decide the required > number of ports and network topology. Applications will be running in the > VM and doesn't aware about the physical topology, right ? Could you please > explain it. > > > > Thanks, > > > > > > On Mon, Jan 14, 2019 at 10:50 PM Peters, Matt > wrote: > > See inline. > > > > *From: *Quick Convey > *Date: *Monday, January 14, 2019 at 3:56 AM > *To: *"starlingx-discuss at lists.starlingx.io" < > starlingx-discuss at lists.starlingx.io> > *Subject: *[Starlingx-discuss] Starlingx network requirement > > > > Dear All, > > > > I am planing to setup Starlingx in bare-metal (controller-storage > deployment) > > > https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage > > > > I have couple of questions > > > > *Q1)* What is the network requirements for this setup. All nodes should > be in same network, that is the only requirement, right ? > > The Management network is on all hosts and also serves as the PXEBoot > network for booting other hosts from the Controller hosts. > > The OAM network is required for controller hosts only. 
> The Data network is required for compute hosts only.
>
> Q2) In the "Hardware Requirements" section, I see "Data: n x 10GE Compute". What does that mean -- is it the number of physical interfaces needed for data? What does "n" indicate? Is it the number of compute nodes?
>
> The 'n' indicates you can have more than 1 port if required for your application deployment. The data networks are not used by the platform, so it is up to the application requirements to decide the required number of ports and network topology.
>
> Q3) What is the number of physical interfaces needed in controller and compute bare-metal nodes? From the document I understand that only 2 physical interfaces are enough, right?
>
> Controller: 1 Mgmt, 1 OAM
>
> Compute: 1 Mgmt, N Data (where N>=1)
>
> Q4) Is there any picture which shows Management, OAM and Data interface connections between controller and compute nodes?
>
> I don't think there is a StarlingX document that shows the interconnection.
>
> Thanks,
-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From himanshugoyal500 at gmail.com Mon Jan 21 12:13:37 2019 From: himanshugoyal500 at gmail.com (Himanshu Goyal) Date: Mon, 21 Jan 2019 17:43:37 +0530 Subject: [Starlingx-discuss] Deployment Option In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C8B127@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B127@FMSMSX108.amr.corp.intel.com> Message-ID: 

Thanks Juan,

Able to unlock my controller node. But facing an issue in the PXE boot of the compute node. After unlocking the controller machine I am not able to see the compute host in the "system host-list" command. My controller machine is directly connected to the compute machine. I'm following the below steps:

1) system host-unlock controller-0
2) system host-list

Output:
[wrsroot at controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+

3) Power on my compute machine and give the option to boot from PXE. My compute machine is directly connected to the controller via the mgmt port. But the host does not show up in "system host-list".

4) I tried the system host-add command also, but it gives the below error:

Error:
[wrsroot at controller-0 ~(keystone_admin)]$ system host-add
Host-add Rejected: Cannot add a compute host without specifying a mgmt_ip when static address allocation is configured.

Please suggest the needful change. Regards, Himanshu Goyal

On Fri, Jan 18, 2019 at 10:01 PM Alonso, Juan Carlos <juan.carlos.alonso at intel.com> wrote:
> Hi,
>
> Good that you could resolve config_controller issue.
> > Yesterday I asked to my team and it is possible to deploy 1 controller and > 1 compute only, please refer to > https://github.com/xe1gyq/starlingx/blob/master/ControllerStorage.md > > To configure cinder on controller-0 you need to have a partition with > space available to be added to cinder-volume. By default should be two > partitions on each host, one with available space. > > > > Regards. > > Juan Carlos Alonso > > > > *From:* Himanshu Goyal [mailto:himanshugoyal500 at gmail.com] > *Sent:* Thursday, January 17, 2019 11:16 PM > *To:* Alonso, Juan Carlos > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Deployment Option > > > > Thanks Juan, > > > > Error is resolved after again boot the server. The reason for that my > management interface was not up. > > > > I have some Questions: > > > > 1) As I'm deploying starlingX with only 1 controller & 1 compute > Machine.while executing "sudo config_controller" it is taking the IP > Addresses of controller 0, controller 1 & floating IP Address. As i have > only one controller, So how can i avoid those type of configurations..? > > > > 2) Is there any specific installation guide for that(1 controller & 1 > compute) type installation currently I'm following the guide: > https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage. > Please suggest me the need full changes i have to done for that. > > > > 3) my controller machine have one Disk of around 3725GB, But when I'm > trying to configure cinder on same disk host-disk-list command shows me > available disk is as 0 GB. please suggest me if there anyway to use that > same disk with cinder. > > > > *snapshot of host-disk-list:* > > [image: image.png] > > > > Regards, > > Himanshu Goyal > > > > On Thu, Jan 17, 2019 at 8:41 PM Alonso, Juan Carlos < > juan.carlos.alonso at intel.com> wrote: > > Hi, > > > > On what step of config_controller it is failing? Can you provide the logs? > > Are you deploying manually or automatic? > > To apply the config_controller again I think you need to start over the > installation process. > > > > Regards. > > Juan Carlos Alonso > > > > *From:* Himanshu Goyal [mailto:himanshugoyal500 at gmail.com] > *Sent:* Thursday, January 17, 2019 7:22 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Deployment Option > > > > Hi, > > > > I'm trying to install StarlingX Controller 0 on Physical Machine, But it > is failing in config_controller at task* waiting for service activation* > ...... > > with Error: "*Configuration failed: Timeout waiting for service enable*" > > > > I'm using ISO available at the path: > http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/bootimage.iso > > > > Please suggest us the procedure to debug that & how i can re-run the > config_controller again. > > Many Thanks, > > Himanshu Goyal > > > > On Tue, Jan 1, 2019 at 3:34 PM Himanshu Goyal > wrote: > > Hi, > > > > Can we deploy starlingX with 2 Machines 1 controller & 1 Compute Node > (Both nodes on different physical Machines). > > > > Many Thanks, > > Himanshu Goyal > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 18669 bytes Desc: not available URL: From Matt.Peters at windriver.com Mon Jan 21 13:34:41 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Mon, 21 Jan 2019 13:34:41 +0000 Subject: [Starlingx-discuss] Starlingx network requirement In-Reply-To: References: <9F21D86C-B952-4D15-B25C-50CD2484FB23@windriver.com> <1214156E-FB27-488C-8829-827D90D90126@windriver.com> Message-ID: <819BDC7B-A5B6-4653-8DAF-3304FE3C31C3@windriver.com> Hello, StarlingX deploys DHCP servers and virtual routers (layer 3 services) across the compute nodes. For the standard deployment (not AIO), no data traffic is terminated on controllers. Regards, Matt From: Quick Convey Date: Monday, January 21, 2019 at 6:59 AM To: "Peters, Matt" Cc: "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] Starlingx network requirement Thanks Matt peters, Why we don't need data interface in controllers ?. I think neutron dhcp and router namespaces will be in controller node, So packets should go to controller to route it as per old OpenStack release. I don't know how its working in Stralingx , I think as per new OpenStack release packets don't go to controller. Does Starlingx create dhcp and router namespaces in compute nodes ?. Thanks, On Thu, Jan 17, 2019 at 8:06 PM Peters, Matt > wrote: The data network provides the physical infrastructure for the OpenStack guest tenant networks, which is used for both inter-compute and external network access from Virtual Machines (VMs). The underlay configuration (VxLAN attributes, VLAN ranges, etc) is managed by the cloud administrator and is made available to the OpenStack tenants (applications). The application requirements drive the topology of this network since the cloud operator must be able to support whatever application is being deployed within the VMs. The virtual switch, OVS in the case of StarlingX, implements the OpenStack tenant networks and acts as the bridge between the physical infrastructure and the virtual networks. For additional background information on OpenStack networking, please refer to the following: https://docs.openstack.org/neutron/rocky/admin/intro.html Hope that helps. Regards Matt From: Quick Convey > Date: Thursday, January 17, 2019 at 2:55 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] Starlingx network requirement Thanks Matt peters, Compute nodes communicate via this Data interface, right ? VMs in different compute nodes also use this data interface for communication, right ? (communication between VM in CP1 -to- VM in CP2) OVS also use this data interface to make tunnel between the compute nodes, right ? You have mentioned that application requirements decide the required number of ports and network topology. Applications will be running in the VM and doesn't aware about the physical topology, right ? Could you please explain it. Thanks, On Mon, Jan 14, 2019 at 10:50 PM Peters, Matt > wrote: See inline. From: Quick Convey > Date: Monday, January 14, 2019 at 3:56 AM To: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Starlingx network requirement Dear All, I am planing to setup Starlingx in bare-metal (controller-storage deployment) https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage I have couple of questions Q1) What is the network requirements for this setup. All nodes should be in same network, that is the only requirement, right ? 
The Management network is on all hosts and also serves as the PXEBoot network for booting other hosts from the Controller hosts. The OAM network is required for controller hosts only. The Data network is required for compute hosts only.

Q2) In the "Hardware Requirements" section, I see "Data: n x 10GE Compute". What does that mean -- is it the number of physical interfaces needed for data? What does "n" indicate? Is it the number of compute nodes?

The 'n' indicates you can have more than 1 port if required for your application deployment. The data networks are not used by the platform, so it is up to the application requirements to decide the required number of ports and network topology.

Q3) What is the number of physical interfaces needed in controller and compute bare-metal nodes? From the document I understand that only 2 physical interfaces are enough, right?

Controller: 1 Mgmt, 1 OAM
Compute: 1 Mgmt, N Data (where N>=1)

Q4) Is there any picture which shows Management, OAM and Data interface connections between controller and compute nodes?

I don't think there is a StarlingX document that shows the interconnection.

Thanks,
-------------- next part -------------- An HTML attachment was scrubbed... URL: 

From Volker.Hoesslin at swsn.de Mon Jan 21 13:48:32 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Mon, 21 Jan 2019 13:48:32 +0000 Subject: [Starlingx-discuss] cpu mode In-Reply-To: <275a1198-3de7-7087-ea96-362ea287cb6d@windriver.com> References: <3k03ta01c8bua1mm@shdsegapp2> <6594B51DBE477C48AAE23675314E6C4664560E40@fmsmsx107.amr.corp.intel.com> <3a32af01c5lgh84j@shdsegapp1> <3a32af01c5lgh877@shdsegapp1> <275a1198-3de7-7087-ea96-362ea287cb6d@windriver.com> Message-ID: 

Hm, do you know the point in the code where this rewrite is handled? Can I patch it at runtime (without recompiling), just for testing...

-----Original Message----- From: Chris Friesen [mailto:chris.friesen at windriver.com] Sent: Friday, 18 January 2019 17:10 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] cpu mode

There is no way to prevent the rewrite of the nova.conf file as it is intended to be managed by the system.

Chris

On 1/18/2019 8:44 AM, von Hoesslin, Volker wrote:
> i know, i know, I shouldn't spam this list ;)
> but after some tests I have tried to change the nova conf (/etc/nova/nova.conf) and edit some values:
>
> [default]
> libvirt_cpu_mode = "custom"   // none -> custom
> libvirt_cpu_model = "EPYC-IBRS"   // insert this line
>
> [libvirt]
> cpu_mode = "custom"   // none -> custom
> cpu_model = "EPYC-IBRS"   // insert this line
>
> but after rebooting my compute node, some auto-config logic reconfigures this config file: in the "[libvirt]" part the option "cpu_mode" is back to "none" and "cpu_model" is deleted completely :(
>
> is there any way to prevent or configure this auto-config?
>
> volker...
> ________________________________________ _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Volker.Hoesslin at swsn.de Mon Jan 21 14:30:36 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Mon, 21 Jan 2019 14:30:36 +0000 Subject: [Starlingx-discuss] WG: cpu mode In-Reply-To: References: <3k03ta01c8bua1mm@shdsegapp2> , <1547827665.3455.168.camel@windriver.com>, Message-ID: i have set "host-passthrough" in "/etc/nova/nova.conf" ================================= [DEFAULT] libvirt_cpu_mode = host-passthrough [libvirt] cpu_mode = host-passthrough ================================= and restart nove service: # service nova-compute restart for now it works! "lscpu" on guest os shows me the AMD EPIC with all features, very nice. but after reboot the compute-node, the auto-config script change this setting back to "none": [libvirt] cpu_mode = none and passthrough did not work anymore :( so how can i prevent this auto-config or define my new config as persistent? volker... ________________________________________ Von: Michel Thebeau [michel.thebeau at windriver.com] Gesendet: Freitag, 18. Januar 2019 17:07 An: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] cpu mode Hi Volker, The output you listed here lists the values accepted by openstack flavor set command. Did you try "Passthrough"? I'm asking internally about cpu model support and I'll respond if I hear anything interesting. M On Thu, 2019-01-17 at 16:23 +0000, von Hoesslin, Volker wrote: > it is impossible to set a EPIC (or any other AMD) as guest CPU? > > $ openstack flavor set --property hw:cpu_model=EPYC-IBPB 76609f7b- > f0c7-48ca-8c8a-f78481e62cd4 > Failed to set flavor property: Invalid hw:cpu_model 'EPYC-IBPB', must > be one of: Passthrough, Conroe, Penryn, Nehalem, Westmere, > SandyBridge, IvyBridge, Haswell, Broadwell-noTSX, Broadwell, Skylake- > Client, Skylake-Server. (HTTP 400) (Request-ID: req-2fda19cc-8e0e- > 4be8-a8ea-b58fc00358ce) > Command Failed: One or more of the operations failed > > but my compute node seems to support EPIC CPUs? > > cat /usr/share/libvirt/cpu_map/x86_EPYC-IBRS.xml > > > > > > > .... > > > > > > some tips for me how to handle this? > > volker... > > Von: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] > Gesendet: Donnerstag, 17. Januar 2019 15:50 > An: starlingx-discuss at lists.starlingx.io > Betreff: [Starlingx-discuss] cpu mode > > hi, > my setup has two computes nodes, every node has a dual AMD EPYC 7601 > CPU config. how can i bring all the CPU features (AES, SSSE3, ...) to > the guest VMs. i have tryed with some flavor-metadata but nothing > realy helps, the VMs getting just a little subset of cpu-features. > some investigations to the kvm-settings hit me to the facts that my > nova config has "cpu_model=none" !? how can i fix that and bring my > AMD EPIC CPU to my nova-config?! 
> > here is the host /proc/cpuinfo > > processor : 127 > vendor_id : AuthenticAMD > cpu family : 23 > model : 1 > model name : AMD EPYC 7601 32-Core Processor > stepping : 2 > microcode : 0x8001227 > cpu MHz : 1200.000 > cache size : 512 KB > physical id : 1 > siblings : 64 > core id : 31 > cpu cores : 32 > apicid : 127 > initial apicid : 127 > fpu : yes > fpu_exception : yes > cpuid level : 13 > wp : yes > flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr > pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext > fxsr_opt pdpe1gb rdtscp lm constant_tsc art rep_good nopl nonstop_tsc > extd_apicid amd_dcm aperfmperf eagerfpu pni pclmulqdq monitor ssse3 > fma cx16 sse4_1 sse4_2 movbe popcnt aes xsave avx f16c rdrand lahf_lm > cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch > osvw skinit wdt tce topoext perfctr_core perfctr_nb bpext perfctr_l2 > cpb hw_pstate retpoline_amd ssbd ibpb vmmcall fsgsbase bmi1 avx2 smep > bmi2 rdseed adx smap clflushopt sha_ni xsaveopt xsavec xgetbv1 clzero > irperf xsaveerptr arat npt lbrv svm_lock nrip_save tsc_scale > vmcb_clean flushbyasid decodeassists pausefilter pfthreshold avic > v_vmsave_vmload vgif overflow_recov succor smca > bogomips : 4400.08 > TLB size : 2560 4K pages > clflush size : 64 > cache_alignment : 64 > address sizes : 48 bits physical, 48 bits virtual > power management: ts ttp tm hwpstate cpb eff_freq_ro [13] [14] > > greez & thx, > volker... > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From chris.friesen at windriver.com Mon Jan 21 15:09:34 2019 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 21 Jan 2019 09:09:34 -0600 Subject: [Starlingx-discuss] WG: cpu mode In-Reply-To: References: <3k03ta01c8bua1mm@shdsegapp2> <1547827665.3455.168.camel@windriver.com> Message-ID: <874eb4ad-b4c9-2ebb-e781-eb043e02308e@windriver.com> You shouldn't need to modify nova.conf. With the current codebase you should be able to specify "hw:cpu_model=Passthrough" in the flavor extra-specs. Chris On 1/21/2019 8:30 AM, von Hoesslin, Volker wrote: > i have set "host-passthrough" in "/etc/nova/nova.conf" > > ================================= > [DEFAULT] > libvirt_cpu_mode = host-passthrough > > [libvirt] > cpu_mode = host-passthrough > ================================= > > and restart nove service: > # service nova-compute restart > > for now it works! "lscpu" on guest os shows me the AMD EPIC with all features, very nice. but after reboot the compute-node, the auto-config script change this setting back to "none": > > [libvirt] > cpu_mode = none > > and passthrough did not work anymore :( so how can i prevent this auto-config or define my new config as persistent? From Volker.Hoesslin at swsn.de Mon Jan 21 15:16:20 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Mon, 21 Jan 2019 15:16:20 +0000 Subject: [Starlingx-discuss] WG: cpu mode In-Reply-To: <874eb4ad-b4c9-2ebb-e781-eb043e02308e@windriver.com> References: <3k03ta01c8bua1mm@shdsegapp2> <1547827665.3455.168.camel@windriver.com> , <874eb4ad-b4c9-2ebb-e781-eb043e02308e@windriver.com> Message-ID: this would be very nice, but if i try to launch a vm with a flavor that contain the given extra-spec, i get this error: No valid host was found. There are not enough hosts available. 
compute-0: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts, compute-1: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts

volker...

________________________________________ From: Chris Friesen [chris.friesen at windriver.com] Sent: Monday, 21 January 2019 16:09 To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] WG: cpu mode

You shouldn't need to modify nova.conf. With the current codebase you should be able to specify "hw:cpu_model=Passthrough" in the flavor extra-specs.

Chris

On 1/21/2019 8:30 AM, von Hoesslin, Volker wrote:
> i have set "host-passthrough" in "/etc/nova/nova.conf"
>
> =================================
> [DEFAULT]
> libvirt_cpu_mode = host-passthrough
>
> [libvirt]
> cpu_mode = host-passthrough
> =================================
>
> and restarted the nova service:
> # service nova-compute restart
>
> for now it works! "lscpu" on the guest OS shows me the AMD EPYC with all features, very nice. but after rebooting the compute node, the auto-config script changes this setting back to "none":
>
> [libvirt]
> cpu_mode = none
>
> and passthrough did not work anymore :( so how can I prevent this auto-config or define my new config as persistent?

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From chris.friesen at windriver.com Mon Jan 21 15:28:05 2019 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 21 Jan 2019 09:28:05 -0600 Subject: [Starlingx-discuss] WG: cpu mode In-Reply-To: References: <3k03ta01c8bua1mm@shdsegapp2> <1547827665.3455.168.camel@windriver.com> <874eb4ad-b4c9-2ebb-e781-eb043e02308e@windriver.com> Message-ID: <3cad1d6e-3744-1f82-b1d7-ff4743c0e933@windriver.com>

This is another assumption about Intel CPUs. I *think* the following should work:

On the controllers, edit /usr/lib/python2.7/site-packages/nova/scheduler/filters/vcpu_model_filter.py (if that's not the path it should be something pretty close). In the "_is_host_kvm" function add the following before the "return False" line:

if 'svm' in info['features']: return True

Then, on the active controller node run "sudo sm-restart service nova-scheduler". This should restart the nova scheduler, and at this point you should be able to schedule an instance.

Chris

On 1/21/2019 9:16 AM, von Hoesslin, Volker wrote:
> this would be very nice, but if I try to launch a VM with a flavor that contains the given extra-spec, I get this error:
>
> No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts, compute-1: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts
>
> volker...
> ________________________________________
> From: Chris Friesen [chris.friesen at windriver.com]
> Sent: Monday, 21 January 2019 16:09
> To: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] WG: cpu mode
>
> You shouldn't need to modify nova.conf.
>
> With the current codebase you should be able to specify
> "hw:cpu_model=Passthrough" in the flavor extra-specs.
> > Chris > > On 1/21/2019 8:30 AM, von Hoesslin, Volker wrote: >> i have set "host-passthrough" in "/etc/nova/nova.conf" >> >> ================================= >> [DEFAULT] >> libvirt_cpu_mode = host-passthrough >> >> [libvirt] >> cpu_mode = host-passthrough >> ================================= >> >> and restart nove service: >> # service nova-compute restart >> >> for now it works! "lscpu" on guest os shows me the AMD EPIC with all features, very nice. but after reboot the compute-node, the auto-config script change this setting back to "none": >> >> [libvirt] >> cpu_mode = none >> >> and passthrough did not work anymore :( so how can i prevent this auto-config or define my new config as persistent? > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Volker.Hoesslin at swsn.de Mon Jan 21 16:07:06 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Mon, 21 Jan 2019 16:07:06 +0000 Subject: [Starlingx-discuss] WG: cpu mode In-Reply-To: <3cad1d6e-3744-1f82-b1d7-ff4743c0e933@windriver.com> References: <3k03ta01c8bua1mm@shdsegapp2> <1547827665.3455.168.camel@windriver.com> <874eb4ad-b4c9-2ebb-e781-eb043e02308e@windriver.com> , <3cad1d6e-3744-1f82-b1d7-ff4743c0e933@windriver.com> Message-ID: kk, it seems it is the right way, but now i get this error here: No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough, compute-1: (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough volker... ________________________________________ Von: Chris Friesen [chris.friesen at windriver.com] Gesendet: Montag, 21. Januar 2019 16:28 An: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io Betreff: Re: AW: [Starlingx-discuss] WG: cpu mode This is another assumption about Intel CPUs. I *think* the following should work: On the controllers, edit /usr/lib/python2.7/site-packages/nova/scheduler/filters/vcpu_model_filter.py (if that's not the path it should be something pretty close). In the "_is_host_kvm" function add the following before the "return False" line: if 'svm' in info['features']: return True Then, on the active controller node run "sudo sm-restart service nova-scheduler". This should restart the nova scheduler, and at this point you should be able to schedule an instance. Chris On 1/21/2019 9:16 AM, von Hoesslin, Volker wrote: > this would be very nice, but if i try to launch a vm with a flavor that contain the given extra-spec, i get this error: > > No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts, compute-1: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts > > volker... > ________________________________________ > Von: Chris Friesen [chris.friesen at windriver.com] > Gesendet: Montag, 21. Januar 2019 16:09 > An: starlingx-discuss at lists.starlingx.io > Betreff: Re: [Starlingx-discuss] WG: cpu mode > > You shouldn't need to modify nova.conf. > > With the current codebase you should be able to specify > "hw:cpu_model=Passthrough" in the flavor extra-specs. 
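A minimal sketch of applying that extra-spec from the controller CLI, assuming an existing flavor (the name m1.large here is only a placeholder; hw:cpu_model is the key named above):

source /etc/nova/openrc
openstack flavor set m1.large --property hw:cpu_model=Passthrough
openstack flavor show m1.large -c properties    # confirm the property took

Instances launched with that flavor should then request the Passthrough vCPU model.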
> > Chris
> >
> > On 1/21/2019 8:30 AM, von Hoesslin, Volker wrote:
> >> i have set "host-passthrough" in "/etc/nova/nova.conf"
> >>
> >> =================================
> >> [DEFAULT]
> >> libvirt_cpu_mode = host-passthrough
> >>
> >> [libvirt]
> >> cpu_mode = host-passthrough
> >> =================================
> >>
> >> and restart nove service:
> >> # service nova-compute restart
> >>
> >> for now it works! "lscpu" on guest os shows me the AMD EPIC with all features, very nice. but after reboot the compute-node, the auto-config script change this setting back to "none":
> >>
> >> [libvirt]
> >> cpu_mode = none
> >>
> >> and passthrough did not work anymore :( so how can i prevent this auto-config or define my new config as persistent?
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From himanshugoyal500 at gmail.com Mon Jan 21 16:31:54 2019
From: himanshugoyal500 at gmail.com (Himanshu Goyal)
Date: Mon, 21 Jan 2019 22:01:54 +0530
Subject: [Starlingx-discuss] Deployment Option
In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C8B61E@FMSMSX108.amr.corp.intel.com>
References: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B127@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B61E@FMSMSX108.amr.corp.intel.com>
Message-ID:

Thanks Juan, Yong

I tried both commands; the output is shown below:

1) [wrsroot at controller-0 ~(keystone_admin)]$ *system host-add -n compute-0 -p worker -m 00:1e:67:fd:3d:fe*
usage: system host-add [-n ] [-p ] [-s ] [-m ] [-i ] [-I ] [-T ] [-U ] [-P ] [-b ] [-r ] [-o ] [-c ] [-v ] [-l ] [-D ]
system host-add: error: argument -p/--personality: invalid choice: 'worker' (choose from 'controller', 'compute', 'storage', 'network', 'profile')
[wrsroot at controller-0 ~(keystone_admin)]$
[wrsroot at controller-0 ~(keystone_admin)]$

2) [wrsroot at controller-0 ~(keystone_admin)]$ *system host-add -n compute-0 -p compute -m 00:1e:67:fd:3d:fe*
Host-add Rejected: Cannot add a compute host without specifying a mgmt_ip when static address allocation is configured.
[wrsroot at controller-0 ~(keystone_admin)]$

Regards,
Himanshu Goyal

On Mon, Jan 21, 2019 at 8:42 PM Alonso, Juan Carlos <juan.carlos.alonso at intel.com> wrote:
> Hi,
>
> The personality of computes changed to "worker", so the command should be:
>
> system host-add -n compute-0 -p worker -m ${mac_address}
>
> Regards.
> Juan Carlos Alonso
>
> *From:* Hu, Yong
> *Sent:* Monday, January 21, 2019 8:51 AM
> *To:* Himanshu Goyal ; Alonso, Juan Carlos <juan.carlos.alonso at intel.com>
> *Cc:* starlingx-discuss at lists.starlingx.io
> *Subject:* Re: [Starlingx-discuss] Deployment Option
>
> Hi Himanshu,
>
> "system host-list" doesn't "see" your compute node and LLDP won't work,
> because the mgt port on compute node directly connects to mgt port on controller-0 (rather than both connecting to a hub).
>
> Anyway, given you know the MAC of mgt port on compute node, you can have a try to run the following cmd:
>
> # system host-add -n compute-0 -p compute -m
>
> Regards,
> yong
>
> *From: *Himanshu Goyal
> *Date: *Monday, 21 January 2019 at 8:14 PM
> *To: *"Alonso, Juan Carlos"
> *Cc: *"starlingx-discuss at lists.starlingx.io" <starlingx-discuss at lists.starlingx.io>
> *Subject: *Re: [Starlingx-discuss] Deployment Option
>
> Thanks Juan,
>
> Able to unlock my controller node.
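Given the usage text above, the rejection can presumably be satisfied by passing a management IP along with the MAC. A sketch, assuming -i is the mgmt_ip option shown in the usage string, and using a placeholder address from the management subnet:

[wrsroot at controller-0 ~(keystone_admin)]$ system host-add -n compute-0 -p compute -m 00:1e:67:fd:3d:fe -i 192.168.204.50

The personality stays 'compute' on this 2018.10 build, since its CLI rejects 'worker' as the error above shows.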
But facing Issue in PXE boot of compute > node. After unlocking of controller machine not able to see compute host in > "*system host-list*" command. > > my controller machine is directly connected to compute machine. > > > > I'm following the below steps > > Steps: > > *1) system host-unlock controller-0* > > *2) system host-list* > > Output:: > > [wrsroot at controller-0 ~(keystone_admin)]$ system host-list > > > +----+--------------+-------------+----------------+-------------+--------------+ > > | id | hostname | personality | administrative | operational | > availability | > > > +----+--------------+-------------+----------------+-------------+--------------+ > > | 1 | controller-0 | controller | unlocked | enabled | > available | > > > +----+--------------+-------------+----------------+-------------+--------------+ > > > > 3) power on my compute machine. And give option to boot from PXE > > my compute machine is directly connected with controller with mgmt port. > > But not able to see host in "system host-list". > > > > 4) i tried with system host-add command also, but it is giving below error: > > *Error:* > > [wrsroot at controller-0 ~(keystone_admin)]$ system host-add > > Host-add Rejected: Cannot add a compute host without specifying a mgmt_ip > when static address allocation is configured. > > > > Please suggest me the needful change. > > > > Regards, > > Himanshu Goyal > > > > > > > > > > On Fri, Jan 18, 2019 at 10:01 PM Alonso, Juan Carlos < > juan.carlos.alonso at intel.com> wrote: > > Hi, > > > > Good that you could resolve config_controller issue. > > > > I am not sure if you can avoid/skip extra configuration for hosts you > don’t want during config_controller. I think you need to follow the > installation steps normally but provision only the host you want to use. > > Yesterday I asked to my team and it is possible to deploy 1 controller and > 1 compute only, please refer to > https://github.com/xe1gyq/starlingx/blob/master/ControllerStorage.md > > To configure cinder on controller-0 you need to have a partition with > space available to be added to cinder-volume. By default should be two > partitions on each host, one with available space. > > > > Regards. > > Juan Carlos Alonso > > > > *From:* Himanshu Goyal [mailto:himanshugoyal500 at gmail.com] > *Sent:* Thursday, January 17, 2019 11:16 PM > *To:* Alonso, Juan Carlos > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Deployment Option > > > > Thanks Juan, > > > > Error is resolved after again boot the server. The reason for that my > management interface was not up. > > > > I have some Questions: > > > > 1) As I'm deploying starlingX with only 1 controller & 1 compute > Machine.while executing "sudo config_controller" it is taking the IP > Addresses of controller 0, controller 1 & floating IP Address. As i have > only one controller, So how can i avoid those type of configurations..? > > > > 2) Is there any specific installation guide for that(1 controller & 1 > compute) type installation currently I'm following the guide: > https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage. > Please suggest me the need full changes i have to done for that. > > > > 3) my controller machine have one Disk of around 3725GB, But when I'm > trying to configure cinder on same disk host-disk-list command shows me > available disk is as 0 GB. please suggest me if there anyway to use that > same disk with cinder. 
> > > > *snapshot of host-disk-list:* > > [image: image.png] > > > > Regards, > > Himanshu Goyal > > > > On Thu, Jan 17, 2019 at 8:41 PM Alonso, Juan Carlos < > juan.carlos.alonso at intel.com> wrote: > > Hi, > > > > On what step of config_controller it is failing? Can you provide the logs? > > Are you deploying manually or automatic? > > To apply the config_controller again I think you need to start over the > installation process. > > > > Regards. > > Juan Carlos Alonso > > > > *From:* Himanshu Goyal [mailto:himanshugoyal500 at gmail.com] > *Sent:* Thursday, January 17, 2019 7:22 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Deployment Option > > > > Hi, > > > > I'm trying to install StarlingX Controller 0 on Physical Machine, But it > is failing in config_controller at task* waiting for service activation* > ...... > > with Error: "*Configuration failed: Timeout waiting for service enable*" > > > > I'm using ISO available at the path: > http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/bootimage.iso > > > > Please suggest us the procedure to debug that & how i can re-run the > config_controller again. > > Many Thanks, > > Himanshu Goyal > > > > On Tue, Jan 1, 2019 at 3:34 PM Himanshu Goyal > wrote: > > Hi, > > > > Can we deploy starlingX with 2 Machines 1 controller & 1 Compute Node > (Both nodes on different physical Machines). > > > > Many Thanks, > > Himanshu Goyal > > > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 18670 bytes Desc: not available URL: From Volker.Hoesslin at swsn.de Mon Jan 21 16:45:07 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Mon, 21 Jan 2019 16:45:07 +0000 Subject: [Starlingx-discuss] WG: cpu mode In-Reply-To: <3k03ta01c8bua1sq@shdsegapp2> References: <3k03ta01c8bua1mm@shdsegapp2> <1547827665.3455.168.camel@windriver.com> <874eb4ad-b4c9-2ebb-e781-eb043e02308e@windriver.com> , <3cad1d6e-3744-1f82-b1d7-ff4743c0e933@windriver.com>, <3k03ta01c8bua1sq@shdsegapp2> Message-ID: i have tried to extend this code: File: /usr/lib/python2.7/site-packages/nova/objects/fields.py Class: class CPUModel(BaseNovaEnum): and add this two elements to list: "EPIC", "EPIC-IBPB" restart the controller, but the same error: (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough i'm not sure, is this code used or have to recompile some stuff? volker... ________________________________________ Von: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] Gesendet: Montag, 21. Januar 2019 17:07 An: Chris Friesen; starlingx-discuss at lists.starlingx.io Betreff: Re: [Starlingx-discuss] WG: cpu mode kk, it seems it is the right way, but now i get this error here: No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough, compute-1: (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough volker... ________________________________________ Von: Chris Friesen [chris.friesen at windriver.com] Gesendet: Montag, 21. Januar 2019 16:28 An: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io Betreff: Re: AW: [Starlingx-discuss] WG: cpu mode This is another assumption about Intel CPUs. 
I *think* the following should work: On the controllers, edit /usr/lib/python2.7/site-packages/nova/scheduler/filters/vcpu_model_filter.py (if that's not the path it should be something pretty close). In the "_is_host_kvm" function add the following before the "return False" line: if 'svm' in info['features']: return True Then, on the active controller node run "sudo sm-restart service nova-scheduler". This should restart the nova scheduler, and at this point you should be able to schedule an instance. Chris On 1/21/2019 9:16 AM, von Hoesslin, Volker wrote: > this would be very nice, but if i try to launch a vm with a flavor that contain the given extra-spec, i get this error: > > No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts, compute-1: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts > > volker... > ________________________________________ > Von: Chris Friesen [chris.friesen at windriver.com] > Gesendet: Montag, 21. Januar 2019 16:09 > An: starlingx-discuss at lists.starlingx.io > Betreff: Re: [Starlingx-discuss] WG: cpu mode > > You shouldn't need to modify nova.conf. > > With the current codebase you should be able to specify > "hw:cpu_model=Passthrough" in the flavor extra-specs. > > Chris > > On 1/21/2019 8:30 AM, von Hoesslin, Volker wrote: >> i have set "host-passthrough" in "/etc/nova/nova.conf" >> >> ================================= >> [DEFAULT] >> libvirt_cpu_mode = host-passthrough >> >> [libvirt] >> cpu_mode = host-passthrough >> ================================= >> >> and restart nove service: >> # service nova-compute restart >> >> for now it works! "lscpu" on guest os shows me the AMD EPIC with all features, very nice. but after reboot the compute-node, the auto-config script change this setting back to "none": >> >> [libvirt] >> cpu_mode = none >> >> and passthrough did not work anymore :( so how can i prevent this auto-config or define my new config as persistent? > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Mon Jan 21 17:57:51 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Mon, 21 Jan 2019 17:57:51 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack Distro meeting, 1/23 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E4BD01@SHSMSX103.ccr.corp.intel.com> Agenda: 1. CentOS 7.6 upgrade status (Shuicheng/Martin) 2. Ceph upgrade (Vivian/Changcheng) 3. Python2to3 status (Austin) 4. Bug triage (Cindy) 5. Opens (all) Thx. 
-----Original Appointment----- From: Xie, Cindy Sent: Sunday, November 4, 2018 10:27 PM To: Xie, Cindy; 'Khalil, Ghada'; Sun, Austin; Somerville, Jim; 'Rowsell, Brent'; Liu, ZhipengS; Wold, Saul; starlingx-discuss at lists.starlingx.io; Shang, Dehao; Waheed, Numan; Troyer, Dean; Jones, Bruce E; Lin, Shuicheng; Zhu, Vivian; Hu, Yong Cc: Hu, Wei W; 'Seiler, Glenn'; Gomez, Juan P; 'Chen, Jacky'; Perez Rodriguez, Humberto I; 'Young, Ken'; Cobbley, David A; 'Waines, Greg'; Arce Moreno, Abraham; 'Eslimi, Dariush'; Lara, Cesar; Perez Carranza, Jose; 'Hellmann, Gil'; Armstrong, Robert H; Martinez Landa, Hayde; Martinez Monroy, Elio Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, January 23, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 * Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ian.Jolliffe at windriver.com Mon Jan 21 18:38:48 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Mon, 21 Jan 2019 18:38:48 +0000 Subject: [Starlingx-discuss] Spec exception request Message-ID: <422333D1-F068-475C-9004-5AE17B707237@windriver.com> Hi Bruce; Thanks for the proposal – sounds like a good approach, I don’t see a problem. Just to confirm you will post the spec prior to Jan 31st – maybe we can review in real time on the TSC call that day and close it off quickly. How does that sound? @TSC members – please chime in if you object. Regards; Ian From: "Jones, Bruce E" Date: Thursday, January 17, 2019 at 4:09 PM To: StarlingX ML Subject: [Starlingx-discuss] Spec exception request This week is the spec cut off. I would like to request an extension for a Documentation spec that I plan to write and post for comments asap. It will not require code changes, only stx.docs changes. I might need 2 weeks. OK? brucej -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Mon Jan 21 18:52:18 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Mon, 21 Jan 2019 18:52:18 +0000 Subject: [Starlingx-discuss] ARs from Chandler Mini-PTG Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC09ED432@ALA-MBD.corp.ad.wrs.com> I've placed the list of ARs I captured in the mini-PTG last week up on the shared folder on Google Drive. - the shared folder is here: https://drive.google.com/drive/folders/1YlAlWT7FtSFNyYDdJ2hbFFW4aNGQXCHY?usp=sharing - the AR list is here: https://docs.google.com/spreadsheets/d/1F-JKh8_gLlUzbrUJRbsf4u65yBGVBl8HGe-RnUQoyc4/edit?usp=sharing I'll add an item on the Community call agenda to go through these. Bill... -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From Frank.Miller at windriver.com Mon Jan 21 18:53:52 2019
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Mon, 21 Jan 2019 18:53:52 +0000
Subject: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images
Message-ID:

Guillermo:

As discussed at today's containerization meeting, please reply with your initial thoughts on how to address https://storyboard.openstack.org/#!/story/2004711 . If you first need to ask a member of the containerization subteam a few questions to understand what is needed then try reaching out to Bob Church and Angie Wang.

Frank
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chris.friesen at windriver.com Mon Jan 21 19:10:26 2019
From: chris.friesen at windriver.com (Chris Friesen)
Date: Mon, 21 Jan 2019 13:10:26 -0600
Subject: [Starlingx-discuss] WG: cpu mode
In-Reply-To:
References: <3k03ta01c8bua1mm@shdsegapp2> <1547827665.3455.168.camel@windriver.com> <874eb4ad-b4c9-2ebb-e781-eb043e02308e@windriver.com> <3cad1d6e-3744-1f82-b1d7-ff4743c0e933@windriver.com> <3k03ta01c8bua1sq@shdsegapp2>
Message-ID: <6e33b4f5-accd-8480-5e07-e8132aeb27f8@windriver.com>

I think you'd need to edit the code on both controllers to add "EPYC" and "EPYC-IBPB" (with a "Y" instead of an "I") to the list in objects/fields.py such that it comes immediately after the "Passthrough" item. It should look something like this:

    class CPUModel(BaseNovaEnum):
        # We use the ordering of the cpu models to determine whether a
        # given host can emulate a specified virtual model, so it's not
        # just an enum.
        ALL = ("Passthrough",
               "EPYC",
               "EPYC-IBPB",
               "Conroe",
               "Penryn",

You'd then need to restart the nova-scheduler service on the active controller as per the instructions below.

Chris

On 1/21/2019 10:45 AM, von Hoesslin, Volker wrote:
> i have tried to extend this code:
>
> File: /usr/lib/python2.7/site-packages/nova/objects/fields.py
> Class: class CPUModel(BaseNovaEnum):
>
> and add this two elements to list:
>
> "EPIC",
> "EPIC-IBPB"
>
> restart the controller, but the same error:
>
> (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough
>
> i'm not sure, is this code used or have to recompile some stuff?
>
> volker...
>
> ________________________________________
> Von: von Hoesslin, Volker [Volker.Hoesslin at swsn.de]
> Gesendet: Montag, 21. Januar 2019 17:07
> An: Chris Friesen; starlingx-discuss at lists.starlingx.io
> Betreff: Re: [Starlingx-discuss] WG: cpu mode
>
> kk, it seems it is the right way, but now i get this error here:
>
> No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough, compute-1: (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough
>
> volker...
> ________________________________________
> Von: Chris Friesen [chris.friesen at windriver.com]
> Gesendet: Montag, 21. Januar 2019 16:28
> An: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io
> Betreff: Re: AW: [Starlingx-discuss] WG: cpu mode
>
> This is another assumption about Intel CPUs. I *think* the following should work:
>
> On the controllers, edit /usr/lib/python2.7/site-packages/nova/scheduler/filters/vcpu_model_filter.py (if that's not the path it should be something pretty close). In the "_is_host_kvm" function add the following before the "return False" line:
>
>     if 'svm' in info['features']:
>         return True
>
> Then, on the active controller node run "sudo sm-restart service nova-scheduler". This should restart the nova scheduler, and at this point you should be able to schedule an instance.
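Taken together, the two edits discussed in this thread amount to roughly the following sketch. This is only an illustration: the paths and surrounding code are as quoted above, the full _is_host_kvm signature is not shown in the thread, and the exact nova source on a given build may differ. The first hunk works because an AMD host advertises the svm flag (visible in the /proc/cpuinfo output earlier in the thread) rather than Intel's vmx:

# /usr/lib/python2.7/site-packages/nova/scheduler/filters/vcpu_model_filter.py
# inside _is_host_kvm(), immediately before its final "return False":
        if 'svm' in info['features']:   # AMD-V present, so treat the host as KVM-capable
            return True

# /usr/lib/python2.7/site-packages/nova/objects/fields.py
class CPUModel(BaseNovaEnum):
    # The ordering of models is used to decide whether a given host can
    # emulate a requested virtual model (per the comment quoted above),
    # hence the two new entries go directly after "Passthrough".
    ALL = ("Passthrough",
           "EPYC",
           "EPYC-IBPB",
           "Conroe",
           "Penryn",
           # ... remainder of the original tuple unchanged ...
          )

Both controllers would need the change, followed by "sudo sm-restart service nova-scheduler" on the active one, as described above.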
>
> Chris
>
> On 1/21/2019 9:16 AM, von Hoesslin, Volker wrote:
>> this would be very nice, but if i try to launch a vm with a flavor that contain the given extra-spec, i get this error:
>>
>> No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts, compute-1: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts
>>
>> volker...
>> ________________________________________
>> Von: Chris Friesen [chris.friesen at windriver.com]
>> Gesendet: Montag, 21. Januar 2019 16:09
>> An: starlingx-discuss at lists.starlingx.io
>> Betreff: Re: [Starlingx-discuss] WG: cpu mode
>>
>> You shouldn't need to modify nova.conf.
>>
>> With the current codebase you should be able to specify "hw:cpu_model=Passthrough" in the flavor extra-specs.
>>
>> Chris
>>
>> On 1/21/2019 8:30 AM, von Hoesslin, Volker wrote:
>>> i have set "host-passthrough" in "/etc/nova/nova.conf"
>>>
>>> =================================
>>> [DEFAULT]
>>> libvirt_cpu_mode = host-passthrough
>>>
>>> [libvirt]
>>> cpu_mode = host-passthrough
>>> =================================
>>>
>>> and restart nove service:
>>> # service nova-compute restart
>>>
>>> for now it works! "lscpu" on guest os shows me the AMD EPIC with all features, very nice. but after reboot the compute-node, the auto-config script change this setting back to "none":
>>>
>>> [libvirt]
>>> cpu_mode = none
>>>
>>> and passthrough did not work anymore :( so how can i prevent this auto-config or define my new config as persistent?
>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Frank.Miller at windriver.com Mon Jan 21 19:39:26 2019
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Mon, 21 Jan 2019 19:39:26 +0000
Subject: [Starlingx-discuss] [Containers] LP 1812519: config_controller --kubernetes fails at step 06
Message-ID:

Erich:

You indicated you saw this failure when using an ISO from Jan 17: https://bugs.launchpad.net/starlingx/+bug/1812519

Al cannot reproduce this. The one difference between Al's environment and yours is Al does not use a proxy. Mingyuan used a proxy a few weeks ago and this worked for him and he added steps to use a proxy to the wiki.

I have 2 questions:
1. Erich can you explain any changes you had to make in your environment that are not listed on the current wiki: https://wiki.openstack.org/wiki/StarlingX/Containers/Installation
2. Mingyuan can you tell us if you are able to get config_controller --kubernetes to succeed with a load from Jan 17th or later?

Hopefully answers to the above 2 questions will point to why you are seeing failures while others are not.

Frank
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From guillermo.a.ponce.castaneda at intel.com Mon Jan 21 19:54:59 2019 From: guillermo.a.ponce.castaneda at intel.com (Ponce Castaneda, Guillermo A) Date: Mon, 21 Jan 2019 19:54:59 +0000 Subject: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images In-Reply-To: References: Message-ID: <8DA47EAB-9659-4772-8081-BE17CC689541@intel.com> Hello Frank, Thanks for initiating the conversation, my proposed solution is to bring up a docker registry that will have to be in the local network of each office, so the speed of the pulls will be the faster. The problem with this approach might be that the references of the docker pull have to change so it points to the local docker registry, I have already implemented this approach locally at GDC and can provide documentation on how to do this. Another approach that I am researching is to use this project: https://github.com/rpardini/docker-registry-proxy, so far this option seems much better but I need to explore it a little bit further, I will provide more details on it as soon as possible. All the feedback and other ideas are welcome. Thanks and Regards. Guillermo (Memo) Ponce From: "Miller, Frank" Date: Monday, January 21, 2019 at 12:54 PM To: "Martin, Guillermo Oscar" Cc: "'starlingx-discuss at lists.starlingx.io'" Subject: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images Guillermo: As discussed at today’s containerization meeting, please reply with your initial thoughts on how to address https://storyboard.openstack.org/#!/story/2004711 . If you first need to ask a member of the containerization subteam a few questions to understand what is needed then try reaching out to Bob Church and Angie Wang. Frank -------------- next part -------------- An HTML attachment was scrubbed... URL: From Numan.Waheed at windriver.com Mon Jan 21 20:27:07 2019 From: Numan.Waheed at windriver.com (Waheed, Numan) Date: Mon, 21 Jan 2019 20:27:07 +0000 Subject: [Starlingx-discuss] [Test] STX Sanity Test Cases after Containerization Message-ID: <3CAA827B7A79BA46B15B280EC82088FE482585E4@ALA-MBD.corp.ad.wrs.com> Hi Ada, As discussed last week, please find below test case titles for STX Sanity after containerization. Test Steps for these sanities are posted at the following shared folder: https://drive.google.com/drive/folders/1YlAlWT7FtSFNyYDdJ2hbFFW4aNGQXCHY Name of the files are "platform_sanity.txt" and "stx_openstack_sanity.txt". Please note that there will be two sanities after containerization. First will be for Platform services i.e. without installing OpenStack and the second will be with OpenStack. Sanity (Platform Only) Platform sanity test cases. Following list of test cases can be executed before stx-openstack is deployed. test_launch_app_via_kubectl(copy_test_apps, delete_test_pod, controller) test_launch_app_via_sysinv(copy_test_apps, cleanup_app) test_push_docker_image_to_local_registry(controller) test_upload_helm_charts(copy_test_apps, controller) test_kube_system_services(controller) test_horizon_host_inventory_display(host_inventory_pg) test_lock_active_controller_reject(no_simplex) test_lock_unlock_host(host_type, collect_kpi) test_swact_controller_platform(wait_for_con_drbd_sync_complete) To be added: - Test case to validate container messaging (eg: ssh into pod, messaging between pods) Sanity (With OpenStack) Sanity with stx-openstack. Following list of test cases are executed after stx-openstack is deployed. 
Tests should be executed with various configurations: - system with remote storage, single node system, multi-node system, etc. - https - IPv4, IPv6 test_openstack_services_healthy() test_reapply_stx_openstack(skip_for_no_openstack) test_stx_openstack_helm_override_update_and_reset(skip_for_no_openstack, reset_if_modified) test_horizon_create_delete_instance(instances_pg) test_heat_template() test_system_persist_over_host_reboot(host_type) test_add_host_simplex_negative(simplex_only) test_evacuate_vms(self, vms_) test_swact_controllers(wait_for_con_drbd_sync_complete) test_measurements_for_metric(meter) test_ceilometer_meters_exist(meters) test_system_alarms_and_events_on_lock_unlock_compute(no_simplex) test_lock_unlock_host(host_type, collect_kpi) test_vm_meta_data_retrieval() test_reboot_only_host(self, get_zone) test_migrate_vm(check_system, guest_os, mig_type, cpu_pol) test_nova_actions(guest_os, cpu_pol, actions) test_vm_with_a_large_volume_live_migrate(vms_, pre_alarm_) test_ping_between_two_vms(guest_os, vm1_vifs, vm2_vifs, skip_for_ovs) To be added: - Deploy stx-openstack from controller-1 (currently initial deployment of stx-openstack is always done from controller-0) - Recovery scenarios: validate pods recovered if process killed or pod deleted (eg: nova-compute, libvirtd) Thanks, Numan. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Mon Jan 21 21:10:29 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 21 Jan 2019 21:10:29 +0000 Subject: [Starlingx-discuss] [Test] STX Sanity Test Cases after Containerization In-Reply-To: <3CAA827B7A79BA46B15B280EC82088FE482585E4@ALA-MBD.corp.ad.wrs.com> References: <3CAA827B7A79BA46B15B280EC82088FE482585E4@ALA-MBD.corp.ad.wrs.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD6061D@FMSMSX114.amr.corp.intel.com> Awesome, Numan... let me check them. Let's sync about this tomorrow. A. From: Waheed, Numan [mailto:Numan.Waheed at windriver.com] Sent: Monday, January 21, 2019 2:27 PM To: 'starlingx-discuss at lists.starlingx.io' ; Cabrales, Ada Subject: [Test] STX Sanity Test Cases after Containerization Hi Ada, As discussed last week, please find below test case titles for STX Sanity after containerization. Test Steps for these sanities are posted at the following shared folder: https://drive.google.com/drive/folders/1YlAlWT7FtSFNyYDdJ2hbFFW4aNGQXCHY Name of the files are "platform_sanity.txt" and "stx_openstack_sanity.txt". Please note that there will be two sanities after containerization. First will be for Platform services i.e. without installing OpenStack and the second will be with OpenStack. Sanity (Platform Only) Platform sanity test cases. Following list of test cases can be executed before stx-openstack is deployed. test_launch_app_via_kubectl(copy_test_apps, delete_test_pod, controller) test_launch_app_via_sysinv(copy_test_apps, cleanup_app) test_push_docker_image_to_local_registry(controller) test_upload_helm_charts(copy_test_apps, controller) test_kube_system_services(controller) test_horizon_host_inventory_display(host_inventory_pg) test_lock_active_controller_reject(no_simplex) test_lock_unlock_host(host_type, collect_kpi) test_swact_controller_platform(wait_for_con_drbd_sync_complete) To be added: - Test case to validate container messaging (eg: ssh into pod, messaging between pods) Sanity (With OpenStack) Sanity with stx-openstack. Following list of test cases are executed after stx-openstack is deployed. 
Tests should be executed with various configurations: - system with remote storage, single node system, multi-node system, etc. - https - IPv4, IPv6 test_openstack_services_healthy() test_reapply_stx_openstack(skip_for_no_openstack) test_stx_openstack_helm_override_update_and_reset(skip_for_no_openstack, reset_if_modified) test_horizon_create_delete_instance(instances_pg) test_heat_template() test_system_persist_over_host_reboot(host_type) test_add_host_simplex_negative(simplex_only) test_evacuate_vms(self, vms_) test_swact_controllers(wait_for_con_drbd_sync_complete) test_measurements_for_metric(meter) test_ceilometer_meters_exist(meters) test_system_alarms_and_events_on_lock_unlock_compute(no_simplex) test_lock_unlock_host(host_type, collect_kpi) test_vm_meta_data_retrieval() test_reboot_only_host(self, get_zone) test_migrate_vm(check_system, guest_os, mig_type, cpu_pol) test_nova_actions(guest_os, cpu_pol, actions) test_vm_with_a_large_volume_live_migrate(vms_, pre_alarm_) test_ping_between_two_vms(guest_os, vm1_vifs, vm2_vifs, skip_for_ovs) To be added: - Deploy stx-openstack from controller-1 (currently initial deployment of stx-openstack is always done from controller-0) - Recovery scenarios: validate pods recovered if process killed or pod deleted (eg: nova-compute, libvirtd) Thanks, Numan. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Mon Jan 21 21:37:56 2019 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 21 Jan 2019 13:37:56 -0800 Subject: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images In-Reply-To: <8DA47EAB-9659-4772-8081-BE17CC689541@intel.com> References: <8DA47EAB-9659-4772-8081-BE17CC689541@intel.com> Message-ID: <9e20a178-59fd-7ce8-f46b-b8938da087fb@linux.intel.com> On 1/21/19 11:54 AM, Ponce Castaneda, Guillermo A wrote: > Hello Frank, > > Thanks for initiating the conversation, my proposed solution is to bring > up a docker registry that will have to be in the local network of each > office, so the speed of the pulls will be the faster. > > The problem with this approach might be that the references of the > docker pull have to change so it points to the local docker registry, I > have already implemented this approach locally at GDC and can provide > documentation on how to do this. > > Another approach that I am researching is to use this project: > https://github.com/rpardini/docker-registry-proxy, so far this option > seems much better but I need to explore it a little bit further, I will > provide more details on it as soon as possible. > Do we need access to more than the standard docker hub? It also seems that this approach will require modifications to the images wanting to use the proxy. I am sure this is true in most proxy setups. Sau! > All the feedback and other ideas are welcome. > > Thanks and Regards. > > Guillermo (Memo) Ponce > > *From: *"Miller, Frank" > *Date: *Monday, January 21, 2019 at 12:54 PM > *To: *"Martin, Guillermo Oscar" > *Cc: *"'starlingx-discuss at lists.starlingx.io'" > > *Subject: *[Starlingx-discuss] [Containers] Approach for adding a local > mirror of docker images > > Guillermo: > > As discussed at today’s containerization meeting, please reply with your > initial thoughts on how to address > https://storyboard.openstack.org/#!/story/2004711 .  If you first need > to ask a member of the containerization subteam a few questions to > understand what is needed then try reaching out to Bob Church and Angie > Wang. 
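As one concrete illustration of the registry idea — here a variant using Docker's standard pull-through cache, which avoids having to rewrite pull references (a sketch only, not necessarily the setup used at GDC; the registry.gdc.local name and port are hypothetical):

# on the mirror host: run a registry acting as a pull-through cache of Docker Hub
docker run -d --restart=always --name hub-mirror \
    -p 5000:5000 \
    -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io \
    registry:2

# on each client: declare the mirror in /etc/docker/daemon.json, then restart docker
# {
#     "registry-mirrors": ["http://registry.gdc.local:5000"]
# }
systemctl restart docker
docker pull hello-world    # now served through the local mirror once cached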
> > Frank > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From ada.cabrales at intel.com Mon Jan 21 21:51:13 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 21 Jan 2019 21:51:13 +0000 Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security In-Reply-To: <7DF6804B-15E9-4998-B132-DB38969CFFD2@windriver.com> References: <7DF6804B-15E9-4998-B132-DB38969CFFD2@windriver.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD606C3@FMSMSX114.amr.corp.intel.com> Comments inline > -----Original Message----- > From: Young, Ken [mailto:Ken.Young at windriver.com] > Sent: Friday, January 18, 2019 9:34 AM > To: Victor Rodriguez > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for > security > > See inline. > > On 2019-01-17, 5:34 PM, "Victor Rodriguez" wrote: > > On Wed, Jan 2, 2019 at 10:35 AM Young, Ken > wrote: > > > > Victor, > > > > > > > > Security work is never completed. There is always a long list of inventive > new vulnerabilities and a laundry list of hardening work to be completed. > The vulnerability work, considering the severity, is generally urgent. > Hardening work is not urgent but important. In this case, we are dealing with > a hardening initiative that focuses on a small area of the code. > > > > > > > > The challenge is that these small change proposed have larger > implications. As was pointed out on the gerrit reviews, performance and / or > functional testing is required. > > Hi Ken > > Just to follow the idea of this mail after hollliday break, you mention that: > > My concern is that we affect the timing / behaviour of stx-ha and > stx-metal such that they do not work together in some scenarios. This > will need to be tested and is certainly larger than a sanity. > > Could you please help to describe n human words, ( I can do the script > ) how a good test to probe this would look like? > If you provide me with a basic description of the security test I > could help writing the first draft of a code test that help us to > prove if the flags break the functionality > > Victor, > > At a high level, we need to regress the behaviour of stx-ha and stx-metal to > ensure that there is functional issues introduced by the change to the > compiler. As well, we need to look at the system behaviour of ha and metal > to ensure no changes have been introduced which affect has behaviour: > > - SWACT detection and time > - Multinode failure avoidance > - Heartbeat loss > - lock / unlock > - etc > > I believe that Ada has the test for ha and metal. Please review. > Yes, we executed several test cases covering what Ken mentions (manually). What I'm not sure is about heartbeat loss, but let me check. What we can do is to build a test plan and submit it for revision. When do you need it (and please, don't say tomorrow)? Ada > Regards, > Ken Y > > thanks > > Victor R > > > > > > > > Also, I am wondering if there is a way to phase the effort. For example, is > there a way to break up the flag changes such that the warnings are > separated from the flags which change the compiled code? That way, we > are not trying to jam everything through at once. > > > > > > > > Hope this helps. Happy to discuss when you return from Holliday. 
> > > > > > > > Regards, > > > > Ken Y > > > > > > > > From: Victor Rodriguez > > Date: Friday, December 28, 2018 at 7:34 PM > > To: Curtis > > Cc: "starlingx-discuss at lists.starlingx.io" discuss at lists.starlingx.io> > > Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for > security > > > > > > > > > > > > On Fri, Dec 21, 2018, 07:08 Curtis > > > > > > > > > > > On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez > wrote: > > > > Hi StarlingX community > > > > We can all agree that security is an important feature to be taken > > into consideration in any SW project. In the aim of improving the > > security of the StarlingX project, we have been taking the task to > > propose the use of some compiler flags that prevent and detect some > > security holes, especially by buffer overflow that could lead into ROP > > attacks. > > > > The list of flags that we are proposing are : > > > > Stack-based Buffer Overrun Detection: CFLAGS=”-fstack-protector- > strong” > > > > Fortify source: CFLAGS="-O2 -D_FORTIFY_SOURCE=2" > > Format string vulnerabilities: CFLAGS="-Wformat -Wformat- > security" > > Stack execution protection: LDFLAGS="-z noexecstack" > > Data relocation and protection (RELRO): LDLFAGS="-z relro -z now" > > > > > > These are being analyzed in the following Gerrit reviews (thanks a lot > > for all the good feedback) > > > > https://review.openstack.org/#/c/623608/ > > https://review.openstack.org/#/c/623603/ > > https://review.openstack.org/#/c/623601/ > > https://review.openstack.org/#/c/623599/ > > > > As requested in the Gerrit reviews, there is a proper need to first > > understand what these compiler flags do and what is the impact they > > have at the functional and performance area of the project. This is a > > preliminary report, we will be following up with a test plan for > > functional & performance test plans for the services as a next step. > > This report includes: > > > > * Detailed description of what the compiler flag does > > * Code example that shows how does it work to prevent attacks > > * If there is a change in the binary, we create a microbenchmark that > > shows us how the flag impact the performance > > > > > https://github.com/VictorRodriguez/hobbies/tree/master/c_programing_ex > ercises/cflags_security > > > > As a result of the microbenchmark, the performance impact is not > > relevant ( less than 1% ) using an Ubuntu x86 system ( GCC 5 ) (more > > details on the HW and SW specification upon requests) > > > > The areas of the code we are suggesting on the patches are: > > > > * stx-ha > > * stx-metal > > * stx-nfv > > * stx-fault > > > > We do take care that these flags are not breaking the following areas > > after being applied. > > > > * Build process of the image > > * Sanity test cases after the image is created > > (Ada can give more details on the sanity report of the image generated > > with these flags) > > > > If running the sanity tests are not enough to prove that a change in > > compiler flags do not affect functionality, please gave us the right > > path to follow. > > > > As mentioned before, this is a preliminary report, and that we will be > > following up with a test plan for functional & performance test plans > > for the services as a next step. > > > > Hope this email helps to clarify some questions related to the flags > > and start the follow-up discussion. > > > > > > > > Thanks for the context Victor, it's very helpful to me. 
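A quick way to see the proposed flags end to end is to build a toy program with them and inspect the binary — a minimal sketch assuming gcc and binutils, with the linker flags from the list above passed through the compiler driver via -Wl:

cat > demo.c <<'EOF'
#include <stdio.h>
int main(int argc, char **argv)
{
    printf("built with hardening flags: %s\n", argv[0]);
    return 0;
}
EOF

gcc -fstack-protector-strong -O2 -D_FORTIFY_SOURCE=2 \
    -Wformat -Wformat-security \
    -Wl,-z,noexecstack -Wl,-z,relro -Wl,-z,now \
    demo.c -o demo

readelf -d demo | grep -E 'BIND_NOW|FLAGS'         # BIND_NOW present => full RELRO in effect
readelf -lW demo | grep -E 'GNU_STACK|GNU_RELRO'   # GNU_STACK shown RW (no E) => non-executable stack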
> > > > > > > > Hi Curtis, glad it helps, it was fun to do the research > > > > > > > > One thing I want to mention is something the Kata Containers team was > talking about at the Berlin OpenStack summit, which is when many small > performance hits start to add up. They have to be careful to ensure they > don't have a bunch of smallish looking changes that add up to a large > performance hit over a longer period of time. > > > > > > > > You are right, it's a valid point that we need to take care too > > > > > > > > Overall I'm sure the StarlingX project would like to have some > performance testing, if we don't already, though that can be challenging for > an open source project. I had mentioned OPNFV's Functest and related > projects on the TSC call, but now seeing which components are affected I'm > not sure that would be directly helpful. I look forward to further discussions > around this area. > > > > > > > > Thanks for let me know that, I will take a look at OPNFV's functest and > other projects before the next TSC of 2019 > > > > > > > > I will do my best to came up with a proposal for a better performance > testing. > > > > > > > > Thanks > > > > > > > > Victor Rodriguez > > > > > > > > Thanks, > > > > Curtis > > > > > > > > > > Regards > > > > Victor Rodriguez > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > > > > -- > > > > Blog: serverascode.com > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From guillermo.a.ponce.castaneda at intel.com Mon Jan 21 22:28:48 2019 From: guillermo.a.ponce.castaneda at intel.com (Ponce Castaneda, Guillermo A) Date: Mon, 21 Jan 2019 22:28:48 +0000 Subject: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images In-Reply-To: <9e20a178-59fd-7ce8-f46b-b8938da087fb@linux.intel.com> References: <8DA47EAB-9659-4772-8081-BE17CC689541@intel.com> <9e20a178-59fd-7ce8-f46b-b8938da087fb@linux.intel.com> Message-ID: On 1/21/19, 3:39 PM, "Saul Wold" wrote: On 1/21/19 11:54 AM, Ponce Castaneda, Guillermo A wrote: > Hello Frank, > > Thanks for initiating the conversation, my proposed solution is to bring > up a docker registry that will have to be in the local network of each > office, so the speed of the pulls will be the faster. > > The problem with this approach might be that the references of the > docker pull have to change so it points to the local docker registry, I > have already implemented this approach locally at GDC and can provide > documentation on how to do this. > > Another approach that I am researching is to use this project: > https://github.com/rpardini/docker-registry-proxy, so far this option > seems much better but I need to explore it a little bit further, I will > provide more details on it as soon as possible. > Do we need access to more than the standard docker hub? It also seems that this approach will require modifications to the images wanting to use the proxy. 
We do not really need access to more than the standard Docker Hub, but this is one way to solve the problem for people having trouble with slow networks. The docker-registry-proxy method promises to be transparent to the user: the user only has to modify their docker daemon file to add the registry as a proxy and can then pull images normally. I am working to set that up and run a test on our network right now; once it is done I will be able to tell whether it is really transparent.

I am sure this is true in most proxy setups.

Sau!

> All the feedback and other ideas are welcome.
>
> Thanks and Regards.
>
> Guillermo (Memo) Ponce
>
> *From: *"Miller, Frank"
> *Date: *Monday, January 21, 2019 at 12:54 PM
> *To: *"Martin, Guillermo Oscar"
> *Cc: *"'starlingx-discuss at lists.starlingx.io'"
> *Subject: *[Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images
>
> Guillermo:
>
> As discussed at today's containerization meeting, please reply with your initial thoughts on how to address https://storyboard.openstack.org/#!/story/2004711 . If you first need to ask a member of the containerization subteam a few questions to understand what is needed then try reaching out to Bob Church and Angie Wang.
>
> Frank
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From ada.cabrales at intel.com Mon Jan 21 23:09:42 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Mon, 21 Jan 2019 23:09:42 +0000
Subject: [Starlingx-discuss] [ Test ] Meeting agenda - 01/22/2019
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD607A3@FMSMSX114.amr.corp.intel.com>

Agenda for 01/22/2019 testing meeting

* Review of tasks for 2019.05.0 release
* Update on test repository - Cristopher
* Update on Test Dashboard - Cristopher
* Opens

Regards,
Ada

From juan.carlos.alonso at intel.com Mon Jan 21 23:55:10 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Mon, 21 Jan 2019 23:55:10 +0000
Subject: [Starlingx-discuss] Sanity Test - ISO 20190121
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8B826@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jan-21 (link)

Sanity Test is executed in a Virtual Environment

Status: GREEN

Simplex
Setup           04 TCs [PASS]
Provisioning    01 TCs [PASS]
Sanity          25 TCs [PASS]
TOTAL: [ 30 TCs PASS ]

Duplex
Setup           04 TCs [PASS]
Provisioning    01 TCs [PASS]
Sanity          26 TCs [PASS]
TOTAL: [ 31 TCs PASS ]

Multinode Controller Storage
Setup           04 TCs [PASS]
Provisioning    01 TCs [PASS]
Sanity          26 TCs [PASS]
TOTAL: [ 31 TCs PASS ]

Multinode Dedicated Storage
Setup           04 TCs [PASS]
Provisioning    01 TCs [PASS]
Sanity          26 TCs [PASS]
TOTAL: [ 31 TCs PASS ]

------------------------------------------------------------------

Regards.
Juan Carlos Alonso
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From yong.hu at intel.com Mon Jan 21 14:51:01 2019 From: yong.hu at intel.com (Hu, Yong) Date: Mon, 21 Jan 2019 14:51:01 +0000 Subject: [Starlingx-discuss] Deployment Option In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B127@FMSMSX108.amr.corp.intel.com> Message-ID: Hi Himanshu, “system host-list” doesn’t “see” your compute node and LLDP won’t work, because the mgt port on compute node directly connects to mgt port on controller-0 (rather than both connecting to a hub). Anyway, given you know the MAC of mgt port on compute node, you can have a try to run the following cmd: # system host-add -n compute-0 -p compute -m Regards, yong From: Himanshu Goyal Date: Monday, 21 January 2019 at 8:14 PM To: "Alonso, Juan Carlos" Cc: "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] Deployment Option Thanks Juan, Able to unlock my controller node. But facing Issue in PXE boot of compute node. After unlocking of controller machine not able to see compute host in "system host-list" command. my controller machine is directly connected to compute machine. I'm following the below steps Steps: 1) system host-unlock controller-0 2) system host-list Output:: [wrsroot at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ 3) power on my compute machine. And give option to boot from PXE my compute machine is directly connected with controller with mgmt port. But not able to see host in "system host-list". 4) i tried with system host-add command also, but it is giving below error: Error: [wrsroot at controller-0 ~(keystone_admin)]$ system host-add Host-add Rejected: Cannot add a compute host without specifying a mgmt_ip when static address allocation is configured. Please suggest me the needful change. Regards, Himanshu Goyal On Fri, Jan 18, 2019 at 10:01 PM Alonso, Juan Carlos > wrote: Hi, Good that you could resolve config_controller issue. I am not sure if you can avoid/skip extra configuration for hosts you don’t want during config_controller. I think you need to follow the installation steps normally but provision only the host you want to use. Yesterday I asked to my team and it is possible to deploy 1 controller and 1 compute only, please refer to https://github.com/xe1gyq/starlingx/blob/master/ControllerStorage.md To configure cinder on controller-0 you need to have a partition with space available to be added to cinder-volume. By default should be two partitions on each host, one with available space. Regards. Juan Carlos Alonso From: Himanshu Goyal [mailto:himanshugoyal500 at gmail.com] Sent: Thursday, January 17, 2019 11:16 PM To: Alonso, Juan Carlos > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Deployment Option Thanks Juan, Error is resolved after again boot the server. The reason for that my management interface was not up. I have some Questions: 1) As I'm deploying starlingX with only 1 controller & 1 compute Machine.while executing "sudo config_controller" it is taking the IP Addresses of controller 0, controller 1 & floating IP Address. 
As i have only one controller, So how can i avoid those type of configurations..? 2) Is there any specific installation guide for that(1 controller & 1 compute) type installation currently I'm following the guide: https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage. Please suggest me the need full changes i have to done for that. 3) my controller machine have one Disk of around 3725GB, But when I'm trying to configure cinder on same disk host-disk-list command shows me available disk is as 0 GB. please suggest me if there anyway to use that same disk with cinder. snapshot of host-disk-list: [image.png] Regards, Himanshu Goyal On Thu, Jan 17, 2019 at 8:41 PM Alonso, Juan Carlos > wrote: Hi, On what step of config_controller it is failing? Can you provide the logs? Are you deploying manually or automatic? To apply the config_controller again I think you need to start over the installation process. Regards. Juan Carlos Alonso From: Himanshu Goyal [mailto:himanshugoyal500 at gmail.com] Sent: Thursday, January 17, 2019 7:22 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Deployment Option Hi, I'm trying to install StarlingX Controller 0 on Physical Machine, But it is failing in config_controller at task waiting for service activation...... with Error: "Configuration failed: Timeout waiting for service enable" I'm using ISO available at the path: http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/bootimage.iso Please suggest us the procedure to debug that & how i can re-run the config_controller again. Many Thanks, Himanshu Goyal On Tue, Jan 1, 2019 at 3:34 PM Himanshu Goyal > wrote: Hi, Can we deploy starlingX with 2 Machines 1 controller & 1 Compute Node (Both nodes on different physical Machines). Many Thanks, Himanshu Goyal -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 18670 bytes Desc: image001.png URL: From juan.carlos.alonso at intel.com Mon Jan 21 15:12:14 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Mon, 21 Jan 2019 15:12:14 +0000 Subject: [Starlingx-discuss] Deployment Option In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B127@FMSMSX108.amr.corp.intel.com> Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8B61E@FMSMSX108.amr.corp.intel.com> Hi, The personality of computes changed to “worker”, so the command should be: system host-add -n compute-0 -p worker -m ${mac_address} Regards. Juan Carlos Alonso From: Hu, Yong Sent: Monday, January 21, 2019 8:51 AM To: Himanshu Goyal ; Alonso, Juan Carlos Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Deployment Option Hi Himanshu, “system host-list” doesn’t “see” your compute node and LLDP won’t work, because the mgt port on compute node directly connects to mgt port on controller-0 (rather than both connecting to a hub). Anyway, given you know the MAC of mgt port on compute node, you can have a try to run the following cmd: # system host-add -n compute-0 -p compute -m Regards, yong From: Himanshu Goyal > Date: Monday, 21 January 2019 at 8:14 PM To: "Alonso, Juan Carlos" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] Deployment Option Thanks Juan, Able to unlock my controller node. 
But facing Issue in PXE boot of compute node. After unlocking of controller machine not able to see compute host in "system host-list" command. my controller machine is directly connected to compute machine. I'm following the below steps Steps: 1) system host-unlock controller-0 2) system host-list Output:: [wrsroot at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ 3) power on my compute machine. And give option to boot from PXE my compute machine is directly connected with controller with mgmt port. But not able to see host in "system host-list". 4) i tried with system host-add command also, but it is giving below error: Error: [wrsroot at controller-0 ~(keystone_admin)]$ system host-add Host-add Rejected: Cannot add a compute host without specifying a mgmt_ip when static address allocation is configured. Please suggest me the needful change. Regards, Himanshu Goyal On Fri, Jan 18, 2019 at 10:01 PM Alonso, Juan Carlos > wrote: Hi, Good that you could resolve config_controller issue. I am not sure if you can avoid/skip extra configuration for hosts you don’t want during config_controller. I think you need to follow the installation steps normally but provision only the host you want to use. Yesterday I asked to my team and it is possible to deploy 1 controller and 1 compute only, please refer to https://github.com/xe1gyq/starlingx/blob/master/ControllerStorage.md To configure cinder on controller-0 you need to have a partition with space available to be added to cinder-volume. By default should be two partitions on each host, one with available space. Regards. Juan Carlos Alonso From: Himanshu Goyal [mailto:himanshugoyal500 at gmail.com] Sent: Thursday, January 17, 2019 11:16 PM To: Alonso, Juan Carlos > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Deployment Option Thanks Juan, Error is resolved after again boot the server. The reason for that my management interface was not up. I have some Questions: 1) As I'm deploying starlingX with only 1 controller & 1 compute Machine.while executing "sudo config_controller" it is taking the IP Addresses of controller 0, controller 1 & floating IP Address. As i have only one controller, So how can i avoid those type of configurations..? 2) Is there any specific installation guide for that(1 controller & 1 compute) type installation currently I'm following the guide: https://docs.starlingx.io/installation_guide/controller_storage.html#controller-storage. Please suggest me the need full changes i have to done for that. 3) my controller machine have one Disk of around 3725GB, But when I'm trying to configure cinder on same disk host-disk-list command shows me available disk is as 0 GB. please suggest me if there anyway to use that same disk with cinder. snapshot of host-disk-list: [image.png] Regards, Himanshu Goyal On Thu, Jan 17, 2019 at 8:41 PM Alonso, Juan Carlos > wrote: Hi, On what step of config_controller it is failing? Can you provide the logs? Are you deploying manually or automatic? To apply the config_controller again I think you need to start over the installation process. Regards. 
Juan Carlos Alonso

From: Himanshu Goyal [mailto:himanshugoyal500 at gmail.com]
Sent: Thursday, January 17, 2019 7:22 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Deployment Option

Hi,

I'm trying to install StarlingX controller-0 on a physical machine, but it is failing in config_controller at the task "waiting for service activation..." with the error: "Configuration failed: Timeout waiting for service enable"

I'm using the ISO available at: http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/bootimage.iso

Please suggest the procedure to debug that & how I can re-run config_controller again.

Many Thanks,
Himanshu Goyal

On Tue, Jan 1, 2019 at 3:34 PM Himanshu Goyal > wrote:
Hi,

Can we deploy StarlingX with 2 machines, 1 controller & 1 compute node (both nodes on different physical machines)?

Many Thanks,
Himanshu Goyal

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 18670 bytes
Desc: image001.png
URL: 

From mingyuan.qi at intel.com Tue Jan 22 01:20:49 2019
From: mingyuan.qi at intel.com (Qi, Mingyuan)
Date: Tue, 22 Jan 2019 01:20:49 +0000
Subject: [Starlingx-discuss] [Containers] LP 1812519: config_controller --kubernetes fails at step 06
In-Reply-To: References: Message-ID: 

My latest try was built from last Wednesday's code. I'll try the latest code today along with the proxy patch.

Mingyuan

From: Miller, Frank [mailto:Frank.Miller at windriver.com]
Sent: Tuesday, January 22, 2019 3:39
To: Cordoba Malibran, Erich ; Qi, Mingyuan
Cc: Bailey, Henry Albert (Al) ; 'starlingx-discuss at lists.starlingx.io'
Subject: [Containers] LP 1812519: config_controller --kubernetes fails at step 06

Erich:

You indicated you saw this failure when using an ISO from Jan 17: https://bugs.launchpad.net/starlingx/+bug/1812519

Al cannot reproduce this. The one difference between Al's environment and yours is that Al does not use a proxy. Mingyuan used a proxy a few weeks ago, it worked for him, and he added the steps for using a proxy to the wiki. I have 2 questions:
1. Erich, can you explain any changes you had to make in your environment that are not listed on the current wiki: https://wiki.openstack.org/wiki/StarlingX/Containers/Installation
2. Mingyuan, can you tell us whether you are able to get config_controller --kubernetes to succeed with a load from Jan 17th or later?

Hopefully the answers to the above 2 questions will point to why you are seeing failures while others are not.

Frank

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From build.starlingx at gmail.com Tue Jan 22 06:00:03 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 22 Jan 2019 01:00:03 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 108 - Failure!
Message-ID: <1391271611.248.1548136804188.JavaMail.javamailuser@localhost>

Project: STX_build_master_pike
Build #: 108
Status: Failure
Timestamp: 20190122T060000Z

Check logs at:
$PUBLISH_LOGS_URL
--------------------------------------------------------------------------------
Parameters

From kyle.oh95 at gmail.com Tue Jan 22 06:34:08 2019
From: kyle.oh95 at gmail.com (Jaewook Oh)
Date: Tue, 22 Jan 2019 15:34:08 +0900
Subject: [Starlingx-discuss] After deployment finished, cannot create public flat network
Message-ID: 

Hello, this is Jaewook Oh from IISTRC.
I installed StarlingX on a server, and now I'm trying to create a "flat network" for public access. However, I couldn't find a way to create the network. I found "managed_flat", "managed_vlan", and "managed_vxlan" options in the '/etc/neutron/plugins/ml2/ml2_conf.ini' file.

When I install an OpenStack platform I usually use devstack, and with devstack I could choose the 'flat' option.

Is there any way to create a flat network on the StarlingX OpenStack platform?

Also, network creation keeps failing on the Horizon dashboard, so I had to use the OpenStack CLI. Is that also a bug?

Thanks in advance for any help!

Best Regards,
Jaewook.

================================================
Jaewook Oh (오재욱)
IISTRC - Internet Infra System Technology Research Center
369 Sangdo-ro, Dongjak-gu,
06978, Seoul, Republic of Korea

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yong.hu at intel.com Tue Jan 22 07:07:50 2019
From: yong.hu at intel.com (Hu, Yong)
Date: Tue, 22 Jan 2019 07:07:50 +0000
Subject: [Starlingx-discuss] After deployment finished, cannot create public flat network
In-Reply-To: References: Message-ID: 

Hey,

Please share the error messages you saw on Horizon.

As to your question, "Is there any way to create flat network on StarlingX openstack platform?": yes, you can refer to these commands:

$ openstack help providernet create
$ openstack help network create

Of course, since you had an error on Horizon, there should be something wrong. So, let's figure out why it failed first.

From: Jaewook Oh
Date: Tuesday, 22 January 2019 at 2:35 PM
To: "starlingx-discuss at lists.starlingx.io"
Subject: [Starlingx-discuss] After deployment finished, cannot create public flat network

Hello, this is Jaewook Oh from IISTRC.

I installed StarlingX on a server, and now I'm trying to create a "flat network" for public access. However, I couldn't find a way to create the network. I found "managed_flat", "managed_vlan", and "managed_vxlan" options in the '/etc/neutron/plugins/ml2/ml2_conf.ini' file.

When I install an OpenStack platform I usually use devstack, and with devstack I could choose the 'flat' option.

Is there any way to create a flat network on the StarlingX OpenStack platform?

Also, network creation keeps failing on the Horizon dashboard, so I had to use the OpenStack CLI. Is that also a bug?

Thanks in advance for any help!

Best Regards,
Jaewook.

================================================
Jaewook Oh (오재욱)
IISTRC - Internet Infra System Technology Research Center
369 Sangdo-ro, Dongjak-gu,
06978, Seoul, Republic of Korea

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yong.hu at intel.com Tue Jan 22 07:13:09 2019
From: yong.hu at intel.com (Hu, Yong)
Date: Tue, 22 Jan 2019 07:13:09 +0000
Subject: [Starlingx-discuss] Deployment Option
In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B127@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B61E@FMSMSX108.amr.corp.intel.com>
Message-ID: <673DA92A-9CAD-4EF5-A2FC-4EE22D897B9D@intel.com>

Himanshu,

Could you have a try with a hub that the 2 mgmt ports (from controller and compute) are plugged into? Let's make sure the normal setup works first, and then figure out why the direct cable link doesn't work.

BTW: "worker" and "compute" are just different "personality" names in different STX versions. On your current setup "compute" will do, supposedly.
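For reference, the enrolment flow discussed above can be run end to end from the active controller once the compute node's management MAC is known. A minimal sketch (the host name, MAC, and management IP are examples only; the -i option applies to setups configured with static address allocation, as confirmed later in this thread, and the personality is "compute" on 2018.10 loads and "worker" on newer ones):

  # load the admin credentials
  source /etc/nova/openrc
  # enrol the compute node by its management MAC;
  # add -i only when static address allocation is configured
  system host-add -n compute-0 -p compute -m 00:1e:67:fd:3d:fe -i 192.168.204.3
  # verify the new host appears
  system host-list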
From: Himanshu Goyal
Date: Tuesday, 22 January 2019 at 12:32 AM
To: "Alonso, Juan Carlos"
Cc: "Hu, Yong", "starlingx-discuss at lists.starlingx.io"
Subject: Re: [Starlingx-discuss] Deployment Option

Thanks Juan, Yong,

I tried both commands; the output is shown below:

1)
[wrsroot at controller-0 ~(keystone_admin)]$ system host-add -n compute-0 -p worker -m 00:1e:67:fd:3d:fe
usage: system host-add [-n ] [-p ] [-s ] [-m ] [-i ] [-I ] [-T ] [-U ] [-P ] [-b ] [-r ] [-o ] [-c ] [-v ] [-l ] [-D ]
system host-add: error: argument -p/--personality: invalid choice: 'worker' (choose from 'controller', 'compute', 'storage', 'network', 'profile')

2)
[wrsroot at controller-0 ~(keystone_admin)]$ system host-add -n compute-0 -p compute -m 00:1e:67:fd:3d:fe
Host-add Rejected: Cannot add a compute host without specifying a mgmt_ip when static address allocation is configured.

Regards,
Himanshu Goyal

On Mon, Jan 21, 2019 at 8:42 PM Alonso, Juan Carlos > wrote:
Hi,

The personality of computes changed to "worker", so the command should be:

system host-add -n compute-0 -p worker -m ${mac_address}

Regards.
Juan Carlos Alonso

From: Hu, Yong
Sent: Monday, January 21, 2019 8:51 AM
To: Himanshu Goyal >; Alonso, Juan Carlos >
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Deployment Option

Hi Himanshu,

"system host-list" doesn't "see" your compute node, and LLDP won't work, because the mgmt port on the compute node connects directly to the mgmt port on controller-0 (rather than both connecting to a hub). Anyway, given that you know the MAC of the mgmt port on the compute node, you can try running the following command:

# system host-add -n compute-0 -p compute -m <mgmt MAC of the compute node>

Regards,
yong

From: Himanshu Goyal >
Date: Monday, 21 January 2019 at 8:14 PM
To: "Alonso, Juan Carlos" >
Cc: "starlingx-discuss at lists.starlingx.io" >
Subject: Re: [Starlingx-discuss] Deployment Option

Thanks Juan,

I was able to unlock my controller node, but I am now facing an issue with PXE booting the compute node. After unlocking the controller machine I am not able to see the compute host in the "system host-list" output. My controller machine is directly connected to the compute machine. I'm following the steps below:

Steps:
1) system host-unlock controller-0
2) system host-list
Output:
[wrsroot at controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
3) Power on the compute machine and have it boot from PXE. The compute machine is directly connected to the controller via the mgmt port, but the host still does not show up in "system host-list".
4) I also tried the system host-add command, but it gives the error below:
Error:
[wrsroot at controller-0 ~(keystone_admin)]$ system host-add
Host-add Rejected: Cannot add a compute host without specifying a mgmt_ip when static address allocation is configured.

Please suggest the needed change.

Regards,
Himanshu Goyal

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From haochuan.z.chen at intel.com Tue Jan 22 07:43:31 2019
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Tue, 22 Jan 2019 07:43:31 +0000
Subject: [Starlingx-discuss] Starlingx-discuss Digest, Vol 8, Issue 73
In-Reply-To: References: Message-ID: <56829C2A36C2E542B0CCB9854828E4D8561E503A@CDSMSX101.ccr.corp.intel.com>

Hi Himanshu,

You seem to have set "Dynamic IP address allocation" to N for the management network in config_controller. I have not tried a static IP deployment. You can try the following to triage:

$ system host-add -n compute-0 -p compute -m 00:1e:67:fd:3d:fe -i 192.168.204.3

BR!

Martin, Chen
SSG OTC, Software Engineer
021-61164330

-----Original Message-----
From: starlingx-discuss-request at lists.starlingx.io [mailto:starlingx-discuss-request at lists.starlingx.io]
Sent: Tuesday, January 22, 2019 3:13 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Starlingx-discuss Digest, Vol 8, Issue 73

Today's Topics:

1. Re: Deployment Option (Hu, Yong)

------------------------------
Subject: Digest Footer

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

------------------------------

End of Starlingx-discuss Digest, Vol 8, Issue 73
************************************************

From jwoh95 at dcn.ssu.ac.kr Tue Jan 22 07:46:17 2019
From: jwoh95 at dcn.ssu.ac.kr (Jaewook Oh)
Date: Tue, 22 Jan 2019 16:46:17 +0900
Subject: Re: [Starlingx-discuss] After deployment finished, cannot create public flat network
In-Reply-To: References: Message-ID: 

Hello Hu, Yong,
Thanks for the advice.

On my dashboard the error "Danger: An error occurred. Please try again later." appears, and I cannot open the network creation panel.

I am also trying to find the log on the host, but I cannot locate it. Is logging disabled by default in StarlingX?

BR,
Jaewook.

2019년 1월 22일 (화) 오후 4:08, Hu, Yong 님이 작성:
> Hey,
>
> Pls share the error messages you saw on Horizon.
>
> As to your question: "Is there any way to create flat network on StarlingX
> openstack platform?"
>
> Yes, you can refer to CMD:
>
> $ openstack help providernet create
> $ openstack help network create
>
> Of course, since you had error on Horizon, there should be something wrong.
> So, let's figure out why it failed first.
>
> From: Jaewook Oh
> Date: Tuesday, 22 January 2019 at 2:35 PM
> To: "starlingx-discuss at lists.starlingx.io"
> Subject: [Starlingx-discuss] After deployment finished, cannot create
> public flat network
>
> Hello,
> this is Jaewook Oh from IISTRC.
>
> I installed StarlingX on a server, and now I'm trying to create "flat
> network" for public.
> However I couldn't find the way to make the network.
> I found "managed_flat", "managed_vlan", and "managed_vxlan" options in
> '/etc/neutron/plugins/ml2/ml2_conf.ini' file.
>
> When I install some OpenStack platform, I usually used devstack, and with
> devstack I could choose 'flat' option.
>
> Is there any way to create flat network on StarlingX openstack platform?
>
> And also network creation keeps failing on horizon dashboard. I had to use
> OpenStack CLI. Is it also a bug?
>
> Thanks in advance for any help!
>
> Best Regards,
> Jaewook.
>
> ================================================
> Jaewook Oh (오재욱)
> IISTRC - Internet Infra System Technology Research Center
> 369 Sangdo-ro, Dongjak-gu,
> 06978, Seoul, Republic of Korea
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

--
================================================
Jaewook Oh (오재욱)
IISTRC - Internet Infra System Technology Research Center
369 Sangdo-ro, Dongjak-gu,
06978, Seoul, Republic of Korea
Tel : +82-2-820-0841 | Mobile : +82-10-9924-2618
E-mail : jwoh95 at dcn.ssu.ac.kr
================================================

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yong.hu at intel.com Tue Jan 22 07:56:36 2019
From: yong.hu at intel.com (Hu, Yong)
Date: Tue, 22 Jan 2019 07:56:36 +0000
Subject: [Starlingx-discuss] After deployment finished, cannot create public flat network
In-Reply-To: References: Message-ID: <3A5F2D40-9ED7-4D9E-9E08-B1AE1864823C@intel.com>

Normally, logs are saved under /var/log/, but so far we have no clue about what went wrong.

Assuming the host you mentioned is the active controller, you can check the alarms first by:

controller-0:~$ source /etc/nova/openrc
[wrsroot at controller-0 ~(keystone_admin)]$ fm alarm-list

From: Jaewook Oh
Date: Tuesday, 22 January 2019 at 3:46 PM
To: "Hu, Yong"
Cc: "starlingx-discuss at lists.starlingx.io"
Subject: Re: [Starlingx-discuss] After deployment finished, cannot create public flat network

Hello Hu, Yong,
Thanks for the advice.

On my dashboard the error "Danger: An error occurred. Please try again later." appears, and I cannot open the network creation panel.

I am also trying to find the log on the host, but I cannot locate it. Is logging disabled by default in StarlingX?

BR,
Jaewook.

2019년 1월 22일 (화) 오후 4:08, Hu, Yong >님이 작성:
Hey,

Pls share the error messages you saw on Horizon.
From: Jaewook Oh > Date: Tuesday, 22 January 2019 at 2:35 PM To: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] After deployment finished, cannot create public flat network Hello, this is Jaewook Oh from IISTRC. I installed StarlingX on a server, and now I'm trying to create "flat network" for public. However I couldn't find the way to make the network. I found "managed_flat", "managed_vlan", and "managed_vxlan" options in '/etc/neutron/plugins/ml2/ml2_conf.ini' file. When I install some OpenStack platform, I usually used devstack, and with devstack I could choose 'flat' option. Is there any way to create flat network on StarlingX openstack platform? And also network creation keeps failing on horizon dashboard. I had to use OpenStack CLI. Is it also a bug? Thanks in advance for any help! Best Regards, Jaewook. ================================================ Jaewook Oh (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- ================================================ Jaewook Oh (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea Tel : +82-2-820-0841 | Mobile : +82-10-9924-2618 E-mail : jwoh95 at dcn.ssu.ac.kr ================================================ -------------- next part -------------- An HTML attachment was scrubbed... URL: From km.giuseppesannino at gmail.com Tue Jan 22 08:09:56 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Tue, 22 Jan 2019 09:09:56 +0100 Subject: [Starlingx-discuss] After deployment finished, cannot create public flat network In-Reply-To: References: Message-ID: Hi Jaewook, I had a similar issue after deploying an AIO Simplex StarlingX. I had to re-define the host-if on the controller-0 first and then create the related providernet. Here an example. Hope it helps. 
[wrsroot at controller-0 ~(keystone_admin)]$ system host-if-list controller-0
+--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+
| uuid                                 | name | class    | type     | vlan | ports     | uses | used | attributes        | provider networks |
|                                      |      |          |          | id   |           | i/f  | by   |                   |                   |
|                                      |      |          |          |      |           |      | i/f  |                   |                   |
+--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+
| 38da435a-5fc5-44ac-b038-52af9f23d52d | lo   | platform | virtual  | None | []        | []   | []   | MTU=1500          | None              |
| 7f806264-9c19-45f4-b7d7-df1f90e9d540 | eno5 | platform | ethernet | None | [u'eno5'] | []   | []   | MTU=1500          | None              |
| 9f7365e8-bc9a-4c9c-8725-72d40f5a18ff | eno6 | data     | ethernet | None | [u'eno6'] | []   | []   | MTU=1500,         | public_flat       |
|                                      |      |          |          |      |           |      |      | accelerated=True  |                   |
+--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+

[wrsroot at controller-0 ~(keystone_admin)]$ openstack providernet list
+--------------------------------------+---------------+------+------+--------+
| ID                                   | Name          | Type | MTU  | Ranges |
+--------------------------------------+---------------+------+------+--------+
| 197c33ba-6db0-4918-9a0e-e98b01aee1e8 | public_flat   | flat | 1500 |        |
+--------------------------------------+---------------+------+------+--------+

Besides, you can't create it via the dashboard; I managed to do it only via the command line. So, something like:

neutron providernet-create public_flat --type=flat
system host-if-list -a controller-0
system host-if-modify -c data controller-0 eno6 -p public_flat
system host-if-list -a controller-0
openstack network create provider_flat --provider-physical-network public_flat --provider-network-type flat --share --external

which will create something like:

[wrsroot at controller-0 ~(keystone_admin)]$ openstack network show provider_flat
+---------------------------+--------------------------------------+
| Field                     | Value                                |
+---------------------------+--------------------------------------+
| admin_state_up            | UP                                   |
| :                         |                                      |
| provider:network_type     | flat                                 |
| provider:physical_network | public_flat                          |
| provider:segmentation_id  | None                                 |
| qos_policy_id             | None                                 |
| revision_number           | 4                                    |
| router:external           | External                             |
| :                         |                                      |
+---------------------------+--------------------------------------+

/Giuseppe
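A possible follow-up to Giuseppe's example, sketched here for completeness: giving the new flat network a subnet so that ports created on it can receive addresses. The subnet name and address range below are made-up examples, and whether an external network should hand out addresses at all depends on the deployment:

  openstack subnet create provider_flat_subnet \
    --network provider_flat \
    --subnet-range 192.168.100.0/24 \
    --allocation-pool start=192.168.100.50,end=192.168.100.100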
On Tue, 22 Jan 2019 at 08:47, Jaewook Oh wrote:
> Hello Hu, Yong,
> Thanks for the advice.
>
> On my dashboard, "Danger: An error occurred. Please try again later."
> Above error message appears, and I cannot open network creation panel.
>
> And also I'm now trying to see log in the host, but I cannot find it. Is
> the log disabled by default for StarlingX?
>
> BR,
> Jaewook.
>
> 2019년 1월 22일 (화) 오후 4:08, Hu, Yong 님이 작성:
>> Hey,
>>
>> Pls share the error messages you saw on Horizon.
>>
>> As to your question: "Is there any way to create flat network on
>> StarlingX openstack platform?"
>>
>> Yes, you can refer to CMD:
>>
>> $ openstack help providernet create
>> $ openstack help network create
>>
>> Of course, since you had error on Horizon, there should be something
>> wrong.
>> So, let's figure out why it failed first.

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Volker.Hoesslin at swsn.de Tue Jan 22 08:33:10 2019
From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker)
Date: Tue, 22 Jan 2019 08:33:10 +0000
Subject: [Starlingx-discuss] WG: cpu mode
In-Reply-To: <6e33b4f5-accd-8480-5e07-e8132aeb27f8@windriver.com>
References: <3k03ta01c8bua1mm@shdsegapp2> <1547827665.3455.168.camel@windriver.com> <874eb4ad-b4c9-2ebb-e781-eb043e02308e@windriver.com> <3cad1d6e-3744-1f82-b1d7-ff4743c0e933@windriver.com> <3k03ta01c8bua1sq@shdsegapp2>, <6e33b4f5-accd-8480-5e07-e8132aeb27f8@windriver.com>
Message-ID: 

great! works like a charm! big thx !!!!

volker...

________________________________________
From: Chris Friesen [chris.friesen at windriver.com]
Sent: Monday, 21 January 2019 20:10
To: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io
Subject: Re: AW: [Starlingx-discuss] WG: cpu mode

I think you'd need to edit the code on both controllers to add "EPYC" and "EPYC-IBPB" (with a "Y" instead of an "I") to the list in objects/fields.py such that it comes immediately after the "Passthrough" item. It should look something like this:

class CPUModel(BaseNovaEnum):

    # We use the ordering of the cpu models to determine whether a
    # given host can emulate a specified virtual model, so it's not
    # just an enum.
    ALL = ("Passthrough", "EPYC", "EPYC-IBPB", "Conroe", "Penryn",

You'd then need to restart the nova-scheduler service on the active controller as per the instructions below.

Chris
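To tie the pieces of this thread together: once the extra models are in the CPUModel list and nova-scheduler has been restarted, the vCPU model is requested per flavor via the hw:cpu_model extra spec mentioned elsewhere in this thread. A rough sketch only; the flavor, image, and instance names are made-up examples:

  # request the EPYC-IBPB vCPU model through the flavor extra spec
  openstack flavor set my-epyc-flavor --property hw:cpu_model=EPYC-IBPB
  openstack server create --flavor my-epyc-flavor --image my-image my-vm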
On 1/21/2019 10:45 AM, von Hoesslin, Volker wrote:
> i have tried to extend this code:
>
> File: /usr/lib/python2.7/site-packages/nova/objects/fields.py
> Class: class CPUModel(BaseNovaEnum):
>
> and add these two elements to the list:
>
> "EPIC",
> "EPIC-IBPB"
>
> restart the controller, but the same error:
>
> (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough
>
> i'm not sure, is this code used or do i have to recompile some stuff?
>
> volker...
>
> ________________________________________
> From: von Hoesslin, Volker [Volker.Hoesslin at swsn.de]
> Sent: Monday, 21 January 2019 17:07
> To: Chris Friesen; starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] WG: cpu mode
>
> kk, it seems this is the right way, but now i get this error here:
>
> No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough, compute-1: (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough
>
> volker...
> ________________________________________
> From: Chris Friesen [chris.friesen at windriver.com]
> Sent: Monday, 21 January 2019 16:28
> To: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io
> Subject: Re: AW: [Starlingx-discuss] WG: cpu mode
>
> This is another assumption about Intel CPUs. I *think* the following
> should work:
>
> On the controllers, edit
> /usr/lib/python2.7/site-packages/nova/scheduler/filters/vcpu_model_filter.py
> (if that's not the path it should be something pretty close). In the
> "_is_host_kvm" function add the following before the "return False" line:
>
>     if 'svm' in info['features']:
>         return True
>
> Then, on the active controller node run "sudo sm-restart service
> nova-scheduler". This should restart the nova scheduler, and at this
> point you should be able to schedule an instance.
>
> Chris
>
> On 1/21/2019 9:16 AM, von Hoesslin, Volker wrote:
>> this would be very nice, but if i try to launch a vm with a flavor that contains the given extra-spec, i get this error:
>>
>> No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts, compute-1: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts
>>
>> volker...
>> ________________________________________
>> From: Chris Friesen [chris.friesen at windriver.com]
>> Sent: Monday, 21 January 2019 16:09
>> To: starlingx-discuss at lists.starlingx.io
>> Subject: Re: [Starlingx-discuss] WG: cpu mode
>>
>> You shouldn't need to modify nova.conf.
>>
>> With the current codebase you should be able to specify
>> "hw:cpu_model=Passthrough" in the flavor extra-specs.
>>
>> Chris
>>
>> On 1/21/2019 8:30 AM, von Hoesslin, Volker wrote:
>>> i have set "host-passthrough" in "/etc/nova/nova.conf"
>>>
>>> =================================
>>> [DEFAULT]
>>> libvirt_cpu_mode = host-passthrough
>>>
>>> [libvirt]
>>> cpu_mode = host-passthrough
>>> =================================
>>>
>>> and restarted the nova service:
>>> # service nova-compute restart
>>>
>>> for now it works! "lscpu" on the guest os shows me the AMD EPYC with all features, very nice. but after rebooting the compute node, the auto-config script changes this setting back to "none":
>>>
>>> [libvirt]
>>> cpu_mode = none
>>>
>>> and passthrough did not work anymore :( so how can i prevent this auto-config or define my new config as persistent?
>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>

From zhipengs.liu at intel.com Tue Jan 22 08:35:39 2019
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Tue, 22 Jan 2019 08:35:39 +0000
Subject: [Starlingx-discuss] After deploy VM on SRIOV NIC, VM could not get IP from dnsmasq automatically.
Message-ID: <93814834B4855241994F290E959305C75302F13F@SHSMSX103.ccr.corp.intel.com>

Hi Huifeng,

We want to deploy two VMs (VM1, VM2) on 2 SR-IOV VFs, and PT_VM on an SR-IOV PF as passthrough, all on the same worker node (2 different physical ports of one SR-IOV NIC).

As we discussed, the issue is that VM1, VM2 and PT_VM cannot get an IP from dnsmasq. I am still not sure whether this is an expected case. I can configure IPs for them manually, and then ping between VM1 and VM2 works! If I use a network cable to connect these 2 physical ports after configuring the IPs, ping between VM1 and PT_VM does not work!

openstack server create --flavor flavor-pcipt --image centos-root-img --port sriov-port vm1
openstack server create --flavor flavor-pcipt --image centos-root-img --port sriov-port2 vm2

Then I tried to deploy VM3 as below:

openstack server create --flavor flavor-pcipt --image centos-root-img --nic net-id=net-testpci vm3

I can see that VM3 gets an IP automatically from dnsmasq, but ping from VM3 to VM1/VM2 does not work!

vm3 | ACTIVE | net-testpci=28.10.10.20 | centos-root-img | flavor-pcipt
vm2 | ACTIVE | net-testpci=28.10.10.19 | centos-root-img | flavor-pcipt
vm1 | ACTIVE | net-testpci=28.10.10.16 | centos-root-img | flavor-pcipt

From the doc below, it seems that ping between VM1 and VM3 should work after some configuration for the FDB L2 Agent Extension. I tried to add this extension to the file below and restarted the service; however, it then caused VMs to fail to be created:

compute-5:/etc/neutron/plugins/ml2/openvswitch_agent.ini

https://wiki.openstack.org/wiki/SR-IOV-Passthrough-For-Networking

FDB L2 Agent Extension
The FDB population is an L2 agent extension to the OVS agent or Linux bridge. Its objective is to update the FDB table for existing instances using normal ports, thus enabling communication between SR-IOV instances and normal instances. The use cases of the FDB population extension are:
1. Direct port and normal port instances reside on the same compute node.
2. A direct port instance using a floating IP and the network node are located on the same host.

Thanks!
zhipeng

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From austin.sun at intel.com Tue Jan 22 08:35:44 2019
From: austin.sun at intel.com (Sun, Austin)
Date: Tue, 22 Jan 2019 08:35:44 +0000
Subject: [Starlingx-discuss] After deployment finished, cannot create public flat network
In-Reply-To: References: Message-ID: 

Hi Jaewook:

You can try opening http://10.10.10.2/admin/providernets/ (please change 10.10.10.2 to your OAM IP). There you can define flat provider networks.

About the logs, you can run the 'collect' command to gather all logs and configs.

Thanks.
BR
Austin Sun.
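Regarding the FDB L2 Agent Extension mentioned in the SR-IOV question above: per the upstream OpenStack SR-IOV networking guide, enabling it amounts to an agent-extension entry plus a device mapping in openvswitch_agent.ini. A sketch of the relevant sections; the option names follow the upstream guide rather than a StarlingX build, and the physnet/interface values are examples only:

  [agent]
  extensions = fdb

  [FDB]
  shared_physical_device_mappings = providernet-a:eth0

After editing, the OVS agent has to be restarted for the extension to load; whether this resolves the VM-creation failure Zhipeng saw would need to be confirmed against the agent logs.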
From: Giuseppe Sannino [mailto:km.giuseppesannino at gmail.com] Sent: Tuesday, January 22, 2019 4:10 PM To: Jaewook Oh Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] After deployment finished, cannot create public flat network Hi Jaewook, I had a similar issue after deploying an AIO Simplex StarlingX. I had to re-define the host-if on the controller-0 first and then create the related providernet. Here an example. Hope it helps. [wrsroot at controller-0 ~(keystone_admin)]$ system host-if-list controller-0 +--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+ | uuid | name | class | type | vlan | ports | uses | used | attributes | provider networks | | | | | | id | | i/f | by | | | | | | | | | | | i/f | | | +--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+ | 38da435a-5fc5-44ac-b038-52af9f23d52d | lo | platform | virtual | None | [] | [] | [] | MTU=1500 | None | | 7f806264-9c19-45f4-b7d7-df1f90e9d540 | eno5 | platform | ethernet | None | [u'eno5'] | [] | [] | MTU=1500 | None | | 9f7365e8-bc9a-4c9c-8725-72d40f5a18ff | eno6 | data | ethernet | None | [u'eno6'] | [] | [] | MTU=1500, | public_flat | | | | | | | | | | accelerated=True | | | | | | | | | | | | | +--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+ [wrsroot at controller-0 ~(keystone_admin)]$ openstack providernet list +--------------------------------------+---------------+------+------+--------+ | ID | Name | Type | MTU | Ranges | +--------------------------------------+---------------+------+------+--------+ | 197c33ba-6db0-4918-9a0e-e98b01aee1e8 | public_flat | flat | 1500 | | +--------------------------------------+---------------+------+------+--------+ Besides, you can't create it via dashboard,. I managed to do it only via command. So something like: neutron providernet-create public_flat --type=flat system host-if-list -a controller-0 system host-if-modify -c data controller-0 eno6 -p public_flat system host-if-list -a controller-0 openstack network create provider_flat --provider-physical-network public_flat --provider-network-type flat --share --external which will create something like: [wrsroot at controller-0 ~(keystone_admin)]$ openstack network show provider_flat +---------------------------+--------------------------------------+ | Field | Value | +---------------------------+--------------------------------------+ | admin_state_up | UP | : | provider:network_type | flat | | provider:physical_network | public_flat | | provider:segmentation_id | None | | qos_policy_id | None | | revision_number | 4 | | router:external | External | : +---------------------------+--------------------------------------+ /Giuseppe On Tue, 22 Jan 2019 at 08:47, Jaewook Oh > wrote: Hello Hu, Yong, Thanks for the advice. On my dashboard, "Danger: An error occurred. Please try again later." Above error message appears, and I cannot open network creation panel. And also I'm now trying to see log in the host, but I cannot find it. Is the log disabled by default for StarlingX? BR, Jaewook. 2019년 1월 22일 (화) 오후 4:08, Hu, Yong >님이 작성: Hey, Pls share the error messages you saw on Horizon. 
As to your question: "Is there any way to create flat network on StarlingX openstack platform?"

Yes, you can refer to CMD:

$ openstack help providernet create
$ openstack help network create

Of course, since you had error on Horizon, there should be something wrong. So, let's figure out why it failed first.

From: Jaewook Oh
Date: Tuesday, 22 January 2019 at 2:35 PM
To: "starlingx-discuss at lists.starlingx.io"
Subject: [Starlingx-discuss] After deployment finished, cannot create public flat network

Hello, this is Jaewook Oh from IISTRC.

I installed StarlingX on a server, and now I'm trying to create "flat network" for public. However I couldn't find the way to make the network. I found "managed_flat", "managed_vlan", and "managed_vxlan" options in '/etc/neutron/plugins/ml2/ml2_conf.ini' file.

When I install some OpenStack platform, I usually used devstack, and with devstack I could choose 'flat' option.

Is there any way to create flat network on StarlingX openstack platform?

And also network creation keeps failing on horizon dashboard. I had to use OpenStack CLI. Is it also a bug?

Thanks in advance for any help!

Best Regards,
Jaewook.

================================================
Jaewook Oh (오재욱)
IISTRC - Internet Infra System Technology Research Center
369 Sangdo-ro, Dongjak-gu,
06978, Seoul, Republic of Korea

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From build.starlingx at gmail.com Tue Jan 22 09:00:03 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 22 Jan 2019 04:00:03 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 23 - Failure!
Message-ID: <1718106207.251.1548147603995.JavaMail.javamailuser@localhost>

Project: STX_build_stein_master
Build #: 23
Status: Failure
Timestamp: 20190122T090000Z

Check logs at:
$PUBLISH_LOGS_URL
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS: false

From chenjie.xu at intel.com Tue Jan 22 12:52:54 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Tue, 22 Jan 2019 12:52:54 +0000
Subject: [Starlingx-discuss] Questions on [Enhancement] OVS process monitoring and alarming
Message-ID: 

Hi Matt,

I'm assigned the story [Enhancement] OVS process monitoring and alarming: https://storyboard.openstack.org/#!/story/2002947

And I have several questions on this story, as below:
1. Does PMON refer to the following code: https://git.starlingx.io/cgit/stx-metal/tree/mtce-common/cgts-mtce-common-1.0/pmon?id=82e851d65129e819e2564fde91d48235e528efdd
2. Do I need to extend the above PMON as an OVS PMON?
3. Will the OVS PMON be integrated into stx-neutron?

Best Regards,
Xu, Chenjie

-------------- next part --------------
An HTML attachment was scrubbed...
URL: From claire at openstack.org Tue Jan 22 14:31:16 2019 From: claire at openstack.org (Claire Massey) Date: Tue, 22 Jan 2019 06:31:16 -0800 Subject: [Starlingx-discuss] CFP Open Until January 23, Open Infrastructure Summit in Denver In-Reply-To: <5668ABF7-6492-4FB7-B8A8-38F282262CEE@openstack.org> References: <5668ABF7-6492-4FB7-B8A8-38F282262CEE@openstack.org> Message-ID: <7A568A23-2FF2-4838-AB69-A413D511199D@openstack.org> Friendly reminder - *Tomorrow, Jan 23*, is the CFP deadline for the Denver Open Infrastructure Summit (formerly the OpenStack Summit). Submit talks here: https://www.openstack.org/summit/denver-2019/. > On Dec 18, 2018, at 8:13 AM, Claire Massey wrote: > > Hi everyone, > > FYI - the CFP is now open for the first Open Infrastructure Summit (formerly the OpenStack Summit) which will be held in Denver, Colorado April 29 - May 1, 2019. > > Wednesday, *January 23* is the deadline to Submit presentations . > > The Open Infrastructure Summit is organized by OSF and designed to be a place where open source infrastructure communities can come together and collaborate in the open. StarlingX will have a large and prominent presence at the event so please submit talks to the CFP and plan to attend! > > SUBMIT YOUR PRESENTATION > Important info: > Based on previous Program Committee and attendee feedback, we have added / updated three Tracks: Security, Getting Started, and Open Development (previously Open Source Community). You can find the Track descriptions here . > All of the OSF pilot projects —including Airship, Kata Containers, StarlingX and Zuul — will be front and center alongside other open source communities like Ansible, Cloud Foundry, Docker, Kubernetes, and many more. > The Open Infrastructure Summit (formerly the OpenStack Summit), has evolved to recognize our diverse audience, and to signal to the market that the event is relevant for all IT infrastructure decision makers. > If you’re interested in influencing the Summit content, apply to be a Programming Committee member *, where you can also find a full list of time requirements and expectations. Nominations will close on January 4, 2019. The content submission process for the Forum and Project Teams Gathering will be managed separately in the upcoming months. > > *OSF Staff will serve as the Programming Committee for the Getting Started Track. > > Denver Summit registration and sponsor sales are currently open. Learn more and email summit at openstack.org with any questions. > > Please email speakersupport at openstack.org with any questions or feedback. > > Thanks, > Claire > -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From xiongzhiwei at baicells.com Tue Jan 22 10:31:31 2019 From: xiongzhiwei at baicells.com (xiongzhiwei at baicells.com) Date: Tue, 22 Jan 2019 18:31:31 +0800 Subject: [Starlingx-discuss] Mount error when executing build-pkgs Message-ID: <2019012218313131162786@baicells.com> Hi all, When I execute the "build-pkgs" in container, below errors printed: 06:22:54 mock_update_or_init: in 06:22:55 b1: mock_update_or_init_cfg: /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b1.cfg 06:22:55 b1: Updating the mock environment 06:22:55 b1: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b1.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b1: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b1.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b1: INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 06:22:55 b1: Start: init plugins 06:22:55 b1: INFO: tmpfs initialized 06:22:55 b1: INFO: selinux disabled 06:22:55 b1: Finish: init plugins 06:22:55 b1: Start: run 06:22:55 b1: Start: chroot init 06:22:55 b1: INFO: mounting tmpfs at /localdisk/loadbuild/ubuntu/starlingx/std/mock/b1/root. 06:22:55 b1: ERROR: Command failed: 06:22:55 b1: # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=11g mock_chroot_tmpfs /localdisk/loadbuild/ubuntu/starlingx/std/mock/b1/root 06:22:55 b1: 06:22:55 b2: mock_update_or_init_cfg: /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b2.cfg 06:22:55 b2: Updating the mock environment 06:22:55 b2: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b2.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b2: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b2.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b2: INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 06:22:55 b2: Start: init plugins 06:22:55 b2: INFO: tmpfs initialized 06:22:55 b2: INFO: selinux disabled 06:22:55 b2: Finish: init plugins 06:22:55 b2: Start: run 06:22:55 b2: Start: chroot init 06:22:55 b2: INFO: mounting tmpfs at /localdisk/loadbuild/ubuntu/starlingx/std/mock/b2/root. 
06:22:55 b2: ERROR: Command failed: 06:22:55 b2: # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=11g mock_chroot_tmpfs /localdisk/loadbuild/ubuntu/starlingx/std/mock/b2/root 06:22:55 b2: 06:22:55 b3: mock_update_or_init_cfg: /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b3.cfg 06:22:55 b3: Updating the mock environment 06:22:55 b3: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b3.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b3: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b3.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b3: INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 06:22:55 b3: Start: init plugins 06:22:55 b3: INFO: tmpfs initialized 06:22:55 b3: INFO: selinux disabled 06:22:55 b3: Finish: init plugins 06:22:55 b3: Start: run 06:22:55 b3: Start: chroot init 06:22:55 b3: INFO: mounting tmpfs at /localdisk/loadbuild/ubuntu/starlingx/std/mock/b3/root. 06:22:55 b3: ERROR: Command failed: 06:22:55 b3: # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=11g mock_chroot_tmpfs /localdisk/loadbuild/ubuntu/starlingx/std/mock/b3/root The full log attached. Who can help me to fix it? thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: build-pkgs.log Type: application/octet-stream Size: 1373956 bytes Desc: not available URL: From Matt.Peters at windriver.com Tue Jan 22 14:35:36 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Tue, 22 Jan 2019 14:35:36 +0000 Subject: [Starlingx-discuss] Questions on [Enhancement] OVS process monitoring and alarming In-Reply-To: References: Message-ID: Hi Chenjie, The code that you reference below is the pmon daemon code which does not need to be updated. The story requires the creation of a set of pmon configuration files and associated puppet file link operations for when the vswitch type is ovs-dpdk. Here is the existing one for ovsdb-server for OVS (not yet integrated). https://github.com/openstack/stx-integ/blob/master/networking/openvswitch/files/ovsdb-server.pmon.conf It is packaged by this rpm spec. https://github.com/openstack/stx-integ/blob/master/networking/openvswitch-config/centos/openvswitch-config.spec You will also require puppet changes to link to this file from /etc/pmon.d (this will register the configuration with pmon daemon). 
https://github.com/openstack/stx-config/blob/master/puppet-manifests/src/modules/platform/manifests/vswitch.pp

if $::platform::params::vswitch_type == 'ovs-dpdk' {
  $pmon_ensure = link
} else {
  $pmon_ensure = absent
}

file { '/etc/pmon.d/ovsdb-server.conf':
  ensure => $pmon_ensure,
  target => '/etc/openvswitch/ovsdb-server.pmon.conf',
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
}

From: "Xu, Chenjie"
Date: Tuesday, January 22, 2019 at 7:53 AM
To: "Peters, Matt"
Cc: "starlingx-discuss at lists.starlingx.io"
Subject: [Starlingx-discuss] Questions on [Enhancement] OVS process monitoring and alarming

Hi Matt,

I'm assigned the story [Enhancement] OVS process monitoring and alarming: https://storyboard.openstack.org/#!/story/2002947

And I have several questions on this story, as below:
1. Does PMON refer to the following code: https://git.starlingx.io/cgit/stx-metal/tree/mtce-common/cgts-mtce-common-1.0/pmon?id=82e851d65129e819e2564fde91d48235e528efdd
2. Do I need to extend the above PMON as an OVS PMON?
3. Will the OVS PMON be integrated into stx-neutron?

Best Regards,
Xu, Chenjie

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From scott.little at windriver.com Tue Jan 22 15:00:02 2019
From: scott.little at windriver.com (Scott Little)
Date: Tue, 22 Jan 2019 10:00:02 -0500
Subject: Re: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 23 - Failure!
In-Reply-To: <1718106207.251.1548147603995.JavaMail.javamailuser@localhost>
References: <1718106207.251.1548147603995.JavaMail.javamailuser@localhost>
Message-ID: 

Scripting error on my part. Variable used before set. A rebuild is underway.

Scott

On 2019-01-22 4:00 a.m., build.starlingx at gmail.com wrote:
> Project: STX_build_stein_master
> Build #: 23
> Status: Failure
> Timestamp: 20190122T090000Z
>
> Check logs at:
> $PUBLISH_LOGS_URL
> --------------------------------------------------------------------------------
> Parameters
>
> BUILD_CONTAINERS: false
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
From mingyuan.qi at intel.com Tue Jan 22 15:41:43 2019 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Tue, 22 Jan 2019 15:41:43 +0000 Subject: Re: [Starlingx-discuss] Mount error when executing build-pkgs In-Reply-To: <2019012218313131162786@baicells.com> References: <2019012218313131162786@baicells.com> Message-ID: Hi Tim, It's the nr_inodes=0 issue. Please run the command below in the container and then build again: sudo sed -i 's/nr_inodes=0/nr_inodes=100k/g' /usr/lib/python2.7/site-packages/mockbuild/plugins/tmpfs.py Mingyuan
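If you want to double-check before rebuilding, something like this should confirm the change took effect (the /mnt target below is just an arbitrary example for the manual test):

grep nr_inodes /usr/lib/python2.7/site-packages/mockbuild/plugins/tmpfs.py
sudo mount -t tmpfs -o mode=0755,nr_inodes=100k,size=1g test_tmpfs /mnt && sudo umount /mnt

If the manual mount succeeds with nr_inodes=100k while the nr_inodes=0 form from your log still fails in your container, that confirms the diagnosis.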
From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Tuesday, January 22, 2019 18:32 To: starlingx-discuss Subject: [Starlingx-discuss] Mount error when executing build-pkgs Hi all, When I execute "build-pkgs" in the container, the errors below are printed: 06:22:55 b1: ERROR: Command failed: 06:22:55 b1: # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=11g mock_chroot_tmpfs /localdisk/loadbuild/ubuntu/starlingx/std/mock/b1/root [...] (the same mount failure repeats for builders b2 and b3) The full log is attached. Can anyone help me fix it? Thanks. BR Tim Xiong
From erich.cordoba.malibran at intel.com Tue Jan 22 15:52:38 2019 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Tue, 22 Jan 2019 15:52:38 +0000 Subject: Re: [Starlingx-discuss] [Containers] LP 1812519: config_controller --kubernetes fails at step 06 In-Reply-To: References: Message-ID: <080ca09131a1535fbd7d0b5ca6c83f9102e26948.camel@intel.com> On Tue, 2019-01-22 at 09:20 +0800, Qi, Mingyuan wrote: > My latest try is built from last Wednesday's code. I'll try the > latest code today along with the proxy patch. > > Mingyuan > > From: Miller, Frank [mailto:Frank.Miller at windriver.com] > Sent: Tuesday, January 22, 2019 3:39 > To: Cordoba Malibran, Erich ; Qi, > Mingyuan > Cc: Bailey, Henry Albert (Al) ; 'starlingx-discuss at lists.starlingx.io' > Subject: [Containers] LP 1812519: config_controller --kubernetes > fails at step 06 > > Erich: > > You indicated you saw this failure when using an ISO from Jan 17: > https://bugs.launchpad.net/starlingx/+bug/1812519 > > Al cannot reproduce this. The one difference between Al's > environment and yours is Al does not use a proxy. Mingyuan used a > proxy a few weeks ago and this worked for him and he added steps to > use a proxy to the wiki. I have 2 questions: > 1. Erich can you explain any changes you had to make in your > environment that are not listed on the current wiki: > https://wiki.openstack.org/wiki/StarlingX/Containers/Installation I'm using a libvirt/qemu setup. I wasn't able to set up a NAT network, but I defined some iptables rules to get internet access from the VM. I set the proxy settings in /etc/environment and in /etc/systemd/system/docker.service.d/. This was my initial no_proxy: no_proxy=localhost,127.0.0.1,192.168.206.2,172.16.0.0/16,10.96.0.0/12 I started adding some networks there as I saw errors in puppet.log. At this point I can curl any host on the internet, so external networking is working.
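For reference, the docker drop-in I am using looks roughly like this (the proxy host and port are placeholders for our internal proxy; the no_proxy values mirror the ones above):

# /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://myproxy.example.com:8080"
Environment="HTTPS_PROXY=http://myproxy.example.com:8080"
Environment="NO_PROXY=localhost,127.0.0.1,192.168.206.2,172.16.0.0/16,10.96.0.0/12"

followed by sudo systemctl daemon-reload && sudo systemctl restart docker so the daemon picks it up.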
With this setup I run the config_controller --kubernetes command and I see the behavior described in the initial bug report. Then I tried a different no_proxy value; Jose told me that he was able to pass config_controller on VirtualBox using only no_proxy=127.0.0.1. With no_proxy=127.0.0.1 the config_controller --kubernetes succeeds, but I can't do a 'source /etc/platform/openrc'; I get the following errors: controller-0:~$ source /etc/platform/openrc Openstack Admin credentials can only be loaded from the active controller. I tried to force execution of the controller_config script but I get: ***************************************************** ***************************************************** Unable to get IP from host: controller-0 ***************************************************** ***************************************************** Pausing for 5 seconds... and checking /etc/hosts, the file is incomplete: # HEADER: This file was autogenerated at 2019-01-21 18:59:58 +0000 # HEADER: by puppet. While it can still be managed manually, it # HEADER: is definitely not recommended. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 127.0.0.1 localhost localhost.localdomain controller ::1 controller Anyway, something bad happened even with config_controller running without errors. Also, I tried VirtualBox in a non-proxy environment and everything worked. Today I'll try VirtualBox behind a proxy. As my libvirt/qemu setup has internet access, I can't think of a reason why it works with VirtualBox but not with my setup. I'll confirm this. > 2. Mingyuan can you tell us if you are able to get > config_controller --kubernetes to succeed with a load from Jan 17th or > later? > > Hopefully answers to the above 2 questions will point to why you are seeing failures while others are not. > > Frank >
From chris.friesen at windriver.com Tue Jan 22 15:58:30 2019 From: chris.friesen at windriver.com (Chris Friesen) Date: Tue, 22 Jan 2019 09:58:30 -0600 Subject: [Starlingx-discuss] WG: cpu mode In-Reply-To: References: <3k03ta01c8bua1mm@shdsegapp2> <1547827665.3455.168.camel@windriver.com> <874eb4ad-b4c9-2ebb-e781-eb043e02308e@windriver.com> <3cad1d6e-3744-1f82-b1d7-ff4743c0e933@windriver.com> <3k03ta01c8bua1sq@shdsegapp2> <6e33b4f5-accd-8480-5e07-e8132aeb27f8@windriver.com> Message-ID: <03219d2f-0163-0c5d-2441-f308d25c6c73@windriver.com> Good to hear. Just a caveat for you and anyone else that wants to try this: you shouldn't run with that change on a cluster that has compute nodes with Intel CPUs, as it'll let you try to run an EPYC guest on an Intel host, which will fail miserably. Once we switch over to the upstream nova code this issue will go away; however, so will the whole mechanism to specify a CPU model in the flavor. Instead, there will be a separate way to request CPU features. (See https://github.com/openstack/nova-specs/blob/master/specs/stein/approved/cpu-model-selection.rst for details on what we're planning to push upstream.) Chris On 1/22/2019 2:33 AM, von Hoesslin, Volker wrote: > great! works like a charm! big thx !!!! > > volker... > ________________________________________ > Von: Chris Friesen [chris.friesen at windriver.com] > Gesendet: Montag, 21.
Januar 2019 20:10 > An: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io > Betreff: Re: AW: [Starlingx-discuss] WG: cpu mode > > I think you'd need to edit the code on both controllers to add "EPYC" > and "EPYC-IBPB" (with a "Y" instead of an "I") to the list in > objects/fields.py such that it comes immediately after the "Passthrough" > item. It should look something like this: > > class CPUModel(BaseNovaEnum): > # We use the ordering of the cpu models to determine whether a > # given host can emulate a specified virtual model, so it's not > # just an enum. > ALL = ("Passthrough", > "EPYC", > "EPYC-IBPB", > "Conroe", > "Penryn", > > You'd then need to restart the nova-scheduler service on the active > controller as per the instructions below. > > Chris > > On 1/21/2019 10:45 AM, von Hoesslin, Volker wrote: >> i have tried to extend this code: >> >> File: /usr/lib/python2.7/site-packages/nova/objects/fields.py >> Class: class CPUModel(BaseNovaEnum): >> >> and add this two elements to list: >> >> "EPIC", >> "EPIC-IBPB" >> >> restart the controller, but the same error: >> >> (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough >> >> i'm not sure, is this code used or have to recompile some stuff? >> >> volker... >> >> ________________________________________ >> Von: von Hoesslin, Volker [Volker.Hoesslin at swsn.de] >> Gesendet: Montag, 21. Januar 2019 17:07 >> An: Chris Friesen; starlingx-discuss at lists.starlingx.io >> Betreff: Re: [Starlingx-discuss] WG: cpu mode >> >> kk, it seems it is the right way, but now i get this error here: >> >> No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough, compute-1: (VCpuModelFilter) Host VCPU model EPYC-IBPB required Passthrough >> >> volker... >> ________________________________________ >> Von: Chris Friesen [chris.friesen at windriver.com] >> Gesendet: Montag, 21. Januar 2019 16:28 >> An: von Hoesslin, Volker; starlingx-discuss at lists.starlingx.io >> Betreff: Re: AW: [Starlingx-discuss] WG: cpu mode >> >> This is another assumption about Intel CPUs. I *think* the following >> should work: >> >> On the controllers, edit >> /usr/lib/python2.7/site-packages/nova/scheduler/filters/vcpu_model_filter.py >> (if that's not the path it should be something pretty close). In the >> "_is_host_kvm" function add the following before the "return False" line: >> >> if 'svm' in info['features']: >> return True >> >> Then, on the active controller node run "sudo sm-restart service >> nova-scheduler". This should restart the nova scheduler, and at this >> point you should be able to schedule an instance. >> >> Chris >> >> >> On 1/21/2019 9:16 AM, von Hoesslin, Volker wrote: >>> this would be very nice, but if i try to launch a vm with a flavor that contain the given extra-spec, i get this error: >>> >>> No valid host was found. There are not enough hosts available. compute-0: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts, compute-1: (VCpuModelFilter) Passthrough VCPU Model only available on 'kvm' hosts >>> >>> volker... >>> ________________________________________ >>> Von: Chris Friesen [chris.friesen at windriver.com] >>> Gesendet: Montag, 21. Januar 2019 16:09 >>> An: starlingx-discuss at lists.starlingx.io >>> Betreff: Re: [Starlingx-discuss] WG: cpu mode >>> >>> You shouldn't need to modify nova.conf. >>> >>> With the current codebase you should be able to specify >>> "hw:cpu_model=Passthrough" in the flavor extra-specs. 
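>>> For example, something along these lines should work (the flavor name here is just a placeholder):
>>>
>>> openstack flavor set my-flavor --property hw:cpu_model=Passthrough
>>> openstack flavor show my-flavor | grep cpu_model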
>>> >>> Chris >>> >>> On 1/21/2019 8:30 AM, von Hoesslin, Volker wrote: >>>> i have set "host-passthrough" in "/etc/nova/nova.conf" >>>> >>>> ================================= >>>> [DEFAULT] >>>> libvirt_cpu_mode = host-passthrough >>>> >>>> [libvirt] >>>> cpu_mode = host-passthrough >>>> ================================= >>>> >>>> and restart nove service: >>>> # service nova-compute restart >>>> >>>> for now it works! "lscpu" on guest os shows me the AMD EPIC with all features, very nice. but after reboot the compute-node, the auto-config script change this setting back to "none": >>>> >>>> [libvirt] >>>> cpu_mode = none >>>> >>>> and passthrough did not work anymore :( so how can i prevent this auto-config or define my new config as persistent? >>> >>> >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >>> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > From vm.rod25 at gmail.com Tue Jan 22 16:18:23 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 22 Jan 2019 10:18:23 -0600 Subject: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 108 - Failure! In-Reply-To: <1391271611.248.1548136804188.JavaMail.javamailuser@localhost> References: <1391271611.248.1548136804188.JavaMail.javamailuser@localhost> Message-ID: On Mon, Jan 21, 2019 at 11:59 PM wrote: > > Project: STX_build_master_pike > Build #: 108 > Status: Failure > Timestamp: 20190122T060000Z > > Check logs at: > $PUBLISH_LOGS_URL Can we post the log URL here ? > -------------------------------------------------------------------------------- > Parameters > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From vm.rod25 at gmail.com Tue Jan 22 16:20:40 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 22 Jan 2019 10:20:40 -0600 Subject: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 23 - Failure! In-Reply-To: References: <1718106207.251.1548147603995.JavaMail.javamailuser@localhost> Message-ID: On Tue, Jan 22, 2019 at 9:03 AM Scott Little wrote: > > Scripting error on my part. Variable used before set. A rebuild is underway. 
> Thanks, Scott Please also send the link of the script so we can send patches to improve it Regards > Scott > > On 2019-01-22 4:00 a.m., build.starlingx at gmail.com wrote: > > Project: STX_build_stein_master > Build #: 23 > Status: Failure > Timestamp: 20190122T090000Z > > Check logs at: > $PUBLISH_LOGS_URL > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS: false > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From Ghada.Khalil at windriver.com Tue Jan 22 16:50:40 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Tue, 22 Jan 2019 16:50:40 +0000 Subject: [Starlingx-discuss] Reminder: StarlingX bugs should be reported in launchpad not StoryBoard Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4A974A@ALA-MBD.corp.ad.wrs.com> Hello all, This is just a friendly reminder that StarlingX bugs should be reported in Launchpad, not StoryBoard: https://bugs.launchpad.net/starlingx There have been a couple of recent instances of stories opened to report bugs: https://storyboard.openstack.org/#!/story/2004812 https://storyboard.openstack.org/#!/story/2004825 StarlingX Launchpad bugs go through a regular screening process to determine severity and target release. For enhancements/features in StoryBoard, we follow a release train model, so any items that don't make the release are automatically deferred (unless deemed an anchor feature). Please report bugs in Launchpad. Thanks, Ghada StarlingX Release Prime
From scott.little at windriver.com Tue Jan 22 17:31:28 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 22 Jan 2019 12:31:28 -0500 Subject: Re: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 108 - Failure! In-Reply-To: References: <1391271611.248.1548136804188.JavaMail.javamailuser@localhost> Message-ID: <30d610e5-4c69-b82b-04e6-fad6f15c3992@windriver.com> Normally it would have. The failure was so early, even the log publication path had not yet been calculated. Scott On 2019-01-22 11:18 a.m., Victor Rodriguez wrote: > On Mon, Jan 21, 2019 at 11:59 PM wrote: >> Project: STX_build_master_pike >> Build #: 108 >> Status: Failure >> Timestamp: 20190122T060000Z >> >> Check logs at: >> $PUBLISH_LOGS_URL > Can we post the log URL here? > >> -------------------------------------------------------------------------------- >> Parameters >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From scott.little at windriver.com Tue Jan 22 17:32:17 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 22 Jan 2019 12:32:17 -0500 Subject: Re: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 23 - Failure! In-Reply-To: References: <1718106207.251.1548147603995.JavaMail.javamailuser@localhost> Message-ID: That raises an interesting subject. The script in question is a jenkins script.
Part of a larger build job, which in turn is part of a family of jobs that do various sub-tasks. The editing is through the jenkins web-gui, and that will not be made accessible to the public. I'm troubled that Jenkins doesn't seem to keep an edit history of 'config' changes out of the box, and I've yet to spot a plugin that adds this feature. I've been pondering making the jenkins home directory one big git, with a lot of excludes for all the build history, logs, workspaces and such. Has anyone solved this? A git could readily be published. In the meantime, the script content can be inferred from the various build logs. A successful build would be more informative, e.g. http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190121T060000Z/logs/ Accepting feedback/revisions would be another matter entirely. We'll probably stick with e-mail to myself, and build.starlingx at gmail.com, for the near term. Scott On 2019-01-22 11:20 a.m., Victor Rodriguez wrote: > Thanks, Scott > > Please also send the link of the script so we can send patches to improve it > > Regards
From scott.little at windriver.com Tue Jan 22 17:31:57 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 22 Jan 2019 12:31:57 -0500 Subject: Re: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 108 - Failure! In-Reply-To: <1391271611.248.1548136804188.JavaMail.javamailuser@localhost> References: <1391271611.248.1548136804188.JavaMail.javamailuser@localhost> Message-ID: <85b3c9c0-c3b2-d333-6992-9910416c4bc1@windriver.com> Scripting error on my part. A variable was used before being set. A rebuild is underway. Scott On 2019-01-22 1:00 a.m., build.starlingx at gmail.com wrote: > Project: STX_build_master_pike > Build #: 108 > Status: Failure > Timestamp: 20190122T060000Z > > Check logs at: > $PUBLISH_LOGS_URL > -------------------------------------------------------------------------------- > Parameters > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From sgw at linux.intel.com Tue Jan 22 17:54:15 2019 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 22 Jan 2019 09:54:15 -0800 Subject: Re: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 23 - Failure! In-Reply-To: References: <1718106207.251.1548147603995.JavaMail.javamailuser@localhost> Message-ID: <4f30829a-25ee-9d94-0a6e-29a19b2d156a@linux.intel.com> On 1/22/19 9:32 AM, Scott Little wrote: > That raises an interesting subject. The script in question is a jenkins > script. Part of a larger build job, which in turn is part of a family > of jobs that do various sub-tasks. The editing is through the jenkins > web-gui, and that will not be made accessible to the public. I'm > troubled that Jenkins doesn't seem to keep an edit history of 'config' > changes out of the box, and I've yet to spot a plugin that adds this > feature. > I had this kind of issue in a previous job; we ended up having a jenkins-specific git repo that managed the jenkins-related scripts and had very basic code in the Jenkins Web-GUI.
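In that setup the Web-GUI part of each job shrank to a couple of lines that just fetched and ran the real script from git, roughly like this (the repo URL and script layout are illustrative):

git clone --depth 1 https://git.example.com/jenkins-scripts.git
./jenkins-scripts/jobs/$JOB_NAME.sh

so everything reviewable lived in the repo rather than in the job config.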
The challenge with that was how and when the jobs got triggered, if using CI based on repo changes. So I will ask, on what conditions do you trigger builds on the Jenkins? Sau! > I've been pondering making the jenkins home directory one big git, with > a lot of excludes for all the build history, logs, workspaces and such. > Has anyone solved this? A git could readily be published. > In the meantime, the script content can be inferred from the various > build logs. A successful build would be more informative, e.g. > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190121T060000Z/logs/ > > Accepting feedback/revisions would be another matter entirely. We'll > probably stick with e-mail to myself, and build.starlingx at gmail.com, for > the near term. > > Scott > > On 2019-01-22 11:20 a.m., Victor Rodriguez wrote: >> Thanks, Scott >> >> Please also send the link of the script so we can send patches to improve it >> >> Regards
From guillermo.a.ponce.castaneda at intel.com Tue Jan 22 17:58:49 2019 From: guillermo.a.ponce.castaneda at intel.com (Ponce Castaneda, Guillermo A) Date: Tue, 22 Jan 2019 17:58:49 +0000 Subject: Re: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 23 - Failure! In-Reply-To: References: <1718106207.251.1548147603995.JavaMail.javamailuser@localhost> Message-ID: <3EA5DF3E-8418-4229-8363-A7C4BA7EEEF2@intel.com> Hello Scott, Why don't you put your Jenkins jobs on a git repository? This can be done by issuing a GET method to the address /job/<job name>/config.xml. The GET method will bring back an xml file that represents the configuration of the job, including any scripts contained inside. To issue this GET method you may use your USER and PASSWORD, or if you have an API_TOKEN you can use it instead of your password. This is an example of how it can be done: $ curl https://somejenkinsserver.com/job/some_job/config.xml --user ${USER}:${API_TOKEN} It is also possible to modify jobs by issuing a new XML file via a POST method, in case you want to automate changes based on your git repo. I also happen to have a script that backs up and restores Jenkins job configs; here's a github gist where you can see it: https://gist.github.com/gaponcec/8f43635707849feae8555fd4d2572755 From: Scott Little Date: Tuesday, January 22, 2019 at 11:35 AM To: Victor Rodriguez Cc: "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 23 - Failure! That raises an interesting subject. The script in question is a jenkins script. Part of a larger build job, which in turn is part of a family of jobs that do various sub-tasks. The editing is through the jenkins web-gui, and that will not be made accessible to the public. I'm troubled that Jenkins doesn't seem to keep an edit history of 'config' changes out of the box, and I've yet to spot a plugin that adds this feature. I've been pondering making the jenkins home directory one big git, with a lot of excludes for all the build history, logs, workspaces and such. Has anyone solved this? A git could readily be published. In the meantime, the script content can be inferred from the various build logs. A successful build would be more informative, e.g. http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190121T060000Z/logs/ Accepting feedback/revisions would be another matter entirely. We'll probably stick with e-mail to myself, and build.starlingx at gmail.com, for the near term.
Scott On 2019-01-22 11:20 a.m., Victor Rodriguez wrote: Thanks, Scott Please also send the link of the script so we can send patches to improve it Regards
From serverascode at gmail.com Tue Jan 22 18:02:46 2019 From: serverascode at gmail.com (Curtis) Date: Tue, 22 Jan 2019 13:02:46 -0500 Subject: [Starlingx-discuss] New contributors - tagging work with "help wanted" or "good first issue"? Message-ID: Hi All, There was a bit of discussion at the community meeting in Phoenix regarding the possibility of tagging work in StarlingX to make it easily available to potential new contributors. (There was even a related question on IRC last night.) Basically, can we apply labels like "help wanted" or "good first issue" (like what I see on a lot of github-based projects)? I'm not exactly sure what this would look like in this project, but it's something to think about. Thanks, Curtis
From vm.rod25 at gmail.com Tue Jan 22 18:18:08 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 22 Jan 2019 12:18:08 -0600 Subject: Re: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 23 - Failure! In-Reply-To: <3EA5DF3E-8418-4229-8363-A7C4BA7EEEF2@intel.com> References: <1718106207.251.1548147603995.JavaMail.javamailuser@localhost> <3EA5DF3E-8418-4229-8363-A7C4BA7EEEF2@intel.com> Message-ID: On Tue, Jan 22, 2019 at 11:58 AM Ponce Castaneda, Guillermo A wrote: > > Hello Scott, > > Why don't you put your Jenkins jobs on a git repository? > > This can be done by issuing a GET method to the address /job/<job name>/config.xml. > The GET method will bring back an xml file that represents the configuration of the job, including any scripts contained inside. > > To issue this GET method you may use your USER and PASSWORD, or if you have an API_TOKEN you can use it instead of your password. > > This is an example of how it can be done: > > $ curl https://somejenkinsserver.com/job/some_job/config.xml --user ${USER}:${API_TOKEN} > > It is also possible to modify jobs by issuing a new XML file via a POST method, in case you want to automate changes based on your git repo. > > I also happen to have a script that backs up and restores Jenkins job configs; here's a github gist where you can see it: > > https://gist.github.com/gaponcec/8f43635707849feae8555fd4d2572755 Sounds like a great way we all can collaborate to improve the quality of the tools/CI-CD we have. Scott, what do you think?
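To make it concrete, a first pass could be a small nightly job like the following (the server URL and job names are placeholders, and it assumes the config.xml endpoint you describe):

JENKINS=https://somejenkinsserver.com
for job in STX_build_master_pike STX_build_stein_master; do
    curl -s "$JENKINS/job/$job/config.xml" --user "${USER}:${API_TOKEN}" -o "$job.xml"
done
git add -- *.xml && git commit -m "Jenkins job configs $(date -u +%Y%m%dT%H%M%SZ)"

That way every config edit made through the web-gui shows up as a diff in git.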
> > From: Scott Little > Date: Tuesday, January 22, 2019 at 11:35 AM > To: Victor Rodriguez > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 23 - Failure! > > That raises an interesting subject. The script in question is a jenkins script. [...]
From vm.rod25 at gmail.com Tue Jan 22 18:19:21 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 22 Jan 2019 12:19:21 -0600 Subject: Re: [Starlingx-discuss] New contributors - tagging work with "help wanted" or "good first issue"? In-Reply-To: References: Message-ID: On Tue, Jan 22, 2019 at 12:03 PM Curtis wrote: > > Hi All, > > There was a bit of discussion at the community meeting in Phoenix regarding the possibility of tagging work in StarlingX to make it easily available to potential new contributors. (There was even a related question on IRC last night.) Basically, can we apply labels like "help wanted" or "good first issue" (like what I see on a lot of github-based projects)? > > I'm not exactly sure what this would look like in this project, but it's something to think about. +1 from my side; we can also document that on the developer wiki as good recommendations. > Thanks, > Curtis > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From build.starlingx at gmail.com Tue Jan 22 18:53:12 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 13:53:12 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 32 - Failure! Message-ID: <2021052846.255.1548183194103.JavaMail.javamailuser@localhost> Project: STX_build_docker_flock_images Build #: 32 Status: Failure Timestamp: 20190122T183949Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190122T145934Z OS: centos MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root BASE_VERSION: f-stein-20190122T145934Z PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca LATEST_PREFIX: f-stein PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs FLOCK_VERSION: f-stein-centos-master-20190122T145934Z PREFIX: f-stein OPENSTACK_RELEASE: master TIMESTAMP: 20190122T145934Z REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190122T145934Z/outputs REGISTRY: docker.io
From build.starlingx at gmail.com Tue Jan 22 18:53:16 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 13:53:16 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 28 - Failure!
Message-ID: <1573484560.258.1548183197717.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 28 Status: Failure Timestamp: 20190122T173309Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: f/stein MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190122T145934Z OS: centos MUNGED_BRANCH: f-stein MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/f-stein PUBLISH_DISTRO_BASE: /export/mirror/starlingx/feature/stein/centos DOCKER_BUILD_ID: jenkins-f-stein-20190122T145934Z-builder OPENSTACK_RELEASE: master TIMESTAMP: 20190122T145934Z OS_VERSION: 7.5.1804 PUBLISH_INPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190122T145934Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190122T145934Z/outputs From build.starlingx at gmail.com Tue Jan 22 18:53:20 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 13:53:20 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 24 - Still Failing! In-Reply-To: <479817535.249.1548147600871.JavaMail.javamailuser@localhost> References: <479817535.249.1548147600871.JavaMail.javamailuser@localhost> Message-ID: <2079759249.261.1548183201078.JavaMail.javamailuser@localhost> Project: STX_build_stein_master Build #: 24 Status: Still Failing Timestamp: 20190122T145934Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS: true From build.starlingx at gmail.com Tue Jan 22 19:37:15 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 14:37:15 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 33 - Still Failing! 
In-Reply-To: <334492306.253.1548183190416.JavaMail.javamailuser@localhost> References: <334492306.253.1548183190416.JavaMail.javamailuser@localhost> Message-ID: <522386905.265.1548185836260.JavaMail.javamailuser@localhost> Project: STX_build_docker_flock_images Build #: 33 Status: Still Failing Timestamp: 20190122T192232Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190122T145945Z OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root BASE_VERSION: dev-20190122T145945Z PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca LATEST_PREFIX: dev PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/logs FLOCK_VERSION: dev-centos-pike-20190122T145945Z PREFIX: dev OPENSTACK_RELEASE: pike TIMESTAMP: 20190122T145945Z REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/outputs REGISTRY: docker.io From build.starlingx at gmail.com Tue Jan 22 19:37:18 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 14:37:18 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 29 - Still Failing! In-Reply-To: <1769036038.256.1548183194927.JavaMail.javamailuser@localhost> References: <1769036038.256.1548183194927.JavaMail.javamailuser@localhost> Message-ID: <1596916073.268.1548185840279.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 29 Status: Still Failing Timestamp: 20190122T191315Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190122T145945Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos DOCKER_BUILD_ID: jenkins-master-20190122T145945Z-builder OPENSTACK_RELEASE: pike TIMESTAMP: 20190122T145945Z OS_VERSION: 7.5.1804 PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/outputs From build.starlingx at gmail.com Tue Jan 22 19:37:22 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 14:37:22 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 110 - Still Failing! 
In-Reply-To: <975017162.252.1548168533368.JavaMail.javamailuser@localhost> References: <975017162.252.1548168533368.JavaMail.javamailuser@localhost> Message-ID: <1135221934.271.1548185843526.JavaMail.javamailuser@localhost> Project: STX_build_master_pike Build #: 110 Status: Still Failing Timestamp: 20190122T145945Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs -------------------------------------------------------------------------------- Parameters From Eric.MacDonald at windriver.com Tue Jan 22 19:59:31 2019 From: Eric.MacDonald at windriver.com (MacDonald, Eric) Date: Tue, 22 Jan 2019 19:59:31 +0000 Subject: [Starlingx-discuss] Recommended C/C++ compiler flag for security In-Reply-To: <4F6AACE4B0F173488D033B02A8BB5B7E7CD606C3@FMSMSX114.amr.corp.intel.com> References: <7DF6804B-15E9-4998-B132-DB38969CFFD2@windriver.com> <4F6AACE4B0F173488D033B02A8BB5B7E7CD606C3@FMSMSX114.amr.corp.intel.com> Message-ID: <210898B96CA058408C55992CCAD98676B9F913E4@ALA-MBD.corp.ad.wrs.com> Ada, Regarding .... > > Yes, we executed several test cases covering what Ken mentions (manually). What I'm not sure is about > heartbeat loss, but let me check. > What we can do is to build a test plan and submit it for revision. When do you need it (and please, don't > say tomorrow)? > What are you not sure about regarding heartbeat loss ? I assume testing and detection. Suggest reboot an in-service (unlocked-enabled-available) node and see that there are heartbeat communication loss and inservice failure alarms, that the system detects and Gracefully Recovers the rebooted host and clears said alarms once the host recovers back in-service. Eric. > -----Original Message----- > From: Cabrales, Ada [mailto:ada.cabrales at intel.com] > Sent: Monday, January 21, 2019 4:51 PM > To: Young, Ken; Victor Rodriguez > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for security > > Comments inline > > > -----Original Message----- > > From: Young, Ken [mailto:Ken.Young at windriver.com] > > Sent: Friday, January 18, 2019 9:34 AM > > To: Victor Rodriguez > > Cc: starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for > > security > > > > See inline. > > > > On 2019-01-17, 5:34 PM, "Victor Rodriguez" wrote: > > > > On Wed, Jan 2, 2019 at 10:35 AM Young, Ken > > wrote: > > > > > > Victor, > > > > > > > > > > > > Security work is never completed. There is always a long list of inventive > > new vulnerabilities and a laundry list of hardening work to be completed. > > The vulnerability work, considering the severity, is generally urgent. > > Hardening work is not urgent but important. In this case, we are dealing with > > a hardening initiative that focuses on a small area of the code. > > > > > > > > > > > > The challenge is that these small change proposed have larger > > implications. As was pointed out on the gerrit reviews, performance and / or > > functional testing is required. > > > > Hi Ken > > > > Just to follow the idea of this mail after hollliday break, you mention that: > > > > My concern is that we affect the timing / behaviour of stx-ha and > > stx-metal such that they do not work together in some scenarios. This > > will need to be tested and is certainly larger than a sanity. > > > > Could you please help to describe n human words, ( I can do the script > > ) how a good test to probe this would look like? 
> > If you provide me with a basic description of the security test I > > could help writing the first draft of a code test that help us to > > prove if the flags break the functionality > > > > Victor, > > > > At a high level, we need to regress the behaviour of stx-ha and stx-metal to > > ensure that there is functional issues introduced by the change to the > > compiler. As well, we need to look at the system behaviour of ha and metal > > to ensure no changes have been introduced which affect has behaviour: > > > > - SWACT detection and time > > - Multinode failure avoidance > > - Heartbeat loss > > - lock / unlock > > - etc > > > > I believe that Ada has the test for ha and metal. Please review. > > > > Yes, we executed several test cases covering what Ken mentions (manually). What I'm not sure is about > heartbeat loss, but let me check. > What we can do is to build a test plan and submit it for revision. When do you need it (and please, don't > say tomorrow)? > > Ada > > > Regards, > > Ken Y > > > > thanks > > > > Victor R > > > > > > > > > > > > Also, I am wondering if there is a way to phase the effort. For example, is > > there a way to break up the flag changes such that the warnings are > > separated from the flags which change the compiled code? That way, we > > are not trying to jam everything through at once. > > > > > > > > > > > > Hope this helps. Happy to discuss when you return from Holliday. > > > > > > > > > > > > Regards, > > > > > > Ken Y > > > > > > > > > > > > From: Victor Rodriguez > > > Date: Friday, December 28, 2018 at 7:34 PM > > > To: Curtis > > > Cc: "starlingx-discuss at lists.starlingx.io" > discuss at lists.starlingx.io> > > > Subject: Re: [Starlingx-discuss] Recommended C/C++ compiler flag for > > security > > > > > > > > > > > > > > > > > > On Fri, Dec 21, 2018, 07:08 Curtis > > > > > > > > > > > > > > > > > On Thu, Dec 20, 2018 at 3:47 PM Victor Rodriguez > > wrote: > > > > > > Hi StarlingX community > > > > > > We can all agree that security is an important feature to be taken > > > into consideration in any SW project. In the aim of improving the > > > security of the StarlingX project, we have been taking the task to > > > propose the use of some compiler flags that prevent and detect some > > > security holes, especially by buffer overflow that could lead into ROP > > > attacks. > > > > > > The list of flags that we are proposing are : > > > > > > Stack-based Buffer Overrun Detection: CFLAGS=”-fstack-protector- > > strong” > > > > > > Fortify source: CFLAGS="-O2 -D_FORTIFY_SOURCE=2" > > > Format string vulnerabilities: CFLAGS="-Wformat -Wformat- > > security" > > > Stack execution protection: LDFLAGS="-z noexecstack" > > > Data relocation and protection (RELRO): LDLFAGS="-z relro -z now" > > > > > > > > > These are being analyzed in the following Gerrit reviews (thanks a lot > > > for all the good feedback) > > > > > > https://review.openstack.org/#/c/623608/ > > > https://review.openstack.org/#/c/623603/ > > > https://review.openstack.org/#/c/623601/ > > > https://review.openstack.org/#/c/623599/ > > > > > > As requested in the Gerrit reviews, there is a proper need to first > > > understand what these compiler flags do and what is the impact they > > > have at the functional and performance area of the project. This is a > > > preliminary report, we will be following up with a test plan for > > > functional & performance test plans for the services as a next step. 
> > > This report includes: > > > > > > * Detailed description of what the compiler flag does > > > * Code example that shows how does it work to prevent attacks > > > * If there is a change in the binary, we create a microbenchmark that > > > shows us how the flag impact the performance > > > > > > > > https://github.com/VictorRodriguez/hobbies/tree/master/c_programing_ex > > ercises/cflags_security > > > > > > As a result of the microbenchmark, the performance impact is not > > > relevant ( less than 1% ) using an Ubuntu x86 system ( GCC 5 ) (more > > > details on the HW and SW specification upon requests) > > > > > > The areas of the code we are suggesting on the patches are: > > > > > > * stx-ha > > > * stx-metal > > > * stx-nfv > > > * stx-fault > > > > > > We do take care that these flags are not breaking the following areas > > > after being applied. > > > > > > * Build process of the image > > > * Sanity test cases after the image is created > > > (Ada can give more details on the sanity report of the image generated > > > with these flags) > > > > > > If running the sanity tests are not enough to prove that a change in > > > compiler flags do not affect functionality, please gave us the right > > > path to follow. > > > > > > As mentioned before, this is a preliminary report, and that we will be > > > following up with a test plan for functional & performance test plans > > > for the services as a next step. > > > > > > Hope this email helps to clarify some questions related to the flags > > > and start the follow-up discussion. > > > > > > > > > > > > Thanks for the context Victor, it's very helpful to me. > > > > > > > > > > > > Hi Curtis, glad it helps, it was fun to do the research > > > > > > > > > > > > One thing I want to mention is something the Kata Containers team was > > talking about at the Berlin OpenStack summit, which is when many small > > performance hits start to add up. They have to be careful to ensure they > > don't have a bunch of smallish looking changes that add up to a large > > performance hit over a longer period of time. > > > > > > > > > > > > You are right, it's a valid point that we need to take care too > > > > > > > > > > > > Overall I'm sure the StarlingX project would like to have some > > performance testing, if we don't already, though that can be challenging for > > an open source project. I had mentioned OPNFV's Functest and related > > projects on the TSC call, but now seeing which components are affected I'm > > not sure that would be directly helpful. I look forward to further discussions > > around this area. > > > > > > > > > > > > Thanks for let me know that, I will take a look at OPNFV's functest and > > other projects before the next TSC of 2019 > > > > > > > > > > > > I will do my best to came up with a proposal for a better performance > > testing. 
> > > > > > Thanks for letting me know that, I will take a look at OPNFV's functest and > > other projects before the next TSC of 2019 > > > > > > I will do my best to come up with a proposal for a better performance > > testing. > > > > > > Thanks > > > > > > Victor Rodriguez > > > Thanks, > > > Curtis > > > Regards > > > Victor Rodriguez > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > -- > > > Blog: serverascode.com > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
From guillermo.a.ponce.castaneda at intel.com Tue Jan 22 20:20:13 2019 From: guillermo.a.ponce.castaneda at intel.com (Ponce Castaneda, Guillermo A) Date: Tue, 22 Jan 2019 20:20:13 +0000 Subject: Re: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images In-Reply-To: References: <8DA47EAB-9659-4772-8081-BE17CC689541@intel.com> <9e20a178-59fd-7ce8-f46b-b8938da087fb@linux.intel.com> Message-ID: <38DDB8BE-5BBE-4F00-B6CB-AB7FC9637B7C@intel.com> Hello Frank and team, After exchanging some ideas with Saul, we arrived at the idea that the Docker registry/proxy could be located on controller-0, so that all the other nodes point to it to pull the docker images they require. This approach will require extra disk space on controller-0 to host the images that all the other nodes want to pull, and we also need to change the configs on every node so they can see controller-0 as the docker registry. What do you all think?
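To make it concrete, here is a rough sketch of what I have in mind (the IP and port are placeholders, and it uses the stock registry:2 image in pull-through-cache mode; this is a sketch of the idea, not a tested StarlingX configuration):

# on controller-0: run a registry that proxies and caches Docker Hub
docker run -d --restart=always --name registry-mirror -p 5000:5000 \
  -e REGISTRY_PROXY_REMOTEURL=https://registry-1.docker.io registry:2

# on every other node, point docker at it in /etc/docker/daemon.json:
#   { "registry-mirrors": ["http://<controller-0-ip>:5000"] }
# and then restart docker: sudo systemctl restart docker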
> > Guillermo (Memo) Ponce > > *From: *"Miller, Frank" > *Date: *Monday, January 21, 2019 at 12:54 PM > *To: *"Martin, Guillermo Oscar" > *Cc: *"'starlingx-discuss at lists.starlingx.io'" > > *Subject: *[Starlingx-discuss] [Containers] Approach for adding a local > mirror of docker images > > Guillermo: > > As discussed at today’s containerization meeting, please reply with your > initial thoughts on how to address > https://storyboard.openstack.org/#!/story/2004711 . If you first need > to ask a member of the containerization subteam a few questions to > understand what is needed then try reaching out to Bob Church and Angie > Wang. > > Frank > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Brent.Rowsell at windriver.com Tue Jan 22 20:27:51 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Tue, 22 Jan 2019 20:27:51 +0000 Subject: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images In-Reply-To: <38DDB8BE-5BBE-4F00-B6CB-AB7FC9637B7C@intel.com> References: <8DA47EAB-9659-4772-8081-BE17CC689541@intel.com> <9e20a178-59fd-7ce8-f46b-b8938da087fb@linux.intel.com> <38DDB8BE-5BBE-4F00-B6CB-AB7FC9637B7C@intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB3939EA@ALA-MBD.corp.ad.wrs.com> The controller will already host a local docker registry after the system has been boot strapped. What is required is a private external registry that a system call pull from. I will be updating the story with additional info Brent -----Original Message----- From: Ponce Castaneda, Guillermo A [mailto:guillermo.a.ponce.castaneda at intel.com] Sent: Tuesday, January 22, 2019 3:20 PM To: Saul Wold ; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images Hello Frank and team, After having some ideas exchanged with Saul, we reach to the idea that the Docker registry/proxy could be located on the controller-0, and that way all the other nodes will point to it to pull the docker images they require. This approach will require to have extra disk space on controller-0 to host the images that all the other want to pull, and we also need to change the configs on every node so they can see the controller-0 as docker registry. What do you all think? - Memo On 1/21/19, 4:29 PM, "Ponce Castaneda, Guillermo A" wrote: On 1/21/19, 3:39 PM, "Saul Wold" wrote: On 1/21/19 11:54 AM, Ponce Castaneda, Guillermo A wrote: > Hello Frank, > > Thanks for initiating the conversation, my proposed solution is to bring > up a docker registry that will have to be in the local network of each > office, so the speed of the pulls will be the faster. > > The problem with this approach might be that the references of the > docker pull have to change so it points to the local docker registry, I > have already implemented this approach locally at GDC and can provide > documentation on how to do this. 
> > Another approach that I am researching is to use this project: > https://github.com/rpardini/docker-registry-proxy, so far this option > seems much better but I need to explore it a little bit further, I will > provide more details on it as soon as possible. > Do we need access to more than the standard docker hub? It also seems that this approach will require modifications to the images wanting to use the proxy. We do not really need access to more than the standard docker hub, but this one way to solve the problem of the people having troubles with slow networks, the docker registry proxy method promises to be transparent for the user, the user will have to modify their docker daemon file to add the registry as proxy and just pull images normally, I am working to set that up and do a test on our network right now, once it is done I will be able to tell if it is really transparent. I am sure this is true in most proxy setups. Sau! > All the feedback and other ideas are welcome. > > Thanks and Regards. > > Guillermo (Memo) Ponce > > *From: *"Miller, Frank" > *Date: *Monday, January 21, 2019 at 12:54 PM > *To: *"Martin, Guillermo Oscar" > *Cc: *"'starlingx-discuss at lists.starlingx.io'" > > *Subject: *[Starlingx-discuss] [Containers] Approach for adding a local > mirror of docker images > > Guillermo: > > As discussed at today’s containerization meeting, please reply with your > initial thoughts on how to address > https://storyboard.openstack.org/#!/story/2004711 . If you first need > to ask a member of the containerization subteam a few questions to > understand what is needed then try reaching out to Bob Church and Angie > Wang. > > Frank > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Tue Jan 22 21:01:38 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 22 Jan 2019 13:01:38 -0800 Subject: [Starlingx-discuss] CFP tracking etherpad Message-ID: <1E0F2549-B95A-4311-BBAC-A40074CFEAF8@gmail.com> Hi, Following up on my action item from the StarlingX Contributor Meetup as well as today’s edge working group call I created an etherpad to track our planned/submitted session proposals for industry events: https://etherpad.openstack.org/p/edge-cfp-for-industry-events I created one etherpad for all the projects with the intention to get an overall picture, we can create more etherpads if we feel the need in the future. Please add your proposals and also feel free to add further events you are planning to propose edge related sessions that cover topics such as the work of this group, StarlingX, OpenStack projects, and so forth. Please let me know if you have any questions. 
Thanks, Ildikó

From juan.carlos.alonso at intel.com Tue Jan 22 21:46:40 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Tue, 22 Jan 2019 21:46:40 +0000 Subject: [Starlingx-discuss] Simplex STX containerized Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8BC0A@FMSMSX108.amr.corp.intel.com>

Hi, I am trying to deploy an STX containerized system following the steps at https://wiki.openstack.org/wiki/StarlingX/Containers/Installation In the "Provisioning the platform" section, the third bullet creates partitions on the root disk: the first partition is for the cgts volume and the second is for nova-local. The commands show that both should be applied to the same disk ID: system host-disk-list controller-0 | awk '/sda/{print $2}' Is this correct? Should cgts-vg and nova-local be configured on the same disk? I could not apply it because the available size was not enough; instead I used a different disk (sdb) for nova-local. After unlocking the controller with system host-unlock controller-0, the system rebooted about 4 times and then booted correctly. Is this expected behavior? Regards. Juan Carlos Alonso

From Al.Bailey at windriver.com Tue Jan 22 21:56:01 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Tue, 22 Jan 2019 21:56:01 +0000 Subject: [Starlingx-discuss] Heads up about probable failing tox and zuul Message-ID: About 3 hours ago a new version of pip was released to the PyPI site. https://pypi.org/project/pip/19.0/ Tox jobs which pick up that version of pip will likely fail to install their dependencies. Docker image jobs that are using loci (which uses pip) have also been observed to fail. I don't know what the fix is; I assume many python users in many projects will be impacted.
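(A possible interim mitigation, offered as an untested sketch rather than anything confirmed in this thread: pin pip below 19.0 in affected build environments until a fixed release lands, before any dependencies are installed:

    # Untested workaround sketch; 'pip<19' currently resolves to 18.1.
    pip install 'pip<19'
    pip install -r requirements.txt

Jobs that create fresh virtualenvs would need the pin applied before their dependency-installation step.)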
If you see an error with a signature like this, you've hit the problem:

Exception:
Traceback (most recent call last):
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/cli/base_command.py", line 176, in main
    status = self.run(options, args)
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/commands/install.py", line 346, in run
    session=session, autobuilding=True
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/wheel.py", line 848, in build
    assert building_is_possible
AssertionError

Al

From Don.Penney at windriver.com Tue Jan 22 22:02:17 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Tue, 22 Jan 2019 22:02:17 +0000 Subject: [Starlingx-discuss] Heads up about probable failing tox and zuul In-Reply-To: References: Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA41B46D@ALA-MBD.corp.ad.wrs.com>

Note that this also impacts loci docker image builds.

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Tuesday, January 22, 2019 4:56 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Heads up about probable failing tox and zuul

From scott.little at windriver.com Tue Jan 22 22:20:33 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 22 Jan 2019 17:20:33 -0500 Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 29 - Still Failing! In-Reply-To: <1596916073.268.1548185840279.JavaMail.javamailuser@localhost> References: <1769036038.256.1548183194927.JavaMail.javamailuser@localhost> <1596916073.268.1548185840279.JavaMail.javamailuser@localhost> Message-ID:

OK, those build steps need to be serialized. Another script fix, restarting the job with the same parameters. Sorry for the noise, folks. The fun part is that one error generates three build failure e-mails due to the nested jobs.

Scott

On 2019-01-22 2:37 p.m., build.starlingx at gmail.com wrote: > Project: STX_build_docker_images > Build #: 29 > Status: Still Failing > Timestamp: 20190122T191315Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs > -------------------------------------------------------------------------------- > Parameters > > BRANCH: master > MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190122T145945Z > OS: centos > MUNGED_BRANCH: master > MY_REPO: /localdisk/designer/jenkins/master/cgcs-root > PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs > PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/logs > MY_REPO_ROOT: /localdisk/designer/jenkins/master > PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos > DOCKER_BUILD_ID: jenkins-master-20190122T145945Z-builder > OPENSTACK_RELEASE: pike > TIMESTAMP: 20190122T145945Z > OS_VERSION: 7.5.1804 > PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/inputs > PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/outputs > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From build.starlingx at gmail.com Tue Jan 22 22:34:35 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 17:34:35 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 34 - Still Failing!
In-Reply-To: <1724465660.263.1548185832508.JavaMail.javamailuser@localhost> References: <1724465660.263.1548185832508.JavaMail.javamailuser@localhost> Message-ID: <1817445559.275.1548196477677.JavaMail.javamailuser@localhost> Project: STX_build_docker_flock_images Build #: 34 Status: Still Failing Timestamp: 20190122T221732Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190122T145934Z OS: centos MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root BASE_VERSION: f-stein-20190122T145934Z PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca LATEST_PREFIX: f-stein PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs FLOCK_VERSION: f-stein-centos-master-20190122T145934Z PREFIX: f-stein OPENSTACK_RELEASE: master TIMESTAMP: 20190122T145934Z REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190122T145934Z/outputs REGISTRY: docker.io From build.starlingx at gmail.com Tue Jan 22 22:34:40 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 17:34:40 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 30 - Still Failing! In-Reply-To: <1141379924.266.1548185837171.JavaMail.javamailuser@localhost> References: <1141379924.266.1548185837171.JavaMail.javamailuser@localhost> Message-ID: <118928900.278.1548196481283.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 30 Status: Still Failing Timestamp: 20190122T221649Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: f/stein MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190122T145934Z OS: centos MUNGED_BRANCH: f-stein MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/stein/centos/20190122T145934Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/f-stein PUBLISH_DISTRO_BASE: /export/mirror/starlingx/feature/stein/centos DOCKER_BUILD_ID: jenkins-f-stein-20190122T145934Z-builder OPENSTACK_RELEASE: master TIMESTAMP: 20190122T145934Z OS_VERSION: 7.5.1804 PUBLISH_INPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190122T145934Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190122T145934Z/outputs From build.starlingx at gmail.com Tue Jan 22 22:35:05 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 17:35:05 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_base_image - Build # 37 - Failure! 
Message-ID: <1731420598.281.1548196506024.JavaMail.javamailuser@localhost> Project: STX_build_docker_base_image Build #: 37 Status: Failure Timestamp: 20190122T223442Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190122T145945Z OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root BASE_VERSION: dev-20190122T145945Z PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/logs OPENSTACK_RELEASE: pike OS_VERSION: 7.5.1804 REGISTRY_ORG: starlingx BASE_LATEST_TAG: dev-latest PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/outputs REGISTRY: docker.io From build.starlingx at gmail.com Tue Jan 22 22:35:08 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 17:35:08 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 31 - Still Failing! In-Reply-To: <1739672843.276.1548196478568.JavaMail.javamailuser@localhost> References: <1739672843.276.1548196478568.JavaMail.javamailuser@localhost> Message-ID: <989408254.284.1548196509654.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 31 Status: Still Failing Timestamp: 20190122T223442Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190122T145945Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos DOCKER_BUILD_ID: jenkins-master-20190122T145945Z-builder OPENSTACK_RELEASE: pike TIMESTAMP: 20190122T145945Z OS_VERSION: 7.5.1804 PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/outputs From Ghada.Khalil at windriver.com Wed Jan 23 00:06:55 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 23 Jan 2019 00:06:55 +0000 Subject: [Starlingx-discuss] Bulk update of story tags from stx.2019.03 to stx.2019.05 Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4A9B2B@ALA-MBD.corp.ad.wrs.com> Hello all, The StarlingX story board tags are being updated from stx.2019.03 to stx.2019.05 to align with the next StarlingX release. Updates to active stories have been done https://storyboard.openstack.org/#!/story/list?status=active&project_group_id=86&tags=stx.2019.05 There are 68 active stories tagged for stx.2019.05. If you are planning to deliver a story for the May release, please tag it with the stx.2019.05 release tag. Note: Updates to merged stories are still in progress. 
Regards, Ghada StarlingX Release Prime

From erich.cordoba.malibran at intel.com Wed Jan 23 00:34:28 2019 From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich) Date: Wed, 23 Jan 2019 00:34:28 +0000 Subject: [Starlingx-discuss] [Containers] system application-apply fails and "unauthorized: incorrect username or password" in sysinv.log Message-ID: <68fb5b16167a37d32886058542c141ad50460d2e.camel@intel.com>

Hi, After running system application-apply stx-openstack, the status of the application goes to apply-failed. After checking the logs, in sysinv.log I see the following errors:

ERROR sysinv.conductor.kube_app [-] Image docker.io/prom/mysqld-exporter:v0.10.0 download failed from local registry: 500 Server Error: Internal Server Error ("Get https://registry-1.docker.io/v2/prom/mysqld-exporter/manifests/v0.10.0: unauthorized: incorrect username or password")

Also,

2019-01-22 23:52:03.859 3016 ERROR sysinv.conductor.kube_app [-] Image quay.io/attcomdev/ubuntu-source-gnocchi-statsd:3.0.3 download failed from local registry: 500 Server Error: Internal Server Error ("Get https://quay.io/v2/attcomdev/ubuntu-source-gnocchi-statsd/manifests/3.0.3: error parsing HTTP 429 response body: invalid character 'T' looking for beginning of value: "Too many login attempts. \nPlease reset your Quay password and try again."")

and

2019-01-22 23:53:43.750 3016 ERROR sysinv.conductor.kube_app [-] Image docker.io/openstackhelm/openvswitch:v2.8.1 download failed from local registry: 500 Server Error: Internal Server Error ("Get https://registry-1.docker.io/v2/openstackhelm/openvswitch/manifests/v2.8.1: toomanyrequests: too many failed login attempts for username or IP address")

I'm not sure if I missed some configuration step. Looking into the code, it seems that a token is generated and then used with the registry. I removed those lines and used a docker pull without a token, and the images started to download. See the code below:

sysinv/conductor/kube_app.py

DOCKER_REGISTRY_USER = 'admin'
DOCKER_REGISTRY_SERVICE = 'CGCS'
...
def get_docker_registry_authentication():
    docker_registry_user_password = keyring.get_password(
        DOCKER_REGISTRY_SERVICE, DOCKER_REGISTRY_USER)
    if not docker_registry_user_password:
        raise exception.DockerRegistryCredentialNotFound(
            name=DOCKER_REGISTRY_USER)
...
    def download_an_image(self, loc_img_tag):
        rc = True
        start = time.time()
        try:
            # Pull image from local docker registry
            LOG.info("Image %s download started from local registry" % loc_img_tag)
            # Token-based auth disabled as an experiment:
            # docker_registry_auth = get_docker_registry_authentication()
            client = docker.APIClient(timeout=INSTALLATION_TIMEOUT)
            # client.pull(loc_img_tag, auth_config=docker_registry_auth)
            client.pull(loc_img_tag)
        except docker.errors.NotFound:

-Erich

From xiongzhiwei at baicells.com Wed Jan 23 00:50:06 2019 From: xiongzhiwei at baicells.com (xiongzhiwei at baicells.com) Date: Wed, 23 Jan 2019 08:50:06 +0800 Subject: [Starlingx-discuss] Mount error when executing build-pkgs References: <2019012218313131162786@baicells.com>, Message-ID: <2019012308500644965493@baicells.com>

Hi Mingyuan, After changing nr_inodes as you suggested, I encounter a new issue, shown below. Do you know how I can fix it?
00:43:41 Start: build phase for registry-token-server-1.0.0-1.tis.1.src.rpm 00:43:41 Start: build setup for registry-token-server-1.0.0-1.tis.1.src.rpm 00:43:41 Finish: build setup for registry-token-server-1.0.0-1.tis.1.src.rpm 00:43:41 Start: rpmbuild registry-token-server-1.0.0-1.tis.1.src.rpm 00:43:41 Start: Outputting list of installed packages 00:43:41 Finish: Outputting list of installed packages 00:43:41 ERROR: Exception(/localdisk/loadbuild/ubuntu/starlingx/std/rpmbuild/SRPMS/registry-token-server-1.0.0-1.tis.1.src.rpm) Config(mock/b0) 0 minutes 7 seconds 00:43:41 INFO: Results and/or logs in: /localdisk/loadbuild/ubuntu/starlingx/std/results/ubuntu-starlingx-tis-r5-pike-std/registry-token-server-1.0.0-1.tis.1 00:43:41 ERROR: Command failed: 00:43:41 # bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/registry-token-server.spec 00:43:41 00:43:42 End build on 'b0': /localdisk/loadbuild/ubuntu/starlingx/std/rpmbuild/SRPMS/registry-token-server-1.0.0-1.tis.1.src.rpm 00:43:42 Error building registry-token-server-1.0.0-1.tis.1.src.rpm on 'b0'. 00:43:42 Will try to build again (if some other package will succeed). 00:43:42 ===== iteration 1 complete ===== Thank you very much. Tim Xiong From: Qi, Mingyuan Date: 2019-01-22 23:41 To: xiongzhiwei at baicells.com; starlingx-discuss Subject: RE: [Starlingx-discuss] Mount error when executing build-pkgs Hi Tim, It’s nr_inode=0 issue, please try below cmd in container and build again: sudo sed -i 's/nr_inodes=0/nr_inodes=100k/g' /usr/lib/python2.7/site-packages/mockbuild/plugins/tmpfs.py Mingyuan From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Tuesday, January 22, 2019 18:32 To: starlingx-discuss Subject: [Starlingx-discuss] Mount error when executing build-pkgs Hi all, When I execute the "build-pkgs" in container, below errors printed: 06:22:54 mock_update_or_init: in 06:22:55 b1: mock_update_or_init_cfg: /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b1.cfg 06:22:55 b1: Updating the mock environment 06:22:55 b1: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b1.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b1: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b1.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b1: INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 06:22:55 b1: Start: init plugins 06:22:55 b1: INFO: tmpfs initialized 06:22:55 b1: INFO: selinux disabled 06:22:55 b1: Finish: init plugins 06:22:55 b1: Start: run 06:22:55 b1: Start: chroot init 06:22:55 b1: INFO: mounting tmpfs at /localdisk/loadbuild/ubuntu/starlingx/std/mock/b1/root. 
06:22:55 b1: ERROR: Command failed: 06:22:55 b1: # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=11g mock_chroot_tmpfs /localdisk/loadbuild/ubuntu/starlingx/std/mock/b1/root 06:22:55 b1: 06:22:55 b2: mock_update_or_init_cfg: /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b2.cfg 06:22:55 b2: Updating the mock environment 06:22:55 b2: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b2.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b2: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b2.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b2: INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 06:22:55 b2: Start: init plugins 06:22:55 b2: INFO: tmpfs initialized 06:22:55 b2: INFO: selinux disabled 06:22:55 b2: Finish: init plugins 06:22:55 b2: Start: run 06:22:55 b2: Start: chroot init 06:22:55 b2: INFO: mounting tmpfs at /localdisk/loadbuild/ubuntu/starlingx/std/mock/b2/root. 06:22:55 b2: ERROR: Command failed: 06:22:55 b2: # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=11g mock_chroot_tmpfs /localdisk/loadbuild/ubuntu/starlingx/std/mock/b2/root 06:22:55 b2: 06:22:55 b3: mock_update_or_init_cfg: /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b3.cfg 06:22:55 b3: Updating the mock environment 06:22:55 b3: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b3.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b3: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b3.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b3: INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 06:22:55 b3: Start: init plugins 06:22:55 b3: INFO: tmpfs initialized 06:22:55 b3: INFO: selinux disabled 06:22:55 b3: Finish: init plugins 06:22:55 b3: Start: run 06:22:55 b3: Start: chroot init 06:22:55 b3: INFO: mounting tmpfs at /localdisk/loadbuild/ubuntu/starlingx/std/mock/b3/root. 06:22:55 b3: ERROR: Command failed: 06:22:55 b3: # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=11g mock_chroot_tmpfs /localdisk/loadbuild/ubuntu/starlingx/std/mock/b3/root The full log attached. Who can help me to fix it? thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Wed Jan 23 00:55:11 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Wed, 23 Jan 2019 00:55:11 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting notes - 01/22/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD61349@FMSMSX114.amr.corp.intel.com> Agenda for 1/22/2019 Attendees: Bruce, Cristopher, Abraham, Elio, Erich, Fernando, JC, JP, Jose Victor, Maria, Numan, Bill, Ricardo * Review of tasks for 2019.05.0 release + Here you can find the list of stories created for the testing team - https://storyboard.openstack.org/#!/worklist/553 + For performance testing: Victor is leading the effort. 
A proposal will be sent to the mailing list for the metrics to be followed and the tools to use for that. Initial email to be sent today. - Stress and stability is not covered by this performance topic. - Initially Rally is being proposed. To be run on bare metal, on the same config every time. + Regression test suite - - Numan's team working on reviewing Nova test cases - checking and updating as required. - Elio checking networking domain - estimated date for finishing is two weeks from now. - Review again in 02/05 - Ada to start an email thread for distributing the revision of the rest of the domains + Please create the stories for the things you are working on and let me know to add them to the worklist. * Update on test repository - Cristopher + Repo is created within the OpenStack infrastructure (stx-test) + Storyboard for the repo is also ready (stx-test) + Define the reviewers for the repo (Cristopher) + Investigating if there's the ability to define sub-repos + Numan will send a proposal (mailing list) for the structure of the repo - 01/25 - please include your proposal for reviewers + This repo will contain manual and code for automated test cases. * Update on Test Dashboard - Cristopher + Checking requirements. List to be provided on 01/29 + Involve Ken by the time asking to CENGN for this. * Sanity on Containers - Jose + Working on setting simplex on a virtual environment. + once we have this ready, will try a sanity run to check the connections from testing FW to the instances + Currently we are blocked because our lab doesn't have access to the internet for downloading the data. Memo and Cristopher working on a WorkAround - After we have this solved, we will do a setup on baremetal. + Numan confirmed no changes are required in the infrastructure. + Simplex and Storage configs are ready, duplex is almost ready, dedicated (controller) storage ready next week. Asking for volunteers for testing the setup. + Containers will be ready on 02/15. * Opens + Bill - Tasks pending from the community meeting sent. - https://docs.google.com/spreadsheets/d/1F-JKh8_gLlUzbrUJRbsf4u65yBGVBl8HGe-RnUQoyc4/edit?usp=sharing - Ada to check the list and match with the storyboard worklist - What has happened with the keys to upload content to CENGN keys were sent to Ken and Raymond from CENGN. Cristopher to follow up. + Fernando - thank you Numan for all the help with Security tests. Kudos! - waiting for Ken's feedback. + Bruce - OpenStack distro team - stories for tests required to be sent this week. From build.starlingx at gmail.com Wed Jan 23 01:09:47 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 20:09:47 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 35 - Still Failing! 
In-Reply-To: <836483565.273.1548196473616.JavaMail.javamailuser@localhost> References: <836483565.273.1548196473616.JavaMail.javamailuser@localhost> Message-ID: <1174564679.288.1548205788977.JavaMail.javamailuser@localhost> Project: STX_build_docker_flock_images Build #: 35 Status: Still Failing Timestamp: 20190123T005150Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs -------------------------------------------------------------------------------- Parameters HOST_PORT: 80 MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190122T145945Z OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root BASE_VERSION: dev-20190122T145945Z PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs REGISTRY_USERID: slittlewrs HOST: build.starlingx.cengn.ca LATEST_PREFIX: dev PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/logs FLOCK_VERSION: dev-centos-pike-20190122T145945Z PREFIX: dev OPENSTACK_RELEASE: pike TIMESTAMP: 20190122T145945Z REGISTRY_ORG: starlingx PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/outputs REGISTRY: docker.io From build.starlingx at gmail.com Wed Jan 23 01:09:51 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 22 Jan 2019 20:09:51 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 32 - Still Failing! In-Reply-To: <2005520634.282.1548196506906.JavaMail.javamailuser@localhost> References: <2005520634.282.1548196506906.JavaMail.javamailuser@localhost> Message-ID: <622051744.291.1548205792559.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 32 Status: Still Failing Timestamp: 20190123T005110Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190122T145945Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos DOCKER_BUILD_ID: jenkins-master-20190122T145945Z-builder OPENSTACK_RELEASE: pike TIMESTAMP: 20190122T145945Z OS_VERSION: 7.5.1804 PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/outputs From mingyuan.qi at intel.com Wed Jan 23 01:30:22 2019 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Wed, 23 Jan 2019 01:30:22 +0000 Subject: [Starlingx-discuss] [Containers] LP 1812519: config_controller --kubernetes fails at step 06 In-Reply-To: <080ca09131a1535fbd7d0b5ca6c83f9102e26948.camel@intel.com> References: <080ca09131a1535fbd7d0b5ca6c83f9102e26948.camel@intel.com> Message-ID: Erich, Current docker version 18.03.1-ce in starlingx doesn't support wildcard or CIDR notation subnet in no_proxy. Go-lang net lib used by docker supports CIDR notation in this commit[0] which is after docker 18.03.1-ce. Have you tried with a particular ip address in no_proxy? 
[0] https://github.com/golang/net/blob/c21de06aaf072cea07f3a65d6970e5c7d8b6cd6d/http/httpproxy/proxy.go#L39-L52 Mingyuan -----Original Message----- From: Cordoba Malibran, Erich Sent: Tuesday, January 22, 2019 23:53 To: Frank.Miller at windriver.com; Qi, Mingyuan Cc: Al.Bailey at windriver.com; starlingx-discuss at lists.starlingx.io Subject: Re: [Containers] LP 1812519: config_controller --kubernetes fails at step 06 On Tue, 2019-01-22 at 09:20 +0800, Qi, Mingyuan wrote: > My latest try is built from last Wednesday’s code. I’ll try the latest > code today along with the proxy patch. > > Mingyuan > > From: Miller, Frank [mailto:Frank.Miller at windriver.com] > Sent: Tuesday, January 22, 2019 3:39 > To: Cordoba Malibran, Erich ; Qi, > Mingyuan > Cc: Bailey, Henry Albert (Al) ; 'starlingx-d > iscuss at lists.starlingx.io' > Subject: [Containers] LP 1812519: config_controller --kubernetes fails > at step 06 > > Erich: > > You indicated you saw this failure when using an ISO from Jan 17: htt > ps://bugs.launchpad.net/starlingx/+bug/1812519 > > Al cannot reproduce this. The one difference between Al’s environment > and yours is Al does not use a proxy. Mingyuan used a proxy a few > weeks ago and this worked for him and he added steps to use a proxy to > the wiki. I have 2 questions: > 1. Erich can you explain any changes you had to make in your > environment that are not listed on the current wiki: > https://wiki.openstack.org/wiki/StarlingX/Containers/Installation I'm using a libvirt/qemu setup. I wasn't unable to set a NAT network but I define some iptables rules to get internet access from the VM. I set the proxy settings on /etc/environment and /etc/systemd/system/docker.service.d/ This was my initial no_proxy no_proxy=localhost,127.0.0.1,192.168.206.2,172.16.0.0/16,10.96.0.0/12 I started adding some networks there as I saw errors on the puppet.log. At this point I can curl any host on internet, so external networking is working. With this setup I run the config_controller --kubernetes command and I see the behavior described in the initial bug report. Then I tried a different no_proxy value, Jose told me that he as able to pass config_controller on virtualbox using only no_proxy=127.0.0.1. With no_proxy=127.0.0.1 the config_controller --kubernetes succeed but I can't do a 'source /etc/platform/openrc', I get the following errors: controller-0:~$ source /etc/platform/openrc Openstack Admin credentials can only be loaded from the active controller. I tried to force execution of the controller_config script but I get: ***************************************************** ***************************************************** Unable to get IP from host: controller-0 ***************************************************** ***************************************************** Pausing for 5 seconds... and checking /etc/hosts the file is incomplete: # HEADER: This file was autogenerated at 2019-01-21 18:59:58 +0000 # HEADER: by puppet. While it can still be managed manually, it # HEADER: is definitely not recommended. 127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4 127.0.0.1 localhost localhost.localdomain controller ::1 controller Anyway, something bad happened even with the config_controller running without errors. Also, I tried the VirtualBox in a non-proxy environment and everything worked. Today I'll try VirtualBox behind a proxy. As my libvirt/qemu setup has internet access I can't think on reason on why it works with virtualbox, but no with my setup. 
I'll confirm this. > 2. Mingyuan can you tell us if you are able to get > config_controller –kubernetes to succeed with a load from Jan 17th or > later? > > Hopefully answers to the above 2 questions will point to why you are > seeing failures while others are not. > > Frank > From shuicheng.lin at intel.com Wed Jan 23 01:37:40 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Wed, 23 Jan 2019 01:37:40 +0000 Subject: [Starlingx-discuss] Mount error when executing build-pkgs In-Reply-To: <2019012308500644965493@baicells.com> References: <2019012218313131162786@baicells.com>, <2019012308500644965493@baicells.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FE6D494@SHSMSX101.ccr.corp.intel.com> Hi Tim, Please check the “build.log” in below folder. You should get the failure info in it. INFO: Results and/or logs in: /localdisk/loadbuild/ubuntu/starlingx/std/results/ubuntu-starlingx-tis-r5-pike-std/registry-token-server-1.0.0-1.tis.1 Best Regards Shuicheng From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Wednesday, January 23, 2019 8:50 AM To: Qi, Mingyuan ; starlingx-discuss Subject: Re: [Starlingx-discuss] Mount error when executing build-pkgs Hi Mingyuna, After changed the nr_inode as your suggestion, I encounter a new issue as below, do you know how can I fix it? 00:43:41 Start: build phase for registry-token-server-1.0.0-1.tis.1.src.rpm 00:43:41 Start: build setup for registry-token-server-1.0.0-1.tis.1.src.rpm 00:43:41 Finish: build setup for registry-token-server-1.0.0-1.tis.1.src.rpm 00:43:41 Start: rpmbuild registry-token-server-1.0.0-1.tis.1.src.rpm 00:43:41 Start: Outputting list of installed packages 00:43:41 Finish: Outputting list of installed packages 00:43:41 ERROR: Exception(/localdisk/loadbuild/ubuntu/starlingx/std/rpmbuild/SRPMS/registry-token-server-1.0.0-1.tis.1.src.rpm) Config(mock/b0) 0 minutes 7 seconds 00:43:41 INFO: Results and/or logs in: /localdisk/loadbuild/ubuntu/starlingx/std/results/ubuntu-starlingx-tis-r5-pike-std/registry-token-server-1.0.0-1.tis.1 00:43:41 ERROR: Command failed: 00:43:41 # bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/registry-token-server.spec 00:43:41 00:43:42 End build on 'b0': /localdisk/loadbuild/ubuntu/starlingx/std/rpmbuild/SRPMS/registry-token-server-1.0.0-1.tis.1.src.rpm 00:43:42 Error building registry-token-server-1.0.0-1.tis.1.src.rpm on 'b0'. 00:43:42 Will try to build again (if some other package will succeed). 00:43:42 ===== iteration 1 complete ===== Thank you very much. 
Tim Xiong From: Qi, Mingyuan Date: 2019-01-22 23:41 To: xiongzhiwei at baicells.com; starlingx-discuss Subject: RE: [Starlingx-discuss] Mount error when executing build-pkgs Hi Tim, It’s nr_inode=0 issue, please try below cmd in container and build again: sudo sed -i 's/nr_inodes=0/nr_inodes=100k/g' /usr/lib/python2.7/site-packages/mockbuild/plugins/tmpfs.py Mingyuan From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Tuesday, January 22, 2019 18:32 To: starlingx-discuss > Subject: [Starlingx-discuss] Mount error when executing build-pkgs Hi all, When I execute the "build-pkgs" in container, below errors printed: 06:22:54 mock_update_or_init: in 06:22:55 b1: mock_update_or_init_cfg: /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b1.cfg 06:22:55 b1: Updating the mock environment 06:22:55 b1: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b1.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b1: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b1.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b1: INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 06:22:55 b1: Start: init plugins 06:22:55 b1: INFO: tmpfs initialized 06:22:55 b1: INFO: selinux disabled 06:22:55 b1: Finish: init plugins 06:22:55 b1: Start: run 06:22:55 b1: Start: chroot init 06:22:55 b1: INFO: mounting tmpfs at /localdisk/loadbuild/ubuntu/starlingx/std/mock/b1/root. 06:22:55 b1: ERROR: Command failed: 06:22:55 b1: # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=11g mock_chroot_tmpfs /localdisk/loadbuild/ubuntu/starlingx/std/mock/b1/root 06:22:55 b1: 06:22:55 b2: mock_update_or_init_cfg: /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b2.cfg 06:22:55 b2: Updating the mock environment 06:22:55 b2: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b2.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b2: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b2.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b2: INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 06:22:55 b2: Start: init plugins 06:22:55 b2: INFO: tmpfs initialized 06:22:55 b2: INFO: selinux disabled 06:22:55 b2: Finish: init plugins 06:22:55 b2: Start: run 06:22:55 b2: Start: chroot init 06:22:55 b2: INFO: mounting tmpfs at /localdisk/loadbuild/ubuntu/starlingx/std/mock/b2/root. 
06:22:55 b2: ERROR: Command failed: 06:22:55 b2: # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=11g mock_chroot_tmpfs /localdisk/loadbuild/ubuntu/starlingx/std/mock/b2/root 06:22:55 b2: 06:22:55 b3: mock_update_or_init_cfg: /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b3.cfg 06:22:55 b3: Updating the mock environment 06:22:55 b3: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b3.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b3: /usr/bin/mock -r /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std/ubuntu-starlingx-tis-r5-pike-std.b3.cfg --configdir /localdisk/loadbuild/ubuntu/starlingx/std/configs/ubuntu-starlingx-tis-r5-pike-std --update 06:22:55 b3: INFO: mock.py version 1.4.13 starting (python version = 2.7.5)... 06:22:55 b3: Start: init plugins 06:22:55 b3: INFO: tmpfs initialized 06:22:55 b3: INFO: selinux disabled 06:22:55 b3: Finish: init plugins 06:22:55 b3: Start: run 06:22:55 b3: Start: chroot init 06:22:55 b3: INFO: mounting tmpfs at /localdisk/loadbuild/ubuntu/starlingx/std/mock/b3/root. 06:22:55 b3: ERROR: Command failed: 06:22:55 b3: # mount -n -t tmpfs -o mode=0755 -o nr_inodes=0 -o size=11g mock_chroot_tmpfs /localdisk/loadbuild/ubuntu/starlingx/std/mock/b3/root The full log attached. Who can help me to fix it? thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Wed Jan 23 02:02:58 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Wed, 23 Jan 2019 02:02:58 +0000 Subject: [Starlingx-discuss] Questions on [Enhancement] OVS process monitoring and alarming In-Reply-To: References: Message-ID: Hi Matt, Thank you so much for your information! That’s very helpful! Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Tuesday, January 22, 2019 10:36 PM To: Xu, Chenjie Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Questions on [Enhancement] OVS process monitoring and alarming Hi Chenjie, The code that you reference below is the pmon daemon code which does not need to be updated. The story requires the creation of a set of pmon configuration files and associated puppet file link operations for when the vswitch type is ovs-dpdk. Here is the existing one for ovsdb-server for OVS (not yet integrated). https://github.com/openstack/stx-integ/blob/master/networking/openvswitch/files/ovsdb-server.pmon.conf It is packaged by this rpm spec. https://github.com/openstack/stx-integ/blob/master/networking/openvswitch-config/centos/openvswitch-config.spec You will also require puppet changes to link to this file from /etc/pmon.d (this will register the configuration with pmon daemon). 
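(To give the thread a concrete starting point -- purely an illustrative sketch, not an agreed configuration; the scenario, flavor, image, and counts below are placeholders -- the Rally tooling proposed in the test meeting notes earlier in this digest drives benchmarks from small task files like this:

    # Illustrative only: a minimal Rally task for a boot/delete benchmark.
    cat > boot-and-delete.yaml <<'EOF'
    ---
      NovaServers.boot_and_delete_server:
        -
          args:
            flavor:
              name: "m1.small"
            image:
              name: "cirros"
          runner:
            type: "constant"
            times: 10
            concurrency: 2
    EOF
    rally task start boot-and-delete.yaml

Running the same task file against the same bare-metal configuration for every build would give numbers that are comparable over time, in line with the meeting note about running on the same config every time.)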
https://github.com/openstack/stx-config/blob/master/puppet-manifests/src/modules/platform/manifests/vswitch.pp if $::platform::params::vswitch_type == 'ovs-dpdk' { $pmon_ensure = link } else { $pmon_ensure = absent } file { '/etc/pmon.d/ovsdb-server.conf': ensure => $pmon_ensure, target => '/etc/openvswitch/ovsdb-server.pmon.conf', owner => 'root', group => 'root', mode => '0755', } From: "Xu, Chenjie" > Date: Tuesday, January 22, 2019 at 7:53 AM To: "Peters, Matt" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Questions on [Enhancement] OVS process monitoring and alarming Hi Matt, I’m assigned the story [Enhancement] OVS process monitoring and alarming: https://storyboard.openstack.org/#!/story/2002947 And I have several questions on this story as below: 1. Does PMON refer to the following code: https://git.starlingx.io/cgit/stx-metal/tree/mtce-common/cgts-mtce-common-1.0/pmon?id=82e851d65129e819e2564fde91d48235e528efdd 2. Do I need to extend the above PMON as OVS PMON? 3. Will OVS PMON be integrated into stx-neutron? Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... URL: From jwoh95 at dcn.ssu.ac.kr Wed Jan 23 02:33:37 2019 From: jwoh95 at dcn.ssu.ac.kr (Jaewook Oh) Date: Wed, 23 Jan 2019 11:33:37 +0900 Subject: [Starlingx-discuss] After deployment finished, cannot create public flat network In-Reply-To: References: Message-ID: Thanks for all helps and advices! Especially Giuseppe's advice was exactly what I wanted to do. I could create flat network and the vms are now able to connect external network :) I'm so appreciate of your helps and thanks again. However, creating network in horizon is still impossible. I saw $fm alarm-list command, but the result showed nothing. Best Regards, Jaewook. I 2019년 1월 22일 (화) 오후 5:35, Sun, Austin 님이 작성: > Hi Jaewook: > > You can try http://10.10.10.2/admin/providernets/ to open , please change 10.10.10.2 to your oam ip. > > Then you can define flat provider networks. > > > > About log, you can try run command ‘collect’ to collect all logs , configs . > > > > Thanks. > > BR > Austin Sun. > > > > *From:* Giuseppe Sannino [mailto:km.giuseppesannino at gmail.com] > *Sent:* Tuesday, January 22, 2019 4:10 PM > *To:* Jaewook Oh > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] After deployment finished, cannot > create public flat network > > > > Hi Jaewook, > > I had a similar issue after deploying an AIO Simplex StarlingX. > > I had to re-define the host-if on the controller-0 first and then create > the related providernet. > > > > Here an example. Hope it helps. 
> > > > [wrsroot at controller-0 ~(keystone_admin)]$ system host-if-list controller-0 > > > +--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+ > > | uuid | name | class | type | vlan > | ports | uses | used | attributes | provider networks | > > | | | | | id > | | i/f | by | | | > > | | | | | > | | | i/f | | | > > > +--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+ > > | 38da435a-5fc5-44ac-b038-52af9f23d52d | lo | platform | virtual | None > | [] | [] | [] | MTU=1500 | None | > > | 7f806264-9c19-45f4-b7d7-df1f90e9d540 | eno5 | platform | ethernet | None > | [u'eno5'] | [] | [] | MTU=1500 | None | > > | 9f7365e8-bc9a-4c9c-8725-72d40f5a18ff | eno6 | data | ethernet | None > | [u'eno6'] | [] | [] | MTU=1500, | public_flat | > > | | | | | > | | | | accelerated=True | | > > | | | | | > | | | | | | > > > +--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+ > > [wrsroot at controller-0 ~(keystone_admin)]$ openstack providernet list > > > +--------------------------------------+---------------+------+------+--------+ > > | ID | Name | Type | MTU | > Ranges | > > > +--------------------------------------+---------------+------+------+--------+ > > | 197c33ba-6db0-4918-9a0e-e98b01aee1e8 | public_flat | flat | 1500 | > | > > > +--------------------------------------+---------------+------+------+--------+ > > > > > > Besides, you can't create it via dashboard,. I managed to do it only via > command. So something like: > > > > neutron providernet-create public_flat --type=flat > > system host-if-list -a controller-0 > > system host-if-modify -c data controller-0 eno6 -p public_flat > > system host-if-list -a controller-0 > > openstack network create provider_flat --provider-physical-network > public_flat --provider-network-type flat --share --external > > > > which will create something like: > > [wrsroot at controller-0 ~(keystone_admin)]$ openstack network show > provider_flat > > +---------------------------+--------------------------------------+ > > | Field | Value | > > +---------------------------+--------------------------------------+ > > | admin_state_up | UP | > > : > > | provider:network_type | flat | > > | provider:physical_network | public_flat | > > | provider:segmentation_id | None | > > | qos_policy_id | None | > > | revision_number | 4 | > > | router:external | External | > > : > > +---------------------------+--------------------------------------+ > > > > /Giuseppe > > > > > > On Tue, 22 Jan 2019 at 08:47, Jaewook Oh wrote: > > Hello Hu, Yong, > > Thanks for the advice. > > > > On my dashboard, "*Danger: *An error occurred. Please try again later." > > Above error message appears, and I cannot open network creation panel. > > > > And also I'm now trying to see log in the host, but I cannot find it. Is > the log disabled by default for StarlingX? > > > > BR, > > Jaewook. > > > > 2019년 1월 22일 (화) 오후 4:08, Hu, Yong 님이 작성: > > Hey, > > Pls share the error messages you saw on Horizon. > > > > As to your question: “Is there any way to create flat network on StarlingX > openstack platform?” > > Yes, you can refer to CMD: > > $ openstack help providernet create > > $ openstack help network create > > > > Of course, since you had error on Horizon, there should be something wrong. 
> > So, let’s figure out why it failed first. > > > > *From: *Jaewook Oh > *Date: *Tuesday, 22 January 2019 at 2:35 PM > *To: *"starlingx-discuss at lists.starlingx.io" < > starlingx-discuss at lists.starlingx.io> > *Subject: *[Starlingx-discuss] After deployment finished, cannot create > public flat network > > > > Hello, > > this is Jaewook Oh from IISTRC. > > > > I installed StarlingX on a server, and now I'm trying to create "flat > network" for public. > > However I couldn't find the way to make the network. > > I found "managed_flat", "managed_vlan", and "managed_vxlan" options in > '/etc/neutron/plugins/ml2/ml2_conf.ini' file. > > > > When I install some OpenStack platform, I usually used devstack, and with > devstack I could choose 'flat' option. > > > > Is there any way to create flat network on StarlingX openstack platform? > > > > And also network creation keeps failing on horizon dashboard. I had to use > OpenStack CLI. Is it also a bug? > > > > Thanks in advance for any help! > > > > Best Regards, > > Jaewook. > > > > ================================================ > *Jaewook Oh* (오재욱) > IISTRC - Internet Infra System Technology Research Center > 369 Sangdo-ro, Dongjak-gu, > 06978, Seoul, Republic of Korea > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > -- > > ================================================ > *Jaewook Oh* (오재욱) > IISTRC - Internet Infra System Technology Research Center > 369 Sangdo-ro, Dongjak-gu, > 06978, Seoul, Republic of Korea > Tel : +82-2-820-0841 | Mobile : +82-10-9924-2618 > E-mail : jwoh95 at dcn.ssu.ac.kr > ================================================ > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > -- ================================================ *Jaewook Oh* (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea Tel : +82-2-820-0841 | Mobile : +82-10-9924-2618 E-mail : jwoh95 at dcn.ssu.ac.kr ================================================ -------------- next part -------------- An HTML attachment was scrubbed... URL: From mingyuan.qi at intel.com Wed Jan 23 02:47:45 2019 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Wed, 23 Jan 2019 02:47:45 +0000 Subject: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB3939EA@ALA-MBD.corp.ad.wrs.com> References: <8DA47EAB-9659-4772-8081-BE17CC689541@intel.com> <9e20a178-59fd-7ce8-f46b-b8938da087fb@linux.intel.com> <38DDB8BE-5BBE-4F00-B6CB-AB7FC9637B7C@intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB3939EA@ALA-MBD.corp.ad.wrs.com> Message-ID: Memo, The issue is that during config_controller to bootstrap kubernetes and apply stx-openstack-helm, timeout occurred if pulling images from public docker registry are too slow. Regarding to the "local mirror", it means (1)setting up a local mirror registry server within the LAN that controller-0 can access, or (2)redirect default docker registry to a public docker registry mirror located in user's local region. Both of them are not related to the "on-host" registry on controller-0. One obstacle is that docker native mirror support is not enough for registries like gcr.io/quay.io. 
The project you mentioned[0] is a good approach for users willing/be able to setup a self-controlled local registry mirror. It won't introduce much change in starlingx and leverages proxy setting as the redirection to registry mirror. But for users not willing to hold a local registry mirror and relies on regional public registry mirror, it's not an option. These 2 cases seems can't be resolved by one approach. My thinking for the later one is to enhance docker's native mirror pulling mechanism, or to add/change the override of the image address. [0] https://github.com/rpardini/docker-registry-proxy Mingyuan -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Wednesday, January 23, 2019 4:28 To: Ponce Castaneda, Guillermo A ; Saul Wold ; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images The controller will already host a local docker registry after the system has been boot strapped. What is required is a private external registry that a system call pull from. I will be updating the story with additional info Brent -----Original Message----- From: Ponce Castaneda, Guillermo A [mailto:guillermo.a.ponce.castaneda at intel.com] Sent: Tuesday, January 22, 2019 3:20 PM To: Saul Wold ; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images Hello Frank and team, After having some ideas exchanged with Saul, we reach to the idea that the Docker registry/proxy could be located on the controller-0, and that way all the other nodes will point to it to pull the docker images they require. This approach will require to have extra disk space on controller-0 to host the images that all the other want to pull, and we also need to change the configs on every node so they can see the controller-0 as docker registry. What do you all think? - Memo On 1/21/19, 4:29 PM, "Ponce Castaneda, Guillermo A" wrote: On 1/21/19, 3:39 PM, "Saul Wold" wrote: On 1/21/19 11:54 AM, Ponce Castaneda, Guillermo A wrote: > Hello Frank, > > Thanks for initiating the conversation, my proposed solution is to bring > up a docker registry that will have to be in the local network of each > office, so the speed of the pulls will be the faster. > > The problem with this approach might be that the references of the > docker pull have to change so it points to the local docker registry, I > have already implemented this approach locally at GDC and can provide > documentation on how to do this. > > Another approach that I am researching is to use this project: > https://github.com/rpardini/docker-registry-proxy, so far this option > seems much better but I need to explore it a little bit further, I will > provide more details on it as soon as possible. > Do we need access to more than the standard docker hub? It also seems that this approach will require modifications to the images wanting to use the proxy. We do not really need access to more than the standard docker hub, but this one way to solve the problem of the people having troubles with slow networks, the docker registry proxy method promises to be transparent for the user, the user will have to modify their docker daemon file to add the registry as proxy and just pull images normally, I am working to set that up and do a test on our network right now, once it is done I will be able to tell if it is really transparent. 
I am sure this is true in most proxy setups. Sau! > All the feedback and other ideas are welcome. > > Thanks and Regards. > > Guillermo (Memo) Ponce > > *From: *"Miller, Frank" > *Date: *Monday, January 21, 2019 at 12:54 PM > *To: *"Martin, Guillermo Oscar" > *Cc: *"'starlingx-discuss at lists.starlingx.io'" > > *Subject: *[Starlingx-discuss] [Containers] Approach for adding a local > mirror of docker images > > Guillermo: > > As discussed at today’s containerization meeting, please reply with your > initial thoughts on how to address > https://storyboard.openstack.org/#!/story/2004711 . If you first need > to ask a member of the containerization subteam a few questions to > understand what is needed then try reaching out to Bob Church and Angie > Wang. > > Frank > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From juan.carlos.alonso at intel.com Wed Jan 23 02:51:04 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Wed, 23 Jan 2019 02:51:04 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20190122 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8BCAE@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jan-22 (link) Sanity Test is executed in a Virtual Environment Status: GREEN Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 25 TCs [PASS] TOTAL: [ 30 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 26 TCs [PASS] TOTAL: [ 31 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 26 TCs [PASS] TOTAL: [ 31 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 26 TCs [PASS] TOTAL: [ 31 TCs PASS ] ------------------------------------------------------------------ Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Wed Jan 23 02:51:23 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Wed, 23 Jan 2019 02:51:23 +0000 Subject: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images In-Reply-To: References: <8DA47EAB-9659-4772-8081-BE17CC689541@intel.com> <9e20a178-59fd-7ce8-f46b-b8938da087fb@linux.intel.com> <38DDB8BE-5BBE-4F00-B6CB-AB7FC9637B7C@intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB3939EA@ALA-MBD.corp.ad.wrs.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB3945DB@ALA-MBD.corp.ad.wrs.com> We will need to provide the ability to update the address for all images in question if the standard external registries are not used. 
As mentioned below, I will add some additional detail to the story. Brent -----Original Message----- From: Qi, Mingyuan [mailto:mingyuan.qi at intel.com] Sent: Tuesday, January 22, 2019 9:48 PM To: Rowsell, Brent ; Ponce Castaneda, Guillermo A ; Saul Wold ; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images Memo, The issue is that during config_controller to bootstrap kubernetes and apply stx-openstack-helm, timeout occurred if pulling images from public docker registry are too slow. Regarding to the "local mirror", it means (1)setting up a local mirror registry server within the LAN that controller-0 can access, or (2)redirect default docker registry to a public docker registry mirror located in user's local region. Both of them are not related to the "on-host" registry on controller-0. One obstacle is that docker native mirror support is not enough for registries like gcr.io/quay.io. The project you mentioned[0] is a good approach for users willing/be able to setup a self-controlled local registry mirror. It won't introduce much change in starlingx and leverages proxy setting as the redirection to registry mirror. But for users not willing to hold a local registry mirror and relies on regional public registry mirror, it's not an option. These 2 cases seems can't be resolved by one approach. My thinking for the later one is to enhance docker's native mirror pulling mechanism, or to add/change the override of the image address. [0] https://github.com/rpardini/docker-registry-proxy Mingyuan -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Wednesday, January 23, 2019 4:28 To: Ponce Castaneda, Guillermo A ; Saul Wold ; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images The controller will already host a local docker registry after the system has been boot strapped. What is required is a private external registry that a system call pull from. I will be updating the story with additional info Brent -----Original Message----- From: Ponce Castaneda, Guillermo A [mailto:guillermo.a.ponce.castaneda at intel.com] Sent: Tuesday, January 22, 2019 3:20 PM To: Saul Wold ; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Approach for adding a local mirror of docker images Hello Frank and team, After having some ideas exchanged with Saul, we reach to the idea that the Docker registry/proxy could be located on the controller-0, and that way all the other nodes will point to it to pull the docker images they require. This approach will require to have extra disk space on controller-0 to host the images that all the other want to pull, and we also need to change the configs on every node so they can see the controller-0 as docker registry. What do you all think? - Memo On 1/21/19, 4:29 PM, "Ponce Castaneda, Guillermo A" wrote: On 1/21/19, 3:39 PM, "Saul Wold" wrote: On 1/21/19 11:54 AM, Ponce Castaneda, Guillermo A wrote: > Hello Frank, > > Thanks for initiating the conversation, my proposed solution is to bring > up a docker registry that will have to be in the local network of each > office, so the speed of the pulls will be the faster. 
> > The problem with this approach might be that the references of the > docker pull have to change so it points to the local docker registry, I > have already implemented this approach locally at GDC and can provide > documentation on how to do this. > > Another approach that I am researching is to use this project: > https://github.com/rpardini/docker-registry-proxy, so far this option > seems much better but I need to explore it a little bit further, I will > provide more details on it as soon as possible. > Do we need access to more than the standard docker hub? It also seems that this approach will require modifications to the images wanting to use the proxy. We do not really need access to more than the standard docker hub, but this one way to solve the problem of the people having troubles with slow networks, the docker registry proxy method promises to be transparent for the user, the user will have to modify their docker daemon file to add the registry as proxy and just pull images normally, I am working to set that up and do a test on our network right now, once it is done I will be able to tell if it is really transparent. I am sure this is true in most proxy setups. Sau! > All the feedback and other ideas are welcome. > > Thanks and Regards. > > Guillermo (Memo) Ponce > > *From: *"Miller, Frank" > *Date: *Monday, January 21, 2019 at 12:54 PM > *To: *"Martin, Guillermo Oscar" > *Cc: *"'starlingx-discuss at lists.starlingx.io'" > > *Subject: *[Starlingx-discuss] [Containers] Approach for adding a local > mirror of docker images > > Guillermo: > > As discussed at today’s containerization meeting, please reply with your > initial thoughts on how to address > https://storyboard.openstack.org/#!/story/2004711 . If you first need > to ask a member of the containerization subteam a few questions to > understand what is needed then try reaching out to Bob Church and Angie > Wang. > > Frank > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From vm.rod25 at gmail.com Wed Jan 23 05:31:00 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 22 Jan 2019 23:31:00 -0600 Subject: [Starlingx-discuss] Performance Framework (requirements and first ideas) Message-ID: Hi team I would like to start the thread about the Performance testing framework. 
I took the liberty of making an initial document to gather requirements and ideas about how to address this problem:

https://docs.google.com/document/d/1gNHthtJSaijz5VewHAMCxeaXk_C1gbCS1FP18-z942c/edit?usp=sharing

Although there are only a few suggestions so far, feel free to comment or add requirements as you need; the idea is to cover as many end-user scenarios as possible.

Hope you find this useful, happy to help

Victor Rodriguez

From changcheng.liu at intel.com Wed Jan 23 05:37:11 2019
From: changcheng.liu at intel.com (Liu, Changcheng)
Date: Wed, 23 Jan 2019 05:37:11 +0000
Subject: [Starlingx-discuss] check_osds_down_up check range
Message-ID: <0D7994A90DD70040A9F5E77C4D23C57D50F4B4F2@SHSMSX103.ccr.corp.intel.com>

Hi John,
Could you help check whether there is something wrong with the code below?

File: cgcs-root/stx/stx-metal/inventory/inventory/inventory/common/ceph.py
 96     def check_osds_down_up(self, hostname, upgrade):
 97         # check if osds from a storage are down/up
 98         response, body = self._ceph_api.osd_tree(body='json')
 99         osd_tree = body['output']['nodes']
100         size = len(osd_tree)
101         for i in range(1, size):

Is there a special reason not to range from 0, i.e. range(0, size)?

B.R.
Changcheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vm.rod25 at gmail.com Wed Jan 23 05:47:39 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Tue, 22 Jan 2019 23:47:39 -0600
Subject: Re: [Starlingx-discuss] Performance Framework (requirements and first ideas)
In-Reply-To: References: Message-ID:

On Tue, Jan 22, 2019 at 11:31 PM Victor Rodriguez wrote:
> Hi team
>
> I would like to start the thread about the Performance testing
> framework. I took the liberty of making an initial document to gather
> requirements and ideas about how to address this problem
>
> https://docs.google.com/document/d/1gNHthtJSaijz5VewHAMCxeaXk_C1gbCS1FP18-z942c/edit?usp=sharing

Sorry for the spam, this is the actual link for the document:

https://docs.google.com/document/d/1js5uaeJRz4mX_WkioqiGwK5FC7pVOW0mdELAr2yuvKI/edit?usp=sharing

Regards

> Although there are only a few suggestions so far, feel free to comment
> or add requirements as you need; the idea is to cover as many
> end-user scenarios as possible.
>
> Hope you find this useful, happy to help
>
> Victor Rodriguez

From mingyuan.qi at intel.com Wed Jan 23 08:14:33 2019
From: mingyuan.qi at intel.com (Qi, Mingyuan)
Date: Wed, 23 Jan 2019 08:14:33 +0000
Subject: [Starlingx-discuss] Mount error when executing build-pkgs
Message-ID:

You have to download these tarballs through the stx download script; it changes the arcname of the tarball. In this case mux-456bcfa82d672db7cae587c9b541463f65bc2718 will be changed to gorilla-mux by the download script.
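The renaming amounts to roughly this, a sketch for illustration only (not the script's actual code; the input file name is whatever GitHub serves for that commit):

    # the upstream GitHub archive unpacks to mux-<sha>/; re-pack it with
    # gorilla-mux/ on top, which is what the rpm spec's %prep expects
    $ tar xzf 456bcfa82d672db7cae587c9b541463f65bc2718.tar.gz
    $ mv mux-456bcfa82d672db7cae587c9b541463f65bc2718 gorilla-mux
    $ tar czf gorilla-mux-456bcfa82d672db7cae587c9b541463f65bc2718.tar.gz gorilla-mux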
Mingyuan From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Wednesday, January 23, 2019 15:20 To: Lin, Shuicheng >; Qi, Mingyuan >; starlingx-discuss > Subject: Re: RE: [Starlingx-discuss] Mount error when executing build-pkgs Hi Shuicheng, Yes, I had checked /localdisk/loadbuild/ubuntu/starlingx/std/results/ubuntu-starlingx-tis-r5-pike-std/registry-token-server-1.0.0-1.tis.1/build.log, found below informations: + cd registry-token-server-1.0.0 + /usr/bin/pigz -dc /builddir/build/SOURCES/gorilla-mux-456bcfa82d672db7cae587c9b541463f65bc2718.tar.gz + /usr/bin/tar -xvvof - drwxrwxr-x root/root 0 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/ -rw-rw-r-- root/root 292 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/.travis.yml -rw-rw-r-- root/root 1476 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/LICENSE -rw-rw-r-- root/root 11972 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/README.md -rw-rw-r-- root/root 1399 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/bench_test.go -rw-rw-r-- root/root 380 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/context_gorilla.go -rw-rw-r-- root/root 868 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/context_gorilla_test.go -rw-rw-r-- root/root 380 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/context_native.go -rw-rw-r-- root/root 690 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/context_native_test.go -rw-rw-r-- root/root 8528 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/doc.go -rw-rw-r-- root/root 15842 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/mux.go -rw-rw-r-- root/root 58658 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/mux_test.go -rw-rw-r-- root/root 17516 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/old_test.go -rw-rw-r-- root/root 8956 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/regexp.go -rw-rw-r-- root/root 19752 2017-05-22 15:17 mux-456bcfa82d672db7cae587c9b541463f65bc2718/route.go + STATUS=0 + '[' 0 -ne 0 ']' + /usr/bin/chmod -Rf a+rX,u+w,g-w,o-w . + mkdir -p _build/src/github.com/gorilla/ + mv gorilla-mux _build/src/github.com/gorilla/mux BUILDSTDERR: mv: cannot stat 'gorilla-mux': No such file or directory BUILDSTDERR: error: Bad exit status from /var/tmp/rpm-tmp.fu9L2u (%prep) RPM build errors: BUILDSTDERR: Bad exit status from /var/tmp/rpm-tmp.fu9L2u (%prep) Child return code was: 1 EXCEPTION: [Error()] Traceback (most recent call last): File "/usr/lib/python2.7/site-packages/mockbuild/trace_decorator.py", line 96, in trace result = func(*args, **kw) File "/usr/lib/python2.7/site-packages/mockbuild/util.py", line 636, in do raise exception.Error("Command failed: \n # %s\n%s" % (command, output), child.returncode) Error: Command failed: # bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/registry-token-server.spec It seems there is no 'gorilla-mux' in gorilla-mux-456bcfa82d672db7cae587c9b541463f65bc2718.tar.gz, is this package correct? I had added its download link in tarball-dl.lst as https://github.com/openstack/stx-tools/blob/master/centos-mirror-tools/tarball-dl.lst. How can I fix it? Thank you very much BR Tim Xiong From: Lin, Shuicheng Date: 2019-01-23 09:37 To: xiongzhiwei at baicells.com; Qi, Mingyuan; starlingx-discuss Subject: RE: [Starlingx-discuss] Mount error when executing build-pkgs Hi Tim, Please check the “build.log” in below folder. 
You should get the failure info in it.

INFO: Results and/or logs in: /localdisk/loadbuild/ubuntu/starlingx/std/results/ubuntu-starlingx-tis-r5-pike-std/registry-token-server-1.0.0-1.tis.1

Best Regards
Shuicheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From austin.sun at intel.com Wed Jan 23 08:20:01 2019
From: austin.sun at intel.com (Sun, Austin)
Date: Wed, 23 Jan 2019 08:20:01 +0000
Subject: Re: [Starlingx-discuss] Heads up about probable failing tox and zuul
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA41B46D@ALA-MBD.corp.ad.wrs.com>
References: <6703202FD9FDFF4A8DA9ACF104AE129FBA41B46D@ALA-MBD.corp.ad.wrs.com>
Message-ID:

Hi Al & Don:
According to https://stackoverflow.com/questions/54315938/why-does-pipenv-fail-to-install-a-package-inside-a-docker-container, "--no-cache-dir" causes this issue with pip 19. I made the change https://review.openstack.org/#/c/632632/ to remove '--no-cache-dir', and tox and zuul passed.

Thanks.
BR
Austin Sun.

From: Penney, Don [mailto:Don.Penney at windriver.com]
Sent: Wednesday, January 23, 2019 6:02 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Heads up about probable failing tox and zuul

Note that this also impacts loci docker image builds.

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com]
Sent: Tuesday, January 22, 2019 4:56 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Heads up about probable failing tox and zuul

About 3 hours ago a new version of Pip was released to the pypi site.
https://pypi.org/project/pip/19.0/
Tox jobs which pick up that version of pip will likely fail to install their dependencies. Docker image jobs that are using loci (which uses pip) have also been observed to fail. I don't know what the fix is; I assume many python users in many projects will be impacted. If you see an error with a signature like this, you've hit the problem:

Exception:
Traceback (most recent call last):
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/cli/base_command.py", line 176, in main
    status = self.run(options, args)
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/commands/install.py", line 346, in run
    session=session, autobuilding=True
  File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/wheel.py", line 848, in build
    assert building_is_possible
AssertionError

Al
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From mingyuan.qi at intel.com Wed Jan 23 09:21:09 2019
From: mingyuan.qi at intel.com (Qi, Mingyuan)
Date: Wed, 23 Jan 2019 09:21:09 +0000
Subject: [Starlingx-discuss] Mount error when executing build-pkgs
In-Reply-To: <20190123170337664593116@baicells.com>
References: <2019012218313131162786@baicells.com>, , <2019012308500644965493@baicells.com>, <9700A18779F35F49AF027300A49E7C765FE6D494@SHSMSX101.ccr.corp.intel.com>, <20190123151930519606102@baicells.com>, <9700A18779F35F49AF027300A49E7C765FE6E6FA@SHSMSX101.ccr.corp.intel.com> <20190123170337664593116@baicells.com>
Message-ID:

You have to delete the manually downloaded files and download them again with the script. The script will not download a new copy if the file already exists, and the copy you downloaded manually is not suitable for the build.
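Concretely, the recovery looks roughly like this (a sketch; the downloads path is the one from your listing, and the script location assumes a default stx-tools checkout):

    # remove the hand-fetched GitHub tarball, whose top-level directory is mux-<sha>/
    $ rm stx/downloads/gorilla-mux-456bcfa82d672db7cae587c9b541463f65bc2718.tar.gz
    # re-run the download script so it re-fetches and re-packs the tarball
    # with gorilla-mux/ as the top-level directory
    $ cd stx-tools/centos-mirror-tools && ./download_mirror.sh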
Mingyuan From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Wednesday, January 23, 2019 17:04 To: Lin, Shuicheng ; Qi, Mingyuan ; starlingx-discuss Subject: Re: RE: [Starlingx-discuss] Mount error when executing build-pkgs Hi Shuicheng and Mingyuan, Indeed, this gorilla-mux-456bcfa82d672db7cae587c9b541463f65bc2718.tar.gz was downloaded successfully with download_mirror.sh and copy to correct mirror directory. but this exception is still reported. [ubuntu at 0270dda4c544 cgcs-root]$ ls stx/downloads/ dep-v0.5.0.tar.gz integrity MLNX_OFED_LINUX-4.3-1.0.1.0-rhel7.4-x86_64.tgz docker-distribution-48294d928ced5dd9b378f7fd7c6f5da3ff3f2c89.tar.gz integrity-kmod-e6aef069.tar.gz MLNX_OFED_LINUX-4.3-3.0.2.1-rhel7.5-x86_64.tgz docker-libtrust-fa567046d9b14f6aa788882a950d69651d230b21.tar.gz ixgbe-5.3.7.tar.gz openstack-helm-9d72fe1a501bc609a875eebf7b6274e18600ed70.tar.gz dpkg_1.18.24.tar.xz ixgbevf-4.3.5.tar.gz openstack-helm-infra-5d356f9265b337b75f605dee839faa8cd0ed3ab2.tar.gz drbd-8.4.11-1.tar.gz keycodemapdb-16e5b07.tar.gz puppet drbd-8.4.3.tar.gz kubernetes-contrib-v1.12.1.tar.gz python-cephclient-v0.1.0.5.tar.gz dtc-1.4.4.tar.gz kubernetes-v1.12.1.tar.gz python-setuptools-v38.5.1.tar.gz e1000e-3.4.2.1.tar.gz kvm-unit-tests.git-4ea7633.tar.bz2 python-smartpm-1.4.1.tar.gz gnocchi-4.2.5.tar.gz ldapscripts-2.0.8.tgz qat1.7.upstream.l.1.0.3-42.tar.gz gnocchiclient-7.0.1.tar.gz libibverbs-41mlnx1-OFED.4.2.1.0.6.42120.src.rpm rdma-core-43mlnx1-1.43101.src.rpm gnulib-ffc927e.tar.gz libibverbs-41mlnx1-OFED.4.3.0.1.8.43101.src.rpm rdma-core-43mlnx1-1.43302.src.rpm gophercloud-gophercloud-aa00757ee3ab58e53520b6cb910ca0543116400a.tar.gz libibverbs-41mlnx1-OFED.4.3.2.1.6.43302.src.rpm requests-toolbelt-0.5.1.tar.gz gorilla-context-08b5f424b9271eedf6f9f0ce86cb9396ed337a42.tar.gz libtpms-0.6.0-4f0d59d.tar.gz rpm-4.14.0.tar.bz2 gorilla-mux-456bcfa82d672db7cae587c9b541463f65bc2718.tar.gz lldpd-0.9.0.tar.gz Sirupsen-logrus-55eb11d21d2a31a3cc93838241d04800f52e823d.tar.gz helm-v2.12.1-linux-amd64.tar.gz mariadb-10.1.28.tar.gz spectre-meltdown-checker-0.37+-5cc77741.tar.gz i40e-2.4.10.tar.gz mlnx-ofa_kernel-4.3-OFED.4.3.1.0.1.1.g8509e41.src.rpm swtpm-0.1.0-253eac5.tar.gz i40evf-3.5.13.tar.gz mlnx-ofa_kernel-4.3-OFED.4.3.3.0.2.1.gcf60532.src.rpm tpm-kmod-e6aef069.tar.gz ibsh-0.3e.tar.gz MLNX_OFED_LINUX-4.2-1.2.0.0-rhel7.4-x86_64.tgz tss2-930.tar.gz When I rename it to gorilla-mux.tar.gz, the exception " find: ‘/import/mirrors/CentOS/pike/downloads/gorilla-mux-456bcfa82d672db7cae587c9b541463f65bc2718.tar.gz’: No such file or directory" printed. Thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... URL: From Volker.Hoesslin at swsn.de Wed Jan 23 09:59:30 2019 From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker) Date: Wed, 23 Jan 2019 09:59:30 +0000 Subject: [Starlingx-discuss] Can not launch instance In-Reply-To: References: , <8557B550001AFB46A43A0CCC314BF85153C5CF68@FMSMSX108.amr.corp.intel.com> <1542807798.10509.49.camel@windriver.com>, Message-ID: Ok, after some other hard working starting with my AMD compute-nodes, i will come back to this problem. if i try to start an instance with any typ of security-group the build/launch will fail: No valid host was found. There are not enough hosts available. 
compute-0: (RetryFilter) Previously tried: [[u'compute-0', u'compute-0'], [u'compute-1', u'compute-1']], compute-1: (RetryFilter) Previously tried: [[u'compute-0', u'compute-0'], [u'compute-1

After some research, the problem seems to be that my networks are missing "enable-port-security", and there is no way (for me) to enable this feature. Via Horizon there is by default no button/checkbox to handle this, and CLI calls don't work either:

$ openstack network create --enable --enable-port-security test
Error while executing command: Unrecognized attribute(s) 'port_security_enabled' (HTTP 400) (Request-ID: req-c673e8e6-bf40-49df-8093-ee8015764672)

The same error appears if I try to update the network settings:

$ openstack network set --enable-port-security P1
HttpException: Unrecognized attribute(s) 'port_security_enabled' (HTTP 400) (Request-ID: req-87d63162-80bb-49a2-9987-1bf7e6300e82), Unrecognized attribute(s) 'port_security_enabled'

Any hints?

________________________________________
From: von Hoesslin, Volker
Sent: Wednesday, 21 November 2018 15:04
To: 'Michel Thebeau'
Subject: RE: [Starlingx-discuss] Can not launch instance

Indeed, but currently I'm out of time :( Soon I will come back to this issue...

Thx, volker...

-----Original Message-----
From: Michel Thebeau [mailto:michel.thebeau at windriver.com]
Sent: Wednesday, 21 November 2018 14:43
To: von Hoesslin, Volker
Subject: Re: [Starlingx-discuss] Can not launch instance

Hi Volker,

Jim's (James') message inquired about the nova compute log: "Log into each compute and look for error/exceptions in /var/log/nova/nova-compute.log at the specific timestamp of launch. A similar error message will likely present in the nova-conductor.log (on the controller). Depending on the specific error, you may need to dig into other logs on the compute (eg, /var/log/kern.log, /var/log/libvirt/qemu/instance-.log, /var/log/libvirt/libvirtd.log, /var/log/openstack.log, etc). This should give hints. Some of the logs are extraneous noise, so focus near the exact timestamp."

If you shared some of that detail we might be able to comment about "a solution with security groups"

M

On Wed, 2018-11-21 at 10:24 +0000, von Hoesslin, Volker wrote:
> Hi,
> Nothing easier than that: basically, use the normal instance wizard in
> the Horizon and don't make any entries in the "security groups" step.
> Regardless of that, I would be interested in a solution with security
> groups in the future!
>
> Volker…
>
> From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com]
> Sent: Tuesday, 20 November 2018 17:33
> To: von Hoesslin, Volker; starlingx
> Subject: RE: [Starlingx-discuss] Can not launch instance
>
> Hello Volker,
>
> Can you share with me the steps to create an instance successfully? I am
> facing the same issue.
>
> Regards.
> Juan Carlos Alonso
>
> From: von Hoesslin, Volker [mailto:Volker.Hoesslin at swsn.de]
> Sent: Thursday, November 15, 2018 4:25 AM
> To: starlingx
> Subject: Re: [Starlingx-discuss] Can not launch instance
>
> Cool, thanks for your feedback! I've already read and understood the
> topic of the backing. I'm now at the point where it seems to work if I
> create an instance without the security-groups (default-group)
> specification; I don't know exactly why. As far as I've understood it
> now, it's supposed to have something to do with "port-security-
> enabled=false"?
>
> Sorry that I got in touch with the group so quickly, before I
> exhausted all my possibilities...
I promise improvement ;) > > Von: Dinescu, Stefan [mailto:Stefan.Dinescu at windriver.com]  > Gesendet: Donnerstag, 15. November 2018 11:15 > An: volker.von.hoesslin at gmx.de; starlingx > Betreff: Re: [Starlingx-discuss] Can not launch instance > > hello, > > Just a few quick questions to eliminate some potential issues (or > maybe even figure out the issue): > 1. Are the storage nodes in an available state? > 2. On the compute nodes, is nova-local configures for remote backing > as well? To check use "system host-lvg-show compute-0 nova-local" and > check the "instance_backing" parameter. > From: volker.von.hoesslin at gmx.de [volker.von.hoesslin at gmx.de] > Sent: Thursday, November 15, 2018 11:03 AM > To: starlingx > Subject: Re: [Starlingx-discuss] Can not launch instance > > ok, after download the first official ISO (Cengn-mirror) and > reinstall the complete starlingX i got the same error. for now, i can > see some lines in "/var/log/nova/nova-scheduler.log". i have pasted > here: https://pastebin.com/TbvYwTqu > > i really need a running openstack and i am willing to spend money for > service, please help me guys... > > volker... > > i don't understand that, either i am the only one who builds a > starlingX-stack at all or i have to fight very hard for all the > additional knowledge i need in addition to the install-doku... > now i have created all necessary components: > - external network > - internal network > - router (add external and internal network) > - add flavoir (aggregate_instance_extra_specs:storage=remote) > - add image (cirrOS 0.4.0 RAW/QCOW2) > - add key pairs > now i created a new instance (via horizon/CLI), without success. the > instance is created but it ends in an error-stat :( error messages in > the horizon GUI are meaningless for me: > > Fault > Message: No valid host was found. There are not enough hosts > available. compute-0: (RetryFilter) Previously tried: [[u'compute-1', > u'compute-1'], [u'compute-0', u'compute-0']], compute-1: > (RetryFilter) Previously tried: [[u'compute-1', u'compute-1'], > [u'compute-0 > Code: 501 > Details: compute-0: (RetryFilter) Previously tried: [[u'compute-1', > u'compute-1'], [u'compute-0', u'compute-0']], compute-1: > (RetryFilter) Previously tried: [[u'compute-1', u'compute-1'], > [u'compute-0', u'compute-0']] > Created: Nov. 13, 2018, 11:49 a.m. > > ================================================ > > Alarm UUID: 15f5bc85-de93-4109-a278-425bafc7c997 > Alarm ID: 700.001 > Severity: critical > Alarm State: set > Alarm Type: processing-error > Timestamp: Nov. 13, 2018, 11:49 a.m. > Suppression: > Entity Instance ID: tenant=571bfa02-4736-4359-a15d- > 871224b3b202.instance=3ef09380-47e6-479a-91cc-6813474a183d > Entity Type ID: tenant.instance > Probable Cause: underlying-resource-unavailable > Proposed Repair Action: Manual intervention required > Service Affecting: True > Management Affecting: True > Reason: Instance foobar123 owned by admin has failed to schedule > > > Interesting is, half of the instance is missing: @see: https://imgur. > com/a/UVmEZhF > - missing NIC > - missing IP Addresses > - missing Security Groups > - missing Volumes Attached > > > does anyone still have a tip for me? 
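For the remote-backing check Stefan raises above, the sequence is roughly as follows. This is a sketch, and the -b flag of host-lvg-modify is from memory, so verify it with "system help host-lvg-modify" first:

    [wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-show compute-0 nova-local | grep instance_backing
    # if the backing is not "remote" while the flavor requires storage=remote:
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-lock compute-0
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-lvg-modify -b remote compute-0 nova-local
    [wrsroot@controller-0 ~(keystone_admin)]$ system host-unlock compute-0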
Volker > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From himanshugoyal500 at gmail.com Wed Jan 23 10:33:03 2019 From: himanshugoyal500 at gmail.com (Himanshu Goyal) Date: Wed, 23 Jan 2019 16:03:03 +0530 Subject: [Starlingx-discuss] Deployment Option (error: compute boot in loop) In-Reply-To: <673DA92A-9CAD-4EF5-A2FC-4EE22D897B9D@intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B127@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B61E@FMSMSX108.amr.corp.intel.com> <673DA92A-9CAD-4EF5-A2FC-4EE22D897B9D@intel.com> Message-ID: Thanks a lot Juan and Yong, Able to see compute host in "system-host-list" after connecting with a hub. starlingX Compute installation has been done. But after unlocking the compute nodes the compute node come into an endless boot loop. dmesg log shows below error: [ 20.967137] iTCO_vendor_support: vendor-support=0 [ 20.968785] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11 [ 20.968840] iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS [ 21.066607] device-mapper: uevent: version 1.0.3 [ 21.067124] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel at redhat.com Checked the BIOS setting those are as mentioned in installation document. I'm using ISO Image available at below path: http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Regards, Himanshu Goyal On Tue, Jan 22, 2019 at 12:43 PM Hu, Yong wrote: > Himanshu, > > Could you have a try with a hub which 2 mgt ports (from controller and > compute) are plugged into? > > Let’s assure the normal setup works first, and then figure out why the > direct linkage of cable doesn’t work. > > > > BTW: “worker” and “compute” are just different “personality” names in > different STX version. > > On your current setup “compute” will do, supposedly. > > *From: *Himanshu Goyal > *Date: *Tuesday, 22 January 2019 at 12:32 AM > *To: *"Alonso, Juan Carlos" > *Cc: *"Hu, Yong" , " > starlingx-discuss at lists.starlingx.io" < > starlingx-discuss at lists.starlingx.io> > *Subject: *Re: [Starlingx-discuss] Deployment Option > > > > Thanks Juan ,Yong > > > > I tried both the commands output shows as below: > > > > 1) [wrsroot at controller-0 ~(keystone_admin)]$ *system host-add -n > compute-0 -p worker -m 00:1e:67:fd:3d:fe* > > usage: system host-add [-n ] [-p ] [-s > ] > > [-m ] [-i ] [-I ] > > [-T ] [-U ] [-P ] > > [-b ] [-r ] > > [-o ] [-c ] > > [-v ] [-l ] > > [-D ] > > system host-add: error: argument -p/--personality: invalid choice: > 'worker' (choose from 'controller', 'compute', 'storage', 'network', > 'profile') > > [wrsroot at controller-0 ~(keystone_admin)]$ > > [wrsroot at controller-0 ~(keystone_admin)]$ > > > > > > 2) [wrsroot at controller-0 ~(keystone_admin)]$ *system host-add -n > compute-0 -p compute -m 00:1e:67:fd:3d:fe* > > Host-add Rejected: Cannot add a compute host without specifying a mgmt_ip > when static address allocation is configured. 
> > [wrsroot at controller-0 ~(keystone_admin)]$ > > > > > > Regards, > > Himanshu Goyal > > > > > > On Mon, Jan 21, 2019 at 8:42 PM Alonso, Juan Carlos < > juan.carlos.alonso at intel.com> wrote: > > Hi, > > > > The personality of computes changed to “worker”, so the command should be: > > > > system host-add -n compute-0 -p worker -m ${mac_address} > > > > Regards. > > Juan Carlos Alonso > > > > *From:* Hu, Yong > *Sent:* Monday, January 21, 2019 8:51 AM > *To:* Himanshu Goyal ; Alonso, Juan Carlos < > juan.carlos.alonso at intel.com> > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Deployment Option > > > > Hi Himanshu, > > “system host-list” doesn’t “see” your compute node and LLDP won’t work, > > because the mgt port on compute node directly connects to mgt port on > controller-0 (rather than both connecting to a hub). > > > > Anyway, given you know the MAC of mgt port on compute node, you can have a > try to run the following cmd: > > # system host-add -n compute-0 -p compute -m > > > > Regards, > > yong > > > > *From: *Himanshu Goyal > *Date: *Monday, 21 January 2019 at 8:14 PM > *To: *"Alonso, Juan Carlos" > *Cc: *"starlingx-discuss at lists.starlingx.io" < > starlingx-discuss at lists.starlingx.io> > *Subject: *Re: [Starlingx-discuss] Deployment Option > > > > Thanks Juan, > > > > Able to unlock my controller node. But facing Issue in PXE boot of compute > node. After unlocking of controller machine not able to see compute host in > "*system host-list*" command. > > my controller machine is directly connected to compute machine. > > > > I'm following the below steps > > Steps: > > *1) system host-unlock controller-0* > > *2) system host-list* > > Output:: > > [wrsroot at controller-0 ~(keystone_admin)]$ system host-list > > > +----+--------------+-------------+----------------+-------------+--------------+ > > | id | hostname | personality | administrative | operational | > availability | > > > +----+--------------+-------------+----------------+-------------+--------------+ > > | 1 | controller-0 | controller | unlocked | enabled | > available | > > > +----+--------------+-------------+----------------+-------------+--------------+ > > > > 3) power on my compute machine. And give option to boot from PXE > > my compute machine is directly connected with controller with mgmt port. > > But not able to see host in "system host-list". > > > > 4) i tried with system host-add command also, but it is giving below error: > > *Error:* > > [wrsroot at controller-0 ~(keystone_admin)]$ system host-add > > Host-add Rejected: Cannot add a compute host without specifying a mgmt_ip > when static address allocation is configured. > > > > Please suggest me the needful change. > > > > Regards, > > Himanshu Goyal > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Wed Jan 23 14:13:34 2019 From: yong.hu at intel.com (Hu, Yong) Date: Wed, 23 Jan 2019 14:13:34 +0000 Subject: [Starlingx-discuss] Deployment Option (error: compute boot in loop) In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B127@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B61E@FMSMSX108.amr.corp.intel.com> <673DA92A-9CAD-4EF5-A2FC-4EE22D897B9D@intel.com> Message-ID: <7010B47A-A7D7-4084-9A21-577436E10B05@intel.com> Hi Himanshu, For compute/worker node, please make sure Virtualization settings (such as VT-X and VT-D) ENABLED in BIOS. 
They are mandatory requirements for compute node. Regards, Yong From: Himanshu Goyal Date: Wednesday, 23 January 2019 at 6:33 PM To: "Hu, Yong" Cc: "Alonso, Juan Carlos" , "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] Deployment Option (error: compute boot in loop) Thanks a lot Juan and Yong, Able to see compute host in "system-host-list" after connecting with a hub. starlingX Compute installation has been done. But after unlocking the compute nodes the compute node come into an endless boot loop. dmesg log shows below error: [ 20.967137] iTCO_vendor_support: vendor-support=0 [ 20.968785] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11 [ 20.968840] iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS [ 21.066607] device-mapper: uevent: version 1.0.3 [ 21.067124] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel at redhat.com Checked the BIOS setting those are as mentioned in installation document. I'm using ISO Image available at below path: http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ Regards, Himanshu Goyal On Tue, Jan 22, 2019 at 12:43 PM Hu, Yong > wrote: Himanshu, Could you have a try with a hub which 2 mgt ports (from controller and compute) are plugged into? Let’s assure the normal setup works first, and then figure out why the direct linkage of cable doesn’t work. BTW: “worker” and “compute” are just different “personality” names in different STX version. On your current setup “compute” will do, supposedly. From: Himanshu Goyal > Date: Tuesday, 22 January 2019 at 12:32 AM To: "Alonso, Juan Carlos" > Cc: "Hu, Yong" >, "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] Deployment Option Thanks Juan ,Yong I tried both the commands output shows as below: 1) [wrsroot at controller-0 ~(keystone_admin)]$ system host-add -n compute-0 -p worker -m 00:1e:67:fd:3d:fe usage: system host-add [-n ] [-p ] [-s ] [-m ] [-i ] [-I ] [-T ] [-U ] [-P ] [-b ] [-r ] [-o ] [-c ] [-v ] [-l ] [-D ] system host-add: error: argument -p/--personality: invalid choice: 'worker' (choose from 'controller', 'compute', 'storage', 'network', 'profile') [wrsroot at controller-0 ~(keystone_admin)]$ [wrsroot at controller-0 ~(keystone_admin)]$ 2) [wrsroot at controller-0 ~(keystone_admin)]$ system host-add -n compute-0 -p compute -m 00:1e:67:fd:3d:fe Host-add Rejected: Cannot add a compute host without specifying a mgmt_ip when static address allocation is configured. [wrsroot at controller-0 ~(keystone_admin)]$ Regards, Himanshu Goyal On Mon, Jan 21, 2019 at 8:42 PM Alonso, Juan Carlos > wrote: Hi, The personality of computes changed to “worker”, so the command should be: system host-add -n compute-0 -p worker -m ${mac_address} Regards. Juan Carlos Alonso From: Hu, Yong Sent: Monday, January 21, 2019 8:51 AM To: Himanshu Goyal >; Alonso, Juan Carlos > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Deployment Option Hi Himanshu, “system host-list” doesn’t “see” your compute node and LLDP won’t work, because the mgt port on compute node directly connects to mgt port on controller-0 (rather than both connecting to a hub). 
Anyway, given you know the MAC of mgt port on compute node, you can have a try to run the following cmd: # system host-add -n compute-0 -p compute -m Regards, yong From: Himanshu Goyal > Date: Monday, 21 January 2019 at 8:14 PM To: "Alonso, Juan Carlos" > Cc: "starlingx-discuss at lists.starlingx.io" > Subject: Re: [Starlingx-discuss] Deployment Option Thanks Juan, Able to unlock my controller node. But facing Issue in PXE boot of compute node. After unlocking of controller machine not able to see compute host in "system host-list" command. my controller machine is directly connected to compute machine. I'm following the below steps Steps: 1) system host-unlock controller-0 2) system host-list Output:: [wrsroot at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ 3) power on my compute machine. And give option to boot from PXE my compute machine is directly connected with controller with mgmt port. But not able to see host in "system host-list". 4) i tried with system host-add command also, but it is giving below error: Error: [wrsroot at controller-0 ~(keystone_admin)]$ system host-add Host-add Rejected: Cannot add a compute host without specifying a mgmt_ip when static address allocation is configured. Please suggest me the needful change. Regards, Himanshu Goyal -------------- next part -------------- An HTML attachment was scrubbed... URL: From km.giuseppesannino at gmail.com Wed Jan 23 14:36:55 2019 From: km.giuseppesannino at gmail.com (Giuseppe Sannino) Date: Wed, 23 Jan 2019 15:36:55 +0100 Subject: [Starlingx-discuss] After deployment finished, cannot create public flat network In-Reply-To: References: Message-ID: Hi Jaewook, I'm glad I could help! /Giuseppe On Wed, 23 Jan 2019 at 03:33, Jaewook Oh wrote: > Thanks for all helps and advices! > > Especially Giuseppe's advice was exactly what I wanted to do. > > I could create flat network and the vms are now able to connect external > network :) > > I'm so appreciate of your helps and thanks again. > > However, creating network in horizon is still impossible. > I saw $fm alarm-list command, but the result showed nothing. > > > Best Regards, > Jaewook. > > > I > > 2019년 1월 22일 (화) 오후 5:35, Sun, Austin 님이 작성: > >> Hi Jaewook: >> >> You can try http://10.10.10.2/admin/providernets/ to open , please change 10.10.10.2 to your oam ip. >> >> Then you can define flat provider networks. >> >> >> >> About log, you can try run command ‘collect’ to collect all logs , configs . >> >> >> >> Thanks. >> >> BR >> Austin Sun. >> >> >> >> *From:* Giuseppe Sannino [mailto:km.giuseppesannino at gmail.com] >> *Sent:* Tuesday, January 22, 2019 4:10 PM >> *To:* Jaewook Oh >> *Cc:* starlingx-discuss at lists.starlingx.io >> *Subject:* Re: [Starlingx-discuss] After deployment finished, cannot >> create public flat network >> >> >> >> Hi Jaewook, >> >> I had a similar issue after deploying an AIO Simplex StarlingX. >> >> I had to re-define the host-if on the controller-0 first and then create >> the related providernet. >> >> >> >> Here an example. Hope it helps. 
>> >> >> >> [wrsroot at controller-0 ~(keystone_admin)]$ system host-if-list >> controller-0 >> >> >> +--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+ >> >> | uuid | name | class | type | >> vlan | ports | uses | used | attributes | provider networks | >> >> | | | | | id >> | | i/f | by | | | >> >> | | | | | >> | | | i/f | | | >> >> >> +--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+ >> >> | 38da435a-5fc5-44ac-b038-52af9f23d52d | lo | platform | virtual | >> None | [] | [] | [] | MTU=1500 | None | >> >> | 7f806264-9c19-45f4-b7d7-df1f90e9d540 | eno5 | platform | ethernet | >> None | [u'eno5'] | [] | [] | MTU=1500 | None | >> >> | 9f7365e8-bc9a-4c9c-8725-72d40f5a18ff | eno6 | data | ethernet | >> None | [u'eno6'] | [] | [] | MTU=1500, | public_flat | >> >> | | | | | >> | | | | accelerated=True | | >> >> | | | | | >> | | | | | | >> >> >> +--------------------------------------+------+----------+----------+------+-----------+------+------+-------------------+-------------------+ >> >> [wrsroot at controller-0 ~(keystone_admin)]$ openstack providernet list >> >> >> +--------------------------------------+---------------+------+------+--------+ >> >> | ID | Name | Type | MTU | >> Ranges | >> >> >> +--------------------------------------+---------------+------+------+--------+ >> >> | 197c33ba-6db0-4918-9a0e-e98b01aee1e8 | public_flat | flat | 1500 | >> | >> >> >> +--------------------------------------+---------------+------+------+--------+ >> >> >> >> >> >> Besides, you can't create it via dashboard,. I managed to do it only via >> command. So something like: >> >> >> >> neutron providernet-create public_flat --type=flat >> >> system host-if-list -a controller-0 >> >> system host-if-modify -c data controller-0 eno6 -p public_flat >> >> system host-if-list -a controller-0 >> >> openstack network create provider_flat --provider-physical-network >> public_flat --provider-network-type flat --share --external >> >> >> >> which will create something like: >> >> [wrsroot at controller-0 ~(keystone_admin)]$ openstack network show >> provider_flat >> >> +---------------------------+--------------------------------------+ >> >> | Field | Value | >> >> +---------------------------+--------------------------------------+ >> >> | admin_state_up | UP | >> >> : >> >> | provider:network_type | flat | >> >> | provider:physical_network | public_flat | >> >> | provider:segmentation_id | None | >> >> | qos_policy_id | None | >> >> | revision_number | 4 | >> >> | router:external | External | >> >> : >> >> +---------------------------+--------------------------------------+ >> >> >> >> /Giuseppe >> >> >> >> >> >> On Tue, 22 Jan 2019 at 08:47, Jaewook Oh wrote: >> >> Hello Hu, Yong, >> >> Thanks for the advice. >> >> >> >> On my dashboard, "*Danger: *An error occurred. Please try again later." >> >> Above error message appears, and I cannot open network creation panel. >> >> >> >> And also I'm now trying to see log in the host, but I cannot find it. Is >> the log disabled by default for StarlingX? >> >> >> >> BR, >> >> Jaewook. >> >> >> >> 2019년 1월 22일 (화) 오후 4:08, Hu, Yong 님이 작성: >> >> Hey, >> >> Pls share the error messages you saw on Horizon. 
>> >> >> >> As to your question: “Is there any way to create flat network on >> StarlingX openstack platform?” >> >> Yes, you can refer to CMD: >> >> $ openstack help providernet create >> >> $ openstack help network create >> >> >> >> Of course, since you had error on Horizon, there should be something >> wrong. >> >> So, let’s figure out why it failed first. >> >> >> >> *From: *Jaewook Oh >> *Date: *Tuesday, 22 January 2019 at 2:35 PM >> *To: *"starlingx-discuss at lists.starlingx.io" < >> starlingx-discuss at lists.starlingx.io> >> *Subject: *[Starlingx-discuss] After deployment finished, cannot create >> public flat network >> >> >> >> Hello, >> >> this is Jaewook Oh from IISTRC. >> >> >> >> I installed StarlingX on a server, and now I'm trying to create "flat >> network" for public. >> >> However I couldn't find the way to make the network. >> >> I found "managed_flat", "managed_vlan", and "managed_vxlan" options in >> '/etc/neutron/plugins/ml2/ml2_conf.ini' file. >> >> >> >> When I install some OpenStack platform, I usually used devstack, and with >> devstack I could choose 'flat' option. >> >> >> >> Is there any way to create flat network on StarlingX openstack platform? >> >> >> >> And also network creation keeps failing on horizon dashboard. I had to >> use OpenStack CLI. Is it also a bug? >> >> >> >> Thanks in advance for any help! >> >> >> >> Best Regards, >> >> Jaewook. >> >> >> >> ================================================ >> *Jaewook Oh* (오재욱) >> IISTRC - Internet Infra System Technology Research Center >> 369 Sangdo-ro, Dongjak-gu, >> 06978, Seoul, Republic of Korea >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> >> -- >> >> ================================================ >> *Jaewook Oh* (오재욱) >> IISTRC - Internet Infra System Technology Research Center >> 369 Sangdo-ro, Dongjak-gu, >> 06978, Seoul, Republic of Korea >> Tel : +82-2-820-0841 | Mobile : +82-10-9924-2618 >> E-mail : jwoh95 at dcn.ssu.ac.kr >> ================================================ >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> > > -- > > ================================================ > *Jaewook Oh* (오재욱) > IISTRC - Internet Infra System Technology Research Center > 369 Sangdo-ro, Dongjak-gu, > 06978, Seoul, Republic of Korea > Tel : +82-2-820-0841 | Mobile : +82-10-9924-2618 > E-mail : jwoh95 at dcn.ssu.ac.kr > ================================================ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel.thebeau at windriver.com Wed Jan 23 14:38:13 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Wed, 23 Jan 2019 09:38:13 -0500 Subject: [Starlingx-discuss] Deployment Option (error: compute boot in loop) In-Reply-To: References: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B127@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B61E@FMSMSX108.amr.corp.intel.com> <673DA92A-9CAD-4EF5-A2FC-4EE22D897B9D@intel.com> Message-ID: <1548254293.11200.6.camel@windriver.com> If you lock the compute it should stop the reboot loop.  Then you can examine the logs. 
When it loops like that, it is often reported in /var/log/puppet/latest/

M

On Wed, 2019-01-23 at 16:03 +0530, Himanshu Goyal wrote:
> Thanks a lot Juan and Yong,
>
> Able to see compute host in "system-host-list" after connecting with
> a hub. The StarlingX compute installation has been done.
> But after unlocking the compute node, it goes into an
> endless boot loop. 

From cindy.xie at intel.com Wed Jan 23 14:49:58 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 23 Jan 2019 14:49:58 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 1/23
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E4EBC2@SHSMSX103.ccr.corp.intel.com>

Agenda & Notes for 1/23 meeting:

1. CentOS 7.6 upgrade status (Shuicheng/Martin)
1.1 srpm & rpm upgrade status: Done. 49 srpms: 30 upgraded, 19 invalid (15 puppet srpms dropped from the upgrade). 650+ rpms upgraded. Basic deployment tests have been done on both VE and bare metal. The code rebase from master was done by Saul yesterday; we will rebuild before passing the image to the GDC and WR test teams.
1.2 kernel & out-of-tree driver upgrade status: code under review for merge. Plan to provide a test ISO before the end of this week, after the code rebased from master is merged and basic deploy testing is done. Several driver patches require +2 before merge; ~10 patches are under review now. Numan will build the image from the feature branch once all patches are merged, so that the build can be done easily without cherry-picking pending patches.

2. Ceph upgrade (Vivian/Changcheng)
Liang Fang from Vivian's team joins the effort. Workaround to enable the Ceph-mgr daemon & RESTful plugin services: try to enable Ceph-mgr on controller-0 before the other Ceph-mons on controller-1 and storage-0. Ovidiu's comment is that this workaround doesn't work, as it breaks HA and will cause a split-brain issue; the workaround stopped here. Yong is trying to help go through the normal way without the workaround (through SM and Puppet?).
Ceph-Python-client refactoring is in progress. There are 214 APIs in the lib, not all of them used. The 29 APIs that are used have just been refactored; 8 APIs have been tested by checking the API output data format. The remaining 21 APIs are still pending validation on a running system. Liang has a dedicated storage setup and is learning the sysinv code. Ceph 13.2.2 output is different from Ceph 10's; the API in sysinv needs to remain unchanged, so we need to use a proxy lib (Ceph-Python-client) between sysinv and Ceph 13. Questions remain on whether the sysinv API should change or not - working w/ Ovidiu on whether a sysinv API change can be avoided. No API found so far that calls Ceph 13 directly from sysinv. The China team will work until the end of next week (Jan 31).

3. Python2to3 status (Austin)
stx-distcloud patch merged. stx-config, stx-distcloudclient and stx-integ are still open. Unit test patches for stx-config (sysinv) are under review and should merge this week. Victor: ETA for the stx-distcloudclient patches is the end of next week. Python3 testing will follow the master switch to Stein, for compatibility issues.

4. Bug triage (Cindy)
7 bugs pending; reviewed the old ones and made sure we have owners working on the bugs. Victor will work w/ a GDC engineer on the bug assigned to Cesar.

5.
Opens (all) None -----Original Appointment----- From: Xie, Cindy Sent: Sunday, November 4, 2018 10:27 PM To: 'Khalil, Ghada'; Sun, Austin; Somerville, Jim; 'Rowsell, Brent'; Liu, ZhipengS; Wold, Saul; starlingx-discuss at lists.starlingx.io; Shang, Dehao; Waheed, Numan; Troyer, Dean; Jones, Bruce E; Lin, Shuicheng; Zhu, Vivian; Hu, Yong; Xie, Cindy; 'Khalil, Ghada'; Somerville, Jim; 'Rowsell, Brent'; starlingx-discuss at lists.starlingx.io; Waheed, Numan Cc: Hu, Wei W; 'Seiler, Glenn'; Gomez, Juan P; 'Chen, Jacky'; Perez Rodriguez, Humberto I; 'Young, Ken'; Cobbley, David A; 'Waines, Greg'; Arce Moreno, Abraham; 'Eslimi, Dariush'; Lara, Cesar; Perez Carranza, Jose; 'Hellmann, Gil'; Armstrong, Robert H; Martinez Landa, Hayde; Martinez Monroy, Elio; 'Seiler, Glenn'; 'Chen, Jacky'; Perez Rodriguez, Humberto I; 'Young, Ken'; 'Waines, Greg'; 'Eslimi, Dariush'; 'Hellmann, Gil'; Fang, Liang A Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, January 23, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From Al.Bailey at windriver.com Wed Jan 23 14:52:07 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Wed, 23 Jan 2019 14:52:07 +0000 Subject: [Starlingx-discuss] Heads up about probable failing tox and zuul In-Reply-To: References: <6703202FD9FDFF4A8DA9ACF104AE129FBA41B46D@ALA-MBD.corp.ad.wrs.com> Message-ID: Pip released 19.0.1 about an hour ago with this fix https://github.com/pypa/pip/commit/7db266687cb6304b0708eb408c8f15efb78eedeb So the loci builds should work again, and no need to alter the -no-cache-dir settings Al From: Sun, Austin [mailto:austin.sun at intel.com] Sent: Wednesday, January 23, 2019 3:20 AM To: Penney, Don; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Heads up about probable failing tox and zuul Hi Al & Don: According to https://stackoverflow.com/questions/54315938/why-does-pipenv-fail-to-install-a-package-inside-a-docker-container " --no-cache-dir" will cause pip 19 such issue. made change https://review.openstack.org/#/c/632632/ to remove '--no-cache-dir' , tox and zuul passed. Thanks. BR Austin Sun. From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Wednesday, January 23, 2019 6:02 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Heads up about probable failing tox and zuul Note that this also impacts loci docker image builds. From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Tuesday, January 22, 2019 4:56 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Heads up about probable failing tox and zuul About 3 hours ago a new version of Pip was released to the pypi site. https://pypi.org/project/pip/19.0/ Tox jobs which pick up that version of pip will likely fail to install their dependencies. Docker image jobs that are using loci (which uses pip) have also been observed to fail. I don't know what the fix is, I assume many python users in many projects will be impacted. 
If you see an error with a signature like this, you've hit the problem Exception: Traceback (most recent call last): File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/cli/base_command.py", line 176, in main status = self.run(options, args) File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/commands/install.py", line 346, in run session=session, autobuilding=True File "/var/lib/openstack/lib/python2.7/site-packages/pip/_internal/wheel.py", line 848, in build assert building_is_possible AssertionError Al -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ovidiu.Poncea at windriver.com Wed Jan 23 15:16:26 2019 From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu) Date: Wed, 23 Jan 2019 15:16:26 +0000 Subject: [Starlingx-discuss] Simplex STX containerized In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C8BC0A@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C8BC0A@FMSMSX108.amr.corp.intel.com> Message-ID: <4C60D9C5C8176C47874FFF36647AA19E9D6173D0@ALA-MBD.corp.ad.wrs.com> Hi Carlos, See answers inline. Ovidiu ________________________________ From: Alonso, Juan Carlos [juan.carlos.alonso at intel.com] Sent: Tuesday, January 22, 2019 11:46 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Simplex STX containerized Hi, I am trying to deploy an STX containerized system following steps on https://wiki.openstack.org/wiki/StarlingX/Containers/Installation On Provisioning the platform section, on third bullet, to create partitions on the root disk. The first partition is for cgts volume and the second is for nova-local. The commands show that both should be applied to the same disk ID: system host-disk-list controller-0 | awk ‘/sda/{print $2} ’ Is this correct? [Ovi] system host-disk-list controller-0 | awk ‘/sda/{print $2} ’ Grabs the disk UUID where you want to create the partition (you have disks and partitions on disks): Syntax is: [root at controller-0 wrsroot(keystone_admin)]# system host-disk-partition-add usage: system host-disk-partition-add [-t ] For example, if sda has uuid cfa76d4b-f35a-4edd-806d-f25be9f5bb08 and you want to create a partition of size 10GiB on this disk, you would use: system host-disk-partition-add -t lvm_phys_vol controller-0 cfa76d4b-f35a-4edd-806d-f25be9f5bb08 10 you can then check the partitions with 'system host-disk-partition-list controller-0' cgts-vg and nova-local should be configured in the same disk partition? [Ovi] No. You need a separate partition for each. Note that 'nova-local' can be created on a partition or on an entire disk. I could not apply because size available was not enough, instead I use a different partition (sdb) [Ovi] sdb is disk, not a partition for nova-local. [Ovi] It will work, just make sure you add another disk to the vbox VM as, in your configuration, /dev/sdb is used by nova-local. Then at step: "Add an OSD (/dev/sdb)" replace sdb with sdc. Note here: /dev/sda, /dev/sdb are disks, partitions are /dev/sdb1, /dev/sdb2 and so on. To check disks use system host-disk-list controller-0, to check all the partitions use 'system host-disk-partition-list controller-0' After unlock the controller: system host-unlock controller-0, the system reboot about 4 times and then boot correctly. Is this an expected behavior? [Ovi] No, it's not expected, it should only reboot once. Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From Ovidiu.Poncea at windriver.com Wed Jan 23 15:38:50 2019
From: Ovidiu.Poncea at windriver.com (Poncea, Ovidiu)
Date: Wed, 23 Jan 2019 15:38:50 +0000
Subject: [Starlingx-discuss] check_osds_down_up check range
In-Reply-To: <0D7994A90DD70040A9F5E77C4D23C57D50F4B4F2@SHSMSX103.ccr.corp.intel.com>
References: <0D7994A90DD70040A9F5E77C4D23C57D50F4B4F2@SHSMSX103.ccr.corp.intel.com>
Message-ID: <4C60D9C5C8176C47874FFF36647AA19E9D6173ED@ALA-MBD.corp.ad.wrs.com>

Hi Liu,

That code should work correctly, as the first element in the list (index 0) is not of type 'host'. Two notes here:
1. That code in cgcs-root/stx/stx-metal/inventory is not executed in a normal stx install, as it is part of an in-progress sysinv refactoring. You should look in cgcs-root/stx/stx-config/sysinv/sysinv/sysinv/sysinv/common/ceph.py instead.
2. You shouldn't bother too much with that function (check_osds_down_up), as it was used for upgrades and we don't support upgrades in stx - it may get removed. Testing its output in a small script (or even the python interpreter) should be enough at this stage.

Ovidiu
________________________________
From: Liu, Changcheng [changcheng.liu at intel.com]
Sent: Wednesday, January 23, 2019 7:37 AM
To: Kung, John
Cc: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] check_osds_down_up check range

Hi John,
Could you help check whether there is something wrong with the code below?

File: cgcs-root/stx/stx-metal/inventory/inventory/inventory/common/ceph.py
 96     def check_osds_down_up(self, hostname, upgrade):
 97         # check if osds from a storage are down/up
 98         response, body = self._ceph_api.osd_tree(body='json')
 99         osd_tree = body['output']['nodes']
100         size = len(osd_tree)
101         for i in range(1, size):

Is there a special reason not to range from 0, i.e. range(0, size)?

B.R.
Changcheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From scott.little at windriver.com Wed Jan 23 15:37:30 2019
From: scott.little at windriver.com (Scott Little)
Date: Wed, 23 Jan 2019 10:37:30 -0500
Subject: Re: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 32 - Still Failing!
In-Reply-To: <622051744.291.1548205792559.JavaMail.javamailuser@localhost>
References: <2005520634.282.1548196506906.JavaMail.javamailuser@localhost> <622051744.291.1548205792559.JavaMail.javamailuser@localhost>
Message-ID: <661c8ba1-a952-51a6-a0d7-fcbb050ca7fe@windriver.com>

A couple more issues identified.

1) It seems 'docker login' and 'docker logout' are scoped at the user level, rather than the session level. We were trying to build two images in parallel yesterday; whichever finished first would log out, causing the second to fail when we tried to push the image to docker hub. I'll need to implement a use counter: increment on login, decrement on logout, and only do the real logout when the count drops to zero.

2) A new pip was pushed upstream that caused a lot of breakage, and not just for us. I'm told upstream has pushed a fix, so we'll try again today. Longer term, can we lock down pip? Do we need to mirror pypi? TBD.

I'll re-launch the master-pike build shortly. I'll defer f/stein-master as we are looking to pull in a few key updates and a rebase.
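The use counter described in 1) could look roughly like this; a sketch only, ignoring the file locking that parallel builds would need (the count file path and function names are hypothetical):

    COUNT=/tmp/docker-login.count
    do_login() {
        local n; n=$(cat "$COUNT" 2>/dev/null || echo 0)
        [ "$n" -eq 0 ] && docker login        # real login only on first use
        echo $((n + 1)) > "$COUNT"
    }
    do_logout() {
        local n; n=$(( $(cat "$COUNT" 2>/dev/null || echo 1) - 1 ))
        echo "$n" > "$COUNT"
        [ "$n" -le 0 ] && docker logout       # real logout when the count drops to zero
    }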
Scott

On 2019-01-22 8:09 p.m., build.starlingx at gmail.com wrote:
> Project: STX_build_docker_images
> Build #: 32
> Status: Still Failing
> Timestamp: 20190123T005110Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> BRANCH: master
> MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190122T145945Z
> OS: centos
> MUNGED_BRANCH: master
> MY_REPO: /localdisk/designer/jenkins/master/cgcs-root
> PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190122T145945Z/logs
> PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/logs
> MY_REPO_ROOT: /localdisk/designer/jenkins/master
> PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos
> DOCKER_BUILD_ID: jenkins-master-20190122T145945Z-builder
> OPENSTACK_RELEASE: pike
> TIMESTAMP: 20190122T145945Z
> OS_VERSION: 7.5.1804
> PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/inputs
> PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190122T145945Z/outputs
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Robert.Church at windriver.com  Wed Jan 23 15:49:04 2019
From: Robert.Church at windriver.com (Church, Robert)
Date: Wed, 23 Jan 2019 15:49:04 +0000
Subject: [Starlingx-discuss] check_osds_down_up check range
Message-ID:

If you run the command:

    controller-0:~$ ceph osd tree --f json | python -c 'import json,sys;print json.load(sys.stdin)["nodes"][0]'
    {u'children': [-2], u'type_id': 10, u'type': u'root', u'id': -1, u'name': u'storage-tier'}

you can see the first entry is always the root entry, so we skip it:

    ID WEIGHT  TYPE NAME                 UP/DOWN REWEIGHT PRIMARY-AFFINITY
    -1 0.11620 root storage-tier
    -2 0.11620     chassis group-0
    -4 0.11620         host controller-0
     0 0.11620             osd.0              up  1.00000          1.00000
    -3       0         host controller-1

It's a minor programmatic optimization.
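If relying on the root entry always being first ever becomes a concern, filtering by node type returns the hosts regardless of their position in the list. An untested sketch, in the same style as the one-liner above:

    controller-0:~$ ceph osd tree -f json | python -c 'import json,sys; print [n["name"] for n in json.load(sys.stdin)["nodes"] if n["type"] == "host"]'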
Bob

From: Ovidiu Poncea
Date: Wednesday, January 23, 2019 at 9:40 AM
To: "Liu, Changcheng", "Kung, John"
Cc: "starlingx-discuss at lists.starlingx.io"
Subject: Re: [Starlingx-discuss] check_osds_down_up check range

Hi Liu,

That code should work correctly, as the first element in the list (index 0) is not of type 'host'. Two notes here:

1. That code in cgcs-root/stx/stx-metal/inventory is not executed in a normal stx install, as it is part of an in-progress sysinv refactoring. You should look instead in cgcs-root/stx/stx-config/sysinv/sysinv/sysinv/sysinv/common/ceph.py

2. You shouldn't bother too much with that function (check_osds_down_up), as it was used for upgrades and we don't support upgrades in stx; it may get removed. Testing its output in a small script (or even the python interpreter) should be enough at this stage.

Ovidiu
________________________________
From: Liu, Changcheng [changcheng.liu at intel.com]
Sent: Wednesday, January 23, 2019 7:37 AM
To: Kung, John
Cc: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] check_osds_down_up check range

Hi John,

Could you help check whether there's something wrong with the below code?

File: cgcs-root/stx/stx-metal/inventory/inventory/inventory/common/ceph.py

     96     def check_osds_down_up(self, hostname, upgrade):
     97         # check if osds from a storage are down/up
     98         response, body = self._ceph_api.osd_tree(body='json')
     99         osd_tree = body['output']['nodes']
    100         size = len(osd_tree)
    101         for i in range(1, size):

Is there some special reason not to range from 0, i.e. range(0, size)?

B.R.
Changcheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From himanshugoyal500 at gmail.com  Wed Jan 23 16:06:16 2019
From: himanshugoyal500 at gmail.com (Himanshu Goyal)
Date: Wed, 23 Jan 2019 21:36:16 +0530
Subject: [Starlingx-discuss] Deployment Option (error: compute boot in loop)
In-Reply-To: <7010B47A-A7D7-4084-9A21-577436E10B05@intel.com>
References: <8557B550001AFB46A43A0CCC314BF85153C8ADB2@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B127@FMSMSX108.amr.corp.intel.com> <8557B550001AFB46A43A0CCC314BF85153C8B61E@FMSMSX108.amr.corp.intel.com> <673DA92A-9CAD-4EF5-A2FC-4EE22D897B9D@intel.com> <7010B47A-A7D7-4084-9A21-577436E10B05@intel.com>
Message-ID:

Hi Yong,

Yes, virtualization settings are enabled in BIOS. Below are the dmesg logs for IOMMU:

    [    0.000000] Policy zone: Normal
    [    0.000000] Kernel command line: BOOT_IMAGE=/vmlinuz-3.10.0-862.11.6.el7.36.tis.x86_64 root=UUID=8c9ba8fa-ecae-4d66-98ac-a77fb66faae2 ro security_profile=standard module_blacklist=integrity,ima audit=0 tboot=false crashkernel=auto biosdevname=0 console=ttyS0,115200 iommu=pt usbcore.autosuspend=-1 hugepagesz=1G hugepages=2 selinux=0 enforcing=0 nmi_watchdog=panic,1 softlockup_panic=1 intel_iommu=on user_namespace.enable=1 hugepagesz=2M hugepages=0 default_hugepagesz=2M isolcpus=1,2 rcu_nocbs=1-35 kthread_cpus=0 irqaffinity=0 nopti nospectre_v2
    [    0.000000] audit: disabled (until reboot)
    [    0.000000] DMAR: IOMMU enabled
    [    0.000000] PID hash table entries: 4096 (order: 3, 32768 bytes)
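As a general sanity check (standard Linux commands, not StarlingX-specific), CPU virtualization support and IOMMU state can be confirmed with:

    # does the CPU advertise VT-x (vmx) or AMD-V (svm)?
    grep -c -E 'vmx|svm' /proc/cpuinfo
    # is VT-d / the IOMMU active in the running kernel?
    dmesg | grep -i -e DMAR -e IOMMU

A non-zero count from the first command, together with the "DMAR: IOMMU enabled" line from the second, indicates both are on, as in the log above.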
Many Thanks,
Himanshu Goyal

On Wed, Jan 23, 2019 at 7:43 PM Hu, Yong wrote:
> Hi Himanshu,
>
> For the compute/worker node, please make sure Virtualization settings (such as VT-X and VT-D) are ENABLED in BIOS.
> They are mandatory requirements for a compute node.
>
> Regards,
> Yong
>
> From: Himanshu Goyal
> Date: Wednesday, 23 January 2019 at 6:33 PM
> To: "Hu, Yong"
> Cc: "starlingx-discuss at lists.starlingx.io"
> Subject: Re: [Starlingx-discuss] Deployment Option (error: compute boot in loop)
>
> Thanks a lot Juan and Yong,
>
> Able to see the compute host in "system host-list" after connecting with a hub. StarlingX compute installation has been done.
> But after unlocking the compute node, it comes into an endless boot loop.
>
> dmesg log shows the below error:
>
> [ 20.967137] iTCO_vendor_support: vendor-support=0
> [ 20.968785] iTCO_wdt: Intel TCO WatchDog Timer Driver v1.11
> [ 20.968840] iTCO_wdt: unable to reset NO_REBOOT flag, device disabled by hardware/BIOS
> [ 21.066607] device-mapper: uevent: version 1.0.3
> [ 21.067124] device-mapper: ioctl: 4.37.1-ioctl (2018-04-03) initialised: dm-devel at redhat.com
>
> Checked the BIOS settings; they are as mentioned in the installation document.
>
> I'm using the ISO image available at the below path:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/
>
> Regards,
> Himanshu Goyal
>
> On Tue, Jan 22, 2019 at 12:43 PM Hu, Yong wrote:
> Himanshu,
> Could you have a try with a hub which the 2 mgmt ports (from controller and compute) are plugged into?
> Let's ensure the normal setup works first, and then figure out why the direct cable link doesn't work.
>
> BTW: "worker" and "compute" are just different "personality" names in different STX versions.
> On your current setup "compute" will do, supposedly.
>
> From: Himanshu Goyal
> Date: Tuesday, 22 January 2019 at 12:32 AM
> To: "Alonso, Juan Carlos"
> Cc: "Hu, Yong", "starlingx-discuss at lists.starlingx.io"
> Subject: Re: [Starlingx-discuss] Deployment Option
>
> Thanks Juan, Yong
>
> I tried both; the command output is shown below:
>
> 1) [wrsroot at controller-0 ~(keystone_admin)]$ system host-add -n compute-0 -p worker -m 00:1e:67:fd:3d:fe
> usage: system host-add [-n ] [-p ] [-s ]
>                        [-m ] [-i ] [-I ]
>                        [-T ] [-U ] [-P