From hu.tianhao at 99cloud.net  Thu Aug  1 02:04:59 2019
From: hu.tianhao at 99cloud.net (=?utf-8?B?6IOh5aSp5piK?=)
Date: Thu, 1 Aug 2019 10:04:59 +0800
Subject: [Starlingx-discuss] =?utf-8?q?=5BStarlingx-disscuss=5D=5BBuild=5D?= =?utf-8?q?Fail_to_build_rpm_packages_when_build_StarlingX_ISO?=
In-Reply-To: <236333bb2fd78e964ca1cddffa1ad4baa2ccc592.camel@intel.com>
References: 
Message-ID: <78F7D960-EB43-4FC8-AC6D-9F707575839E@99cloud.net>

An HTML attachment was scrubbed...
URL: 

From zhang.kunpeng at 99cloud.net  Thu Aug  1 07:18:35 2019
From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=)
Date: Thu, 1 Aug 2019 15:18:35 +0800
Subject: [Starlingx-discuss] [starlingx-discuss][networking]Can starlingx enable networking-sfc easily in neutron?
Message-ID: <095D951C-8823-450C-9264-29C82AAB955C@99cloud.net>

Hi all,

I want to enable networking-sfc in neutron, and I find that the networking-sfc package already exists in the neutron-server container. So can I enable SFC with this configuration [1]? Are there any problems, such as when and how to upgrade the neutron database?
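For reference, the enablement described in [1] boils down to roughly the following (taken from the upstream networking-sfc documentation; whether it can be applied as-is inside the neutron-server container is exactly what I am asking):

    # neutron.conf -- upstream networking-sfc enablement (sketch; the file
    # location inside the container may differ)
    [DEFAULT]
    service_plugins = networking_sfc.services.sfc.plugin.SfcPlugin,networking_sfc.services.flowclassifier.plugin.FlowClassifierPlugin

    [sfc]
    drivers = ovs

    [flowclassifier]
    drivers = ovs

    # followed by the database migration
    $ neutron-db-manage --subproject networking-sfc upgrade head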
Thanks
Kunpeng

From chenjie.xu at intel.com  Thu Aug  1 07:35:56 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Thu, 1 Aug 2019 07:35:56 +0000
Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs
Message-ID: 

Hi Matt,
Based on my testing, the flavor with property "pci_passthrough:alias" should be required for passing a physical NIC to the VM, but it should not be required for passing a VF to the VM. So I think the alias information should contain the physical NICs which are configured with "pci-passthrough" by the following commands:
system host-if-modify -m 1500 -n pcipass -c pci-passthrough ${COMPUTE} ${IFUUID}
system interface-datanetwork-assign ${COMPUTE} pcipass ${PHYSNET2}
Could you please let me know your opinions and leave a comment in the bug below:
https://bugs.launchpad.net/starlingx/+bug/1836682
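For clarity, the alias/flavor pairing I am testing looks roughly like the following (a sketch only; the vendor and product IDs are placeholders, and the matching passthrough_whitelist entry is omitted):

    # nova.conf on the compute node (placeholder IDs; alias name chosen to
    # match the interface above)
    [pci]
    alias = {"vendor_id": "8086", "product_id": "10fb", "device_type": "type-PCI", "name": "pcipass"}

    # pair a flavor with the alias, then boot the VM with that flavor
    $ openstack flavor set pt-flavor --property "pci_passthrough:alias"="pcipass:1"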
Best Regards,
Xu, Chenjie
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From yan.chen at intel.com  Thu Aug  1 08:00:31 2019
From: yan.chen at intel.com (Chen, Yan)
Date: Thu, 1 Aug 2019 08:00:31 +0000
Subject: [Starlingx-discuss] How to run stx-test: automated-pytest-suite?
Message-ID: <72AD03D27224C74982BE13246D75B39739A3FADE@SHSMSX103.ccr.corp.intel.com>

Hi,

I'm trying to run the automated-pytest-suite cases under stx-test on a simplex deployment (on a VM created by qemu/kvm). The controller-0 is already unlocked and stx-openstack applied successfully.
I tried to make my own test.conf following the sample config file (stx-test_template.conf), as attached, and I tried to run cases with the following command:

$ pytest -m platform_sanity --testcase-config=./test.conf testcases/

But I got the following error log at setup step 1:

[2019-08-01 07:44:23,466] 1477 INFO MainThread ssh.set_natbox_client:: NatBox localhost ssh client is set
[2019-08-01 07:44:23,466] 1425 INFO MainThread ssh.get_natbox_client:: Getting NatBox Client...
[2019-08-01 07:44:25,322] 845 INFO MainThread container_helper.is_stx_openstack_deployed:: ['applied']
[2019-08-01 07:44:25,322] 109 INFO MainThread setups.setup_keypair:: scp key file from controller to NATBox
***Failure at test setup: /home/ec/workspace/codebase/test/automated-pytest-suite/utils/clients/ssh.py:1549: utils.exceptions.ActiveControllerUnsetException: Active controller ssh client is not set! Please use ControllerClient.set_active_controller(ssh_client) to set an active controller client.
ERROR

I'm wondering if this NATBox configuration is a must and if I configured it correctly? Can anyone help with the test config file? Or is there anything I need to do before the test?

Thanks a lot!
Yan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: test.conf
Type: application/octet-stream
Size: 1146 bytes
Desc: test.conf
URL: 

From ezpeerchen at gmail.com  Thu Aug  1 08:37:09 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Thu, 1 Aug 2019 16:37:09 +0800
Subject: [Starlingx-discuss] How to change OAM IP? (STX R1.0)
Message-ID: 

Dear all,

I can't change the OAM IP on STX R1.0.

Error Message: *Please configure a valid IP address in range.*
=====================================================================
[wrsroot at controller-0 ~(keystone_admin)]$ system oam-show
+-----------------+--------------------------------------+
| Property        | Value                                |
+-----------------+--------------------------------------+
| created_at      | 2019-07-25T11:24:15.619490+00:00     |
| isystem_uuid    | 103e067c-16c0-43ff-91ea-bea261f33b8a |
| oam_c0_ip       | 10.10.10.3                           |
| oam_c1_ip       | 10.10.10.4                           |
| oam_floating_ip | 10.10.10.2                           |
| oam_gateway_ip  | 10.10.10.1                           |
| oam_subnet      | 10.0.0.0/8                           |
| updated_at      | None                                 |
| uuid            | bd7e48fd-d29e-4ae6-be1f-a45a56250ede |
+-----------------+--------------------------------------+
[wrsroot at controller-0 ~(keystone_admin)]$ system oam-modify oam_subnet= 10.0.0.0/8 oam_gateway_ip=10.72.72.1 oam_floating_ip=10.72.72.2 oam_c0_ip=10.72.72.3 oam_c1_ip=10.72.72.4 action=apply
Invalid oam_floating_ip=10.72.72.2. Please configure a valid IP address in range
[wrsroot at controller-0 ~(keystone_admin)]$
=====================================================================

Thanks

From Matt.Peters at windriver.com  Thu Aug  1 10:34:01 2019
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Thu, 1 Aug 2019 10:34:01 +0000
Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs
In-Reply-To: 
References: 
Message-ID: <35423D55-043D-4F11-A2D7-86B8D4E4CCDB@windriver.com>

+Steve +Gerry
Do you have any additional information to add here? I don't believe we had to set up an alias in the past to do PCI-PT, so is this something that is new to the latest OpenStack nova release? Did we drop some functionality to align with upstream nova (that used to be in starlingx-staging)?
-Matt

From: "Xu, Chenjie" 
Date: Thursday, August 1, 2019 at 3:36 AM
To: "Peters, Matt" 
Cc: Ghada Khalil , "Zhao, Forrest" , "starlingx-discuss at lists.starlingx.io" 
Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

Hi Matt,
Based on my testing, the flavor with property “pci_passthrough:alias” should be required for passing a physical NIC to the VM. But it should not be required for passing a VF to the VM. So I think the alias information should contain physical NICs which are configured with “pci-passthrough” by following command:
system host-if-modify -m 1500 -n pcipass -c pci-passthrough ${COMPUTE} ${IFUUID}
system interface-datanetwork-assign ${COMPUTE} pcipass ${PHYSNET2}
Could you please let me know your opinions and leave a comment in the below bug:
https://bugs.launchpad.net/starlingx/+bug/1836682
Best Regards,
Xu, Chenjie
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Al.Bailey at windriver.com  Thu Aug  1 13:37:46 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Thu, 1 Aug 2019 13:37:46 +0000
Subject: [Starlingx-discuss] [Starlingx-disscuss][Build]Fail to build rpm packages when build StarlingX ISO
In-Reply-To: <78F7D960-EB43-4FC8-AC6D-9F707575839E@99cloud.net>
References: <78F7D960-EB43-4FC8-AC6D-9F707575839E@99cloud.net>
Message-ID: 

Since sm-common does not build a debug package, those steps are not needed, so as a workaround add this line in the sm-common.spec:
%define debug_package %{nil}
Note: this might just mean sm-common will pass, and one of the other packages that are also generating debug files will fail.
@stx-ha cores: Someone could also remove the commented-out dbg section ("files -n sm-common-dbg") at the same time.
Al

From: 胡天昊 [mailto:hu.tianhao at 99cloud.net]
Sent: Wednesday, July 31, 2019 10:05 PM
To: Cordoba Malibran, Erich
Cc: starlingx-discuss
Subject: Re: [Starlingx-discuss] [Starlingx-disscuss][Build]Fail to build rpm packages when build StarlingX ISO

Hi Erich,

Thank you so much for your reply. I have tried to run the command "$ mock -r $MY_WORKSPACE/std/configs/--tis-r5-pike-std/--tis-pike-std.b0.cfg" and it still fails to build rpm packages. I also cleaned the mock environment with the `--clean` flag and tried to build again. The errors are just the same as the last time I tried, and I am still trying to figure out why dwz cannot be installed.

Thanks again,
Tianhao

On 07/30/2019 01:04, Cordoba Malibran, Erich wrote:
Hi,
It seems that your mock environment is broken, see this error:
BUILDSTDERR: *** ERROR: DWARF compression requested, but no dwz installed
The command that you tried generates a new mock environment and that is why it builds successfully. If you want to reproduce the issue in the mock environment that is used by the build system, then use the following config file:
$ mock -r $MY_WORKSPACE/std/configs/--tis-r5-pike-std/--tis-pike-std.b0.cfg
And try to find why dwz cannot be installed. Alternatively you can just clean the mock environment with the `--clean` flag and try to build again.
I hope this can help.
-Erich

On Mon, 2019-07-29 at 14:40 +0800, 胡天昊 wrote:
Hi guys,
Recently I got a problem when I try to build the StarlingX ISO. When I run the 'build-pkgs' command following the 'stx.2019.05 Build guide', I can't build the rpm packages successfully. The following are the errors in build.log.
ENTER ['do_with_status'](['bash', '--login', '-c', '/usr/bin/rpmbuild -bs --target x86_64 --nodeps /builddir/build/SPECS/sm-common.spec'], chrootPath='/localdisk/loadbuild/test/starlingx/std/mock/b0/root'env= {'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'en_US.UTF-8', 'BUILD_BY': 'test', 'BUILD_DATE': '2019-07-29 04:02:46 +0000', 'REPO': '/localdisk/designer/test/starlingx/cgcs-root', 'WRS_GIT_BRANCH': 'HEAD', 'CGCS_GIT_BRANCH': 'HEAD'}shell=Falselogger=timeout=0uid=1001gid=751user='mockbuild'nspawn_args=[] unshare_net=TrueprintOutput=False) Executing command: ['bash', '--login', '-c', '/usr/bin/rpmbuild -bs --target x86_64 --nodeps /builddir/build/SPECS/sm-common.spec'] with env {'TERM': 'vt100', 'SHELL': '/bin/bash', 'HOME': '/builddir', 'HOSTNAME': 'mock', 'PATH': '/usr/bin:/bin:/usr/sbin:/sbin', 'PROMPT_COMMAND': 'printf "\\033]0;\\007"', 'PS1': ' \\s-\\v\\$ ', 'LANG': 'en_US.UTF-8', 'BUILD_BY': 'test', 'BUILD_DATE': '2019-07-29 04:02:46 +0000', 'REPO': '/localdisk/designer/test/starlingx/cgcs-root', 'WRS_GIT_BRANCH': 'HEAD', 'CGCS_GIT_BRANCH': 'HEAD'} and shell False BUILDSTDERR: /etc/profile: line 45: /dev/null: Permission denied BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied BUILDSTDERR: /etc/profile: line 70: /dev/null: Permission denied BUILDSTDERR: warning: Macro expanded in comment on line 26: %_unitdir I think BUILDSTDERR: warning: Macro expanded in comment on line 111: %{_unitdir}/* BUILDSTDERR: warning: Macro expanded in comment on line 112: %{_bindir}/* BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: Permission denied BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: Permission denied BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: Permission denied BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: Permission denied BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 272: /dev/null: Permission denied BUILDSTDERR: /usr/lib/rpm/find-debuginfo.sh: line 500: /dev/null: Permission denied BUILDSTDERR: *** ERROR: DWARF compression requested, but no dwz installed BUILDSTDERR: error: Bad exit status from /var/tmp/rpm-tmp.VbkqZP (%install) BUILDSTDERR: Macro expanded in comment on line 26: %_unitdir I think BUILDSTDERR: Macro expanded in comment on line 111: %{_unitdir}/* BUILDSTDERR: Macro expanded in comment on line 112: %{_bindir}/* BUILDSTDERR: Bad exit status from /var/tmp/rpm-tmp.VbkqZP (%install) RPM build errors: Child return code was: 1 EXCEPTION: [Error()] Traceback (most recent call last): File "/usr/lib/python3.6/site- packages/mockbuild/trace_decorator.py", line 96, in trace result = func(*args, **kw) File "/usr/lib/python3.6/site-packages/mockbuild/util.py", line 736, in do_with_status raise exception.Error("Command failed: \n # %s\n%s" % (command, output), child.returncode) mockbuild.exception.Error: Command failed: # bash --login -c /usr/bin/rpmbuild -bb --target x86_64 --nodeps /builddir/build/SPECS/sm-common.spec But if I run the 'sudo mock -r test-starlingx-tis-r5-pike-std.cfg sm- common-1.0.0-20.tis.src.rpm' command, rpm packages can be built successfully. I think this is probably a permission problem. But even I change owner and group of these files, the 'build-pkgs' command still failed for same reason. 
I really can't understand this problem. Can anybody give me some comments on it?

Thanks
Tianhao
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From abraham.arce.moreno at intel.com  Thu Aug  1 16:47:45 2019
From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham)
Date: Thu, 1 Aug 2019 16:47:45 +0000
Subject: [Starlingx-discuss] [Build] Meeting Minutes 8/1/2019
Message-ID: 

Hi all,
In today's meeting we talked about:
- Daily build monitoring rotation
- RC1 and build logistics
For full details, please see our Build Etherpad:
https://etherpad.openstack.org/p/stx-build

From yang.liu at windriver.com  Thu Aug  1 17:22:06 2019
From: yang.liu at windriver.com (Liu, Yang)
Date: Thu, 1 Aug 2019 17:22:06 +0000
Subject: [Starlingx-discuss] How to run stx-test: automated-pytest-suite?
In-Reply-To: <72AD03D27224C74982BE13246D75B39739A3FADE@SHSMSX103.ccr.corp.intel.com>
References: <72AD03D27224C74982BE13246D75B39739A3FADE@SHSMSX103.ccr.corp.intel.com>
Message-ID: <19C65A6E92EA384D809B1772130CD7F86926D68A@ALA-MBD.corp.ad.wrs.com>

Hi Yan,

Could you please attach the full log? The active controller client should have been set while pytest was collecting the test cases, before setup step 1 even started. It should be under ~/AUTOMATION_LOGS///TIS_AUTOMATION.log

Also, assuming we figure out the above issue: it seems your goal is to run platform sanity that is not dependent on stx-openstack. At the moment, the setup automatically detects stx-openstack, and if it's applied, it will also expect the tenants, users, neutron routers and networks to have been created to prepare for the stx-openstack tests. To work around it, you can do one of two things for now:
1. Remove stx-openstack
2. In the file /home/ec/workspace/codebase/test/automated-pytest-suite/setups.py, modify the following function:
def setup_natbox_ssh(natbox, con_ssh):
    return None
To fix it properly, we can add a configurable option to the test config file to skip the setup even if stx-openstack is present.

BR,
yang

From: Chen, Yan [mailto:yan.chen at intel.com]
Sent: August-01-19 4:01 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] How to run stx-test: automated-pytest-suite?

Hi,
I'm trying to run the automated-pytest-suite cases under stx-test on a simplex deployment (on a vm created by qemu kvm). The controller-0 is already unlocked and stx-openstack applied successfully.
I tried to make my own test.conf following the sample config file (stx-test_template.conf) as attached. And I tried to run cases with the following command:
$ pytest -m platform_sanity --testcase-config=./test.conf testcases/
But I got the following error log at the setup step 1:
[2019-08-01 07:44:23,466] 1477 INFO MainThread ssh.set_natbox_client:: NatBox localhost ssh client is set
[2019-08-01 07:44:23,466] 1425 INFO MainThread ssh.get_natbox_client:: Getting NatBox Client...
[2019-08-01 07:44:25,322] 845 INFO MainThread container_helper.is_stx_openstack_deployed:: ['applied']
[2019-08-01 07:44:25,322] 109 INFO MainThread setups.setup_keypair:: scp key file from controller to NATBox
***Failure at test setup: /home/ec/workspace/codebase/test/automated-pytest-suite/utils/clients/ssh.py:1549: utils.exceptions.ActiveControllerUnsetException: Active controller ssh client is not set!
Please use ControllerClient.set_active_controller(ssh_client) to set an active controller client.
ERROR
I'm wondering if this NATBox configuration is a must and if I configured it correctly? Anyone can help on the test config file? Or is there anything I need to do before the test?
Thanks a lot!
Yan
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From scott.little at windriver.com  Thu Aug  1 20:10:05 2019
From: scott.little at windriver.com (Scott Little)
Date: Thu, 1 Aug 2019 16:10:05 -0400
Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
Message-ID: 

StarlingX is preparing for its 2.0 release. Expected next week.

Below is a discussion of how this is to be done, and a few of the decisions that need to be taken.

The release team has taken a first pass at answering the outstanding questions. Our preferred options are indicated.

General Requirements
- Create a release candidate branch from master, preferably originating from the context of a green sanity.
  - Branch creation might require a brief halt of commit activity on the master branch. Stay tuned.
  - more on branching below
- RC branch receives cherry-picked patches from master until a final compile is declared.
- Set SW_VERSION, aka PLATFORM_RELEASE, for RC branch to 19.08 (Format YY.MM)
  - Retain same SW_VERSION for dot releases
- Set SW_VERSION, aka PLATFORM_RELEASE, for master branch to 19.09
- Any helm chart changes to pick up release images, not the master images?
  - answer appears to be no. Helm charts will list the images we build
- Create a CENGN job to build the RC branch. Daily builds until final compile is declared.
  - scripts are fairly generic. In theory it just needs a new job, based on the master job, with customized parameters.
  - A little bit of work to set docker image tags correctly vs the branching strategy... more below
    - Current default format would be "r-${BRANCH}-centos-stable-${PUBLISH_TIMESTAMP}", but this can be changed. See below
  - A release branch will test some new code paths in cengn scripts. Will have to monitor closely.
- Make sure the build/image retirement scripts are doing the right thing.
  - Already coded to support branching opt 2 (below)
  - Support for opt 1 will require some new scripting in CENGN

Branching strategy and content lock down
- Desired properties of the branch strategy.
  - We can re-build the ISO release at a later date.
    - The exact context of StarlingX git trees is captured in some form.
    - Context of third party git repos is captured on a best effort basis, e.g. capture tag or sha (and assume they are stable), but not cloning gits.
    - Leverage the 'revision' field for all repos in the manifest.
  - We can rebuild our docker images at a later date... I don't think we fully know how to do this yet.
    - lock down the base centos image, and yum, if possible
      - Need tooling changes for this
      - Need to reference the centos docker image by sha, not tag (see the sketch at the end of this email).
      - Probably need to hack the yum configuration as well, point it to cengn.
        - The build of stx-centos points to the cengn repo for yum update. The loci build of images, however, also uses upstream sources. Otherwise, we would need to include all RPMs used for those images in the LST files
    - lock down our inputs from PYPI as best we can
      - Find all files named *stable_docker_image; the field 'PIP_PACKAGES=' needs to use syntax like e.g.
panko==5.0.0
      - inputs can be found in piplst files, e.g. $MY_WORKSPACE/std/build-images/tis-networking-avs-heat-centos-stable.piplst
      - All python modules (non-starlingx) installed would need to be in the base wheels.cfg, which also updates the upper-constraints.txt in the tarball to restrict the installed version.
    - Lock down rpms feeding into docker images if possible.
      - Find all files named *stable_docker_image; the field 'DIST_PACKAGES=' needs to use syntax like e.g. bash-4.2.46-31.el7.tis.4.x86_64
      - inputs can be found in rpmlst files, e.g. $MY_WORKSPACE/std/build-images/tis-networking-avs-heat-centos-stable.rpmlst
      - ALL 'tis' packages found in rpmlst must be listed in DIST_PACKAGES

- We don't have the power to branch and tag all repos.
  - Some of the work needs to be done by further locking down the manifest on specific tags/shas
  - Do we store the locked-down manifest as a tagged copy of default.xml, or use versioned file names for the xml?

- What is the basic format of our branches and tags?
  - YYYY.MM   i.e. date-like, as used by the 2018.10 release
  - 2.0       i.e. a release version
  - Current CENGN scripting uses the date format; the release format requires scripting changes, but nothing major.
  - I'm always eager that a release branch be clearly distinguished from a dev/feature branch. Currently CENGN looks for YYYY.MM. Is it safe enough to look for anything starting with a number?
  - Release Team recommends the release version format be as follows (a sketch of the corresponding git commands follows the options below)
    - branch: r/stx.2.0
    - tag:    v2.0.0

- Opt 1 - Single RC branch. Tags mark dot releases -- preferred by Release Team
  - A single branch is used to stage commits for both the initial release and all subsequent dot releases
  - Branch name is r/stx.2.0 ... applies to starlingx repos.
  - Tags for each dot release.
    - v2.0.## ... for starlingx repos
    - v.stx.2.0.## ... for starlingx-staging repos. Note: starlingx-staging may have inherited version tags from an upstream project that we must not collide with
  - Git lock down via creation of a uniquely named manifest (v2.0.##.xml) rather than default.xml. In this manifest we specify tags or shas for each git.
  - We may need to halt commits to the staging branch, or at least the manifest git, when a dot release is imminent and we are waiting on test results.
  - New scripting required on CENGN for load and docker image retirement

- Opt 2 - Single RC branch. Fork a branch to lock down a dot release
  - A single branch is used to stage commits for both the initial release and all subsequent dot releases
  - Branch name of the staging branch is rc/stx.2.0 ... applies to starlingx repos.
  - When a dot release is declared, fork a release branch from the staging branch (r/stx.2.0.##). The only commits permitted are to lock down the manifest.
  - Can still tag as in opt 1, but not required.

- Opt 3 - new RC branch for each dot release (waterfall)
  - Branch name is r/stx.2.0.0 ... applies to starlingx
  - Final commit is to lock down the git manifest (default.xml)
  - Next dot release forks from the prior dot release, using the commit prior to manifest lock down.
  - Can still tag as in opt 1, but not required.
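For clarity, under opt 1 with the recommended names the git operations would look roughly like this (a sketch only; the remote name and sha are placeholders, and the branch could equally be created through the Gerrit UI/API):

    # fork the release branch from a green-sanity context
    $ git checkout -b r/stx.2.0 <green-sanity-sha>
    $ git push gerrit r/stx.2.0

    # at the release (and each dot release), tag the branch
    $ git tag v2.0.0
    $ git push gerrit v2.0.0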
Docker image labeling:
- A new set of docker images for each dot release
- Probably don't need to distinguish release from release candidate as this is hidden within the helm charts.
- Probably don't need to distinguish dot releases. Again it is hidden by the helm charts.
- Docker image tagging options ...
    r-2.0-centos-stable.0
    2.0-centos-stable.0
    r-2.0-centos-stable-${PUBLISH_TIMESTAMP}.0
    2.0-centos-stable-${PUBLISH_TIMESTAMP}.0    <--- preferred by Release Team ?
    r-2.0.##-centos-stable.0
    2.0.##-centos-stable.0
    r-2.0.##-centos-stable-${PUBLISH_TIMESTAMP}.0
    2.0.##-centos-stable-${PUBLISH_TIMESTAMP}.0

Cengn publication path:
- Release path
    .../starlingx/release/2.0.##
    .../starlingx/release/2.0/2.0.##     <-- Preferred by Scott
- RC path
    .../starlingx/rc/2.0/timestamp       <-- Preferred by Scott for opt 1
    .../starlingx/rc/2.0.##/timestamp
    .../starlingx/rc/2.0/2.0.##/timestamp
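As a footnote on locking down the base image mentioned above: pinning by digest rather than tag would look roughly like the following (a sketch; the digest value is a placeholder, not the real stx-centos base):

    # Dockerfile fragment -- pin the base image by content digest instead of a
    # floating tag (placeholder digest)
    FROM centos@sha256:<digest-of-the-frozen-base-image>

The yum configuration would similarly need to point at a frozen CENGN snapshot rather than the upstream mirrors.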
From Bill.Zvonar at windriver.com  Thu Aug  1 20:51:43 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Thu, 1 Aug 2019 20:51:43 +0000
Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - August 1/2019
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ABF155@ALA-MBD.corp.ad.wrs.com>

Notes & actions from today's release team meeting are below. In summary...
- the test team has some actions around feature/regression test status
- once we see those updates, we can decide on the RC1 milestone
- we will sort out the logistics of branch creation in the next day or so, starting with the email that Scott just sent

Bill...

https://etherpad.openstack.org/p/stx-releases

Release meeting notes August 1 2019

stx.2.0 Test Status
- Feature Testing and Regression Testing need to be complete by the RC1 milestone. Both are still open. Need to discuss new forecast dates for both.
- Two options to discuss:
  - Declare the milestone next week with an exception << Preferred
  - Move the milestone
- Feature Testing
  - Tracker: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1717644237
  - Helm Overrides - still executing some of these, about 7 left to run - will try to get them done in the next couple of days
    - ACTION: Elio send Helm Override testcase titles to Frank in case we can help prioritize
  - Ironic - they were lacking the right HW to run these, they have it now, need to set it up first, so they'll need to provide a plan
    - ACTION: Elio/Ada provide a plan, including the number of testcases
- Regression Testing - pass rate 94.82%
  - Tracker: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit#gid=838066175
  - 29 left to run - planning to finish tomorrow
    - ACTION: Elio/Ada provide update on these
  - 27 blocked
    - 4 - Security on IPv6 - blocked since IPv6 not set up yet - should be ok as an exception since IPv6
    - 7 - SRIOV - ran out of time
      - ACTION: Elio/Ada provide plan for running through these
    - 12 - System - still having issues pinging between instances on external configuration (2+2+2) - other 2+2+2 testcases have been run, only the ones that require pinging
      - LAUNCHPADS
        - https://bugs.launchpad.net/starlingx/+bug/1835575
        - https://bugs.launchpad.net/starlingx/+bug/1830286
      - Elio to reach out to networking peers (i.e. Forrest) about switch issues
  - likely would not hold up RC1 declaration for this set of blocked testcases

RC1 Milestone -- Branch Creation Mechanics / Logistics
- Request participation from Scott Little, Dean Troyer & Don Penney
- Work Items
  - Stop code merges on master for a short period. Send an email to the community. Control through the core reviewers
  - Create release branch: r/stx.2.0
  - Branch the main starlingx repos ...including docs (doc updates need to be cherrypicked as well - (dtroyer) did this change recently? docs was not versioned for stx.1
  - What about the stx staging branches? stx-nova / libvirt / qemu - At a min, libvirt / qemu need to
  - Create new CENGN builds from the r/stx.2.0 branch.
  - Build daily between RC1 and Final Compile.
  - How often to build docker images? right now, it's weekly
  - At Final Compile / Release, label the release branch r/stx.2.0 as stx.2.0.0
  - How do we make sure it's clear which ISO on CENGN corresponds to this? (also need to make sure it and the corresponding docker images are not deleted)
- After Final Compile / Release (Aug 26-30)
  - Developers will continue to work on stx.2.0 High priority bugs - sourcing fixes in master and cherrypicking to the release branch
  - Candidate "maintenance" release builds will be done on demand OR perhaps once a week(?) so that sanity and mini-regression can continue (helps ensure stability)
  - How do we distinguish on CENGN between those and the "official" maintenance release
  - When we are ready to issue a maintenance release, label the release branch as stx.2.0.x
  - Again, need to figure out how to distinguish the official maintenance release on CENGN
- Scott's Details
  - General Requirements
    - Create a release candidate branch from master, preferably originating from the context of a green sanity.
      - more on branching below
    - RC branch receives cherry-picked patches from master until a final compile is declared.
    - Set SW_VERSION, aka PLATFORM_RELEASE, for RC branch to 19.08 (Format YY.MM)
      - Retain same SW_VERSION for dot releases
    - Set SW_VERSION, aka PLATFORM_RELEASE, for master branch to 19.09
    - Any helm chart changes to pick up release images, not the master images?
      - answer appears to be no. Helm charts will list the images we build
    - Create a CENGN job to build the RC branch. Daily builds until final compile is declared.
      - scripts are fairly generic. In theory it just needs a new job, based on the master job, with customized parameters.
      - A little bit of work to set docker image tags correctly vs the branching strategy... more below
        - Current format would default to "r-${BRANCH}-centos-stable-${PUBLISH_TIMESTAMP}.0"
      - A release branch will test some new code paths in cengn scripts. Will have to monitor closely.
    - Make sure the build/image retirement scripts are doing the right thing.
      - Already coded to support branching opt 2 (below)
      - Support for opt 1 will require some new scripting in CENGN
  - Branching strategy and content lockdown
    - Desired properties of the branch strategy.
      - We can re-build the ISO release at a later date.
        - The exact context of StarlingX git trees is captured in some form.
        - Context of third party git repos is captured on a best effort basis, e.g. capture tag or sha (and assume they are stable), but not cloning gits.
        - Leverage the 'revision' field for all repos in the manifest.
      - We can rebuild our docker images at a later date... I don't think we fully know how to do this yet.
        - Lock down the base centos image, and yum, if possible
          - Need tooling changes for this
          - Need to reference the centos docker image by sha, not tag.
          - Probably need to hack the yum configuration as well, point it to cengn.
            - The build of stx-centos points to the cengn repo for yum update. The loci build of images, however, also uses upstream sources.
Otherwise, we would need to include all RPMs used for those images in the LST files
        - Lock down our inputs from PYPI as best we can
          - Find all files named *stable_docker_image; the field 'PIP_PACKAGES=' needs to use syntax like e.g. panko==5.0.0
          - Inputs can be found in piplst files, e.g. $MY_WORKSPACE/std/build-images/tis-networking-avs-heat-centos-stable.piplst
          - All python modules (non-starlingx) installed would need to be in the base wheels.cfg, which also updates the upper-constraints.txt in the tarball to restrict the installed version.
        - Lock down rpms feeding into docker images if possible.
          - Might not be possible without lock down of the base image
          - Find all files named *stable_docker_image; the field 'DIST_PACKAGES=' needs to use syntax like e.g. bash-4.2.46-31.el7.tis.4.x86_64
          - Inputs can be found in rpmlst files, e.g. $MY_WORKSPACE/std/build-images/tis-networking-avs-heat-centos-stable.rpmlst
          - ALL 'tis' packages found in rpmlst must be listed in DIST_PACKAGES
    - We don't have the power to branch and tag all repos.
      - Some of the work needs to be done by further locking down the manifest on specific tags/shas
      - Do we store the locked-down manifest as a tagged copy of default.xml, or use versioned file names for the xml?
    - Basic format of our branches and tags
      - YYYY.MM i.e. date-like
      - 2.0 i.e. a release number
      - Current CENGN scripting uses the date format; the release format requires scripting changes.
      - I'm always eager that a release branch be clearly distinguished from a dev/feature branch. Currently CENGN looks for YYYY.MM. Is it safe enough to look for anything starting with a number?
    - Opt 1 - Single RC branch. Tags mark dot releases - preferred
      - Branch name is r/stx.2.0 ... applies to starlingx
      - Tags for each dot release (v2.0.##). applies to starlingx and (v.stx.2.0.##) starlingx-staging
      - Git lockdown via creation of a manifest (v2019.08.##.xml or v2.0.##.xml) rather than default.xml. In this manifest we specify tags or shas for each git.
      - New scripting required on CENGN for load and docker image retirement
      - we agreed on this option, with the v2.0 - SemVer format
      - we also agreed to tag the staging repos - may need to prefix those w/ stx if they're already using numeric tagging
    - Opt 2 - Single RC branch. Fork a branch to lock down a dot release
      - Branch name is 2019.08-rc ... or 2.0-rc ... applies to starlingx and starlingx-staging
      - Branch for a dot release (2019.08.## or v2.0.##).
      - Git lockdown occurs in default.xml. In this manifest we specify tags or shas for each git.
    - Opt 3 - new RC branch for each dot release (waterfall)
      - Branch name is 2019.08.##... or 2.0.## ... applies to starlingx and starlingx-staging
      - Final commit is to lock down the manifest
      - Next dot release forks from the prior dot release (the commit prior to manifest lockdown).
  - Docker image labeling:
    - Release image options
      - r-2019.08.##-centos-stable.0
      - r-2.0.##-centos-stable.0
      - 2019.08.##-centos-stable.0
      - 2.0.##-centos-stable-${PUBLISH_TIMESTAMP}.0 --- preferred <--- leaning towards this
  - Cengn path:
    - Release path
      - starlingx/release/2018.10.##
      - starlingx/release/2018.10/2018.10.##
      - starlingx/release/2.0.##
      - starlingx/release/2.0/2.0.##
    - RC path
      - starlingx/rc/2018.10/timestamp
      - starlingx/rc/2018.10.##/timestamp
      - starlingx/rc/2018.10/2018.10.##/timestamp
      - starlingx/rc/2.0/timestamp
      - starlingx/rc/2.0.##/timestamp
      - starlingx/rc/2.0/2.0.##/timestamp

Didn't get to this...
- Maintenance Release Plan - From the Release Planning wiki: https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Maintenance - Formal releases are maintained for 12 months. This allows support for two previous StarlingX releases. - During the maintenance window, the release team will evaluate monthly whether a maintenance release is required. - The maintenance release will be tagged on the release branch. - The maintenance release will undergo a mini regression test cycle. - The corresponding binaries will be posted on the StarlingX CENGN mirror. - Plan for the first stx.2.0 maintenance release - recommend 4-6wks after the stx.2.0 release - Request Ada and Numan to put together the mini-regression plan - Discuss labels From maria.g.perez.ibarra at intel.com Thu Aug 1 22:18:45 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 1 Aug 2019 22:18:45 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190801 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-August-1 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 

From maria.g.perez.ibarra at intel.com  Thu Aug  1 23:09:09 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Thu, 1 Aug 2019 23:09:09 +0000
Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 8/01/19
Message-ID: 

StarlingX 2.0 Release Status:
ISO: BUILD_ID="20190724T013000Z" from (link)
----------------------------------------------------------------------
MANUAL EXECUTION
----------------------------------------------------------------------
Overall Results:
Total = 493
Pass = 386
Fail = 22
Blocked = 27
Not Run = 27
Obsolete = 25
Deferred = 6
Total executed = 435
Pass Rate = 94.60%
Formula used: Pass Rate = pass * 100 / (pass + fail)
Results per Domain:
Regression - AIO-SX              25 PASS | 1 OBSOLETE
Regression - Backup & Restore     6 DEFERRED
Regression - Distributed Cloud
Regression - Gnocchi            15 PASS
Regression - FM                   3 PASS
Regression - HA                  11 PASS | 1 FAIL
Regression - Heat                12 PASS | 1 OBSOLETE
Regression - Horizon              4 PASS
Regression - Install and Config   6 PASS | 1 FAIL
Regression - Maintenance          8 PASS | 1 FAIL
Regression - Networking         115 PASS | 2 FAIL | 7 BLOCKED | 19 OBSOLETE
Regression - Nova                20 PASS | 9 FAIL
Regression - Security            34 PASS | 1 FAIL | 6 BLOCKED | 1 OBSOLETE
Regression - Storage             23 PASS | 2 BLOCKED | 2 OBSOLETE
Regression - Inventory           29 PASS | 1 FAIL
System Test                      20 PASS | 1 FAIL | 12 BLOCKED | 1 OBSOLETE
Regression - new features        61 PASS | 5 FAIL
---------------------------------------------------------------------------
AUTOMATED EXECUTION - INTEL
---------------------------------------------------------------------------
Overall Results:
Total = 234
Pass = 178
Fail = 47
Not Run = 9
Total executed = 225
Pass Rate = 79.1%
Formula used: Pass Rate = pass * 100 / (pass + fail)
Results per Domain:
Fault-Management         15 PASS
Gnocchi                  12 PASS
HEAT                      6 PASS
High-Availability         8 PASS | 2 FAIL
Horizon                   2 PASS
Installation-And-Config   6 PASS | 1 FAIL
Maintenance              22 PASS | 5 FAIL
Networking               39 PASS | 12 FAIL
Nova                     14 PASS | 5 FAIL
Security                 16 PASS | 7 FAIL
Storage                   3 PASS | 11 FAIL
SYSINVENTORY             29 PASS | 1 FAIL
System                    6 PASS | 3 FAIL
----------------------------------------------------------------------
AUTOMATED EXECUTION - Wind River
----------------------------------------------------------------------
Overall Results:
Pass = 619
Fail = 127
Total executed = 746
Pass Rate = 83.0%
Formula used: Pass Rate = pass * 100 / (pass + fail)
Results per Domain:
Horizon       46 PASS | 4 FAIL
MTC General   27 PASS | 16 FAIL
Networking    30 PASS | 8 FAIL
Nova         171 PASS | 78 FAIL
REST API     220 PASS | 4 FAIL
Security      49 PASS | 4 FAIL
Storage       59 PASS | 10 FAIL
Sysinv        17 PASS | 3 FAIL
-------------------------------------------------
Bugs:
user does not login within configured time(60s) login is aborted
https://bugs.launchpad.net/starlingx/+bug/1833469
After pull data cable on the compute, no alarm has triggered
https://bugs.launchpad.net/starlingx/+bug/1834512
Containers: lock_host failed on a host with config_drive VM
https://bugs.launchpad.net/starlingx/+bug/1821026
200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot
https://bugs.launchpad.net/starlingx/+bug/1832047
3 instances launched in soft anti-affinity server group but unexpectedly ignored the 3rd host
https://bugs.launchpad.net/starlingx/+bug/1834255
stx-openstack apply takes longer time when lock and unlock on standby controller
https://bugs.launchpad.net/starlingx/+bug/1834083
Port list was not showing for some computes during install
https://bugs.launchpad.net/starlingx/+bug/1834245 neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute https://bugs.launchpad.net/starlingx/+bug/1835807 When creating instance with pci-passthrough port getting error https://bugs.launchpad.net/starlingx/+bug/1836682 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 VM fail to live migrate after evacuation https://bugs.launchpad.net/starlingx/+bug/1836402 application apply fails after compute lock and unlock https://bugs.launchpad.net/starlingx/+bug/1836609 CirrOS VM login takes too much time, and throw different log errors https://bugs.launchpad.net/starlingx/+bug/1835575 Live Migration Error: Failed to live migrate instance to host "AUTO_SCHEDULE". https://bugs.launchpad.net/starlingx/+bug/1837256 403 error in horizon log when try to update the flavor metadata (and admin user is logged out) https://bugs.launchpad.net/starlingx/+bug/1821213 Create Volume dialog opens (from image panel in Horizon) but getting error default volume type can not be found https://bugs.launchpad.net/starlingx/+bug/1826259 instance creating via horizon failed https://bugs.launchpad.net/starlingx/+bug/1829925 Containers: openstack pods failed after force rebooting active controller https://bugs.launchpad.net/starlingx/+bug/1816842 After active controller reboot, VM boot up failed with error "Failed to allocate the network(s), not rescheduling" https://bugs.launchpad.net/starlingx/+bug/1836928 nova instance remnant left behind after cold migration completes https://bugs.launchpad.net/starlingx/+bug/1824858 disk_available_least value updates when instance moved but not to the value expected https://bugs.launchpad.net/nova/+bug/1834527 Containers: vm unreachable for minutes after live migration or vm reboot https://bugs.launchpad.net/starlingx/+bug/1818118 100.114 NTP alarm not cleared after swact https://bugs.launchpad.net/starlingx/+bug/1834071 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 after changing a setting of panko stx-openstack failed to reach 'applied' status after 1800 seconds https://bugs.launchpad.net/starlingx/+bug/1828056 AIO-DX Application apply aborted Unexpected process termination while application-apply was in progress https://bugs.launchpad.net/starlingx/+bug/1838101 Uncontrolled swact on standard system is slow https://bugs.launchpad.net/starlingx/+bug/1838411 tenant-mgmt-net not reachable from external network https://bugs.launchpad.net/starlingx/+bug/1836252 VM filesystem is not RW when attached the 2nd volume https://bugs.launchpad.net/starlingx/+bug/1838546 dedicated instance on low latency worker node not appearing in C1 state https://bugs.launchpad.net/starlingx/+bug/1838524 Intermittently the openstack server show indicates that the server does not exist (in live migration tests) https://bugs.launchpad.net/starlingx/+bug/1838676 host-unlock compute node rejected: Total allocated memory exceeds the total memory https://bugs.launchpad.net/starlingx/+bug/1837749 Resize to swapless flavor still looking for swap https://bugs.launchpad.net/nova/+bug/1762423 SSH to VM failed by Permission denied (publickey) https://bugs.launchpad.net/starlingx/+bug/1824174 ----------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit?pli=1#gid=322455033 Regards! 
Maria G
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From tingjie.chen at intel.com  Fri Aug  2 11:09:36 2019
From: tingjie.chen at intel.com (Chen, Tingjie)
Date: Fri, 2 Aug 2019 11:09:36 +0000
Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack distro meeting
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F360033F2@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F360033F2@SHSMSX104.ccr.corp.intel.com>
Message-ID: 

Hello,

About the opens in the Ceph containerization proposal, comments as follows:

3. Ceph containerization plan review (Tingjie)
- story: https://storyboard.openstack.org/#!/story/2005527
Tingjie presented the design and his plan for 4 milestones. Two more opens:
- if we do not cut over to containerized Ceph in 3.0, how do we get some of the feature in without breaking the current Ceph functionality?
[Tingjie] If we do not cut over in stx.3.0, the Ceph version is still Mimic 13.2.2, and modules that depend on Ceph may have issues, some from outside Ceph and some internal to Ceph. We can backport patches, or investigate whether they need to be resolved in Ceph Mimic, and maintain the patch list; the external interface of Ceph always stays stable, so there is no big gap in functionality. For the development mode, we can maintain several big Gerrit patches and update them periodically; several projects are impacted, but not many patches are needed.
- Brent has a review comment in Tingjie's spec regarding SW upgrade (backward compatibility for containerized Ceph and Mimic).
[Tingjie] Currently the containerized Ceph version is Mimic 13.2.2, the same as the native deployment in stx.2.0. I have updated the spec (https://review.opendev.org/#/c/656371/) to address the SW upgrade concern in the Upgrade impact section. Upgrading from the current implementation to Rook is a big gap since the deployment model changes completely, and I list checkpoints for that.
AR: Tingjie will continue working on the spec and design to address the concern. The plan will be refined further.

Thanks,
Tingjie

-----Original Message-----
From: Xie, Cindy [mailto:cindy.xie at intel.com]
Sent: Wednesday, July 31, 2019 10:07 PM
To: Wold, Saul ; 'starlingx-discuss at lists.starlingx.io' ; 'Rowsell, Brent'
Subject: Re: [Starlingx-discuss] Weekly StarlingX non-OpenStack distro meeting

Agenda & Notes for 7/31 meeting:

1. Bug triage & review, re-prioritize LPs (Cindy/Saul/Brent)
- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage
Suggestion from Tingjie: to collect log files from the test system. Standard process required: test report template or must-have logs?
AR: Tingjie to send a list of "incomplete" LPs with info missing to Bill. 4 LPs are currently under the storage domain.
- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other
Team reviewed LPs in both domains; priority has been updated live in Launchpad.

2. Sanity test status for kernel minor upgrade (Shuai)
Deployment testing in Dalian ODC; passed on both RT and STD kernels. Sanity auto testing in the Shanghai lab: AIO-SX VE sanity auto-test failed but running it manually passes; debugging the scripts. AIO-DX sanity auto-testing failed; a test script issue is suspected. Multi-node auto sanity WIP.

3. Ceph containerization plan review (Tingjie)
- story: https://storyboard.openstack.org/#!/story/2005527
Tingjie presented the design and his plan for 4 milestones.
Two more opens:
- if we do not cut over to containerized Ceph in 3.0, how do we get some of the feature in without breaking the current Ceph functionality?
- Brent has a review comment in Tingjie's spec regarding SW upgrade (backward compatibility for containerized Ceph and Mimic).
AR: Tingjie will continue working on the spec and design to address the concern. The plan will be refined further.

4. Opens (all)
- None

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; Wold, Saul; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; 'zhaos'
Cc: 'Seiler, Glenn'; Hu, Wei W; Peng Tan; Gomez, Juan P; 'Waines, Greg'; 'Eslimi, Dariush'; Jones, Bruce E; 'Zhi Zhi2 Chang'; Chen, Tingjie; 'Badea, Daniel'; 'Chen, Jacky'; 'Komiyama, Takeo'; Armstrong, Robert H; 'Carlos Cebrian'; Cobbley, David A
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, July 31, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From zhang.kunpeng at 99cloud.net  Fri Aug  2 13:47:18 2019
From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=)
Date: Fri, 2 Aug 2019 21:47:18 +0800
Subject: [Starlingx-discuss] [starlingx-discuss][pci-passthrough] GPU passthrough failed
Message-ID: 

Hi all,

There are some errors when launching a VM with NVIDIA GPU pci-passthrough in STX 1.0. Has anyone hit this problem?
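The key failure in the log below seems to be the vfio error "failed to setup container for group 23: failed to set iommu for container: Operation not permitted". Is there anything host-side I should check first? The usual vfio/IOMMU checks I know of are roughly these (a generic sketch, not StarlingX-specific; the PCI address comes from the log):

    $ dmesg | grep -i -e DMAR -e IOMMU        # confirm the IOMMU is detected and enabled
    $ cat /proc/cmdline                       # expect intel_iommu=on on the kernel command line
    $ ls /sys/bus/pci/devices/0000:04:00.0/iommu_group/devices/
                                              # every device in the GPU's group must be bound to vfio-pci

The full error from nova follows: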
2019-08-02 17:17:09.916 52231 ERROR nova.scheduler.utils [req-aeb6ab2f-5c3e-4ca1-b4dd-45e47b3fc06c b9b0f2dcd0fc4bdab8239c5aaeb24c39 2f6c58b2e2e942b58acbedb73a5cb474 - default default] [instance: a40ef34a-0c47-4a6b-aa4b-e54c02dc61e4] Error from last host: controller-0 (node controller-0): [u'Traceback (most recent call last):\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 1997, in _do_build_and_run_instance\n filter_properties)\n', u' File "/usr/lib/python2.7/site-packages/nova/compute/manager.py", line 2244, in _build_and_run_instance\n instance_uuid=instance.uuid, reason=six.text_type(e))\n', u"RescheduledException: Build of instance a40ef34a-0c47-4a6b-aa4b-e54c02dc61e4 was re-scheduled: internal error: process exited while connecting to monitor: 2019-08-02T17:17:01.845173Z qemu-kvm: -chardev socket,id=charnet0,path=/var/run/openvswitch/vhu43f09376-ca,server: info: QEMU waiting for connection on: disconnected:unix:/var/run/openvswitch/vhu43f09376-ca,server\nwarning: host doesn't support requested feature: CPUID.80000001H:EDX.fxsr-opt [bit 25]\nwarning: host doesn't support requested feature: CPUID.80000001H:EDX.fxsr-opt [bit 25]\n2019-08-02T17:17:03.344966Z qemu-kvm: -device vfio-pci,host=04:00.0,id=hostdev0,bus=pci.0,addr=0x5: vfio error: 0000:04:00.0: failed to setup container for group 23: failed to set iommu for container: Operation not permitted\n"] Thanks Kunpeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Fri Aug 2 13:57:23 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Fri, 2 Aug 2019 13:57:23 +0000 Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 In-Reply-To: References: Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF@ALA-MBD.corp.ad.wrs.com> Hi Folks, On the release call yesterday, we agreed to have a short follow-up meeting today to close on the steps for the release branch. I believe we agreed on 6pm UTC (11am PST, 2pm EDT): https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190802T1800. I'm going to assume we can use the usual Zoom bridge: https://zoom.us/j/342730236. Scott - if you can provide a brief summary here of what you think the steps are beforehand, that'd be great. Bill... -----Original Message----- From: Scott Little Sent: Thursday, August 1, 2019 4:10 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 StarlingX is preparing for its 2.0 release.  Expected next week. Below is a discussion of how this is to be done, and a few the the decisions that need to be taken. The release team has taken a first pass at answering the outstanding questions. Our preferred options are indicated. General Requirements - Create a release candidate branch from master, preferably originating from the context of a green sanity.   - Branch creation might require a brief halt of commit activity on the master branch.  Stay tuned.   - more on branching below - RC branch recieves cherry-picked patches from master until a final compile is declared. - Set SW_VERSION, aka PLATFORM_RELEASE, for RC branch to 19.08 (Format YY.MM)   - Retain same SW_VERSION for dot releases - Set SW_VERSION, aka PLATFORM_RELEASE, for master branch to 19.09 - Any helm chart changes to pick up release images, not the master images?   - answer appears to be no.  
Helm charts will list the images we build - Create a CENGN job to build the RC branch.  Daily builds until final compile is declared.   - scripts are fairly generic.  In theory it's just need for a new master job to set customize parameters.   - A little bit of work to set docker images tags correctly vs branching startagy... more below     - Current default format would default to "r-${BRANCH}-centos-stable-${PUBLISH_TIMESTAMP}", but this can be changed.  See below   - A release branch will test some new code paths in cengn scripts.  Will have to monitor closely. - Make sure the build/image retirement scripts are doing the right thing.   - Already coded to support branching opt 2 (below)   - Support for opt 1 will require some new scripting in CENGN Branching startagy and content lock down   - Desired properties of the branch strategy.      - We can re-build the ISO release at a later date.        - The exact context of StarlingX git trees is captured in some form.        - Context of third party git repos is captured on a best effort basis.  e.g. capture tag or sha (an assume they are stable), but not cloning gits.        - Leverage the 'revision' field for all repos in the manifest.      - We can rebuild our docker images at a later date... I don't think we fully know how to do this yet.        - lock down the base centos image, and yum, if possible          - Need tooling changes for this          - Need to reference centos docker image by sha, not tag.          - Probably need to hack the yum configuration as well, point it to cengn.            - The build of stx-centos points to cengn repo for yum update. The loci build of images, however, also uses upstream sources. Otherwise, we would need to include all RPMs used for those images in the LST files        - lock down our inputs from PYPI as best we can          - Find all files named *stable_docker_image, field 'PIP_PACKAGES=' needs to use syntax like ... e.g. panko==5.0.0          - inputs can be found in piplst files   e.g. $MY_WORKSPACE/std/build-images/tis-networking-avs-heat-centos-stable.piplst          - All python modules (non-starlingx) installed would need to be in the base wheels.cfg, which also updates the upper-constraints.txt in the tarball to restrict the installed version.        - Lock down rpms feeding into docker images if possible.          - Find all files named *stable_docker_image, field 'DIST_PACKAGES=' needs to use syntax like ... e.g. bash-4.2.46-31.el7.tis.4.x86_64          - inputs can be found in rpmlis files   e.g. $MY_WORKSPACE/std/build-images/tis-networking-avs-heat-centos-stable.rpmlst          - ALL 'tis' packages found in rpmlst must be listed in DIST_PACKAGES   - We don't have the power to branch and tag all repos.     - Some of the work needs to be done by further locking down the manifest on specific tags/shas     - Do we store the locked down manifest as tagged copy of default.xml, or use versioned file names xml?   - What is the basic format of our branchs and tags     - YYYY.MM   i.e. date like   as use by to 2018.10 release     - 2.0       i.e. a release version     - Current CENGN scripting uses the date format, release format requires scripting changes, but nothing major.     - I'm always eager that a release branch be clearly distinguished from a dev/feature branch.  Currently CENGN looks for YYYY.MM.  Is it safe enough to look for anything starting with a number?     
- Release Team recommends the release version format be as follows       - branch:  r/stx.2.0       - tag:     v2.0.0   - Opt 1 - Single RC branch.  Tags mark dot releases -- preferred by Release Team     - A single branch is used to stage commits for both initial release and all subsequent dot releases     - Branch name is r/stx.2.0 ... applies to starlingx repos.     - Tags for each dot release.       - v2.0.## ... for starlingx repos       - v.stx.2.0.## ... for starlingx-staging repos.  Note: starlingx-staging may have inherited version tags from an upstream project that we must not collide with     - Git lock down via creation of a uniquely named manifest (v2.0.##.xml) rather than default.xml.  In this manifest we specify tags or shas for each git.     - We may need to halt commits to the staging branch, or at least the manifest git, when a dot release is imminent and we are waiting on test results.     - New scripting required on CENGN for load and docker image retirement   - Opt 2 - Single RC branch. Fork a branch to lock down a dot releases     - A single branch is used to stage commits for both initial release and all subsequent dot releases     - Branch name of staging branch is rc/stx.2.0 ... applies to starlingx repos.     - When a dot release is declared, fork a release branch from the staging branch (r/stx.2.0.##).  Only commits permitted are to lock down the manifest.     - Can still tag as in opt 1, but not required.   - Opt 3 - new RC branch for each dot release (waterfall)     - Branch name is r/stx.2.0.0 ... applies to starlingx     - Final commit is to lock down the git manifest (default.xml)     - Next dot release forks from the prior dot release, using the commit prior to manifest lock down.     - Can still tag as in opt 1, but not required. Docker image labeling:   - A new set of docker images for each dot release   - Probably don't need to distinguish release from release candidate as this is hidden within the helm charts.   - Probably don't need to distinguish dot releases.  Again it is hidden by the helm charts.   - Docker image tagging options ...     r-2.0-centos-stable.0     2.0-centos-stable.0     r-2.0-centos-stable-${PUBLISH_TIMESTAMP}.0     2.0-centos-stable-${PUBLISH_TIMESTAMP}.0    <--- preferred by Release Team ?     r-2.0.##-centos-stable.0     2.0.##-centos-stable.0     r-2.0.##-centos-stable-${PUBLISH_TIMESTAMP}.0     2.0.##-centos-stable-${PUBLISH_TIMESTAMP}.0 Cengn publication path:   - Release path     .../starlingx/release/2.0.##     .../starlingx/release/2.0/2.0.##     <-- Preferred by Scott   - RC path     .../starlingx/rc/2.0/timestamp       <-- Preferred by Scott for opt 1     .../starlingx/rc/2.0.##/timestamp     .../starlingx/rc/2.0/2.0.##/timestamp _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Fri Aug 2 14:09:27 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Fri, 2 Aug 2019 16:09:27 +0200 Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF@ALA-MBD.corp.ad.wrs.com> Message-ID: <33421600-5F04-4CF9-BF52-94757F9AD6E7@gmail.com> Hi, The regular Zoom bridge is available, feel free to use it. Thanks, Ildikó Sent from my iPhone > On 2019. 
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From ildiko.vancsa at gmail.com  Fri Aug  2 14:09:27 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Fri, 2 Aug 2019 16:09:27 +0200
Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF at ALA-MBD.corp.ad.wrs.com>
References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF at ALA-MBD.corp.ad.wrs.com>
Message-ID: <33421600-5F04-4CF9-BF52-94757F9AD6E7 at gmail.com>

Hi,

The regular Zoom bridge is available, feel free to use it.

Thanks,
Ildikó

Sent from my iPhone

> On 2019. Aug 2., at 15:57, Zvonar, Bill wrote:
>
> Hi Folks,
>
> On the release call yesterday, we agreed to have a short follow-up meeting today to close on the steps for the release branch.
>
> I believe we agreed on 6pm UTC (11am PST, 2pm EDT): https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190802T1800.
>
> I'm going to assume we can use the usual Zoom bridge: https://zoom.us/j/342730236.
>
> Scott - if you can provide a brief summary here of what you think the steps are beforehand, that'd be great.
>
> Bill...
>
> [...]

From gaosong_1250 at 163.com  Fri Aug  2 15:52:55 2019
From: gaosong_1250 at 163.com (gao.song)
Date: Fri, 2 Aug 2019 23:52:55 +0800 (CST)
Subject: [Starlingx-discuss] [STX 1.0] Seek for alternative way to install compute node
Message-ID: <21deb544.a196.16c5308b6ae.Coremail.gaosong_1250 at 163.com>

Hi community:
    Since we encounter some difficult problems using PXE to boot a compute node, it just gets a pxeboot subnet IP (169.254.202.xxx) instead of a management subnet IP (192.168.xx.xx).
    As a result the installation procedure will hang after loading the initrd file.
    So we wonder if there is another way to install a compute node with a standard mode controller, like installing the compute node from the ISO manually and then issuing configuration commands to enable just the compute sub-function.
    Any help will be appreciated!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dtroyer at gmail.com  Fri Aug  2 17:18:40 2019
From: dtroyer at gmail.com (Dean Troyer)
Date: Fri, 2 Aug 2019 12:18:40 -0500
Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF at ALA-MBD.corp.ad.wrs.com>
References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF at ALA-MBD.corp.ad.wrs.com>
Message-ID: 

On Fri, Aug 2, 2019 at 8:58 AM Zvonar, Bill wrote:
> Scott - if you can provide a brief summary here of what you think the steps are beforehand, that'd be great.

I ran a dry-run of the branching process this morning using the following:

Branch: r/stx.2.0
Tag: v2.0.0.rc0

Tagging the branch point makes it easier later to pull a list of changes for the next RC or the release tag...

I did find a couple of tweaks required in the branch-stx.sh script to account for the OpenDev change and only branching the Gerrit repos: https://review.opendev.org/#/c/674342/

The wiki pages [0] and [1] have been updated to match my current understanding (above) of the release naming and process.
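As a rough sketch of what the dry-run amounts to per repo (branch-stx.sh automates this across all the Gerrit repos; the remote name and the placeholder sha below are assumptions):

  git checkout -b r/stx.2.0 <sha-of-branch-point>
  git tag v2.0.0.rc0 <sha-of-branch-point>
  git push gerrit r/stx.2.0 v2.0.0.rc0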
dt

[0] https://wiki.openstack.org/wiki/StarlingX/Release_Plan
[1] https://wiki.openstack.org/wiki/StarlingX/Release_Process

--
Dean Troyer
dtroyer at gmail.com

From Bill.Zvonar at windriver.com  Fri Aug  2 17:36:23 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Fri, 2 Aug 2019 17:36:23 +0000
Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
In-Reply-To: 
References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF at ALA-MBD.corp.ad.wrs.com>
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ABF47C at ALA-MBD.corp.ad.wrs.com>

Proposed next steps (to be discussed in the meeting today)...

1. Release Team announces a freeze on master for TBD time (see Note)
2. Scott/Dean create RC1 branch and make required build changes.
3. Scott triggers RC1 build, both ISO and Docker images.
4. Ada's team runs sanity and confirms the RC1 build passes sanity.
5. Release Team announces that the RC1 branch is now available.

Note: Developers push changes to master and for High priority LPs also cherry pick their commit to the RC1 branch.

-----Original Message-----
From: Dean Troyer
Sent: Friday, August 2, 2019 1:19 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0

[...]

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From sgw at linux.intel.com  Fri Aug  2 18:06:13 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Fri, 2 Aug 2019 11:06:13 -0700
Subject: [Starlingx-discuss] [build][multi-os]Proposal for new starlingx repo - stx-zuul-jobs
Message-ID: <07d6c5af-dd52-96b4-aba9-12b04f56247f at linux.intel.com>

Folks,

As I am doing the work for multi-os, I have created some Zuul ansible playbooks and some new roles to go along with that. We have also enabled (or will enable) some additional zuul jobs that use various scripts. These are currently being tested in the starlingx/fault repo, but need to be in a more generic location. In the past we have been putting scripts into stx-integ, which is not the most ideal location.

I see that the "openstack" repo name space has an openstack-zuul-jobs, I think this is because zuul itself has a zuul-jobs which might conflict in the zuul namespace (this is just a guess).

StarlingX would need a stx-zuul-jobs and it would contain new StarlingX specific Zuul playbooks and associated roles, along with other tools/scripts to support these playbooks and other StarlingX Zuul jobs. For example the recently added spec-tools in stx-integ could be moved to this new repo.
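A hypothetical example of how a consuming repo's Zuul config could pull roles from such a repo (the job, playbook, and role names here are invented; only the proposed repo name comes from this thread):

  cat > zuul.d/jobs.yaml <<'EOF'
  - job:
      name: stx-multios-build
      parent: base
      run: playbooks/multios-build.yaml
      roles:
        - zuul: starlingx/zuul-jobs
  EOF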
I have heard that the Infra team might be looking at some restructuring, so that might play a role in the direction or creation of the repo. Maybe somebody from infra will pipe in here, or I will ping later.

Thoughts?

Thanks
Sau!

From fungi at yuggoth.org  Fri Aug  2 18:15:30 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Fri, 2 Aug 2019 18:15:30 +0000
Subject: [Starlingx-discuss] [build][multi-os]Proposal for new starlingx repo - stx-zuul-jobs
In-Reply-To: <07d6c5af-dd52-96b4-aba9-12b04f56247f at linux.intel.com>
References: <07d6c5af-dd52-96b4-aba9-12b04f56247f at linux.intel.com>
Message-ID: <20190802181530.choblqwe3zip2mqp at yuggoth.org>

On 2019-08-02 11:06:13 -0700 (-0700), Saul Wold wrote:
[...]
> I see that the "openstack" repo name space has an
> openstack-zuul-jobs, I think this is because zuul itself has a
> zuul-jobs which might conflict in the zuul namespace (this is just
> a guess).
[...]

Close. At one time those repositories were called openstack-infra/zuul-jobs and openstack-infra/openstack-zuul-jobs (the former was used for generic Zuul job building blocks while the latter housed reusable OpenStack-specific test bits). Over time those repositories moved into the zuul and openstack namespaces respectively. Two repositories having the same short name in different namespaces is perfectly fine; for example, we have opendev/project-config and openstack/project-config repos.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL: 

From Bill.Zvonar at windriver.com  Fri Aug  2 18:55:46 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Fri, 2 Aug 2019 18:55:46 +0000
Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF at ALA-MBD.corp.ad.wrs.com>
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ABF516 at ALA-MBD.corp.ad.wrs.com>

From the meeting just now, Scott & Dean agreed that they can base the branch off of a SHA, so there is no need for a freeze.

They will finish some script changes for this (thanks guys) and, assuming Monday's sanity is green, will start creating the branch on Tuesday.

After the sanity on the RC1 branch is done, we'll announce that the branch is ready to use.

More details here and at [0] on the modified sequence...

1. Start (no freeze required since we're branching from a SHA, not from Head)
   - on Tuesday - Dean will start at ~9:30 his time (CDT) (10:30 EDT)
   - assuming sanity is Green - Dean will branch from the SHA for that sanity's build
2. Scott/Dean create RC1 branch and make required build changes.
   - Dean will make sure he's able to do the SHA thing
   - this will be based on Monday's sanity, which will be based on the commits up to Sunday evening
     - i.e. UTC 0130 am Monday https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190805T0130
3. Scott triggers RC1 build, both ISO and Docker images.
   - Scott has a few script changes to do, will work to knock those off today
   - we agreed on these build paths
     - Release path
       .../starlingx/release/2.0.##/centos
       .../starlingx/release/2.0/2.0.##/centos
     - RC path
       .../starlingx/rc/2.0/centos/timestamp
4. Ada's team runs sanity and confirms the RC1 build passes sanity.
5. Release Team announces that the RC1 branch is now available.

Note: Developers push changes to master and for Medium & High priority LPs also cherry pick their commit to the RC1 branch.
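A minimal sketch of the cherry-pick flow described in the Note, for a single repo (the branch name comes from this thread; the local branch name and the placeholder sha are hypothetical, and this assumes the usual Gerrit workflow via git-review):

  git fetch origin
  git checkout -b rc1-fix origin/r/stx.2.0
  git cherry-pick -x <master-commit-sha>
  git review r/stx.2.0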
[0] https://etherpad.openstack.org/p/stx-releases

-----Original Message-----
From: Zvonar, Bill
Sent: Friday, August 2, 2019 1:36 PM
To: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0

[...]

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From David.Sullivan at windriver.com  Fri Aug  2 19:37:09 2019
From: David.Sullivan at windriver.com (Sullivan, David)
Date: Fri, 2 Aug 2019 19:37:09 +0000
Subject: [Starlingx-discuss] [DOCS] Ability to specify custom certificates for the kubernetes root CA
Message-ID: 

As part of this Launchpad there are changes that can impact documentation. Let me know if you require further details.

Thanks,

David

---

Ability to specify custom certificates for the kubernetes root CA

This change allows the administrator to specify the certificate and key for the kubernetes root CA. Additionally the administrator can also provide values to add to the kubernetes apiserver certificate Subject Alternative Name list.

IMPORTANT: The default length for the generated kubernetes root CA certificate is 10 years. Replacing the root CA certificate is an involved process, so the custom certificate expiry should be as long as possible. We recommend ensuring the root CA certificate has an expiry of at least 5-10 years.

These changes must be made via the ansible bootstrap playbook. Three new optional parameters are provided:

k8s_root_ca_cert
k8s_root_ca_key
apiserver_cert_sans

k8s_root_ca_cert and k8s_root_ca_key must specify the full path to the certificate and key respectively. The root CA certificate must be in PEM format. These two values must be provided as a pair. The playbook will not proceed if only one value is provided.
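A sketch of how these could be supplied as bootstrap overrides. The three parameter names are the ones introduced above; the override file path and the certificate/key paths are assumptions for illustration, and both cert and key are given together since the playbook requires them as a pair:

  cat >> /home/sysadmin/localhost.yml <<'EOF'
  k8s_root_ca_cert: /home/sysadmin/k8s-root-ca.crt
  k8s_root_ca_key: /home/sysadmin/k8s-root-ca.key
  apiserver_cert_sans:
    - hostname.domain
    - 198.51.100.75
  EOF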
The apiserver_cert_sans parameter can be used to specify a list of Subject Alternative Name entries that will be added to the kubernetes apiserver certificate. Each entry in the list must be an IP address or domain name, e.g.

apiserver_cert_sans:
- hostname.domain
- 198.51.100.75

Reviews:
https://review.opendev.org/#/c/671561
https://review.opendev.org/#/c/671559

Launchpad:
https://bugs.launchpad.net/starlingx/+bug/1837079

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From michael.l.tullis at intel.com  Fri Aug  2 19:41:25 2019
From: michael.l.tullis at intel.com (Tullis, Michael L)
Date: Fri, 2 Aug 2019 19:41:25 +0000
Subject: [Starlingx-discuss] [DOCS] Ability to specify custom certificates for the kubernetes root CA
In-Reply-To: 
References: 
Message-ID: <3808363B39586544A6839C76CF81445EA1B9F163 at ORSMSX104.amr.corp.intel.com>

Thanks David. We’ll discuss this in our upcoming docs meeting and will get back to you if we have further questions.

-- Mike

From: Sullivan, David
Sent: Friday, August 2, 2019 1:37 PM
To: starlingx-discuss at lists.starlingx.io; Tullis, Michael L
Subject: [DOCS] Ability to specify custom certificates for the kubernetes root CA

[...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From sgw at linux.intel.com  Fri Aug  2 20:45:36 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Fri, 2 Aug 2019 13:45:36 -0700
Subject: [Starlingx-discuss] [build][multi-os]Proposal for new starlingx repo - stx-zuul-jobs
In-Reply-To: <20190802181530.choblqwe3zip2mqp at yuggoth.org>
References: <07d6c5af-dd52-96b4-aba9-12b04f56247f at linux.intel.com> <20190802181530.choblqwe3zip2mqp at yuggoth.org>
Message-ID: 

Jeremy:

On 8/2/19 11:15 AM, Jeremy Stanley wrote:
> On 2019-08-02 11:06:13 -0700 (-0700), Saul Wold wrote:
> [...]
>> I see that the "openstack" repo name space has an >> openstack-zuul-jobs, I think this is because zuul itself has a >> zuul-jobs which might conflict in the zuul namespace (this is just >> a guess). > [...] > > Close. At one time those repositories were called > openstack-infra/zuul-jobs and openstack-infra/openstack-zuul-jobs > (the former was used for generic Zuul job building blocks while the > latter housed reusable OpenStack-specific test bits). Over time > those repositories moved into the zuul and openstack namespaces > respectively. Two repositories having the same short name in > different namespaces is perfectly fine; for example, we have > opendev/project-config and openstack/project-config repos. > Thanks for the clarification. Then my proposed new repo name can be: starlingx/zuul-jobs Sau! > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From fungi at yuggoth.org Fri Aug 2 21:47:03 2019 From: fungi at yuggoth.org (Jeremy Stanley) Date: Fri, 2 Aug 2019 21:47:03 +0000 Subject: [Starlingx-discuss] [build][multi-os]Proposal for new starlingx repo - stx-zuul-jobs In-Reply-To: References: <07d6c5af-dd52-96b4-aba9-12b04f56247f@linux.intel.com> <20190802181530.choblqwe3zip2mqp@yuggoth.org> Message-ID: <20190802214702.k4exw3cit2hxpuxq@yuggoth.org> On 2019-08-02 13:45:36 -0700 (-0700), Saul Wold wrote: [...] > > Two repositories having the same short name in different > > namespaces is perfectly fine; for example, we have > > opendev/project-config and openstack/project-config repos. > > > Thanks for the clarification. > > Then my proposed new repo name can be: starlingx/zuul-jobs Yes, that should "just work." 
-- Jeremy Stanley From maria.g.perez.ibarra at intel.com Fri Aug 2 22:18:35 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 2 Aug 2019 22:18:35 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190802 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-02 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Sat Aug 3 01:31:31 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 2 Aug 2019 21:31:31 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_repo_sync - Build # 403 - Failure! Message-ID: <1045877781.27.1564795892271.JavaMail.javamailuser@localhost> Project: STX_repo_sync Build #: 403 Status: Failure Timestamp: 20190803T013007Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190803T013000Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: r/stx.2.0 PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190803T013000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190803T013000Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0 From build.starlingx at gmail.com Sat Aug 3 01:31:34 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Fri, 2 Aug 2019 21:31:34 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 1 - Failure! 
Message-ID: <1010961795.30.1564795895376.JavaMail.javamailuser at localhost>

Project: STX_BUILD_2.0
Build #: 1
Status: Failure
Timestamp: 20190803T013000Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190803T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From austin.sun at intel.com  Sat Aug  3 01:59:29 2019
From: austin.sun at intel.com (Sun, Austin)
Date: Sat, 3 Aug 2019 01:59:29 +0000
Subject: [Starlingx-discuss] [STX 1.0] Seek for alternative way to install compute node
In-Reply-To: <21deb544.a196.16c5308b6ae.Coremail.gaosong_1250 at 163.com>
References: <21deb544.a196.16c5308b6ae.Coremail.gaosong_1250 at 163.com>
Message-ID: 

Hi Gao Song:
Your issue might be the same as the one we met before [1]. We just changed the mgmt. subnet to 192.178.xx.xx to solve this issue. Could you try this?

[1] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-May/004485.html

Thanks.
BR
Austin Sun.

From: gao.song [mailto:gaosong_1250 at 163.com]
Sent: Friday, August 2, 2019 11:53 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [STX 1.0] Seek for alternative way to install compute node

[...]

From gaosong_1250 at 163.com  Sat Aug  3 03:07:59 2019
From: gaosong_1250 at 163.com (gao.song)
Date: Sat, 3 Aug 2019 11:08:59 +0800 (CST)
Subject: [Starlingx-discuss] [STX 1.0] Seek for alternative way to install compute node
In-Reply-To: 
References: 
Message-ID: <1f826d52.279f.16c5572c013.Coremail.gaosong_1250 at 163.com>

Hi Sun:
It seems similar; there is a tiny difference, but your suggestion is worth a shot.
As we can see from /var/log/daemon.log, the compute node was assigned an IP that does not belong to the mgmt subnet, while your second node got the right IP.

2019-07-31T16:12:47.000 controller-0 dnsmasq-dhcp[3416]: info DHCPDISCOVER(eno3) 6c:92:bf:2a:43:1e
2019-07-31T16:12:47.000 controller-0 dnsmasq-dhcp[3416]: info DHCPOFFER(eno3) 169.254.202.216 6c:92:bf:2a:43:1e
2019-07-31T16:12:48.000 controller-0 dnsmasq-dhcp[3416]: info DHCPREQUEST(eno3) 169.254.202.216 6c:92:bf:2a:43:1e
2019-07-31T16:12:48.000 controller-0 dnsmasq-dhcp[3416]: info DHCPACK(eno3) 169.254.202.216 6c:92:bf:2a:43:1e
2019-07-31T16:12:48.000 controller-0 dnsmasq-tftp[3416]: err error 0 TFTP Aborted received from 169.254.202.216
2019-07-31T16:12:48.000 controller-0 dnsmasq-tftp[3416]: info failed sending /pxeboot/pxelinux.0 to 169.254.202.216

Thanks!
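One quick way to narrow this down is to watch the DHCP and TFTP exchange on the controller's management interface while the compute node PXE boots (eno3 is taken from the log above; adjust for your interface):

  # watch DHCP (ports 67/68) and TFTP (port 69) during the PXE attempt
  sudo tcpdump -enli eno3 port 67 or port 68 or port 69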
At 2019-08-03 09:59:29, "Sun, Austin" wrote:
> [...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Akshay.346 at hsc.com  Thu Aug  1 13:49:02 2019
From: Akshay.346 at hsc.com (Akshay 346)
Date: Thu, 1 Aug 2019 13:49:02 +0000
Subject: [Starlingx-discuss] SFC support in starlingX
Message-ID: 

Hello Team,

Hope you all are doing good.

I am trying to test whether SFC works in StarlingX or not. I made a flow classifier. Now when I am creating a simple port pair of a launched instance, it fails with the following error:

"InternalServerError: create_port_pair_postcommit failed."

I want to confirm whether SFC works in StarlingX or not.

Please help.

Best Regards,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.jpg
Type: image/jpeg
Size: 3428 bytes
Desc: image001.jpg
URL: 
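For reference, a typical networking-sfc port pair setup looks like the sequence below, assuming the sfc and flow-classifier service plugins (and their drivers) are enabled in neutron; the port and resource names are hypothetical:

  # vm1-p0/vm1-p1 are existing neutron ports on the launched instance
  openstack sfc port pair create --ingress vm1-p0 --egress vm1-p1 pp1
  openstack sfc port pair group create --port-pair pp1 ppg1
  openstack sfc port chain create --port-pair-group ppg1 --flow-classifier fc1 pc1

When create_port_pair_postcommit fails, the neutron-server log usually shows which backend driver rejected the call.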
From build.starlingx at gmail.com  Sun Aug  4 01:31:38 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Sat, 3 Aug 2019 21:31:38 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_repo_sync - Build # 405 - Failure!
Message-ID: <33848709.38.1564882299339.JavaMail.javamailuser at localhost>

Project: STX_repo_sync
Build #: 405
Status: Failure
Timestamp: 20190804T013009Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190804T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BRANCH: r/stx.2.0
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190804T013000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190804T013000Z/logs
MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0

From build.starlingx at gmail.com  Sun Aug  4 01:31:41 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Sat, 3 Aug 2019 21:31:41 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 2 - Still Failing!
In-Reply-To: <1121267178.28.1564795893015.JavaMail.javamailuser at localhost>
References: <1121267178.28.1564795893015.JavaMail.javamailuser at localhost>
Message-ID: <1629200241.41.1564882302358.JavaMail.javamailuser at localhost>

Project: STX_BUILD_2.0
Build #: 2
Status: Still Failing
Timestamp: 20190804T013000Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190804T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From build.starlingx at gmail.com  Mon Aug  5 01:31:35 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Sun, 4 Aug 2019 21:31:35 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_repo_sync - Build # 408 - Failure!
Message-ID: <1035438178.49.1564968695991.JavaMail.javamailuser at localhost>

Project: STX_repo_sync
Build #: 408
Status: Failure
Timestamp: 20190805T013008Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190805T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BRANCH: r/stx.2.0
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190805T013000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190805T013000Z/logs
MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0

From build.starlingx at gmail.com  Mon Aug  5 01:31:38 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Sun, 4 Aug 2019 21:31:38 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 3 - Still Failing!
In-Reply-To: <329079875.39.1564882300086.JavaMail.javamailuser at localhost>
References: <329079875.39.1564882300086.JavaMail.javamailuser at localhost>
Message-ID: <1166027139.52.1564968699215.JavaMail.javamailuser at localhost>

Project: STX_BUILD_2.0
Build #: 3
Status: Still Failing
Timestamp: 20190805T013000Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190805T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From ezpeerchen at gmail.com  Mon Aug  5 02:44:47 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Mon, 5 Aug 2019 10:44:47 +0800
Subject: [Starlingx-discuss] How to change OAM IP? (STX R1.0)
In-Reply-To: 
References: 
Message-ID: 

Dear all,

Any suggestion about this issue?

Thanks

Ezpeer Chen wrote on Thu, Aug 1, 2019 at 4:37 PM:
> Dear all,
>
> I can't change the OAM IP on STX R1.0.
>
> Error Message:
> *Please configure a valid IP address in range.*
>
> =====================================================================
> [wrsroot at controller-0 ~(keystone_admin)]$ system oam-show
> +-----------------+--------------------------------------+
> | Property        | Value                                |
> +-----------------+--------------------------------------+
> | created_at      | 2019-07-25T11:24:15.619490+00:00     |
> | isystem_uuid    | 103e067c-16c0-43ff-91ea-bea261f33b8a |
> | oam_c0_ip       | 10.10.10.3                           |
> | oam_c1_ip       | 10.10.10.4                           |
> | oam_floating_ip | 10.10.10.2                           |
> | oam_gateway_ip  | 10.10.10.1                           |
> | oam_subnet      | 10.0.0.0/8                           |
> | updated_at      | None                                 |
> | uuid            | bd7e48fd-d29e-4ae6-be1f-a45a56250ede |
> +-----------------+--------------------------------------+
> [wrsroot at controller-0 ~(keystone_admin)]$ system oam-modify oam_subnet=10.0.0.0/8 oam_gateway_ip=10.72.72.1 oam_floating_ip=10.72.72.2 oam_c0_ip=10.72.72.3 oam_c1_ip=10.72.72.4 action=apply
> Invalid oam_floating_ip=10.72.72.2. Please configure a valid IP address in range
> [wrsroot at controller-0 ~(keystone_admin)]$
> =====================================================================
>
> Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ezpeerchen at gmail.com  Mon Aug  5 02:49:08 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Mon, 5 Aug 2019 10:49:08 +0800
Subject: [Starlingx-discuss] Some system commands not found (STX R1.0)
Message-ID: 

Dear all,

From stx.2018.10_Testplan_Instructions:
https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_Testplan_Instructions

Some system commands are not found, for example:

system config-list
system config-section-list
.
.
.

I can't use these commands.

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From gaosong.lc at inspur.com  Mon Aug  5 02:55:21 2019
From: gaosong.lc at inspur.com (Song Gao song (高松))
Date: Mon, 5 Aug 2019 02:55:21 +0000
Subject: [Starlingx-discuss] Reply: [resent via lists.starlingx.io] Re: How to change OAM IP? (STX R1.0)
In-Reply-To: <702613aea09a08542e8421f27d332078 at sslemail.net>
References: <702613aea09a08542e8421f27d332078 at sslemail.net>
Message-ID: <3df3e56f119446c5a51cda6d619bbbb1 at inspur.com>

“Invalid oam_floating_ip=10.72.72.2. Please configure a valid IP address in range” means you set a floating IP (10.72.72.2) that is not in the oam_subnet 10.0.0.0/8; you should modify the oam_subnet or change the floating IP.
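A concrete version of that suggestion, shown against the values from the session above (the /24 is only an assumption about the intended new OAM network):

  system oam-modify oam_subnet=10.72.72.0/24 oam_gateway_ip=10.72.72.1 \
      oam_floating_ip=10.72.72.2 oam_c0_ip=10.72.72.3 oam_c1_ip=10.72.72.4 action=apply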
From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: August 5, 2019 10:45
To: starlingx-discuss at lists.starlingx.io
Subject: [resent via lists.starlingx.io] Re: [Starlingx-discuss] How to change OAM IP? (STX R1.0)

Dear all,

Any suggestion about this issue?

Thanks

[...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: smime.p7s
Type: application/pkcs7-signature
Size: 3603 bytes
Desc: not available
URL: 

From ezpeerchen at gmail.com  Mon Aug  5 03:17:03 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Mon, 5 Aug 2019 11:17:03 +0800
Subject: [Starlingx-discuss] Re: [resent via lists.starlingx.io] Re: How to change OAM IP? (STX R1.0)
In-Reply-To: <3df3e56f119446c5a51cda6d619bbbb1 at inspur.com>
References: <702613aea09a08542e8421f27d332078 at sslemail.net> <3df3e56f119446c5a51cda6d619bbbb1 at inspur.com>
Message-ID: 

10.72.72.2 is in oam_subnet 10.0.0.0/8. Why can't I configure it?
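For what it's worth, that membership claim is easy to check with nothing but the python3 standard library:

  python3 -c "import ipaddress; print(ipaddress.ip_address('10.72.72.2') in ipaddress.ip_network('10.0.0.0/8'))"
  # prints: True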
Song Gao song (高松) wrote on Mon, Aug 5, 2019 at 10:55 AM:
> [...]

From vm.rod25 at gmail.com  Mon Aug  5 14:30:24 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Mon, 5 Aug 2019 09:30:24 -0500
Subject: [Starlingx-discuss] Multi OS meeting notes Aug 5
Message-ID: 

Multi-OS team meeting

This meeting will be rescheduled for next week due to the Canada holiday and Marcela being on vacation.

The topics for next week's meeting will be:

- Opens:
  - Talk about: Proposal for new StarlingX repo - stx-zuul-jobs from Saul
  - Review with Marcela the need of a branch for systemd / multi os changes
  - Presentation from the Yocto team about what they are doing and where they are. They are right now on the job of porting ~300 packages.
- OpenSUSE next steps
  - Top important next steps in order (some could be in parallel, like learning more about the flock services to develop testing):
    - Have all dependencies and configuration needed for each service in its spec file (clean)
    - Standardize all the services to systemd (Do not have tarballs on OBS, it's better to use a git branch for the patches. Use _service.)
    - Build the OpenSUSE kernel with functional patches for STX patches.
    - Make the kernel install automatically
    - Remove hardwired library versions from spec files
    - Keep learning about the services and develop per-service test cases (white box testing)
    - Do automatic testing
    - Do security rules
    - Automatic image generation (using kiwi for example)
- Topics we talked about today during the short (but effective) meeting:
  - We would like to start sharing more of the work both teams are doing, to help each other more. The Yocto team is doing some discovery on the need to update packages to the latest versions (like haproxy), and the OpenSUSE team might already have an analysis for this problem before starting the straightforward porting. Maybe the Yocto team is ahead on the installation problems with systemd, and together we can make things go faster.
  - Victor would like to see more patches on the mailing list (git send-email) so we can have proper discussions on the mailing list.
  - Victor will put on the wiki the link to the spreadsheet of patch analysis made by the Intel team months ago.

https://etherpad.openstack.org/p/stx-multios
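A minimal sketch of the patches-on-the-list workflow mentioned above (this assumes git send-email is already configured with sendemail.* settings; the list address placeholder is left for the sender to fill in):

  git format-patch -1                            # export the latest commit as a patch file
  git send-email --to=<list-address> 0001-*.patch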
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ada.cabrales at intel.com  Mon Aug  5 16:58:37 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Mon, 5 Aug 2019 16:58:37 +0000
Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007ABF516 at ALA-MBD.corp.ad.wrs.com>
References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF at ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007ABF516 at ALA-MBD.corp.ad.wrs.com>
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CE745CB at FMSMSX112.amr.corp.intel.com>

Nice to see this happening. One question: would the two ISOs (master, RC1) be available at similar times? Or how is the build process to be scheduled?

A.

> -----Original Message-----
> From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com]
> Sent: Friday, August 2, 2019 1:56 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion
> about StarlingX release 2.0
>
> [...]
From yan.chen at intel.com  Mon Aug  5 03:07:07 2019
From: yan.chen at intel.com (Chen, Yan)
Date: Mon, 5 Aug 2019 03:07:07 +0000
Subject: [Starlingx-discuss] How to run stx-test: automated-pytest-suite?
In-Reply-To: <19C65A6E92EA384D809B1772130CD7F86926D68A at ALA-MBD.corp.ad.wrs.com>
References: <72AD03D27224C74982BE13246D75B39739A3FADE at SHSMSX103.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F86926D68A at ALA-MBD.corp.ad.wrs.com>
Message-ID: <72AD03D27224C74982BE13246D75B39739A40B42 at SHSMSX103.ccr.corp.intel.com>

Thanks for the reply!

I tried to remove and delete the stx-openstack application, but natbox will still be configured if I don't remove the natbox config from my test.conf. And if I removed the natbox config, I got another error saying "Active controller ssh client is not set!"
And I tried to run cases with the following command: $ pytest -m platform_sanity --testcase-config=./test.conf testcases/ But I got the following error log at the setup step 1: [2019-08-01 07:44:23,466] 1477 INFO MainThread ssh.set_natbox_client:: NatBox localhost ssh client is set [2019-08-01 07:44:23,466] 1425 INFO MainThread ssh.get_natbox_client:: Getting NatBox Client... [2019-08-01 07:44:25,322] 845 INFO MainThread container_helper.is_stx_openstack_deployed:: ['applied'] [2019-08-01 07:44:25,322] 109 INFO MainThread setups.setup_keypair:: scp key file from controller to NATBox ***Failure at test setup: /home/ec/workspace/codebase/test/automated-pytest-suite/utils/clients/ssh.py:1549: utils.exceptions.ActiveControllerUnsetException: Active controller ssh client is not set! Please use ControllerClient.set_active_controller(ssh_client) to set an active controller client. ERROR I'm wondering if this NATBox configuration is a must and if I configured it correctly? Anyone can help on the test config file? Or is there anything I need to do before the test? Thanks a lot! Yan -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: TIS_AUTOMATION_natbox_issue.log Type: application/octet-stream Size: 105331 bytes Desc: TIS_AUTOMATION_natbox_issue.log URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: TIS_AUTOMATION_active_controller_error.log Type: application/octet-stream Size: 98579 bytes Desc: TIS_AUTOMATION_active_controller_error.log URL: From maria.g.perez.ibarra at intel.com Mon Aug 5 22:12:22 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 5 Aug 2019 22:12:22 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190805 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-05 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
From build.starlingx at gmail.com  Mon Aug  5 23:31:17 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Mon, 5 Aug 2019 19:31:17 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_repo_sync - Build # 409 - Still Failing!
In-Reply-To: <946833870.47.1564968692424.JavaMail.javamailuser at localhost>
References: <946833870.47.1564968692424.JavaMail.javamailuser at localhost>
Message-ID: <894537085.59.1565047878230.JavaMail.javamailuser at localhost>

Project: STX_repo_sync
Build #: 409
Status: Still Failing
Timestamp: 20190805T233007Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190805T233000Z/logs
--------------------------------------------------------------------------------
Parameters

BRANCH: r/stx.2.0
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190805T233000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190805T233000Z/logs
MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0

From build.starlingx at gmail.com  Mon Aug  5 23:31:20 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Mon, 5 Aug 2019 19:31:20 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 4 - Still Failing!
In-Reply-To: <2058452876.50.1564968696689.JavaMail.javamailuser at localhost>
References: <2058452876.50.1564968696689.JavaMail.javamailuser at localhost>
Message-ID: <1864685548.62.1565047881411.JavaMail.javamailuser at localhost>

Project: STX_BUILD_2.0
Build #: 4
Status: Still Failing
Timestamp: 20190805T233000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190805T233000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From yang.liu at windriver.com  Tue Aug  6 00:17:06 2019
From: yang.liu at windriver.com (Liu, Yang)
Date: Tue, 6 Aug 2019 00:17:06 +0000
Subject: [Starlingx-discuss] How to run stx-test: automated-pytest-suite?
In-Reply-To: <72AD03D27224C74982BE13246D75B39739A40B42 at SHSMSX103.ccr.corp.intel.com>
Message-ID: <19C65A6E92EA384D809B1772130CD7F86927BFFF at ALA-MBD.corp.ad.wrs.com>

Hi Yan,

You seem to have set the test config file correctly. I was able to reproduce the issue in my environment. Basically it's a bug in the test framework: the example given in consts/lab.py happened to have the same controller-0 IP as yours while the rest of the OAM IPs are different, so it got confused. To work around it, you need to comment out the example in class Labs in consts/lab.py as follows:

class Labs:
    # EXAMPLE = {
    #     'short_name': 'my_server',
    #     'name': 'my_server.com',
    #     'floating ip': '10.10.10.2',
    #     'controller-0 ip': '10.10.10.3',
    #     'controller-1 ip': '10.10.10.4',
    # }
    pass

I will upload a fix for this issue.

BR,
Yang
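For anyone hitting the same symptom, a quick hedged check for the collision Yang describes, assuming the suite is checked out locally (the directory name is whatever your clone uses):

    cd automated-pytest-suite
    # any hit that matches one of your own OAM addresses would trigger the confusion above
    grep -n "10.10.10" consts/lab.py

If an address from the EXAMPLE entry matches your deployment, comment the entry out as shown above.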
From: Chen, Yan [mailto:yan.chen at intel.com]
Sent: August-04-19 11:07 PM
To: Liu, Yang ; starlingx-discuss at lists.starlingx.io
Subject: RE: How to run stx-test: automated-pytest-suite?

Thanks for the reply!

I tried to remove and delete the stx-openstack application, but natbox will still be configured if I don't remove the natbox config from my test.conf. And if I remove the natbox config, I get another error saying "Active controller ssh client is not set!"

I'm wondering if I configured the test correctly. Is there any document for this test suite? I tried to follow the README under the stx-test repo, but I'm not sure it is up to date.

Yan

From: Liu, Yang [mailto:yang.liu at windriver.com]
Sent: Friday, August 2, 2019 01:22
To: Chen, Yan ; starlingx-discuss at lists.starlingx.io
Subject: RE: How to run stx-test: automated-pytest-suite?

Hi Yan,

Could you please attach the full log? The active controller client should have been set while pytest is collecting the test cases, before setup step 1 even starts. It should be under ~/AUTOMATION_LOGS///TIS_AUTOMATION.log

Also, assuming we figure out the above issue: it seems your goal is to run platform sanity, which does not depend on stx-openstack. At the moment the setup automatically detects stx-openstack, and if it's applied, it also expects the tenants, users, neutron routers and networks to have been created to prepare for the stx-openstack tests. To work around it, you can do one of two things for now:

1. Remove stx-openstack
2. In file /home/ec/workspace/codebase/test/automated-pytest-suite/setups.py, modify the following function so that it returns immediately:

   def setup_natbox_ssh(natbox, con_ssh):
       return None

To fix it properly, we can add a configurable option to the test config file to skip the setup even if stx-openstack is present.

BR,
yang

From: Chen, Yan [mailto:yan.chen at intel.com]
Sent: August-01-19 4:01 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] How to run stx-test: automated-pytest-suite?

Hi,

I'm trying to run the automated-pytest-suite cases under stx-test on a simplex deployment (on a VM created by QEMU/KVM). The controller-0 is already unlocked and stx-openstack applied successfully. I tried to make my own test.conf following the sample config file (stx-test_template.conf) as attached. And I tried to run cases with the following command:

$ pytest -m platform_sanity --testcase-config=./test.conf testcases/

But I got the following error log at setup step 1:

[2019-08-01 07:44:23,466] 1477 INFO MainThread ssh.set_natbox_client:: NatBox localhost ssh client is set
[2019-08-01 07:44:23,466] 1425 INFO MainThread ssh.get_natbox_client:: Getting NatBox Client...
[2019-08-01 07:44:25,322] 845 INFO MainThread container_helper.is_stx_openstack_deployed:: ['applied']
[2019-08-01 07:44:25,322] 109 INFO MainThread setups.setup_keypair:: scp key file from controller to NATBox
***Failure at test setup: /home/ec/workspace/codebase/test/automated-pytest-suite/utils/clients/ssh.py:1549: utils.exceptions.ActiveControllerUnsetException: Active controller ssh client is not set! Please use ControllerClient.set_active_controller(ssh_client) to set an active controller client.
ERROR

I'm wondering if this NATBox configuration is a must and if I configured it correctly? Can anyone help on the test config file? Or is there anything I need to do before the test?

Thanks a lot!

Yan

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
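A low-risk way to confirm the suite picks up a new test.conf before committing to a full run is pytest's standard collect-only mode. This is a hedged suggestion: depending on the suite's conftest hooks, collection may still require the lab to be reachable.

    $ pytest -m platform_sanity --testcase-config=./test.conf --collect-only testcases/

If collection succeeds and lists the expected platform_sanity cases, the config file itself is at least being parsed as intended.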
From gaosong_1250 at 163.com  Tue Aug  6 03:25:26 2019
From: gaosong_1250 at 163.com (gao.song)
Date: Tue, 6 Aug 2019 11:25:26 +0800 (CST)
Subject: [Starlingx-discuss] [STX 1.0] Seek for alternative way to install compute node
In-Reply-To: <1f826d52.279f.16c5572c013.Coremail.gaosong_1250 at 163.com>
References: <21deb544.a196.16c5308b6ae.Coremail.gaosong_1250 at 163.com> <1f826d52.279f.16c5572c013.Coremail.gaosong_1250 at 163.com>
Message-ID: <7f4fce99.499c.16c64f5ce5c.Coremail.gaosong_1250 at 163.com>

Just FYI.

Finally, we solved the problem, though we are not quite sure about the root cause. It may relate to running PXE inside a VLAN, which caused some confusing behavior. We also disabled UEFI mode on the compute node.

On 2019-08-03 11:07:59, "gao.song" wrote:

Hi Sun:
It seems similar, but there is a tiny difference; still, it is worth a shot to follow your suggestion. As we can see from /var/log/daemon.log, the compute node was assigned an IP that does not belong to the mgmt subnet, while your second node got a correct IP.

2019-07-31T16:12:47.000 controller-0 dnsmasq-dhcp[3416]: info DHCPDISCOVER(eno3) 6c:92:bf:2a:43:1e
2019-07-31T16:12:47.000 controller-0 dnsmasq-dhcp[3416]: info DHCPOFFER(eno3) 169.254.202.216 6c:92:bf:2a:43:1e
2019-07-31T16:12:48.000 controller-0 dnsmasq-dhcp[3416]: info DHCPREQUEST(eno3) 169.254.202.216 6c:92:bf:2a:43:1e
2019-07-31T16:12:48.000 controller-0 dnsmasq-dhcp[3416]: info DHCPACK(eno3) 169.254.202.216 6c:92:bf:2a:43:1e
2019-07-31T16:12:48.000 controller-0 dnsmasq-tftp[3416]: err error 0 TFTP Aborted received from 169.254.202.216
2019-07-31T16:12:48.000 controller-0 dnsmasq-tftp[3416]: info failed sending /pxeboot/pxelinux.0 to 169.254.202.216

Thanks!

At 2019-08-03 09:59:29, "Sun, Austin" wrote:
>Hi Gao Song:
>    Your issue might be the same as one we met before [1]. We just changed the mgmt. subnet to 192.178.xx.xx to solve it. Could you try this?
>
>[1] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-May/004485.html
>
>Thanks.
>BR
>Austin Sun.
>
>From: gao.song [mailto:gaosong_1250 at 163.com]
>Sent: Friday, August 2, 2019 11:53 PM
>To: starlingx-discuss at lists.starlingx.io
>Subject: [Starlingx-discuss] [STX 1.0] Seek for alternative way to install compute node
>
>Hi community:
>    Since we encounter some difficult problems using PXE to boot a compute node (it just gets a pxeboot subnet IP, 169.254.202.xxx, instead of a management subnet IP, 192.168.xx.xx), the installation procedure hangs after loading the initrd file.
>    So we wonder if there is another way to install a compute node with a standard mode controller, like installing the compute node from the ISO manually and then issuing configuration commands to just enable the compute sub-function.
>    Any help will be appreciated!
>
>_______________________________________________
>Starlingx-discuss mailing list
>Starlingx-discuss at lists.starlingx.io
>http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
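A hedged debugging aside for anyone seeing the same symptom: comparing the subnet dnsmasq is actually offering against its configured ranges can localize the problem quickly. Something like the following, noting that the dnsmasq config location varies between StarlingX releases, so treat the paths as assumptions:

    # which addresses is the controller's dnsmasq handing out?
    grep -E "DHCPOFFER|DHCPACK" /var/log/daemon.log | tail
    # which ranges is it configured to serve? (config paths are an assumption)
    sudo grep -rh dhcp-range /etc/dnsmasq.conf /etc/dnsmasq.d/ 2>/dev/null

An offer from 169.254.202.x against a 192.168.x.x management range suggests the node never reached the management VLAN, which matches the VLAN suspicion above.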
After the sanity on the RC1 branch is done, we'll announce that the branch is ready to use. More details here and at [0] on the modified sequence... 1 Start (no freeze required since we're branching from a SHA, not from Head) - on Tuesday - Dean will start at ~9:30 his time (CDT) (10:30 EDT) - assuming sanity is Green - Dean will branch from the SHA for that sanity's build 2. Scott/Dean create RC1 branch and make required build changes. - Dean will make sure he's able to do the SHA thing - this will be based on Monday's sanity, which will be based on the commits up to Sunday evening - i.e. UTC 0130 am Monday https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190805T0130 3. Scott triggers RC1 build, both ISO and Docker images. - Scott has a few script changes to do, will work to knock those off today - we agreed on these build paths - Release path .../starlingx/release/2.0.##/centos .../starlingx/release/2.0/2.0.##/centos - RC path .../starlingx/rc/2.0/centos/timestamp 4. Ada's team runs sanity and confirm RC1 build passes sanity. 5. Release Team announces that the RC1 branch is now available. Note: Developers push changes to master and for Medium & High priority LPs also cherry pick their commit to the RC1 branch. [0] https://etherpad.openstack.org/p/stx-releases -----Original Message----- From: Zvonar, Bill Sent: Friday, August 2, 2019 1:36 PM To: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 Proposed next steps (to be discussed in meeting today... 1. Release Team announces a freeze on master for TBD time (see Note) 2. Scott/Dean create RC1 branch and make required build changes. 3. Scott triggers RC1 build, both ISO and Docker images. 4. Ada's team runs sanity and confirm RC1 build passes sanity. 5. Release Team announces that the RC1 branch is now available. Note: Developers push changes to master and for High priority LPs also cherry pick their commit to the RC1 branch. -----Original Message----- From: Dean Troyer Sent: Friday, August 2, 2019 1:19 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 On Fri, Aug 2, 2019 at 8:58 AM Zvonar, Bill wrote: > Scott - if you can provide a brief summary here of what you think the steps are beforehand, that'd be great. I ran a dry-run of the branching process this morning using the following: Branch: r/stx.2.0 Tag: v2.0.0.rc0 Tagging the branch point makes it easier later to pull a list of changes for the next RC or the release tag... I did find a couple of tweaks required in the branch-stx.sh script to account for the OpenDev change and only branching the Gerrit repos: https://review.opendev.org/#/c/674342/ The wiki pages [0] and [1] have been updated to match my current understanding (above) of the release naming and process. 
dt [0] https://wiki.openstack.org/wiki/StarlingX/Release_Plan [1] https://wiki.openstack.org/wiki/StarlingX/Release_Process -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Tue Aug 6 11:44:01 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 6 Aug 2019 11:44:01 +0000 Subject: [Starlingx-discuss] Community Call (August 7, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AC892C@ALA-MBD.corp.ad.wrs.com> Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include... - sanity update - any reds? - reviews in need of attention - RC1 declaration - branch logistics, etc. - high/medium launchpads - docs update - opens Please feel free to add topics on the etherpad [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190807T1400 From cindy.xie at intel.com Tue Aug 6 12:27:04 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 6 Aug 2019 12:27:04 +0000 Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack distro meeting In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F36008359@SHSMSX104.ccr.corp.intel.com> All, I am out of office tomorrow and not able to host this call tomorrow. I will cancel this occurrence and we will continue the meeting next week. Sorry about this. Thx. - cindy -----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; Wold, Saul; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; 'zhaos' Cc: 'Seiler, Glenn'; Hu, Wei W; Peng Tan; Gomez, Juan P; 'Waines, Greg'; 'Eslimi, Dariush'; Jones, Bruce E; 'Zhi Zhi2 Chang'; Chen, Tingjie; 'Badea, Daniel'; 'Chen, Jacky'; 'Komiyama, Takeo'; Armstrong, Robert H; 'Carlos Cebrian'; Cobbley, David A Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, August 7, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 * Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) * Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ * Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ian.Jolliffe at windriver.com Tue Aug 6 12:35:58 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Tue, 6 Aug 2019 12:35:58 +0000 Subject: [Starlingx-discuss] [Packet-SIG] Meeting today Message-ID: <2F149BCD-5596-4E3D-89AA-EB3FA2FF91C7@windriver.com> There is a meeting today on the calendar – but, I can’t make it. I will catch up on the etherpad, if there is quorum today. Regards; Ian -------------- next part -------------- An HTML attachment was scrubbed... 
From Bill.Zvonar at windriver.com  Tue Aug  6 12:47:59 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Tue, 6 Aug 2019 12:47:59 +0000
Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF at ALA-MBD.corp.ad.wrs.com>
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AC89D4 at ALA-MBD.corp.ad.wrs.com>

Ada/Numan - FYI re: build frequency...

According to the build team notes from last week's meeting [0], they will do 2 builds daily during the 4 week (RC1 - Release) period.

[0] https://etherpad.openstack.org/p/stx-build

-----Original Message-----
From: Zvonar, Bill
Sent: Tuesday, August 6, 2019 7:13 AM
To: 'starlingx-discuss at lists.starlingx.io' ; 'Dean Troyer' ; Little, Scott
Subject: RE: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0

Hi all - yesterday's sanity was green, so we'll go ahead with the plan below (assuming Dean & Scott haven't run into any roadblocks).

Bill...

-----Original Message-----
From: Zvonar, Bill
Sent: Friday, August 2, 2019 2:56 PM
To: 'starlingx-discuss at lists.starlingx.io'
Subject: RE: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0

From the meeting just now, Scott & Dean agreed that they can base the branch off of a SHA, so there is no need for a freeze.

They will finish some script changes for this (thanks guys) and, assuming Monday's sanity is green, will start creating the branch on Tuesday.

After the sanity on the RC1 branch is done, we'll announce that the branch is ready to use.

More details here and at [0] on the modified sequence...

1. Start (no freeze required since we're branching from a SHA, not from Head)
   - on Tuesday
   - Dean will start at ~9:30 his time (CDT) (10:30 EDT)
   - assuming sanity is Green
   - Dean will branch from the SHA for that sanity's build
2. Scott/Dean create RC1 branch and make required build changes.
   - Dean will make sure he's able to do the SHA thing
   - this will be based on Monday's sanity, which will be based on the commits up to Sunday evening
   - i.e. UTC 0130 am Monday https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190805T0130
3. Scott triggers RC1 build, both ISO and Docker images.
   - Scott has a few script changes to do, will work to knock those off today
   - we agreed on these build paths
   - Release path
     .../starlingx/release/2.0.##/centos
     .../starlingx/release/2.0/2.0.##/centos
   - RC path
     .../starlingx/rc/2.0/centos/timestamp
4. Ada's team runs sanity and confirms the RC1 build passes sanity.
5. Release Team announces that the RC1 branch is now available.

Note: Developers push changes to master and for Medium & High priority LPs also cherry pick their commit to the RC1 branch.

[0] https://etherpad.openstack.org/p/stx-releases

-----Original Message-----
From: Zvonar, Bill
Sent: Friday, August 2, 2019 1:36 PM
To: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0

Proposed next steps (to be discussed in the meeting today)...

1. Release Team announces a freeze on master for TBD time (see Note)
2. Scott/Dean create RC1 branch and make required build changes.
3. Scott triggers RC1 build, both ISO and Docker images.
4. Ada's team runs sanity and confirms the RC1 build passes sanity.
5. Release Team announces that the RC1 branch is now available.
Note: Developers push changes to master and for High priority LPs also cherry pick their commit to the RC1 branch.

-----Original Message-----
From: Dean Troyer
Sent: Friday, August 2, 2019 1:19 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0

On Fri, Aug 2, 2019 at 8:58 AM Zvonar, Bill wrote:
> Scott - if you can provide a brief summary here of what you think the steps are beforehand, that'd be great.

I ran a dry-run of the branching process this morning using the following:

Branch: r/stx.2.0
Tag: v2.0.0.rc0

Tagging the branch point makes it easier later to pull a list of changes for the next RC or the release tag...

I did find a couple of tweaks required in the branch-stx.sh script to account for the OpenDev change and only branching the Gerrit repos: https://review.opendev.org/#/c/674342/

The wiki pages [0] and [1] have been updated to match my current understanding (above) of the release naming and process.

dt

[0] https://wiki.openstack.org/wiki/StarlingX/Release_Plan
[1] https://wiki.openstack.org/wiki/StarlingX/Release_Process

--
Dean Troyer
dtroyer at gmail.com

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From scott.little at windriver.com  Tue Aug  6 13:48:44 2019
From: scott.little at windriver.com (Scott Little)
Date: Tue, 6 Aug 2019 09:48:44 -0400
Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
In-Reply-To: <4F6AACE4B0F173488D033B02A8BB5B7E7CE745CB at FMSMSX112.amr.corp.intel.com>
References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF at ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007ABF516 at ALA-MBD.corp.ad.wrs.com> <4F6AACE4B0F173488D033B02A8BB5B7E7CE745CB at FMSMSX112.amr.corp.intel.com>
Message-ID:

Good point.

I don't think we closed on that issue during the last release meeting. Or at least it wasn't captured in the minutes. Was there an action to talk it over with the testers?

CENGN doesn't really have capacity to efficiently build both loads in parallel. They need to be serial. I would assume the testing priority will shift to RC1, and we would want that load delivering at midnight EST (4 am UTC), with the master build shifting to 4 am EST (8 am UTC). Call this option a.

Option b is to leave master delivering at midnight EST (4 am UTC), and RC1 delivers at 4 am.

Thoughts?

Scott

On 2019-08-05 12:58 p.m., Cabrales, Ada wrote:
> Nice to see this happening.
>
> One question: will the two ISOs (master, RC1) be available at similar times? Or how is the build process to be scheduled?
>
> A.
>
>> -----Original Message-----
>> From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com]
>> Sent: Friday, August 2, 2019 1:56 PM
>> To: starlingx-discuss at lists.starlingx.io
>> Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
>>
>> From the meeting just now, Scott & Dean agreed that they can base the branch off of a SHA, so there is no need for a freeze.
>>
>> They will finish some script changes for this (thanks guys) and, assuming Monday's sanity is green, will start creating the branch on Tuesday.
>>
>> After the sanity on the RC1 branch is done, we'll announce that the branch is ready to use.
>>
>> More details here and at [0] on the modified sequence...
>>
>> 1. Start (no freeze required since we're branching from a SHA, not from Head)
>>      - on Tuesday
>>      - Dean will start at ~9:30 his time (CDT) (10:30 EDT)
>>      - assuming sanity is Green
>>      - Dean will branch from the SHA for that sanity's build
>> 2. Scott/Dean create RC1 branch and make required build changes.
>>      - Dean will make sure he's able to do the SHA thing
>>      - this will be based on Monday's sanity, which will be based on the commits up to Sunday evening
>>      - i.e. UTC 0130 am Monday https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190805T0130
>> 3. Scott triggers RC1 build, both ISO and Docker images.
>>      - Scott has a few script changes to do, will work to knock those off today
>>      - we agreed on these build paths
>>      - Release path
>>              .../starlingx/release/2.0.##/centos
>>              .../starlingx/release/2.0/2.0.##/centos
>>      - RC path
>>              .../starlingx/rc/2.0/centos/timestamp
>> 4. Ada's team runs sanity and confirms the RC1 build passes sanity.
>> 5. Release Team announces that the RC1 branch is now available.
>>
>> Note: Developers push changes to master and for Medium & High priority LPs also cherry pick their commit to the RC1 branch.
>>
>> [0] https://etherpad.openstack.org/p/stx-releases
>>
>> -----Original Message-----
>> From: Zvonar, Bill
>> Sent: Friday, August 2, 2019 1:36 PM
>> To: starlingx-discuss at lists.starlingx.io
>> Subject: RE: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
>>
>> Proposed next steps (to be discussed in the meeting today)...
>>
>> 1. Release Team announces a freeze on master for TBD time (see Note)
>> 2. Scott/Dean create RC1 branch and make required build changes.
>> 3. Scott triggers RC1 build, both ISO and Docker images.
>> 4. Ada's team runs sanity and confirms the RC1 build passes sanity.
>> 5. Release Team announces that the RC1 branch is now available.
>>
>> Note: Developers push changes to master and for High priority LPs also cherry pick their commit to the RC1 branch.
>>
>> -----Original Message-----
>> From: Dean Troyer
>> Sent: Friday, August 2, 2019 1:19 PM
>> To: starlingx-discuss at lists.starlingx.io
>> Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
>>
>> On Fri, Aug 2, 2019 at 8:58 AM Zvonar, Bill wrote:
>>> Scott - if you can provide a brief summary here of what you think the steps are beforehand, that'd be great.
>>
>> I ran a dry-run of the branching process this morning using the following:
>>
>> Branch: r/stx.2.0
>> Tag: v2.0.0.rc0
>>
>> Tagging the branch point makes it easier later to pull a list of changes for the next RC or the release tag...
>>
>> I did find a couple of tweaks required in the branch-stx.sh script to account for the OpenDev change and only branching the Gerrit repos: https://review.opendev.org/#/c/674342/
>>
>> The wiki pages [0] and [1] have been updated to match my current understanding (above) of the release naming and process.
>>
>> dt
>>
>> [0] https://wiki.openstack.org/wiki/StarlingX/Release_Plan
>> [1] https://wiki.openstack.org/wiki/StarlingX/Release_Process
>>
>> --
>> Dean Troyer
>> dtroyer at gmail.com
>>
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From scott.little at windriver.com  Tue Aug  6 13:54:40 2019
From: scott.little at windriver.com (Scott Little)
Date: Tue, 6 Aug 2019 09:54:40 -0400
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 4 - Still Failing!
In-Reply-To: <1864685548.62.1565047881411.JavaMail.javamailuser at localhost>
References: <2058452876.50.1564968696689.JavaMail.javamailuser at localhost> <1864685548.62.1565047881411.JavaMail.javamailuser at localhost>
Message-ID:

Oops. Forgot to disable timer based builds for the V2.0 RC1 build jobs before I left work on Friday. Without a valid branch to pull and compile, it failed as one might expect.

My apologies for the noise.

Scott

On 2019-08-05 7:31 p.m., build.starlingx at gmail.com wrote:
> Project: STX_BUILD_2.0
> Build #: 4
> Status: Still Failing
> Timestamp: 20190805T233000Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190805T233000Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> BUILD_CONTAINERS_DEV: false
> BUILD_CONTAINERS_STABLE: false
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From dtroyer at gmail.com  Tue Aug  6 14:10:47 2019
From: dtroyer at gmail.com (Dean Troyer)
Date: Tue, 6 Aug 2019 09:10:47 -0500
Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AC7893 at ALA-MBD.corp.ad.wrs.com>
References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF at ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC7893 at ALA-MBD.corp.ad.wrs.com>
Message-ID:

On Tue, Aug 6, 2019 at 6:16 AM Zvonar, Bill wrote:
> Hi all - yesterday's sanity was green, so we'll go ahead with the plan below (assuming Dean & Scott haven't run into any roadblocks).

It all looks good here. I'm running one last check then we'll kick it all off...

And remember, No Freeze Necessary.

dt

--
Dean Troyer
dtroyer at gmail.com

From scott.little at windriver.com  Tue Aug  6 14:26:01 2019
From: scott.little at windriver.com (Scott Little)
Date: Tue, 6 Aug 2019 10:26:01 -0400
Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
In-Reply-To:
References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF at ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007ABF516 at ALA-MBD.corp.ad.wrs.com> <4F6AACE4B0F173488D033B02A8BB5B7E7CE745CB at FMSMSX112.amr.corp.intel.com>
Message-ID: <248ba14e-c68e-6f21-3c56-6ec7ffe1109e at windriver.com>

If I don't hear any objections by end of day (8 pm UTC), I'll assume that RC1 is the testing priority.

I'll implement option a. The build of V2.0 RC1 completes at midnight (4 am UTC), and the master build gets pushed back 4 hours from its current slot.

Scott

On 2019-08-06 9:48 a.m., Scott Little wrote:
> Good point.
>
> I don't think we closed on that issue during the last release meeting. Or at least it wasn't captured in the minutes. Was there an action to talk it over with the testers?
>
> CENGN doesn't really have capacity to efficiently build both loads in parallel. They need to be serial. I would assume the testing priority will shift to RC1, and we would want that load delivering at midnight EST (4 am UTC), with the master build shifting to 4 am EST (8 am UTC). Call this option a.
>
> Option b is to leave master delivering at midnight EST (4 am UTC), and RC1 delivers at 4 am.
>
> Thoughts?
>
> Scott
>
> On 2019-08-05 12:58 p.m., Cabrales, Ada wrote:
>> Nice to see this happening.
>>
>> One question: will the two ISOs (master, RC1) be available at similar times? Or how is the build process to be scheduled?
>>
>> A.
>>
>>> -----Original Message-----
>>> From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com]
>>> Sent: Friday, August 2, 2019 1:56 PM
>>> To: starlingx-discuss at lists.starlingx.io
>>> Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
>>>
>>> From the meeting just now, Scott & Dean agreed that they can base the branch off of a SHA, so there is no need for a freeze.
>>>
>>> They will finish some script changes for this (thanks guys) and, assuming Monday's sanity is green, will start creating the branch on Tuesday.
>>>
>>> After the sanity on the RC1 branch is done, we'll announce that the branch is ready to use.
>>>
>>> More details here and at [0] on the modified sequence...
>>>
>>> 1. Start (no freeze required since we're branching from a SHA, not from Head)
>>>      - on Tuesday
>>>      - Dean will start at ~9:30 his time (CDT) (10:30 EDT)
>>>      - assuming sanity is Green
>>>      - Dean will branch from the SHA for that sanity's build
>>> 2. Scott/Dean create RC1 branch and make required build changes.
>>>      - Dean will make sure he's able to do the SHA thing
>>>      - this will be based on Monday's sanity, which will be based on the commits up to Sunday evening
>>>      - i.e. UTC 0130 am Monday https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190805T0130
>>> 3. Scott triggers RC1 build, both ISO and Docker images.
>>>      - Scott has a few script changes to do, will work to knock those off today
>>>      - we agreed on these build paths
>>>      - Release path
>>>              .../starlingx/release/2.0.##/centos
>>>              .../starlingx/release/2.0/2.0.##/centos
>>>      - RC path
>>>              .../starlingx/rc/2.0/centos/timestamp
>>> 4. Ada's team runs sanity and confirms the RC1 build passes sanity.
>>> 5. Release Team announces that the RC1 branch is now available.
>>>
>>> Note: Developers push changes to master and for Medium & High priority LPs also cherry pick their commit to the RC1 branch.
>>>
>>> [0] https://etherpad.openstack.org/p/stx-releases
>>>
>>> -----Original Message-----
>>> From: Zvonar, Bill
>>> Sent: Friday, August 2, 2019 1:36 PM
>>> To: starlingx-discuss at lists.starlingx.io
>>> Subject: RE: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
>>>
>>> Proposed next steps (to be discussed in the meeting today)...
>>>
>>> 1. Release Team announces a freeze on master for TBD time (see Note)
>>> 2. Scott/Dean create RC1 branch and make required build changes.
>>> 3. Scott triggers RC1 build, both ISO and Docker images.
>>> 4. Ada's team runs sanity and confirms the RC1 build passes sanity.
>>> 5. Release Team announces that the RC1 branch is now available.
>>>
>>> Note: Developers push changes to master and for High priority LPs also cherry pick their commit to the RC1 branch.
>>>
>>> -----Original Message-----
>>> From: Dean Troyer
>>> Sent: Friday, August 2, 2019 1:19 PM
>>> To: starlingx-discuss at lists.starlingx.io
>>> Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0
>>>
>>> On Fri, Aug 2, 2019 at 8:58 AM Zvonar, Bill wrote:
>>>> Scott - if you can provide a brief summary here of what you think the steps are beforehand, that'd be great.
>>>
>>> I ran a dry-run of the branching process this morning using the following:
>>>
>>> Branch: r/stx.2.0
>>> Tag: v2.0.0.rc0
>>>
>>> Tagging the branch point makes it easier later to pull a list of changes for the next RC or the release tag...
>>>
>>> I did find a couple of tweaks required in the branch-stx.sh script to account for the OpenDev change and only branching the Gerrit repos: https://review.opendev.org/#/c/674342/
>>>
>>> The wiki pages [0] and [1] have been updated to match my current understanding (above) of the release naming and process.
>>>
>>> dt
>>>
>>> [0] https://wiki.openstack.org/wiki/StarlingX/Release_Plan
>>> [1] https://wiki.openstack.org/wiki/StarlingX/Release_Process
>>>
>>> --
>>> Dean Troyer
>>> dtroyer at gmail.com
>>>
>>> _______________________________________________
>>> Starlingx-discuss mailing list
>>> Starlingx-discuss at lists.starlingx.io
>>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>> _______________________________________________
>> Starlingx-discuss mailing list
>> Starlingx-discuss at lists.starlingx.io
>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From chenjie.xu at intel.com  Tue Aug  6 15:13:45 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Tue, 6 Aug 2019 15:13:45 +0000
Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs
In-Reply-To: <35423D55-043D-4F11-A2D7-86B8D4E4CCDB at windriver.com>
References: <35423D55-043D-4F11-A2D7-86B8D4E4CCDB at windriver.com>
Message-ID:

Hi Matt,

I find another way to pass through an SR-IOV capable physical NIC to a VM. This new way doesn't require configuring a "PCI alias". The key point is to create a port whose vnic_type is direct-physical. The following link can be referenced:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/networking_guide/sr-iov-support-for-virtual-networking

However, if we try to pass through a physical NIC which doesn't support SR-IOV, we may still need to configure a "PCI alias". Because I don't have a physical NIC without SR-IOV support on my server, I can't test passing such a NIC to a VM by creating a port whose vnic_type is direct-physical. Do you think StarlingX needs to configure the "PCI alias" automatically for physical NICs which don't support SR-IOV, or not?

Best Regards,
Xu, Chenjie

From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Thursday, August 1, 2019 6:34 PM
To: Xu, Chenjie ; Webster, Steven ; Kopec, Gerald (Gerry)
Cc: Khalil, Ghada ; Zhao, Forrest ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

+Steve +Gerry

Do you have any additional information to add here? I don't believe we had to set up an alias in the past to do PCI-PT, so is this something that is new to the latest OpenStack nova release? Did we drop some functionality to align with upstream nova (that used to be in starlingx-staging)?

-Matt

From: "Xu, Chenjie" >
Date: Thursday, August 1, 2019 at 3:36 AM
To: "Peters, Matt" >
Cc: Ghada Khalil >, "Zhao, Forrest" >, "starlingx-discuss at lists.starlingx.io" >
Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

Hi Matt,

Based on my testing, the flavor with property "pci_passthrough:alias" should be required for passing a physical NIC to the VM. But it should not be required for passing a VF to the VM.
So I think the alias information should contain physical NICs which are configured as "pci-passthrough" by the following commands:

system host-if-modify -m 1500 -n pcipass -c pci-passthrough ${COMPUTE} ${IFUUID}
system interface-datanetwork-assign ${COMPUTE} pcipass ${PHYSNET2}

Could you please let me know your opinion and leave a comment in the bug below:
https://bugs.launchpad.net/starlingx/+bug/1836682

Best Regards,
Xu, Chenjie

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
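For readers who want to try the vnic_type approach Chenjie describes, a minimal sketch with the standard OpenStack CLI follows. The network, port and server names are hypothetical, and the network must map to the physical network assigned to the pci-passthrough interface:

    # create a port that requests a whole physical NIC rather than a VF (names are placeholders)
    openstack port create --network physnet2-net --vnic-type direct-physical pt-port0
    # boot a server with that port; no pci_passthrough:alias flavor property is needed in this flow
    openstack server create --image cirros --flavor m1.small --nic port-id=pt-port0 pt-vm0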
-----Original Message----- From: Dean Troyer Sent: Tuesday, August 6, 2019 11:50 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 On Tue, Aug 6, 2019 at 9:10 AM Dean Troyer wrote: > It all looks good here. I'm running one last check then we'll kick it > all off... And the branch is complete. No reviews were merged since the build so there are no immediate backports required. The holiday gave us a de facto freeze :) We did see a couple of the .gitreview updates have errors in the check queue, a recheck has cleared them so far. If that does not work, please contact me or Scott or Don and we'll try to sort it out. Thanks dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Tue Aug 6 16:05:04 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 6 Aug 2019 12:05:04 -0400 Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AC8C38@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC7893@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C38@ALA-MBD.corp.ad.wrs.com> Message-ID: Actually....over to the code reviewers! Then Zull. Then me. Scott On 2019-08-06 12:00 p.m., Zvonar, Bill wrote: > Cool Dean, good stuff. > > Scott - over to you - when do you (roughly) think the build will be ready? > > Ada/Numan - please stand by for sanity. > > Bill... > > -----Original Message----- > From: Dean Troyer > Sent: Tuesday, August 6, 2019 11:50 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 > > On Tue, Aug 6, 2019 at 9:10 AM Dean Troyer wrote: >> It all looks good here. I'm running one last check then we'll kick it >> all off... > And the branch is complete. No reviews were merged since the build so there are no immediate backports required. The holiday gave us a de facto freeze :) > > We did see a couple of the .gitreview updates have errors in the check queue, a recheck has cleared them so far. If that does not work, please contact me or Scott or Don and we'll try to sort it out. > > Thanks > dt > > -- > Dean Troyer > dtroyer at gmail.com > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Tue Aug 6 16:12:05 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 6 Aug 2019 16:12:05 +0000 Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 In-Reply-To: References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC7893@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C38@ALA-MBD.corp.ad.wrs.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AC8C59@ALA-MBD.corp.ad.wrs.com> Aha - so who are these code reviewers? Just want to make sure they're aware that they need to do something... 
-----Original Message----- From: Little, Scott Sent: Tuesday, August 6, 2019 12:05 PM To: Zvonar, Bill ; Dean Troyer ; starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 Actually....over to the code reviewers! Then Zull. Then me. Scott On 2019-08-06 12:00 p.m., Zvonar, Bill wrote: > Cool Dean, good stuff. > > Scott - over to you - when do you (roughly) think the build will be ready? > > Ada/Numan - please stand by for sanity. > > Bill... > > -----Original Message----- > From: Dean Troyer > Sent: Tuesday, August 6, 2019 11:50 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and > discussion about StarlingX release 2.0 > > On Tue, Aug 6, 2019 at 9:10 AM Dean Troyer wrote: >> It all looks good here. I'm running one last check then we'll kick >> it all off... > And the branch is complete. No reviews were merged since the build so > there are no immediate backports required. The holiday gave us a de > facto freeze :) > > We did see a couple of the .gitreview updates have errors in the check queue, a recheck has cleared them so far. If that does not work, please contact me or Scott or Don and we'll try to sort it out. > > Thanks > dt > > -- > Dean Troyer > dtroyer at gmail.com > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Tue Aug 6 16:15:56 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 6 Aug 2019 16:15:56 +0000 Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AC8C59@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC7893@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C38@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C59@ALA-MBD.corp.ad.wrs.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AC8C73@ALA-MBD.corp.ad.wrs.com> Sorry, just saw Don's email now... -----Original Message----- From: Zvonar, Bill Sent: Tuesday, August 6, 2019 12:12 PM To: Little, Scott ; Dean Troyer ; starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 Aha - so who are these code reviewers? Just want to make sure they're aware that they need to do something... -----Original Message----- From: Little, Scott Sent: Tuesday, August 6, 2019 12:05 PM To: Zvonar, Bill ; Dean Troyer ; starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 Actually....over to the code reviewers! Then Zull. Then me. Scott On 2019-08-06 12:00 p.m., Zvonar, Bill wrote: > Cool Dean, good stuff. > > Scott - over to you - when do you (roughly) think the build will be ready? > > Ada/Numan - please stand by for sanity. > > Bill... 
> > -----Original Message----- > From: Dean Troyer > Sent: Tuesday, August 6, 2019 11:50 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and > discussion about StarlingX release 2.0 > > On Tue, Aug 6, 2019 at 9:10 AM Dean Troyer wrote: >> It all looks good here. I'm running one last check then we'll kick >> it all off... > And the branch is complete. No reviews were merged since the build so > there are no immediate backports required. The holiday gave us a de > facto freeze :) > > We did see a couple of the .gitreview updates have errors in the check queue, a recheck has cleared them so far. If that does not work, please contact me or Scott or Don and we'll try to sort it out. > > Thanks > dt > > -- > Dean Troyer > dtroyer at gmail.com > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Tue Aug 6 16:17:06 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 12:17:06 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_repo_sync - Build # 411 - Failure! Message-ID: <2066537970.70.1565108227252.JavaMail.javamailuser@localhost> Project: STX_repo_sync Build #: 411 Status: Failure Timestamp: 20190806T161545Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T161539Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: r/stx.2.0 PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T161539Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190806T161539Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0 From build.starlingx at gmail.com Tue Aug 6 16:17:09 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 12:17:09 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 5 - Still Failing! In-Reply-To: <911475887.60.1565047878980.JavaMail.javamailuser@localhost> References: <911475887.60.1565047878980.JavaMail.javamailuser@localhost> Message-ID: <1603874141.73.1565108230523.JavaMail.javamailuser@localhost> Project: STX_BUILD_2.0 Build #: 5 Status: Still Failing Timestamp: 20190806T161539Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T161539Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true From Matt.Peters at windriver.com Tue Aug 6 16:54:59 2019 From: Matt.Peters at windriver.com (Peters, Matt) Date: Tue, 6 Aug 2019 16:54:59 +0000 Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs In-Reply-To: References: <35423D55-043D-4F11-A2D7-86B8D4E4CCDB@windriver.com> Message-ID: <2D45BE88-FE4A-4B96-9934-48E24B0AAB7C@windriver.com> Hello Chenjie, We typically configure the PCI-PT devices using the vnic-type option for manually created ports. This replaces the older mechanism of being able to specify the vif-type (which was a StarlingX specific extension that was dropped). 
For your question about the NIC type that does not support SR-IOV, do you mean a port that does not report itself as a PF (from a libvirt/nova perspective that would be device with Type-PCI vs Type-PF)? -Matt From: "Xu, Chenjie" Date: Tuesday, August 6, 2019 at 11:14 AM To: "Peters, Matt" , "Webster, Steven" , "Kopec, Gerald (Gerry)" Cc: Ghada Khalil , "Zhao, Forrest" , "starlingx-discuss at lists.starlingx.io" Subject: RE: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs Hi Matt, I find another way to pass through SR-IOV capable physical NIC to VM. This new way doesn't require to configure "PCI alias". The key point is to create a port whose vnic_type is direct-physical. The following link can be referenced: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/networking_guide/sr-iov-support-for-virtual-networking However if we try to pass through a physical NIC which doesn't support SR-IOV, we may still need to configure "PCI alias". Because I don't have a physical NIC which doesn't support SR-IOV on my server, I can't test passing such NIC to VM by creating port whose vnic_type is direct-physical. Do you think StarlingX needs to configure “PCI alias” automatically for physical NIC which doesn’t support SR-IOV or not? Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Thursday, August 1, 2019 6:34 PM To: Xu, Chenjie ; Webster, Steven ; Kopec, Gerald (Gerry) Cc: Khalil, Ghada ; Zhao, Forrest ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs +Steve +Gerry Do you have any additional information to add here? I don’t believe we had to setup an alias in the past to do PCI-PT, so is this something that is new to the latest OpenStack nova release? Did we drop some functionality to align with upstream nova (that use to be in starlingx-staging)? -Matt From: "Xu, Chenjie" > Date: Thursday, August 1, 2019 at 3:36 AM To: "Peters, Matt" > Cc: Ghada Khalil >, "Zhao, Forrest" >, "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs Hi Matt, Based on my testing, the flavor with property “pci_passthrough:alias” should be required for passing a physical NIC to the VM. But it should not be required for passing a VF to the VM. So I think the alias information should contain physical NICs which are configured with “pci-passthrough” by following command: system host-if-modify -m 1500 -n pcipass -c pci-passthrough ${COMPUTE} ${IFUUID} system interface-datanetwork-assign ${COMPUTE} pcipass ${PHYSNET2} Could you please let me know your opinions and leave a comment in the below bug: https://bugs.launchpad.net/starlingx/+bug/1836682 Best Regards, Xu, Chenjie -------------- next part -------------- An HTML attachment was scrubbed... 
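A hedged way to answer Matt's Type-PF vs Type-PCI question from the host side: SR-IOV physical functions expose sriov_* attributes in sysfs, while plain PCI NICs do not. The PCI address below is a placeholder:

    lspci -nn | grep -i ethernet                        # find candidate NIC addresses
    ls /sys/bus/pci/devices/0000:18:00.0/ | grep sriov  # a PF lists sriov_totalvfs etc.

If no sriov_* entries appear, libvirt/nova will see the device as Type-PCI, which is the case where a PCI alias would still be needed.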
URL: From Don.Penney at windriver.com Tue Aug 6 17:16:40 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Tue, 6 Aug 2019 17:16:40 +0000 Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AC8C73@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC7893@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C38@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C59@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C73@ALA-MBD.corp.ad.wrs.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC154FB45@ALA-MBD.corp.ad.wrs.com> Final two reviews are in Zuul's hands now. -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, August 06, 2019 12:16 PM To: Little, Scott; Dean Troyer; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 Sorry, just saw Don's email now... -----Original Message----- From: Zvonar, Bill Sent: Tuesday, August 6, 2019 12:12 PM To: Little, Scott ; Dean Troyer ; starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 Aha - so who are these code reviewers? Just want to make sure they're aware that they need to do something... -----Original Message----- From: Little, Scott Sent: Tuesday, August 6, 2019 12:05 PM To: Zvonar, Bill ; Dean Troyer ; starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 Actually....over to the code reviewers! Then Zull. Then me. Scott On 2019-08-06 12:00 p.m., Zvonar, Bill wrote: > Cool Dean, good stuff. > > Scott - over to you - when do you (roughly) think the build will be ready? > > Ada/Numan - please stand by for sanity. > > Bill... > > -----Original Message----- > From: Dean Troyer > Sent: Tuesday, August 6, 2019 11:50 AM > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and > discussion about StarlingX release 2.0 > > On Tue, Aug 6, 2019 at 9:10 AM Dean Troyer wrote: >> It all looks good here. I'm running one last check then we'll kick >> it all off... > And the branch is complete. No reviews were merged since the build so > there are no immediate backports required. The holiday gave us a de > facto freeze :) > > We did see a couple of the .gitreview updates have errors in the check queue, a recheck has cleared them so far. If that does not work, please contact me or Scott or Don and we'll try to sort it out. 
> > Thanks > dt > > -- > Dean Troyer > dtroyer at gmail.com > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Tue Aug 6 18:11:22 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 6 Aug 2019 14:11:22 -0400 Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FC154FB45@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC7893@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C38@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C59@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C73@ALA-MBD.corp.ad.wrs.com> <6703202FD9FDFF4A8DA9ACF104AE129FC154FB45@ALA-MBD.corp.ad.wrs.com> Message-ID: <4d26ed53-e623-4311-85ce-5061d0e60d04@windriver.com> I haven't seen a review for the manifest as of yet.  Don't see it when I pull. Dean.  Was this an oversight, or were you waiting for the others to merge before creating it? Scott On 2019-08-06 1:16 p.m., Penney, Don wrote: > Final two reviews are in Zuul's hands now. > > -----Original Message----- > From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] > Sent: Tuesday, August 06, 2019 12:16 PM > To: Little, Scott; Dean Troyer; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan > Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 > > Sorry, just saw Don's email now... > > -----Original Message----- > From: Zvonar, Bill > Sent: Tuesday, August 6, 2019 12:12 PM > To: Little, Scott ; Dean Troyer ; starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan > Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 > > Aha - so who are these code reviewers? > > Just want to make sure they're aware that they need to do something... > > > -----Original Message----- > From: Little, Scott > Sent: Tuesday, August 6, 2019 12:05 PM > To: Zvonar, Bill ; Dean Troyer ; starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan > Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 > > Actually....over to the code reviewers! > > Then Zull. > > Then me. > > Scott > > > On 2019-08-06 12:00 p.m., Zvonar, Bill wrote: >> Cool Dean, good stuff. >> >> Scott - over to you - when do you (roughly) think the build will be ready? >> >> Ada/Numan - please stand by for sanity. >> >> Bill... >> >> -----Original Message----- >> From: Dean Troyer >> Sent: Tuesday, August 6, 2019 11:50 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and >> discussion about StarlingX release 2.0 >> >> On Tue, Aug 6, 2019 at 9:10 AM Dean Troyer wrote: >>> It all looks good here. I'm running one last check then we'll kick >>> it all off... 
>> And the branch is complete. No reviews were merged since the build so >> there are no immediate backports required. The holiday gave us a de >> facto freeze :) >> >> We did see a couple of the .gitreview updates have errors in the check queue, a recheck has cleared them so far. If that does not work, please contact me or Scott or Don and we'll try to sort it out. >> >> Thanks >> dt >> >> -- >> Dean Troyer >> dtroyer at gmail.com >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Tue Aug 6 18:36:12 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 14:36:12 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_container_setup - Build # 366 - Failure! Message-ID: <1184392022.77.1565116572978.JavaMail.javamailuser@localhost> Project: STX_BUILD_container_setup Build #: 366 Status: Failure Timestamp: 20190806T183610Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T183504Z/logs -------------------------------------------------------------------------------- Parameters PROJECT: rc-2.0 MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190806T183504Z DOCKER_BUILD_ID: jenkins-rc-2.0-20190806T183504Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T183504Z/logs DOCKER_BUILD_TAG: rc-2.0-20190806T183504Z-builder-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190806T183504Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0 From build.starlingx at gmail.com Tue Aug 6 18:36:15 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 14:36:15 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 6 - Still Failing! 
In-Reply-To: <1234515382.71.1565108227996.JavaMail.javamailuser@localhost> References: <1234515382.71.1565108227996.JavaMail.javamailuser@localhost> Message-ID: <1556144674.80.1565116576477.JavaMail.javamailuser@localhost> Project: STX_BUILD_2.0 Build #: 6 Status: Still Failing Timestamp: 20190806T183504Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T183504Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true From scott.little at windriver.com Tue Aug 6 18:35:27 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 6 Aug 2019 14:35:27 -0400 Subject: [Starlingx-discuss] [Release] [Build] Preparation and discussion about StarlingX release 2.0 In-Reply-To: <4d26ed53-e623-4311-85ce-5061d0e60d04@windriver.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007ABF2AF@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC7893@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C38@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C59@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AC8C73@ALA-MBD.corp.ad.wrs.com> <6703202FD9FDFF4A8DA9ACF104AE129FC154FB45@ALA-MBD.corp.ad.wrs.com> <4d26ed53-e623-4311-85ce-5061d0e60d04@windriver.com> Message-ID: <7ab0241c-99a7-f4ae-adee-8fb7e26858d3@windriver.com> I have created the r/stx.2.0 branch for manifest. Please review https://review.opendev.org/#/c/674904/1 Scott On 2019-08-06 2:11 p.m., Scott Little wrote: > I haven't seen a review for the manifest as of yet.  Don't see it when > I pull. > > Dean.  Was this an oversight, or were you waiting for the others to > merge before creating it? > > Scott > > > On 2019-08-06 1:16 p.m., Penney, Don wrote: >> Final two reviews are in Zuul's hands now. >> >> -----Original Message----- >> From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] >> Sent: Tuesday, August 06, 2019 12:16 PM >> To: Little, Scott; Dean Troyer; starlingx-discuss at lists.starlingx.io; >> Cabrales, Ada; Waheed, Numan >> Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and >> discussion about StarlingX release 2.0 >> >> Sorry, just saw Don's email now... >> >> -----Original Message----- >> From: Zvonar, Bill >> Sent: Tuesday, August 6, 2019 12:12 PM >> To: Little, Scott ; Dean Troyer >> ; starlingx-discuss at lists.starlingx.io; Cabrales, >> Ada ; Waheed, Numan >> Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and >> discussion about StarlingX release 2.0 >> >> Aha - so who are these code reviewers? >> >> Just want to make sure they're aware that they need to do something... >> >> >> -----Original Message----- >> From: Little, Scott >> Sent: Tuesday, August 6, 2019 12:05 PM >> To: Zvonar, Bill ; Dean Troyer >> ; starlingx-discuss at lists.starlingx.io; Cabrales, >> Ada ; Waheed, Numan >> Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and >> discussion about StarlingX release 2.0 >> >> Actually....over to the code reviewers! >> >> Then Zull. >> >> Then me. >> >> Scott >> >> >> On 2019-08-06 12:00 p.m., Zvonar, Bill wrote: >>> Cool Dean, good stuff. >>> >>> Scott - over to you - when do you (roughly) think the build will be >>> ready? >>> >>> Ada/Numan - please stand by for sanity. >>> >>> Bill... 
>>> >>> -----Original Message----- >>> From: Dean Troyer >>> Sent: Tuesday, August 6, 2019 11:50 AM >>> To: starlingx-discuss at lists.starlingx.io >>> Subject: Re: [Starlingx-discuss] [Release] [Build] Preparation and >>> discussion about StarlingX release 2.0 >>> >>> On Tue, Aug 6, 2019 at 9:10 AM Dean Troyer wrote: >>>> It all looks good here.  I'm running one last check then we'll kick >>>> it all off... >>> And the branch is complete.  No reviews were merged since the build so >>> there are no immediate backports required.  The holiday gave us a de >>> facto freeze :) >>> >>> We did see a couple of the .gitreview updates have errors in the >>> check queue, a recheck has cleared them so far.  If that does not >>> work, please contact me or Scott or Don and we'll try to sort it out. >>> >>> Thanks >>> dt >>> >>> -- >>> Dean Troyer >>> dtroyer at gmail.com >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Tue Aug 6 18:52:48 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 14:52:48 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_container_setup - Build # 367 - Still Failing! In-Reply-To: <1537687160.75.1565116570459.JavaMail.javamailuser@localhost> References: <1537687160.75.1565116570459.JavaMail.javamailuser@localhost> Message-ID: <334226254.83.1565117569375.JavaMail.javamailuser@localhost> Project: STX_BUILD_container_setup Build #: 367 Status: Still Failing Timestamp: 20190806T185246Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T184524Z/logs -------------------------------------------------------------------------------- Parameters PROJECT: rc-2.0 MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190806T184524Z DOCKER_BUILD_ID: jenkins-rc-2.0-20190806T184524Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T184524Z/logs DOCKER_BUILD_TAG: rc-2.0-20190806T184524Z-builder-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190806T184524Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0 From build.starlingx at gmail.com Tue Aug 6 18:52:51 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 14:52:51 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 7 - Still Failing! 
In-Reply-To: <1392090639.78.1565116573755.JavaMail.javamailuser@localhost>
References: <1392090639.78.1565116573755.JavaMail.javamailuser@localhost>
Message-ID: <2108106043.86.1565117572265.JavaMail.javamailuser@localhost>

Project: STX_BUILD_2.0
Build #: 7
Status: Still Failing
Timestamp: 20190806T184524Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T184524Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: true

From ildiko.vancsa at gmail.com Tue Aug 6 19:05:46 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Tue, 6 Aug 2019 21:05:46 +0200
Subject: [Starlingx-discuss] Edge Hacking Days - August 9, 16
Message-ID: <70C5D245-ACB1-4025-AF22-9C81D9008E4C@gmail.com>

Hi,

Based on the Doodle[1] poll results, __August 9 and August 16__ got the most votes. You can find the dial-in details on this etherpad: https://etherpad.openstack.org/p/osf-edge-hacking-days

If you're interested in joining, please __add your name and the time period (with time zone) when you will be available__ on these dates. You can also add topics that you would be interested in working on.

Potential topics to work on:
* Building and testing edge reference architectures
* Keystone testing and bug fixing

Please let me know if you have any questions.

See you on Friday! :)

Thanks and Best Regards,
Ildikó

[1] August: https://doodle.com/poll/ucfc9w7iewe6gdp4
September: https://doodle.com/poll/3cyqxzr9vd82pwtr
October: https://doodle.com/poll/6nzziuihs65hwt7b

From scott.little at windriver.com Tue Aug 6 20:58:33 2019
From: scott.little at windriver.com (Scott Little)
Date: Tue, 6 Aug 2019 16:58:33 -0400
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 7 - Still Failing!
In-Reply-To: <2108106043.86.1565117572265.JavaMail.javamailuser@localhost>
References: <1392090639.78.1565116573755.JavaMail.javamailuser@localhost> <2108106043.86.1565117572265.JavaMail.javamailuser@localhost>
Message-ID: 

Several bugs that only impact a 'first build on branch' have crept into the CENGN build scripts. The main culprit was the switch to a single container for both download and build. Took a few iterations to get them all.

Build job 8 is progressing nicely, but the build time will be long due to all the first-time rpm downloads.

Heading home for dinner. I'll monitor the job from there.

Scott

On 2019-08-06 2:52 p.m., build.starlingx at gmail.com wrote:
> Project: STX_BUILD_2.0
> Build #: 7
> Status: Still Failing
> Timestamp: 20190806T184524Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T184524Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> BUILD_CONTAINERS_DEV: false
> BUILD_CONTAINERS_STABLE: true
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cindy.xie at intel.com Tue Aug 6 12:27:13 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Tue, 6 Aug 2019 12:27:13 +0000
Subject: [Starlingx-discuss] Canceled: Weekly StarlingX non-OpenStack distro meeting
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F36008385@SHSMSX104.ccr.corp.intel.com>

* Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
* Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
* Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: text/calendar
Size: 4054 bytes
Desc: not available
URL: 

From shaxiaoz_7443 at qq.com Tue Aug 6 01:15:35 2019
From: shaxiaoz_7443 at qq.com (=?ISO-8859-1?B?NTA0NjI2Njg0?=)
Date: Tue, 6 Aug 2019 09:15:35 +0800
Subject: [Starlingx-discuss] what's the difference of the starlingx with Openstack
Message-ID: 

Hi,
So far I have read some of the StarlingX projects and documents, but I still haven't learned the real difference from OpenStack. If anyone has information on this question, please tell me. Thank you.
--
Liu zheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From maria.g.perez.ibarra at intel.com Tue Aug 6 22:11:51 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Tue, 6 Aug 2019 22:11:51 +0000
Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 8/06/19
Message-ID: 

StarlingX 2.0 Release Status:
ISO: BUILD_ID="20190802T013000Z" from (link)
----------------------------------------------------------------------
MANUAL EXECUTION
----------------------------------------------------------------------
Overall Results:
Total = 493
Pass = 404
Fail = 23
Blocked = 15
Not Run = 10
Obsolete = 23
Deferred = 18
Total executed = 442
Pass Rate = 94.61%
Formula used: Pass Rate = pass * 100 / (pass + fail)
Results per Domain:
Regression - AIO-SX 25 PASS | 1 OBSOLETE
Regression - Backup & Restore 6 DEFERRED
Regression - Distributed Cloud
Regression - Gnocchi 15 PASS
Regression - FM 3 PASS
Regression - HA 11 PASS | 1 FAIL
Regression - Heat 12 PASS | 1 OBSOLETE
Regression - Horizon 4 PASS
Regression - Install and Config 6 PASS | 1 FAIL
Regression - Maintenance 8 PASS | 1 FAIL
Regression - Networking 118 PASS | 2 FAIL | 5 BLOCKED | 19 OBSOLETE
Regression - Nova 23 PASS | 10 FAIL
Regression - Security 34 PASS | 1 FAIL | 2 BLOCKED | 1 OBSOLETE | 4 DEFERRED
Regression - Storage 23 PASS | 2 BLOCKED | 2 DEFERRED
Regression - Inventory 30 PASS
System Test 21 PASS | 2 FAIL | 6 BLOCKED | 1 OBSOLETE | 6 DEFERRED
Regression - new features 71 PASS | 5 FAIL
---------------------------------------------------------------------------
AUTOMATED EXECUTION - INTEL
---------------------------------------------------------------------------
Overall Results:
Pass = 197
Fail = 38
Total executed = 235
Pass Rate = 83.82%
Formula used: Pass Rate = pass * 100 / (pass + fail)
Results per Domain:
Fault-Management 15 PASS
Gnocchi 12 PASS
HEAT 6 PASS
High-Availability 9 PASS | 2 FAIL
Horizon 2 PASS
Installation-And-Config 8 PASS
Maintenance 26 PASS | 3 FAIL
Networking 45 PASS | 7 FAIL
Nova 17 PASS | 2 FAIL
Security 18 PASS | 5 FAIL
Storage 5 PASS | 11 FAIL SYSINVENTORY 26 PASS | 5 FAIL System 8 PASS |3 FAIL ---------------------------------------------------------------------- AUTOMATED EXECUTION - Wind River ---------------------------------------------------------------------- "Pending results" ---------------------------------------------------------------------- user does not login within configured time(60s) login is aborted https://bugs.launchpad.net/starlingx/+bug/1833469 After pull data cable on the compute, no alarm has triggered https://bugs.launchpad.net/starlingx/+bug/1834512 Containers: lock_host failed on a host with config_drive VM https://bugs.launchpad.net/starlingx/+bug/1821026 200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot https://bugs.launchpad.net/starlingx/+bug/1832047 stx-openstack apply takes longer time when lock and unlock on standby controller https://bugs.launchpad.net/starlingx/+bug/1834083 Port list was not showing for some computes during install https://bugs.launchpad.net/starlingx/+bug/1834245 neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute https://bugs.launchpad.net/starlingx/+bug/1835807 When creating instance with pci-passthrough port getting error https://bugs.launchpad.net/starlingx/+bug/1836682 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 application apply fails after compute lock and unlock https://bugs.launchpad.net/starlingx/+bug/1836609 CirrOS VM login takes too much time, and throw different log errors https://bugs.launchpad.net/starlingx/+bug/1835575 Live Migration Error: Failed to live migrate instance to host "AUTO_SCHEDULE". https://bugs.launchpad.net/starlingx/+bug/1837256 403 error in horizon log when try to update the flavor metadata (and admin user is logged out) https://bugs.launchpad.net/starlingx/+bug/1821213 Create Volume dialog opens (from image panel in Horizon) but getting error default volume type cannot be found https://bugs.launchpad.net/starlingx/+bug/1826259 instance creating via horizon failed https://bugs.launchpad.net/starlingx/+bug/1829925 Containers: openstack pods failed after force rebooting active controller https://bugs.launchpad.net/starlingx/+bug/1816842 After active controller reboot, VM boot up failed with error "Failed to allocate the network(s), not rescheduling" https://bugs.launchpad.net/starlingx/+bug/1836928 nova instance remnant left behind after cold migration completes https://bugs.launchpad.net/starlingx/+bug/1824858 disk_available_least value updates when instance moved but not to the value expected https://bugs.launchpad.net/nova/+bug/1834527 Containers: vm unreachable for minutes after live migration or vm reboot https://bugs.launchpad.net/starlingx/+bug/1818118 100.114 NTP alarm not cleared after swact https://bugs.launchpad.net/starlingx/+bug/1834071 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 AIO-DX Application apply aborted Unexpected process termination while application-apply was in progress https://bugs.launchpad.net/starlingx/+bug/1838101 Uncontrolled swact on standard system is slow https://bugs.launchpad.net/starlingx/+bug/1838411 tenant-mgmt-net not reachable from external network https://bugs.launchpad.net/starlingx/+bug/1836252 VM filesystem is not RW when attached the 2nd volume https://bugs.launchpad.net/starlingx/+bug/1838546 dedicated instance on low latency worker node not appearing in C1 state 
https://bugs.launchpad.net/starlingx/+bug/1838524 Intermittently the openstack server show indicates that the server does not exist (in live migration tests) https://bugs.launchpad.net/starlingx/+bug/1838676 Resize to swapless flavor still looking for swap https://bugs.launchpad.net/nova/+bug/1762423 SSH to VM failed by Permission denied (publickey) https://bugs.launchpad.net/starlingx/+bug/1824174 vSwitch 1G Hugepage available size cannot be changed https://bugs.launchpad.net/starlingx/+bug/1834530 Unable to start PTP4L services https://bugs.launchpad.net/starlingx/+bug/1839001 Validate re-purposing worker is getting degraded state https://bugs.launchpad.net/starlingx/+bug/1839018 hypervisor stays down after force lock and unlock due to pci-irq-affinity-agent process failure https://bugs.launchpad.net/starlingx/+bug/1839160 ----------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit?pli=1#gid=322455033 Regards! Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Tue Aug 6 22:18:07 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 18:18:07 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 8 - Still Failing! In-Reply-To: <1836569126.84.1565117570162.JavaMail.javamailuser@localhost> References: <1836569126.84.1565117570162.JavaMail.javamailuser@localhost> Message-ID: <489564088.91.1565129888887.JavaMail.javamailuser@localhost> Project: STX_BUILD_2.0 Build #: 8 Status: Still Failing Timestamp: 20190806T190628Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T190628Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true From maria.g.perez.ibarra at intel.com Tue Aug 6 22:33:08 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 6 Aug 2019 22:33:08 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190806 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-06 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs 
[PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Tue Aug 6 23:26:05 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 19:26:05 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 363 - Failure! Message-ID: <1117055304.95.1565133966547.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 363 Status: Failure Timestamp: 20190806T232320Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T230026Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190806T230026Z DOCKER_BUILD_ID: jenkins-rc-2.0-20190806T230026Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T230026Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190806T230026Z/logs MASTER_JOB_NAME: STX_BUILD_2.0 MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0 From build.starlingx at gmail.com Tue Aug 6 23:26:08 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 19:26:08 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 9 - Still Failing! In-Reply-To: <566006783.89.1565129886472.JavaMail.javamailuser@localhost> References: <566006783.89.1565129886472.JavaMail.javamailuser@localhost> Message-ID: <152274602.98.1565133969555.JavaMail.javamailuser@localhost> Project: STX_BUILD_2.0 Build #: 9 Status: Still Failing Timestamp: 20190806T230026Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T230026Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true From build.starlingx at gmail.com Tue Aug 6 23:26:34 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 19:26:34 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 46 - Failure! 
Message-ID: <439155105.101.1565133995202.JavaMail.javamailuser@localhost> Project: STX_build_lst_audit Build #: 46 Status: Failure Timestamp: 20190806T232329Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T230026Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190806T230026Z DOCKER_BUILD_ID: jenkins-rc-2.0-20190806T230026Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T230026Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190806T230026Z/logs MASTER_JOB_NAME: STX_BUILD_2.0 MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0 PUBLISH_DISTRO_BASE: /export/mirror/starlingx/rc/2.0/centos From ada.cabrales at intel.com Tue Aug 6 23:50:22 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 6 Aug 2019 23:50:22 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting notes - 08/06/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CE7570D@FMSMSX112.amr.corp.intel.com> Agenda for 08/06 Attendees: Elio, Cristopher, Fernando, Jose, JP, Maria P, JC, Ada, Richo, Numan, Yang, Yong 1. Sanity status - Cristopher * Has been green for the last days. There are some issues with duplex - under review (might be the automation). * Some OpenStack commands are taking their time for finishing - there's a bug open - https://bugs.launchpad.net/starlingx/+bug/1837686 Also happening in WR lab. * After lock/unlock compute hosts instances cannot be created due to DHCP - https://bugs.launchpad.net/starlingx/+bug/1836252 (High) * Neutron dhcp not coming up after lock unlock compute host - already being worked. Elio to check if this issue is the same we've been having related to namespaces. * Configuration out-of-date alarms on storage nodes since fresh install - https://bugs.launchpad.net/starlingx/+bug/1838652 * Two branches to test for the next weeks - master and RC1 Intel to run sanity daily on the two branches WR will run sanity daily on RC1 and rotate the configurations for master branch periodically * Release path .../starlingx/release/2.0.##/centos .../starlingx/release/2.0/2.0.##/centos * RC path .../starlingx/rc/2.0/centos/timestamp 2. Regression status * Total / Pass / Fail / Blocked / Obsolete / Deferred 493 / 403 / 24 / 15 / 23 / 18 10 not run - WIP Let's focus on testing bugs that are fixed. Intel - Automated regression - fixing scripts - pass rate 83.82% 197 pass - 38 fail WR will send report later. * Issues Intermittently the openstack server show indicates that the server does not exist (in live migration tests) - https://bugs.launchpad.net/starlingx/+bug/1838676 Pinging between instances is fixed using debian images We are not able to change storage backend - https://bugs.launchpad.net/starlingx/+bug/1837464 The way to verify the change is different than the one stated. This is not happening in WR lab - Elio to check 3. Feature testing * Mostly done, only latest additions are pending * Ironic and helm override Helm overrides is almost done - a question sent to Bob Church For ironic - checking resources for setting the ironic config. Expected to be finished on Aug 13 4. Next on * stx.2.0 (dates) Bug retest from Aug 5 - Aug 16 Final regression is on Aug 12 - Aug 23 This will be split in the same way as the regression testing - WR taking part of the domains and Intel the other part. 
Do a cross check offline of the test - include Yang * stx.3.0 Feature testing to be done on Oct 18 Regression Sep 23 - Nov 15 This is coming soon Ada and Numan to work on sharing the testing for this release. 5. Opens * Numan - Now that stx.2.0 is reaching its final phase, it's time to start working on the unified sanity Ada and Numan to work on overall strategy and then set a meeting with the team to work on it. Ada to set a meeting and send the invitation. * Elio - what happened with IPv6? WR team working on it right now. Getting good progress. Please share the results. Plan is to set it and do basic testing on it. From build.starlingx at gmail.com Tue Aug 6 23:54:06 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 19:54:06 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 364 - Still Failing! In-Reply-To: <2039560365.93.1565133964174.JavaMail.javamailuser@localhost> References: <2039560365.93.1565133964174.JavaMail.javamailuser@localhost> Message-ID: <537468341.104.1565135647483.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 364 Status: Still Failing Timestamp: 20190806T235239Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T233000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190806T233000Z DOCKER_BUILD_ID: jenkins-rc-2.0-20190806T233000Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T233000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190806T233000Z/logs MASTER_JOB_NAME: STX_BUILD_2.0 MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0 From build.starlingx at gmail.com Tue Aug 6 23:54:09 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 19:54:09 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 10 - Still Failing! In-Reply-To: <1435679283.96.1565133967268.JavaMail.javamailuser@localhost> References: <1435679283.96.1565133967268.JavaMail.javamailuser@localhost> Message-ID: <1129627746.107.1565135650612.JavaMail.javamailuser@localhost> Project: STX_BUILD_2.0 Build #: 10 Status: Still Failing Timestamp: 20190806T233000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T233000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From build.starlingx at gmail.com Tue Aug 6 23:54:28 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 6 Aug 2019 19:54:28 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 47 - Still Failing! 
In-Reply-To: <349804990.99.1565133992611.JavaMail.javamailuser@localhost>
References: <349804990.99.1565133992611.JavaMail.javamailuser@localhost>
Message-ID: <1860231292.110.1565135669693.JavaMail.javamailuser@localhost>

Project: STX_build_lst_audit
Build #: 47
Status: Still Failing
Timestamp: 20190806T235247Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T233000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190806T233000Z
DOCKER_BUILD_ID: jenkins-rc-2.0-20190806T233000Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190806T233000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190806T233000Z/logs
MASTER_JOB_NAME: STX_BUILD_2.0
MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/rc/2.0/centos

From chenjie.xu at intel.com Wed Aug 7 00:17:29 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Wed, 7 Aug 2019 00:17:29 +0000
Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs
In-Reply-To: <2D45BE88-FE4A-4B96-9934-48E24B0AAB7C@windriver.com>
References: <35423D55-043D-4F11-A2D7-86B8D4E4CCDB@windriver.com> <2D45BE88-FE4A-4B96-9934-48E24B0AAB7C@windriver.com>
Message-ID: 

Hi Matt,
Yes, I mean the port that does not report itself as a PF.
Best Regards,
Xu, Chenjie

From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Wednesday, August 7, 2019 12:55 AM
To: Xu, Chenjie ; Webster, Steven ; Kopec, Gerald (Gerry) 
Cc: Khalil, Ghada ; Zhao, Forrest ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

Hello Chenjie,
We typically configure the PCI-PT devices using the vnic-type option for manually created ports. This replaces the older mechanism of being able to specify the vif-type (which was a StarlingX-specific extension that was dropped).
For your question about the NIC type that does not support SR-IOV, do you mean a port that does not report itself as a PF (from a libvirt/nova perspective that would be a device with Type-PCI vs Type-PF)?
-Matt

From: "Xu, Chenjie" >
Date: Tuesday, August 6, 2019 at 11:14 AM
To: "Peters, Matt" >, "Webster, Steven" >, "Kopec, Gerald (Gerry)" >
Cc: Ghada Khalil >, "Zhao, Forrest" >, "starlingx-discuss at lists.starlingx.io" >
Subject: RE: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

Hi Matt,
I found another way to pass through an SR-IOV capable physical NIC to a VM. This new way doesn't require configuring a "PCI alias". The key point is to create a port whose vnic_type is direct-physical. The following link can be referenced:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/networking_guide/sr-iov-support-for-virtual-networking
However, if we try to pass through a physical NIC which doesn't support SR-IOV, we may still need to configure a "PCI alias". Because I don't have a physical NIC which doesn't support SR-IOV on my server, I can't test passing such a NIC to a VM by creating a port whose vnic_type is direct-physical. Do you think StarlingX needs to configure the "PCI alias" automatically for physical NICs which don't support SR-IOV?
Best Regards,
Xu, Chenjie

From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Thursday, August 1, 2019 6:34 PM
To: Xu, Chenjie >; Webster, Steven >; Kopec, Gerald (Gerry) >
Cc: Khalil, Ghada >; Zhao, Forrest >; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

+Steve +Gerry
Do you have any additional information to add here? I don't believe we had to set up an alias in the past to do PCI-PT, so is this something that is new to the latest OpenStack nova release? Did we drop some functionality to align with upstream nova (that used to be in starlingx-staging)?
-Matt

From: "Xu, Chenjie" >
Date: Thursday, August 1, 2019 at 3:36 AM
To: "Peters, Matt" >
Cc: Ghada Khalil >, "Zhao, Forrest" >, "starlingx-discuss at lists.starlingx.io" >
Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

Hi Matt,
Based on my testing, the flavor with property "pci_passthrough:alias" should be required for passing a physical NIC to the VM, but it should not be required for passing a VF to the VM. So I think the alias information should contain the physical NICs which are configured with "pci-passthrough" by the following commands:
system host-if-modify -m 1500 -n pcipass -c pci-passthrough ${COMPUTE} ${IFUUID}
system interface-datanetwork-assign ${COMPUTE} pcipass ${PHYSNET2}
Could you please let me know your opinion and leave a comment in the bug below:
https://bugs.launchpad.net/starlingx/+bug/1836682
Best Regards,
Xu, Chenjie
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From jimmy at openstack.org Wed Aug 7 01:01:16 2019
From: jimmy at openstack.org (Jimmy McArthur)
Date: Tue, 06 Aug 2019 20:01:16 -0500
Subject: [Starlingx-discuss] Shanghai Summit Schedule is Live!
Message-ID: <5D4A22DC.9090104@openstack.org>

Hi everyone,

The agenda for the Open Infrastructure Summit (formerly the OpenStack Summit) is now live! If you need a reason to join the Summit in Shanghai, November 4-6, here's what you can expect:

* Breakout sessions spanning 30+ open source projects from technical community leaders and organizations including ARM, WalmartLabs, China Mobile, China Railway, Shanghai Electric Power Company, China UnionPay, Haitong Securities Company, CERN, and more.
* Project updates and onboarding from OSF projects: Airship, Kata Containers, OpenStack, StarlingX, and Zuul.
* Join collaborative sessions at the Forum, where open infrastructure operators and upstream developers will gather to jointly chart the future of open source infrastructure, discussing topics ranging from upgrades to networking models and how to get started contributing.
* Get hands-on training around open source technologies directly from the developers and operators building the software.

Now what?
* Register before prices increase on August 14 at 11:59pm PT (August 15 at 2:59pm China Standard Time).
* Recruiting new talent? Pitching a new product? Enhance the visibility of your organization by sponsoring the Summit!

Questions? Reach out to summit at openstack.org

Cheers,
Jimmy
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From build.starlingx at gmail.com Wed Aug 7 01:56:29 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 6 Aug 2019 21:56:29 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 365 - Still Failing!
In-Reply-To: <1029576278.102.1565135644559.JavaMail.javamailuser@localhost>
References: <1029576278.102.1565135644559.JavaMail.javamailuser@localhost>
Message-ID: <529707249.113.1565142990003.JavaMail.javamailuser@localhost>

Project: STX_build_pre_installer
Build #: 365
Status: Still Failing
Timestamp: 20190807T015342Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T013000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190807T013000Z
DOCKER_BUILD_ID: jenkins-rc-2.0-20190807T013000Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T013000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190807T013000Z/logs
MASTER_JOB_NAME: STX_BUILD_2.0
MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0

From build.starlingx at gmail.com Wed Aug 7 01:56:35 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 6 Aug 2019 21:56:35 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 11 - Still Failing!
In-Reply-To: <188104583.105.1565135648300.JavaMail.javamailuser@localhost>
References: <188104583.105.1565135648300.JavaMail.javamailuser@localhost>
Message-ID: <774518580.116.1565142996517.JavaMail.javamailuser@localhost>

Project: STX_BUILD_2.0
Build #: 11
Status: Still Failing
Timestamp: 20190807T013000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From build.starlingx at gmail.com Wed Aug 7 01:56:55 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 6 Aug 2019 21:56:55 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 48 - Still Failing!
In-Reply-To: <1713436855.108.1565135667504.JavaMail.javamailuser@localhost>
References: <1713436855.108.1565135667504.JavaMail.javamailuser@localhost>
Message-ID: <158900648.119.1565143016776.JavaMail.javamailuser@localhost>

Project: STX_build_lst_audit
Build #: 48
Status: Still Failing
Timestamp: 20190807T015352Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T013000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190807T013000Z
DOCKER_BUILD_ID: jenkins-rc-2.0-20190807T013000Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T013000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190807T013000Z/logs
MASTER_JOB_NAME: STX_BUILD_2.0
MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/rc/2.0/centos

From Akshay.346 at hsc.com Wed Aug 7 05:01:00 2019
From: Akshay.346 at hsc.com (Akshay 346)
Date: Wed, 7 Aug 2019 05:01:00 +0000
Subject: [Starlingx-discuss] Query about adding a new OpenStack service to StarlingX
Message-ID: 

Hello Team,

I hope you are all doing well. I would like to ask whether I can install any other OpenStack service (like Zun, or any other service that is not installed) on StarlingX 18.10.
Please guide me if there is a way to add any additional OpenStack service to StarlingX 18.10 release.

Best Regards,
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From build.starlingx at gmail.com Wed Aug 7 05:58:04 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Wed, 7 Aug 2019 01:58:04 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 327 - Failure!
Message-ID: <370487921.124.1565157485234.JavaMail.javamailuser@localhost>

Project: STX_build_helm_charts
Build #: 327
Status: Failure
Timestamp: 20190807T055801Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T032530Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190807T032530Z
OS: centos
DOCKER_BUILD_ID: jenkins-rc-2.0-20190807T032530Z-builder
MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T032530Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190807T032530Z/logs
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/rc/2.0/centos

From build.starlingx at gmail.com Wed Aug 7 05:58:07 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Wed, 7 Aug 2019 01:58:07 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 12 - Still Failing!
In-Reply-To: <1338375180.114.1565142990951.JavaMail.javamailuser@localhost>
References: <1338375180.114.1565142990951.JavaMail.javamailuser@localhost>
Message-ID: <413965113.127.1565157488606.JavaMail.javamailuser@localhost>

Project: STX_BUILD_2.0
Build #: 12
Status: Still Failing
Timestamp: 20190807T032530Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T032530Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: true

From Matt.Peters at windriver.com Wed Aug 7 10:52:25 2019
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Wed, 7 Aug 2019 10:52:25 +0000
Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs
In-Reply-To: 
References: <35423D55-043D-4F11-A2D7-86B8D4E4CCDB@windriver.com> <2D45BE88-FE4A-4B96-9934-48E24B0AAB7C@windriver.com>
Message-ID: <8CB39FEB-D0AC-46B4-97B8-60CEA4E95E24@windriver.com>

Hi Chenjie,
The latest OpenStack release should support using the port vnic_type for Type-PCI devices. The only device I know of that doesn't support PF/VF is the i210, which is what I think was used in reporting the bug.
I think your approach of having them retest with this method is the correct thing to do, given you don't have access to the hardware.
-Matt

From: "Xu, Chenjie" 
Date: Tuesday, August 6, 2019 at 8:17 PM
To: "Peters, Matt" , "Webster, Steven" , "Kopec, Gerald (Gerry)" 
Cc: Ghada Khalil , "Zhao, Forrest" , "starlingx-discuss at lists.starlingx.io" 
Subject: RE: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

Hi Matt,
Yes, I mean the port that does not report itself as a PF.
Best Regards,
Xu, Chenjie

From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Wednesday, August 7, 2019 12:55 AM
To: Xu, Chenjie ; Webster, Steven ; Kopec, Gerald (Gerry) 
Cc: Khalil, Ghada ; Zhao, Forrest ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

Hello Chenjie,
We typically configure the PCI-PT devices using the vnic-type option for manually created ports. This replaces the older mechanism of being able to specify the vif-type (which was a StarlingX-specific extension that was dropped).
For your question about the NIC type that does not support SR-IOV, do you mean a port that does not report itself as a PF (from a libvirt/nova perspective that would be a device with Type-PCI vs Type-PF)?
-Matt

From: "Xu, Chenjie" >
Date: Tuesday, August 6, 2019 at 11:14 AM
To: "Peters, Matt" >, "Webster, Steven" >, "Kopec, Gerald (Gerry)" >
Cc: Ghada Khalil >, "Zhao, Forrest" >, "starlingx-discuss at lists.starlingx.io" >
Subject: RE: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

Hi Matt,
I found another way to pass through an SR-IOV capable physical NIC to a VM. This new way doesn't require configuring a "PCI alias". The key point is to create a port whose vnic_type is direct-physical. The following link can be referenced:
https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/networking_guide/sr-iov-support-for-virtual-networking
However, if we try to pass through a physical NIC which doesn't support SR-IOV, we may still need to configure a "PCI alias". Because I don't have a physical NIC which doesn't support SR-IOV on my server, I can't test passing such a NIC to a VM by creating a port whose vnic_type is direct-physical. Do you think StarlingX needs to configure the "PCI alias" automatically for physical NICs which don't support SR-IOV?
Best Regards,
Xu, Chenjie

From: Peters, Matt [mailto:Matt.Peters at windriver.com]
Sent: Thursday, August 1, 2019 6:34 PM
To: Xu, Chenjie >; Webster, Steven >; Kopec, Gerald (Gerry) >
Cc: Khalil, Ghada >; Zhao, Forrest >; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

+Steve +Gerry
Do you have any additional information to add here? I don't believe we had to set up an alias in the past to do PCI-PT, so is this something that is new to the latest OpenStack nova release? Did we drop some functionality to align with upstream nova (that used to be in starlingx-staging)?
-Matt

From: "Xu, Chenjie" >
Date: Thursday, August 1, 2019 at 3:36 AM
To: "Peters, Matt" >
Cc: Ghada Khalil >, "Zhao, Forrest" >, "starlingx-discuss at lists.starlingx.io" >
Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs

Hi Matt,
Based on my testing, the flavor with property "pci_passthrough:alias" should be required for passing a physical NIC to the VM, but it should not be required for passing a VF to the VM. So I think the alias information should contain the physical NICs which are configured with "pci-passthrough" by the following commands:
system host-if-modify -m 1500 -n pcipass -c pci-passthrough ${COMPUTE} ${IFUUID}
system interface-datanetwork-assign ${COMPUTE} pcipass ${PHYSNET2}
Could you please let me know your opinion and leave a comment in the bug below:
https://bugs.launchpad.net/starlingx/+bug/1836682
Best Regards,
Xu, Chenjie
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From build.starlingx at gmail.com Wed Aug 7 13:54:52 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Wed, 7 Aug 2019 09:54:52 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 329 - Failure!
Message-ID: <1047919891.131.1565186092969.JavaMail.javamailuser@localhost>

Project: STX_build_helm_charts
Build #: 329
Status: Failure
Timestamp: 20190807T135448Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T032530Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190807T032530Z
OS: centos
DOCKER_BUILD_ID: jenkins-rc-2.0-20190807T032530Z-builder
MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T032530Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190807T032530Z/logs
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/rc/2.0/centos

From sgw at linux.intel.com Wed Aug 7 16:21:45 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Wed, 7 Aug 2019 09:21:45 -0700
Subject: [Starlingx-discuss] [build][multi-os]Proposal for new starlingx repo - stx-zuul-jobs
In-Reply-To: <07d6c5af-dd52-96b4-aba9-12b04f56247f@linux.intel.com>
References: <07d6c5af-dd52-96b4-aba9-12b04f56247f@linux.intel.com>
Message-ID: <172e52eb-daa1-fb7e-8d3f-03f2993e2a97@linux.intel.com>

Bringing this back up: as there has not been a lot of discussion, I will work with Dean to create a new repo: zuul-jobs

We can figure out who the reviewers need to be, likely starting with Dean, Saul, Scott, Don and/or Al.

Sau!

On 8/2/19 11:06 AM, Saul Wold wrote:
>
> Folks,
>
> As I am doing the work for multi-os, I have created some Zuul ansible
> playbooks and some new roles to go along with that.  We have also
> enabled (or will enable) some additional zuul jobs that use various
> scripts. These are currently being tested in the starlingx/fault repo,
> but need to be in a more generic location.  In the past we have been
> putting scripts into stx-integ, which is not the most ideal location.
>
>
> I see that the "openstack" repo namespace has an openstack-zuul-jobs; I
> think this is because zuul itself has a zuul-jobs which might conflict
> in the zuul namespace (this is just a guess).
>
> StarlingX would need a stx-zuul-jobs and it would contain new StarlingX
> specific Zuul playbooks and associated roles, along with other
> tools/scripts to support these playbooks and other StarlingX Zuul jobs.
> For example, the recently added spec-tools in stx-integ could be moved to
> this new repo.
>
> I have heard that the Infra team might be looking at some restructuring
> so that might play a role in the direction or creation of the repo.
> Maybe somebody from infra will pipe in here, or I will ping later.
>
> Thoughts?
>
> Thanks
>    Sau!
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From sgw at linux.intel.com Wed Aug 7 17:08:56 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Wed, 7 Aug 2019 10:08:56 -0700
Subject: [Starlingx-discuss] Help understanding an application-apply failure
Message-ID: <929eaa66-f823-0708-d98d-9a31350a37db@linux.intel.com>

Folks,

I am getting a failure during application-apply on a new bare metal install. I installed based on the image built over the weekend. It appears to be a timeout (see the attached log).

Thoughts or suggestions on how to debug this further, and whether a bug is needed?

Thanks
   Sau!
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller [-] [chart=openstack-libvirt]: Error while installing release osh-openstack-libvirt: grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
>   status = StatusCode.UNKNOWN
>   details = "release osh-openstack-libvirt failed: timed out waiting for the condition"
>   debug_error_string = "{"created":"@1565150558.052442767","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release osh-openstack-libvirt failed: timed out waiting for the condition","grpc_status":2}"
> >
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller Traceback (most recent call last):
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 465, in install_release
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller     metadata=self.metadata)
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller   File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller     return _end_unary_response_blocking(state, call, False, None)
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller   File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller     raise _Rendezvous(state, None, None, deadline)
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller   status = StatusCode.UNKNOWN
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller   details = "release osh-openstack-libvirt failed: timed out waiting for the condition"
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller   debug_error_string = "{"created":"@1565150558.052442767","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release osh-openstack-libvirt failed: timed out waiting for the condition","grpc_status":2}"
> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller >
> 2019-08-07 04:02:38.053 85578 DEBUG armada.handlers.tiller [-] [chart=openstack-libvirt]: Helm getting release status for release=osh-openstack-libvirt, version=0 get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:531
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.wait [-] [chart=openstack-openvswitch]: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-openvswitch)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`?
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada [-] Chart deploy [openstack-openvswitch] failed: armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-openvswitch)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`?
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada Traceback (most recent call last):
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 223, in handle_result
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     result = get_result()
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     return self.__get_result()
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     raise self._exception
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     result = self.fn(*self.args, **self.kwargs)
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 212, in deploy_chart
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     prefix, known_releases)
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/chart_deploy.py", line 242, in execute
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     chart_wait.wait(timer)
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 130, in wait
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     wait.wait(timeout=timeout)
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 292, in wait
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     modified = self._wait(deadline)
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 349, in _wait
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     raise k8s_exceptions.KubernetesWatchTimeoutException(error)
> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-openvswitch)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`?
> 2019-08-07 04:02:38.162 85578 DEBUG armada.handlers.tiller [-] [chart=openstack-libvirt]: GetReleaseStatus= name: "osh-openstack-libvirt"
> info {
>   status {
>     code: FAILED
>   }
>   first_deployed {
>     seconds: 1565148757
>     nanos: 968267140
>   }
>   last_deployed {
>     seconds: 1565148757
>     nanos: 968267140
>   }
>   Description: "Release \"osh-openstack-libvirt\" failed: timed out waiting for the condition"
> }
> namespace: "openstack"
> get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:539

[Attachment: stx-openstack-apply.log, 118413 bytes]
From Joseph.Richard at windriver.com Wed Aug 7 18:25:49 2019
From: Joseph.Richard at windriver.com (Richard, Joseph)
Date: Wed, 7 Aug 2019 18:25:49 +0000
Subject: Re: [Starlingx-discuss] Help understanding an application-apply failure
In-Reply-To: <929eaa66-f823-0708-d98d-9a31350a37db@linux.intel.com>
Message-ID: <7B6A2AE64F40F245AE81F245059F0C3672F15AA8 at ALA-MBD.corp.ad.wrs.com>

The issue is with openvswitch not coming up, so the helm charts that depend on it (neutron, nova, libvirt) can't come up.

If you run `kubectl -n openstack get pods`, do you see the openvswitch (not neutron-ovs-agent) pod? What state is it in?

Did you remove and then reapply the application?
If you run `helm list`, do you see osh-openstack-openvswitch?
If you run `helm delete osh-openstack-openvswitch --purge` and then reapply the stx-openstack application, does it come up?
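Putting those suggestions together, the debugging sequence looks roughly like this (a sketch; it assumes the stx-openstack application and the osh-openstack-openvswitch release name used in this thread):

    kubectl -n openstack get pods | grep openvswitch   # is the openvswitch pod there, and in what state?
    helm list | grep osh-openstack-openvswitch         # was the chart released at all?
    helm delete osh-openstack-openvswitch --purge      # clear the failed release
    system application-apply stx-openstack             # reapply and watch the progress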
peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release osh-openstack-libvirt failed: timed out waiting for the condition","grpc_status":2}" > 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller > > 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller ^[[00m > 2019-08-07 04:02:38.053 85578 DEBUG armada.handlers.tiller [-] > [chart=openstack-libvirt]: Helm getting release status for > release=osh-openstack-libvirt, version=0 get_release_status > /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:531^[ > [00m > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.wait [-] > [chart=openstack-openvswitch]: Timed out waiting for pods > (namespace=openstack, > labels=(release_group=osh-openstack-openvswitch)). None found! Are > `wait.labels` correct? Does `wait.resources` need to exclude `type: > pod`?^[[00m > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada [-] Chart deploy [openstack-openvswitch] failed: armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-openvswitch)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`? > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada Traceback (most recent call last): > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 223, in handle_result > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada result = get_result() > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada return self.__get_result() > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada raise self._exception > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada result = self.fn(*self.args, **self.kwargs) > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 212, in deploy_chart > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada prefix, known_releases) > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/chart_deploy.py", line 242, in execute > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada chart_wait.wait(timer) > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 130, in wait > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada wait.wait(timeout=timeout) > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 292, in wait > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada modified = self._wait(deadline) > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 349, in _wait > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada raise k8s_exceptions.KubernetesWatchTimeoutException(error) > 2019-08-07 04:02:38.129 85578 ERROR 
armada.handlers.armada armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-openvswitch)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`? > 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada ^[[00m > 2019-08-07 04:02:38.162 85578 DEBUG armada.handlers.tiller [-] [chart=openstack-libvirt]: GetReleaseStatus= name: "osh-openstack-libvirt" > info { > status { > code: FAILED > } > first_deployed { > seconds: 1565148757 > nanos: 968267140 > } > last_deployed { > seconds: 1565148757 > nanos: 968267140 > } > Description: "Release \"osh-openstack-libvirt\" failed: timed out waiting for the condition" > } > namespace: "openstack" > get_release_status > /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:539^[ > [00m From scott.little at windriver.com Wed Aug 7 19:13:26 2019 From: scott.little at windriver.com (Scott Little) Date: Wed, 7 Aug 2019 15:13:26 -0400 Subject: [Starlingx-discuss] [Release] [Build] StarlingX release 2.0: We have a load! Message-ID: <8753fbb0-345f-46b8-4a2e-ee7eb6b0062e@windriver.com> Yahoo! The first successful build of STX 2.0 RC1 is available. http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T135731Z/ Scott From david.a.cobbley at intel.com Wed Aug 7 19:24:49 2019 From: david.a.cobbley at intel.com (Cobbley, David A) Date: Wed, 7 Aug 2019 19:24:49 +0000 Subject: [Starlingx-discuss] [Release] [Build] StarlingX release 2.0: We have a load! In-Reply-To: <8753fbb0-345f-46b8-4a2e-ee7eb6b0062e@windriver.com> References: <8753fbb0-345f-46b8-4a2e-ee7eb6b0062e@windriver.com> Message-ID: Congrats, great work by all! --David C -----Original Message----- From: Scott Little [mailto:scott.little at windriver.com] Sent: Wednesday, August 7, 2019 12:13 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Release] [Build] StarlingX release 2.0: We have a load! Yahoo! The first successful build of STX 2.0 RC1 is available. http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T135731Z/ Scott _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From michael.l.tullis at intel.com Wed Aug 7 20:27:13 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 7 Aug 2019 20:27:13 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 8/7/2019 Message-ID: <3808363B39586544A6839C76CF81445EA1BA0AF0@ORSMSX104.amr.corp.intel.com> For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings. -- Mike -------------- next part -------------- An HTML attachment was scrubbed... 
From sgw at linux.intel.com Wed Aug 7 21:28:51 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Wed, 7 Aug 2019 14:28:51 -0700
Subject: Re: [Starlingx-discuss] Help understanding an application-apply failure
In-Reply-To: <7B6A2AE64F40F245AE81F245059F0C3672F15AA8 at ALA-MBD.corp.ad.wrs.com>
Message-ID: <0d5394dd-a8a5-410b-8e97-994ac23f8dc8@linux.intel.com>

On 8/7/19 11:25 AM, Richard, Joseph wrote:
> The issue is with openvswitch not coming up, so the helm charts that depend on it (neutron, nova, libvirt) can't come up.
>
> If you run `kubectl -n openstack get pods`, do you see the openvswitch (not neutron-ovs-agent) pod? What state is it in?
>
I do not see openvswitch at all in the kubectl output.

> Did you remove and then reapply the application?
> If you run `helm list`, do you see osh-openstack-openvswitch?

osh-openstack-openvswitch    1    Tue Aug  6 23:30:15 2019    DEPLOYED    openvswitch-0.1.0    openstack

> If you run `helm delete osh-openstack-openvswitch --purge` and then reapply the stx-openstack application, does it come up?
>
I tried this just now, and it again stopped at 58%. It did restart the osh-openstack-openvswitch helm chart, but when I checked with kubectl there was still no openvswitch pod.

More thoughts, suggestions?

Sau!
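One more place worth checking is the openvswitch DaemonSet itself, since a release can show DEPLOYED while no pods get scheduled. This is a rough sketch; the exact DaemonSet names come from the upstream openstack-helm openvswitch chart, so treat them as an assumption:

    kubectl -n openstack get daemonsets | grep openvswitch
    kubectl -n openstack describe daemonset <openvswitch-daemonset-name>

If DESIRED is 0 for the openvswitch DaemonSets, the node selector is not matching any node, which points at a missing node label rather than at the chart itself.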
From maria.g.perez.ibarra at intel.com Wed Aug 7 21:30:26 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Wed, 7 Aug 2019 21:30:26 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190807
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-07 (link)

Status: GREEN

===========================================

Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================

Sanity Test is executed in a Containers - Virtual Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

===========================================

Regards
Maria G.
From Brent.Rowsell at windriver.com Wed Aug 7 22:25:41 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Wed, 7 Aug 2019 22:25:41 +0000
Subject: Re: [Starlingx-discuss] Help understanding an application-apply failure
In-Reply-To: <0d5394dd-a8a5-410b-8e97-994ac23f8dc8@linux.intel.com>
Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC260F4B3 at ALA-MBD.corp.ad.wrs.com>

Please run the following and post the output:

kubectl get nodes --show-labels
source /etc/platform/openrc
system show
From sgw at linux.intel.com Wed Aug 7 22:43:20 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Wed, 7 Aug 2019 15:43:20 -0700
Subject: Re: [Starlingx-discuss] Help understanding an application-apply failure
In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EC260F4B3 at ALA-MBD.corp.ad.wrs.com>
Message-ID: <3286a78c-b0c9-e2d6-579c-3d1ca93d6b70@linux.intel.com>

On 8/7/19 3:25 PM, Rowsell, Brent wrote:
> kubectl get nodes --show-labels
> source /etc/platform/openrc
> system show
>

controller-0:~$ kubectl get nodes --show-labels
NAME           STATUS   ROLES    AGE   VERSION   LABELS
controller-0   Ready    master   33h   v1.13.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=controller-0,node-role.kubernetes.io/master=,openstack-compute-node=enabled,openstack-control-plane=enabled,sriov=enabled

controller-0:~$ source /etc/platform/openrc
[sysadmin at controller-0 ~(keystone_admin)]$ system show
+----------------------+--------------------------------------+
| Property             | Value                                |
+----------------------+--------------------------------------+
| contact              | None                                 |
| created_at           | 2019-08-06T13:16:31.064956+00:00     |
| description          | None                                 |
| https_enabled        | False                                |
| location             | None                                 |
| name                 | bd080471-05ba-444e-a43a-f571916f9736 |
| region_name          | RegionOne                            |
| sdn_enabled          | False                                |
| security_feature     | spectre_meltdown_v1                  |
| service_project_name | services                             |
| software_version     | 19.01                                |
| system_mode          | simplex                              |
| system_type          | All-in-one                           |
| timezone             | UTC                                  |
| updated_at           | 2019-08-06T13:17:30.333232+00:00     |
| uuid                 | 8a10d157-eb91-4a14-9661-42f449d3d6da |
| vswitch_type         | none                                 |
+----------------------+--------------------------------------+
[sysadmin at controller-0 ~(keystone_admin)]$

Sau!
> > osh-openstack-openvswitch 1 Tue Aug 6 23:30:15 2019 > DEPLOYED openvswitch-0.1.0 openstack > >> If you run `helm delete osh-openstack-openvswitch --purge` and then reapply the stx-openstack application, does it come up? >> > I tried this just now and it's again stopped at 58%, it did restart the osh-openstack-openvswitch helm chart, but when I tried kubectl, there was still no openvswitch. > > More thoughts, suggestions? > > Sau! > >> -----Original Message----- >> From: Saul Wold [mailto:sgw at linux.intel.com] >> Sent: Wednesday, August 7, 2019 1:09 PM >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] Help understanding an application-appy >> failure >> >> >> Folks, >> >> I am getting a failure during application-apply on a new Baremetal install. I installed based on the image built over the weekend. >> >> It appears to be a timeout (see the attached log). >> >> Thoughts, suggestions to debug this further and if a bug is needed >> >> Thanks >> Sau! >> >> >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller [-] [chart=openstack-libvirt]: Error while installing release osh-openstack-libvirt: grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: >>> status = StatusCode.UNKNOWN >>> details = "release osh-openstack-libvirt failed: timed out waiting for the condition" >>> debug_error_string = "{"created":"@1565150558.052442767","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release osh-openstack-libvirt failed: timed out waiting for the condition","grpc_status":2}" >>>> >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller Traceback (most recent call last): >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 465, in install_release >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller metadata=self.metadata) >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__ >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller return _end_unary_response_blocking(state, call, False, None) >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller raise _Rendezvous(state, None, None, deadline) >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with: >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller status = StatusCode.UNKNOWN >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller details = "release osh-openstack-libvirt failed: timed out waiting for the condition" >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller debug_error_string = "{"created":"@1565150558.052442767","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release osh-openstack-libvirt failed: timed out waiting for the condition","grpc_status":2}" >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller > >>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller ^[[00m >>> 2019-08-07 04:02:38.053 85578 DEBUG armada.handlers.tiller [-] >>> [chart=openstack-libvirt]: Helm getting release status for >>> release=osh-openstack-libvirt, version=0 get_release_status >>> 
/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:531^ >>> [ >>> [00m >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.wait [-] >>> [chart=openstack-openvswitch]: Timed out waiting for pods >>> (namespace=openstack, >>> labels=(release_group=osh-openstack-openvswitch)). None found! Are >>> `wait.labels` correct? Does `wait.resources` need to exclude `type: >>> pod`?^[[00m >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada [-] Chart deploy [openstack-openvswitch] failed: armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-openvswitch)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`? >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada Traceback (most recent call last): >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 223, in handle_result >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada result = get_result() >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada return self.__get_result() >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada raise self._exception >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada result = self.fn(*self.args, **self.kwargs) >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 212, in deploy_chart >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada prefix, known_releases) >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/chart_deploy.py", line 242, in execute >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada chart_wait.wait(timer) >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 130, in wait >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada wait.wait(timeout=timeout) >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 292, in wait >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada modified = self._wait(deadline) >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 349, in _wait >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada raise k8s_exceptions.KubernetesWatchTimeoutException(error) >>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-openvswitch)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`? 
From Brent.Rowsell at windriver.com Wed Aug 7 22:51:24 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Wed, 7 Aug 2019 22:51:24 +0000
Subject: Re: [Starlingx-discuss] Help understanding an application-apply failure
In-Reply-To: <3286a78c-b0c9-e2d6-579c-3d1ca93d6b70@linux.intel.com>
Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC260F5A2 at ALA-MBD.corp.ad.wrs.com>

The node is missing the openvswitch label. You need to add openvswitch=enabled:

system host-lock controller-0
system host-label-assign controller-0 openvswitch=enabled
system host-unlock controller-0

Brent
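After the unlock, a quick way to confirm the fix took effect (a sketch; the label and release names are the ones already seen in this thread):

    kubectl get nodes --show-labels | grep openvswitch
    system application-apply stx-openstack
    kubectl -n openstack get pods -l release_group=osh-openstack-openvswitch

Once the label is present, the openvswitch pods should be scheduled on controller-0 and the armada wait on release_group=osh-openstack-openvswitch should stop timing out.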
From cindy.xie at intel.com Wed Aug 7 23:15:07 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 7 Aug 2019 23:15:07 +0000
Subject: Re: [Starlingx-discuss] [Release] [Build] StarlingX release 2.0: We have a load!
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3600A537 at SHSMSX104.ccr.corp.intel.com>

Great to see us making such steady progress toward a successful RC1 branch! Cheers to all!

Thx. - cindy
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T135731Z/ Scott _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Wed Aug 7 23:19:37 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 7 Aug 2019 23:19:37 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190807 In-Reply-To: References: Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3600A563@SHSMSX104.ccr.corp.intel.com> Maria, As we are going to run daily sanity on both master and RC1, can we add a "Master" or "RC1" notation in your report title so that people can tell where it comes from without clicking the ISO link? Thanks. - cindy From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, August 8, 2019 5:30 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190807 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-07 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Thu Aug 8 00:12:28 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Thu, 8 Aug 2019 00:12:28 +0000 Subject: [Starlingx-discuss] [Release] [Build] StarlingX release 2.0: We have a load! In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F3600A537@SHSMSX104.ccr.corp.intel.com> References: <8753fbb0-345f-46b8-4a2e-ee7eb6b0062e@windriver.com> <2FD5DDB5A04D264C80D42CA35194914F3600A537@SHSMSX104.ccr.corp.intel.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AC99C0@ALA-MBD.corp.ad.wrs.com> Good stuff Scott, thanks!
@Ada - looking forward to a green sanity tomorrow : ) -----Original Message----- From: Xie, Cindy Sent: Wednesday, August 7, 2019 7:15 PM To: Cobbley, David A ; Little, Scott ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Release] [Build] StarlingX release 2.0: We have a load! Great to see that we are so steady in making the RC1 branch successful! Cheers to all! Thx. - cindy -----Original Message----- From: Cobbley, David A [mailto:david.a.cobbley at intel.com] Sent: Thursday, August 8, 2019 3:25 AM To: Scott Little ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Release] [Build] StarlingX release 2.0: We have a load! Congrats, great work by all! --David C -----Original Message----- From: Scott Little [mailto:scott.little at windriver.com] Sent: Wednesday, August 7, 2019 12:13 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Release] [Build] StarlingX release 2.0: We have a load! Yahoo! The first successful build of STX 2.0 RC1 is available. http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T135731Z/ Scott _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Thu Aug 8 00:12:49 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Thu, 8 Aug 2019 00:12:49 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190807 In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F3600A563@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F3600A563@SHSMSX104.ccr.corp.intel.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AC99D1@ALA-MBD.corp.ad.wrs.com> +1 From: Xie, Cindy Sent: Wednesday, August 7, 2019 7:20 PM To: Perez Ibarra, Maria G ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190807 Maria, As we are going to run daily sanity on both master and RC1, can we add a "Master" or "RC1" notation in your report title so that people can tell where it comes from without clicking the ISO link? Thanks.
- cindy From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, August 8, 2019 5:30 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190807 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-07 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From glenn.seiler at windriver.com Thu Aug 8 00:20:07 2019 From: glenn.seiler at windriver.com (Seiler, Glenn) Date: Thu, 8 Aug 2019 00:20:07 +0000 Subject: [Starlingx-discuss] [Release] [Build] StarlingX release 2.0: We have a load! In-Reply-To: <8753fbb0-345f-46b8-4a2e-ee7eb6b0062e@windriver.com> References: <8753fbb0-345f-46b8-4a2e-ee7eb6b0062e@windriver.com> Message-ID: <26B81D12-AF24-434C-8AA7-9AD1FD6B0D2A@windriver.com> Excellent. That is great news. Great effort by everyone. Sent from my iPhone > On Aug 7, 2019, at 12:19 PM, Scott Little wrote: > > Yahoo! > > The first successful build of STX 2.0 RC1 is available. > > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T135731Z/ > > Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Thu Aug 8 01:23:51 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Thu, 8 Aug 2019 01:23:51 +0000 Subject: [Starlingx-discuss] Incomplete Launchpads Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AC9A97@ALA-MBD.corp.ad.wrs.com> Lately, I've heard the comment "there are too many 'Incomplete' launchpads" more than a few times. Looking at the current list [0] of 'incomplete' bugs that are gating for stx.2.0, they're not all incomplete in quite the same way, but are all 'stuck' somehow. 
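For anyone who would rather pull the same list from a script than from the web UI, the Launchpad REST API can run an equivalent query. A rough sketch only - the searchTasks operation and its parameter names are my recollection of the Launchpad web service API, so treat them as assumptions to verify before relying on the output:

    # same query as [0], via the Launchpad REST API:
    # Incomplete bugs on the starlingx project tagged stx.2.0
    curl -s "https://api.launchpad.net/1.0/starlingx?ws.op=searchTasks&status=Incomplete&tags=stx.2.0" \
        | python -m json.tool | grep '"title"'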
The people who are trying to resolve these Launchpads will continue to try to get the info they need, but the reporters also need to be aware & take *prompt* action when a Launchpad they've opened is stuck at Incomplete. Comments welcome - on the general topic of incomplete launchpads, or on any of these ones in particular (if you want to suggest how to unstick it). The current list is here too... 1830108 Unable to connect to router after host reboot 1836378 stx-openstack application stuck at applying status by processing chart: osh-kube-system-ingress 1836974 radosgw coredump files generated after unexpected host swact 1838472 Unlocking a compute host leads to failure loop 1826886 cinder cmd not working intermittently 1827063 Volume was observed to go into error state on creation 1828300 System shouldn't allow to add host when controller is locked using GUI 1829288 Both controllers went for second reboot after DOR 1830421 AIO-DX Host controller compute services failed to get openstack token from keystone after reboot 1830736 Ceph osd process was not recovered after lock and unlock on storage node with journal disk 1837242 OSD failure on storage node never recovered 1837243 Volume cmd failed by Service Unavailable [0] https://bugs.launchpad.net/starlingx/+bugs?search=Search&field.status=Incomplete&field.tag=stx.2.0 From cristopher.j.lemus.contreras at intel.com Thu Aug 8 15:48:05 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Thu, 8 Aug 2019 15:48:05 +0000 Subject: [Starlingx-discuss] Helm Charts Message-ID: Hello, Quick question, were helm-charts moved to a new location? Today, the sanity execution failed because the folders, where they are usually located, are empty: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190808T013000Z/outputs/helm-charts/ http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190808T053000Z/outputs/helm-charts/ Thanks in advance, Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Thu Aug 8 16:18:07 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Thu, 8 Aug 2019 16:18:07 +0000 Subject: [Starlingx-discuss] Community Call (August 7, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AC9FF5@ALA-MBD.corp.ad.wrs.com> Notes from yesterday's meeting... - sanity update - any reds? - all green since last meeting - reviews in need of attention - nothing this week - RC1 declaration - the build is in progress - ISO's done, working on the Docker images & Helm charts - hopefully will be done by EOD - branch logistics, etc. 
- prep email: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005532.html - Dean's "branch ready" email: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005569.html - feature/regression test update - updates on actions from last week's release team meeting - feature test: ironic & helm overrides testing - forecast Aug 13 - Ada: in progress, 1 left for Helm, doing the setup for Ironic - fcast Aug 13 - regression test: update on the 10 that haven't been run - Ada: 7 are for Nova, 1 for install/config, 2 for new features (containers) - in Numan's shop - Numan: should be able to finish by EOW - Ada: final regression will start next week - fcast to complete by Aug 23 - Nova CVE: http://lists.openstack.org/pipermail/openstack-discuss/2019-August/008307.html - Dean: this fix may warrant us to pick up another Nova update - Yong to ask the Security team for a decision on whether this needs to be picked up or not - pending that, the Distro OpenStack team will consider when's best to pick this (and potentially other stuff) up - high/medium launchpads - https://docs.google.com/spreadsheets/d/1DZZgqrCIL6wxv51_yFBk6Lfmtf1AqPD6z7e5hEs3prU/edit#gid=300550657 - opens - ACTION: Numan/Yang arrange an automation framework info session for the Community - Numan: it's in progress, plan to do it after 2.0 (or right near the end) - any Q&A before then will be appreciated - Ada noted that they've started to look at it a bit - ACTION: Ada/Numan - combining sanity runs/reports - Numan: will discuss w/ Ada this week, won't start until after 2.0 - Ada: Ada's team will do both, Numan will do RC1 and occasionally Master - it'll be run daily on both, for now anyways - if there are issues running on the virtualized environment, which branch should we focus on? - Bill will spend time cleaning up the action list & organizing it better, like the docs team wiki https://etherpad.openstack.org/p/stx-documentation -----Original Message----- From: Zvonar, Bill Sent: Tuesday, August 6, 2019 7:44 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Community Call (August 7, 2019) Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include... - sanity update - any reds? - reviews in need of attention - RC1 declaration - branch logistics, etc. - high/medium launchpads - docs update - opens Please feel free to add topics on the etherpad [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190807T1400 From scott.little at windriver.com Thu Aug 8 18:08:15 2019 From: scott.little at windriver.com (Scott Little) Date: Thu, 8 Aug 2019 14:08:15 -0400 Subject: [Starlingx-discuss] Helm Charts In-Reply-To: References: Message-ID: <1e319b81-5f0b-5904-e0f0-a590980e1332@windriver.com> There appears to be a CENGN bug when the docker images have not been rebuilt as part of the overall build. I'll fix it. Helm charts were built for the 20190807T135731Z build. http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T135731Z aka http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/latest_docker_image_build Scott On 2019-08-08 11:48 a.m., Lemus Contreras, Cristopher J wrote: > > Hello, > > Quick question, were helm-charts moved to a new location?
Today, the > sanity execution failed because the folders, where they are usually > located, are empty: > > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190808T013000Z/outputs/helm-charts/ > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190808T053000Z/outputs/helm-charts/ > > Thanks in advance, > > Cristopher Lemus > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Thu Aug 8 18:28:56 2019 From: scott.little at windriver.com (Scott Little) Date: Thu, 8 Aug 2019 14:28:56 -0400 Subject: [Starlingx-discuss] Helm Charts In-Reply-To: <1e319b81-5f0b-5904-e0f0-a590980e1332@windriver.com> References: <1e319b81-5f0b-5904-e0f0-a590980e1332@windriver.com> Message-ID: The helm charts for the 20190808T013000Z build are now available. Scott On 2019-08-08 2:08 p.m., Scott Little wrote: > There appears to be a CENGN bug when the docker images have not been > rebuilt as part of the overall build. I'll fix it. > > Helm charts were built for the 20190807T135731Z build. > > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T135731Z > > aka > > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/latest_docker_image_build > > Scott > > On 2019-08-08 11:48 a.m., Lemus Contreras, Cristopher J wrote: >> >> Hello, >> >> Quick question, were helm-charts moved to a new location? Today, the >> sanity execution failed because the folders, where they are usually >> located, are empty: >> >> http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190808T013000Z/outputs/helm-charts/ >> >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190808T053000Z/outputs/helm-charts/ >> >> Thanks in advance, >> >> Cristopher Lemus >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Thu Aug 8 18:42:12 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Thu, 8 Aug 2019 18:42:12 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - August 8/2019 Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ACA359@ALA-MBD.corp.ad.wrs.com> Notes from today's release team meeting below and at [0] - we'll announce RC1 once we see the green sanity on the release branch. Bill...
- RC1 Milestone - based on the status of feature/regression test, we'll declare the RC1 milestone, will do so once we see the green sanity on the release branch - Release Branch / Sanity - sanity is held up waiting on the resolution of the helm chart issue raised by Cristopher (http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005626.html) - Scott's looking into it now, and will notify Ada/Cristopher asap - if this is done soon enough, Cristopher should be able to do sanity on bare metal today - this is the build: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T135731Z/ - Scott came back & said that the helm charts are available, will send an email - Cherry Picking into the RC1 Branch - Frank raised the question of how we make sure that all commits to master that should be cherry-picked to the release branch have in fact been cherry-picked - Saul noted that there's a 'cherry-pick' button in Gerrit, but hasn't used it - Don confirmed that this does the job of cherry-picking, and updates the Launchpad - ACTION: Bill to sort out what the query is to check if all high/medium importance LPs on master have also been cherry-picked back to the release branch - ACTION: Bill send an email to the Community/Cores that commits have to go to Master first, and be cherry-picked to the release branch (ask Cores to +2 only if these steps were done) - Feature Test status - Ironic & Helm Overrides testing - forecast Aug 13 - Ada: in progress - 1 left for Helm Overrides, pending on some input from Bob Church - 7 left for Ironic - still working on the setup - fcast Aug 13 - Ada will ask Jose if he's having any particular issues that someone could help with - Brent noted that Mingyuan did the feature - Ada later noted that Jose does have info from Mingyuan and is working on setup now - Regression Test status - Ada: all have been run now, except for the 13 that are blocked - 3 SRIOV: blocked pending some instructions on how to test - probably will get these from Chenjie - the testcases are simple, so should be quick to execute once they have that information - 2 Security: working through IPv6 setup issues - no firm plan currently, but IPv6 is more of an aspirational thing for 2.0 (read: this does not block RC1) - 2 Storage: actually unblocked now, just need to work through them, plan to finish today/tomorrow - 6 System (2+2+2): the switch/other issues have been resolved now, plan to finish by next week - 3.0 Content in Master - Saul asked when 3.0 content can start going into Master - we agreed last week to hold off until we have a green sanity on the release branch - should be very soon now [0] https://etherpad.openstack.org/p/stx-releases From maria.g.perez.ibarra at intel.com Thu Aug 8 22:37:31 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 8 Aug 2019 22:37:31 +0000 Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 8/08/19 Message-ID: StarlingX 2.0 Release Status: ISO: BUILD_ID="20190802T013000Z" from (link) ---------------------------------------------------------------------- MANUAL EXECUTION ---------------------------------------------------------------------- Overall Results: Total = 493 Pass = 410 Fail = 25 Blocked = 14 Not Run = 0 Obsolete = 26 Deferred = 18 Total executed = 449 Pass Rate = 94.25% Formula used: Pass Rate = pass * 100 / (pass + fail) Results per Domain: Regression - AIO-SX 25 PASS | 1 OBSOLETE Regression - Backup & Restore 6 Deferred Regression - Distributed Cloud Regression - Gnocchi
15 PASS Regression - FM 3 PASS Regression - HA 11 PASS | 1 FAIL Regression - Heat 12 PASS | 1 OBSOLETE Regression - Horizon 4 PASS Regression - Install and Config 7 PASS | 1 FAIL Regression - Maintenance 8 PASS | 1 FAIL Regression - Networking 121 PASS | 1 FAIL | 3 BLOCKED | 19 OBSOLETE Regression - Nova 24 PASS | 13 FAIL | 3 OBSOLETE Regression - Security 34 PASS | 1 FAIL | 2 BLOCKED | 1 OBSOLETE | 4 Deferred Regression - Storage 23 PASS | 2 BLOCKED | 2 Deferred Regression - Inventory 30 PASS System Test 20 PASS | 2 FAIL | 7 BLOCKED | 1 OBSOLETE | 6 Deferred Regression - new features 73 PASS | 5 FAIL --------------------------------------------------------------------------- AUTOMATED EXECUTION - INTEL --------------------------------------------------------------------------- Overall Results: Pass = 197 Fail = 38 Total executed = 235 Pass Rate = 83.82% Formula used: Pass Rate = pass * 100 / (pass + fail) Results per Domain: Fault-Management 15 PASS Gnocchi 12 PASS HEAT 6 PASS High-Availability 9 PASS | 2 FAIL Horizon 2 PASS Installation-And-Config 8 PASS Maintenance 26 PASS | 3 FAIL Networking 45 PASS | 7 FAIL Nova 17 PASS | 2 FAIL Security 18 PASS | 5 FAIL Storage 5 PASS | 11 FAIL SYSINVENTORY 26 PASS | 5 FAIL System 8 PASS | 3 FAIL ---------------------------------------------------------------------- AUTOMATED EXECUTION - Wind River ---------------------------------------------------------------------- Overall Results: Pass = 644 Fail = 123 Total executed = 767 Pass Rate = 84.0% Formula used: Pass Rate = pass * 100 / (pass + fail) Results per Domain: Horizon 57 PASS | 5 FAIL MTC General 30 PASS | 37 FAIL MTC Process Monitoring Networking 29 PASS | 9 FAIL Nova 193 PASS | 54 FAIL REST API 222 PASS | 4 FAIL Security 34 PASS | 4 FAIL Storage 63 PASS | 6 FAIL Sysinv 16 PASS | 4 FAIL Backup/Restore ---------------------------------------------------------------------- user does not login within configured time(60s) login is aborted https://bugs.launchpad.net/starlingx/+bug/1833469 After pull data cable on the compute, no alarm has triggered https://bugs.launchpad.net/starlingx/+bug/1834512 Containers: lock_host failed on a host with config_drive VM https://bugs.launchpad.net/starlingx/+bug/1821026 200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot https://bugs.launchpad.net/starlingx/+bug/1832047 stx-openstack apply takes longer time when lock and unlock on standby controller https://bugs.launchpad.net/starlingx/+bug/1834083 Port list was not showing for some computes during install https://bugs.launchpad.net/starlingx/+bug/1834245 neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute https://bugs.launchpad.net/starlingx/+bug/1835807 When creating instance with pci-passthrough port getting error https://bugs.launchpad.net/starlingx/+bug/1836682 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 application apply fails after compute lock and unlock https://bugs.launchpad.net/starlingx/+bug/1836609 403 error in horizon log when try to update the flavor metadata (and admin user is logged out) https://bugs.launchpad.net/starlingx/+bug/1821213 Create Volume dialog opens (from image panel in Horizon) but getting error default volume type cannot be found https://bugs.launchpad.net/starlingx/+bug/1826259 instance creating via horizon failed https://bugs.launchpad.net/starlingx/+bug/1829925 Containers: openstack pods failed after force rebooting active controller
https://bugs.launchpad.net/starlingx/+bug/1816842 After active controller reboot, VM boot up failed with error "Failed to allocate the network(s), not rescheduling" https://bugs.launchpad.net/starlingx/+bug/1836928 nova instance remnant left behind after cold migration completes https://bugs.launchpad.net/starlingx/+bug/1824858 disk_available_least value updates when instance moved but not to the value expected https://bugs.launchpad.net/nova/+bug/1834527 Containers: vm unreachable for minutes after live migration or vm reboot https://bugs.launchpad.net/starlingx/+bug/1818118 100.114 NTP alarm not cleared after swact https://bugs.launchpad.net/starlingx/+bug/1834071 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 AIO-DX Application apply aborted Unexpected process termination while application-apply was in progress https://bugs.launchpad.net/starlingx/+bug/1838101 Uncontrolled swact on standard system is slow https://bugs.launchpad.net/starlingx/+bug/1838411 tenant-mgmt-net not reachable from external network https://bugs.launchpad.net/starlingx/+bug/1836252 VM filesystem is not RW when attached the 2nd volume https://bugs.launchpad.net/starlingx/+bug/1838546 dedicated instance on low latency worker node not appearing in C1 state https://bugs.launchpad.net/starlingx/+bug/1838524 Intermittently the openstack server show indicates that the server does not exist (in live migration tests) https://bugs.launchpad.net/starlingx/+bug/1838676 Resize to swapless flavor still looking for swap https://bugs.launchpad.net/nova/+bug/1762423 SSH to VM failed by Permission denied (publickey) https://bugs.launchpad.net/starlingx/+bug/1824174 vSwitch 1G Hugepage available size cannot be changed https://bugs.launchpad.net/starlingx/+bug/1834530 Validate re-purposing worker is getting degraded state https://bugs.launchpad.net/starlingx/+bug/1839018 hypervisor stays down after force lock and unlock due to pci-irq-affinity-agent process failure https://bugs.launchpad.net/starlingx/+bug/1839160 Image conversion fails with large qcow2 guest image due to insufficient filesystem size https://bugs.launchpad.net/starlingx/+bug/1819688 SSH to secure boot VM fails after evacuation https://bugs.launchpad.net/starlingx/+bug/1839320 platform keystone account lockout feature is not enabled https://bugs.launchpad.net/starlingx/+bug/1838100 stx-openstack application-applying stuck at osh-openstack-placement https://bugs.launchpad.net/starlingx/+bug/1837769 after changing a setting of panko stx-openstack failed to reach 'applied' status after 1800 seconds https://bugs.launchpad.net/starlingx/+bug/1828056 ----------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit?pli=1#gid=322455033 Regards! Maria G -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Fri Aug 9 04:14:43 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 9 Aug 2019 04:14:43 +0000 Subject: [Starlingx-discuss] [RC1/Master] Sanity Test - ISO 20190808 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-08 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Fri Aug 9 07:04:44 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Fri, 9 Aug 2019 07:04:44 +0000 Subject: [Starlingx-discuss] how initial_config_complete is created Message-ID: <56829C2A36C2E542B0CCB9854828E4D8562561CE@CDSMSX102.ccr.corp.intel.com> Hi, After using Ansible for the bootstrap deploy, how is initial_config_complete created - by Ansible or by sysinv-agent? Thanks. Martin, Chen SSP, Software Engineer 021-61164330 -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Fri Aug 9 14:07:24 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Fri, 9 Aug 2019 14:07:24 +0000 Subject: [Starlingx-discuss] [RC1/Master] Sanity Test - ISO 20190808 In-Reply-To: References: Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CEA88A4@FMSMSX114.amr.corp.intel.com> These results are from RC1 - and this is the green we were expecting. @Bill, you can make the announcement.
Ada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, August 8, 2019 11:15 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [RC1/Master] Sanity Test - ISO 20190808 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-08 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Fri Aug 9 18:17:16 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Fri, 9 Aug 2019 18:17:16 +0000 Subject: [Starlingx-discuss] StarlingX Weekly Containerization Meeting Message-ID: For the August 12 meeting I request you attend if you have a gating issue on the list below. If you cannot attend then please send a brief email with the status of your issue. Thank-you. Agenda for August 12 meeting: 1.
stx2.0 gating bugs (28, down from 36 two weeks ago) a) Application apply issues: [Bob Church, Shuicheng Lin, Angie Wang, Tyler Smith, Stefan Dinescu, Daniel Badea] 1836406 VM boot up failed due to nova-cell-setup-h6phb container in init state 1836609 application apply fails after compute lock and unlock 1837792 stx-openstack application apply aborted 1837769 stx-openstack application-applying stuck at osh-openstack-placement 1838101 AIO-DX Application apply aborted Unexpected process termination while application-apply was in progress 1838542 Duplex: stx-openstack failed after lock/unlock controller 1836378 stx-openstack application stuck at applying status by processing chart: osh-kube-system-ingress 1837750 stx-application re-apply strategy requires some changes b) Performance/recovery time issues: 1837426 Very high platform CPU usage on AIO-DX active controller with stx-openstack installed [Al Bailey/Gerry Kopec] 1834796 AIO: Too many rabbit threads [Bin Yang] 1838411 Uncontrolled swact on standard system is slow [Bart Wensley] 1829931 AIO-DX: hypervisor is not up in 5 mins after unlocked standby controller becomes available [Bart Wensley] c) Other higher priority issues: 1817936 Periodic message loss seen between VIM and OpenStack REST APIs [Austin Sun] 1838659 kubernetes apiserver certificate needs rotation [Mingyuan Qi] 1838778 sr-iov cni/plugin pods need wildcard Noschedule toleration [Steve Webster] 1824881 Unlock after force lock enabled the worker according to maintenance but hypervisor remained down [Erich Cordoba] 1837686 Openstack commands hold prompt > 30 seconds [Tao Liu] Etherpad: https://etherpad.openstack.org/p/stx-containerization Timeslot: 11am EST / 8am PDT / 1600 UTC Call details * Zoom link: https://zoom.us/j/342730236 * Dialing in from phone: o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes Project notes are at https://etherpad.openstack.org/p/stx-containerization Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 4279 bytes Desc: not available URL: From Kristine.Bujold at windriver.com Fri Aug 9 20:01:29 2019 From: Kristine.Bujold at windriver.com (Bujold, Kristine) Date: Fri, 9 Aug 2019 20:01:29 +0000 Subject: [Starlingx-discuss] Code in multiple repos being merged Message-ID: <5ECD8395442B0C4FB807F9737625BB6768C574F7@ALA-MBD.corp.ad.wrs.com> Hi, Just an FYI that changes that were spread over 7 repos just went in this afternoon. If you have pulled in the last few hours you may need to pull again after Zuul merges the last commit https://review.opendev.org/#/c/674365/ Sorry for the inconvenience, Kristine -------------- next part -------------- An HTML attachment was scrubbed...
URL: From maria.g.perez.ibarra at intel.com Fri Aug 9 20:37:55 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 9 Aug 2019 20:37:55 +0000 Subject: [Starlingx-discuss] [RC1] Sanity Test - ISO 20190809 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-09 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Fri Aug 9 22:07:28 2019 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 9 Aug 2019 15:07:28 -0700 Subject: [Starlingx-discuss] New repo: zuul-jobs created Message-ID: <85968768-2f24-87ac-2858-e52746a1567f@linux.intel.com> Folks, I worked on getting the new repo created for zuul-jobs, and I also pushed [0] an initial set of files: LICENSE, README.rst and .gitignore. I plan on having this match the zuul and openstack zuul-jobs repos. I have been working on cleaning up the initial playbook that I had originally pushed in fault [1] as a WIP. I will be gone next week and back at it the week of the 19th. Have a great weekend and next week. Sau!
[0] https://review.opendev.org/#/c/675706/ [1] https://review.opendev.org/#/c/670363/ From maria.g.perez.ibarra at intel.com Sat Aug 10 02:26:59 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Sat, 10 Aug 2019 02:26:59 +0000 Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190809 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-09 (link) Status: YELLOW =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Compute remains on degraded after lock/unlock https://bugs.launchpad.net/starlingx/+bug/1839692 Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Sun Aug 11 21:41:23 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Sun, 11 Aug 2019 21:41:23 +0000 Subject: [Starlingx-discuss] [RC1/Master] Sanity Test - ISO 20190808 In-Reply-To: <4F6AACE4B0F173488D033B02A8BB5B7E7CEA88A4@FMSMSX114.amr.corp.intel.com> References: <4F6AACE4B0F173488D033B02A8BB5B7E7CEA88A4@FMSMSX114.amr.corp.intel.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ACB3A0@ALA-MBD.corp.ad.wrs.com> Great, thanks Ada. From: Cabrales, Ada Sent: Friday, August 9, 2019 10:07 AM To: starlingx-discuss at lists.starlingx.io; Zvonar, Bill Subject: RE: [Starlingx-discuss] [RC1/Master] Sanity Test - ISO 20190808 These results are from RC1 - and this is the green we were expecting. @Bill, you can make the announcement. 
Ada From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra at intel.com] Sent: Thursday, August 8, 2019 11:15 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [RC1/Master] Sanity Test - ISO 20190808 Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-08 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Sun Aug 11 23:02:46 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Sun, 11 Aug 2019 23:02:46 +0000 Subject: [Starlingx-discuss] stx.2.0 RC1 Declared Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ACB486@ALA-MBD.corp.ad.wrs.com> As per the review in the StarlingX Release meeting on August 8/2019, stx.2.0 RC1 is declared now that we have the first green sanity on the RC1 branch [0]. Thanks to everyone who contributed to this important milestone, and let's work together on bug resolution as we head towards the 2.0 release date (Aug 30). Bill... [0] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005632.html -----Original Message----- From: Zvonar, Bill Sent: Thursday, August 8, 2019 2:42 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: Minutes: StarlingX Release Meeting - August 8/2019 Notes from today's release team meeting below and at [0] - we'll announce RC1 once we see the green sanity on the release branch. Bill...
- RC1 Milestone - based on the status of feature/regression test, we'll declare the RC1 milestone, will do so once we see the green sanity on the release branch - Release Branch / Sanity - sanity is held up waiting on the resolution of the helm chart issue raised by Cristopher (http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005626.html) - Scott's looking into it now, and will notify Ada/Cristopher asap - if this is done soon enough, Cristopher should be able to do sanity on bare metal today - this is the build: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190807T135731Z/ - Scott came back & said that the helm charts are available, will send an email - Cherry Picking into the RC1 Branch - Frank raised the question of how we make sure that all commits to master that should be cherry-picked to the release branch have in fact been cherry-picked - Saul noted that there's a 'cherry-pick' button in Gerrit, but hasn't used it - Don confirmed that this does the job of cherry-picking, and updates the Launchpad - ACTION: Bill to sort out what the query is to check if all high/medium importance LPs on master have also been cherry-picked back to the release branch - ACTION: Bill send an email to the Community/Cores that commits have to go to Master first, and be cherry-picked to the release branch (ask Cores to +2 only if these steps were done) - Feature Test status - Ironic & Helm Overrides testing - forecast Aug 13 - Ada: in progress - 1 left for Helm Overrides, pending on some input from Bob Church - 7 left for Ironic - still working on the setup - fcast Aug 13 - Ada will ask Jose if he's having any particular issues that someone could help with - Brent noted that Mingyuan did the feature - Ada later noted that Jose does have info from Mingyuan and is working on setup now - Regression Test status - Ada: all have been run now, except for the 13 that are blocked - 3 SRIOV: blocked pending some instructions on how to test - probably will get these from Chenjie - the testcases are simple, so should be quick to execute once they have that information - 2 Security: working through IPv6 setup issues - no firm plan currently, but IPv6 is more of an aspirational thing for 2.0 (read: this does not block RC1) - 2 Storage: actually unblocked now, just need to work through them, plan to finish today/tomorrow - 6 System (2+2+2): the switch/other issues have been resolved now, plan to finish by next week - 3.0 Content in Master - Saul asked when 3.0 content can start going into Master - we agreed last week to hold off until we have a green sanity on the release branch - should be very soon now [0] https://etherpad.openstack.org/p/stx-releases From build.starlingx at gmail.com Mon Aug 12 06:46:02 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 12 Aug 2019 02:46:02 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_wheels - Build # 112 - Failure!
Message-ID: <234007702.136.1565592363500.JavaMail.javamailuser@localhost> Project: STX_build_wheels Build #: 112 Status: Failure Timestamp: 20190812T061023Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033004Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190812T033004Z OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033004Z/logs OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190812T033004Z/logs From build.starlingx at gmail.com Mon Aug 12 06:46:06 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 12 Aug 2019 02:46:06 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 112 - Failure! Message-ID: <87450702.139.1565592366907.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 112 Status: Failure Timestamp: 20190812T060610Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033004Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190812T033004Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033004Z/logs MASTER_BUILD_NUMBER: 210 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190812T033004Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos PUBLISH_TIMESTAMP: 20190812T033004Z DOCKER_BUILD_ID: jenkins-master-20190812T033004Z-builder TIMESTAMP: 20190812T033004Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190812T033004Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190812T033004Z/outputs From build.starlingx at gmail.com Mon Aug 12 06:46:09 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 12 Aug 2019 02:46:09 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_master_master - Build # 210 - Failure! 
Message-ID: <1388367329.142.1565592370227.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 210 Status: Failure Timestamp: 20190812T033004Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033004Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false From maria.g.perez.ibarra at intel.com Mon Aug 12 17:03:14 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 12 Aug 2019 17:03:14 +0000 Subject: [Starlingx-discuss] [RC1] Sanity Test - ISO 20190812 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-12 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Mon Aug 12 20:18:49 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 12 Aug 2019 13:18:49 -0700 Subject: [Starlingx-discuss] [ptg][tsc][all] Space and time request for the next PTG in Shanghai - URGENT Message-ID: <7029A79D-6629-4806-A356-0F354448365A@gmail.com> Hi StarlingX Community, As the Shanghai PTG is approaching quickly, we need to request space for the project asap to ensure we have the opportunity to discuss project details in more technical depth. At the TSC meeting three weeks ago we talked about asking for a room for the project for 1 day. I don’t think we made estimates on space; I was thinking about 30 people. We are also handling requests for onboarding and the PTG on the same form and I assume the project would like to do both. So in summary for Shanghai: * Do onboarding and PTG session as well * 1 day * space for ~30 people max Does anyone have any questions or objections to the above proposal?
Thanks and Best Regards, Ildikó From ildiko.vancsa at gmail.com Mon Aug 12 21:06:57 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 12 Aug 2019 14:06:57 -0700 Subject: [Starlingx-discuss] [ptg][tsc][all] Space and time request for the next PTG in Shanghai - URGENT In-Reply-To: <7029A79D-6629-4806-A356-0F354448365A@gmail.com> References: <7029A79D-6629-4806-A356-0F354448365A@gmail.com> Message-ID: Hi, A little correction to my previous e-mail. The time estimate needs to be for the PTG session AND the onboarding time combined. The shortest time slot that can be requested is 1/4 day. In that sense I suggest asking for 1.25 days for the two activities combined. Updated summary: * Do onboarding and PTG session as well * 1.25 days * space for ~30 people max Thanks, Ildikó > On 2019. Aug 12., at 13:18, Ildiko Vancsa wrote: > > Hi StarlingX Community, > > As the Shanghai PTG is approaching quickly, we need to request space for the project asap to ensure we have the opportunity to discuss project details in more technical depth. > > At the TSC meeting three weeks ago we talked about asking for a room for the project for 1 day. I don’t think we made estimates on space; I was thinking about 30 people. > > We are also handling requests for onboarding and the PTG on the same form and I assume the project would like to do both. > > So in summary for Shanghai: > > * Do onboarding and PTG session as well > * 1 day > * space for ~30 people max > > Does anyone have any questions or objections to the above proposal? > > Thanks and Best Regards, > Ildikó > > From maria.g.perez.ibarra at intel.com Mon Aug 12 22:26:36 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 12 Aug 2019 22:26:36 +0000 Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190812 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-12 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed...
URL:

From yong.hu at intel.com Tue Aug 13 00:16:33 2019
From: yong.hu at intel.com (Yong Hu)
Date: Mon, 12 Aug 2019 17:16:33 -0700
Subject: [Starlingx-discuss] [stx.distro.openstack] agenda for WW33 meeting
Message-ID: <5df0d52c-f84e-e5ae-94e7-9f570624c577@intel.com>

Here is the agenda for this week in stx.distro.openstack:
1. Upstream patches and activities update
2. HIGH Launchpad review
3. Open

We are rapidly approaching the stx.2.0 release, now just 2 weeks away, so if you have patches or HIGH LPs, please join the meeting [1] or update your status *in advance* directly in the Etherpad [2]. Thanks in advance for the sense of urgency!

[1]: https://zoom.us/j/342730236;
[2]: https://etherpad.openstack.org/p/stx-distro-openstack-meetings

regards,
Yong

From haochuan.z.chen at intel.com Tue Aug 13 05:10:51 2019
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Tue, 13 Aug 2019 05:10:51 +0000
Subject: [Starlingx-discuss] request for patch review
Message-ID: <56829C2A36C2E542B0CCB9854828E4D856256A8F@CDSMSX102.ccr.corp.intel.com>

Hi

Could anyone help to review my patch?
https://review.opendev.org/672260

Thanks

Martin, Chen
SSP, Software Engineer
021-61164330
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cindy.xie at intel.com Tue Aug 13 07:27:40 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Tue, 13 Aug 2019 07:27:40 +0000
Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F360122CA@SHSMSX104.ccr.corp.intel.com>

Agenda for 8/14 meeting:
1. stx.2.0 bug triage & review
- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage
- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other
2. Ceph containerization plan & demo (Tingjie)
3. Test status for kernel minor upgrade (Shuai)
4. Opens (all)

Please add more topics as you wish. Thx. - cindy

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; Wold, Saul; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; 'zhaos'
Cc: 'Seiler, Glenn'; Hu, Wei W; Peng Tan; Gomez, Juan P; 'Waines, Greg'; 'Eslimi, Dariush'; Jones, Bruce E; 'Zhi Zhi2 Chang'; Chen, Tingjie; 'Badea, Daniel'; 'Chen, Jacky'; 'Komiyama, Takeo'; Armstrong, Robert H; 'Carlos Cebrian'; Cobbley, David A
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, August 14, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

From Bill.Zvonar at windriver.com Tue Aug 13 13:38:25 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Tue, 13 Aug 2019 13:38:25 +0000
Subject: [Starlingx-discuss] Community Call (August 14, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ACBB2B@ALA-MBD.corp.ad.wrs.com>

Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include...

- Rel 2.0 Communication
- stx-zuul-jobs
- Edge Hacking Days
- 2.0 gating bugs

Please feel free to add topics on the etherpad [0].

Bill...
[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190814T1400

From Ghada.Khalil at windriver.com Tue Aug 13 15:47:09 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Tue, 13 Aug 2019 15:47:09 +0000
Subject: [Starlingx-discuss] Cherry-picking bug fixes to the r/2.0 branch
Message-ID: <151EE31B9FCCA54397A757BC674650F0C159162C@ALA-MBD.corp.ad.wrs.com>

Hi,
This is just a friendly reminder to all starlingx developers to cherry-pick commits, once merged in master, to the r/2.0 branch for all High stx.2.0 bugs. Medium stx.2.0 bugs need to be cherry-picked as well until August 23. At that time, all open medium stx.2.0 bugs will be moved to stx.3.0 and go into master only.

Regards,
Ghada & Bill
Starlingx release team

From scott.little at windriver.com Tue Aug 13 16:03:56 2019
From: scott.little at windriver.com (Scott Little)
Date: Tue, 13 Aug 2019 12:03:56 -0400
Subject: [Starlingx-discuss] cherry-pick a fix from master to stx.2.0
Message-ID: <12f11309-160c-5983-b10b-772098dba643@windriver.com>

# Check out the 2.0 branch
    BRANCH=r/stx.2.0
    repo init -u https://opendev.org/starlingx/manifest -b $BRANCH
    repo sync --force-sync

# Change directory to the destination repo and start a working branch, e.g.
    cd cgcs-root/stx/stx-integ
    repo start my-branch

# a) Normal case: the commit is already committed on the master branch
    # Find the sha of the commit you wish to cherry-pick
    git log remotes/starlingx/master
    git cherry-pick <sha>

# b) The commit is still under review in gerrit
    # Point your browser at the review, e.g. https://review.opendev.org/#/c/674719/
    # In the upper right hand corner, find the 'Download' button to expand the sub-menu.
    # Find the 'cherry-pick' option. On the right you'll find a button that will copy the cherry-pick commands to your clip-board.
    # Paste the cherry-pick commands into your shell. It will look something like ...
    git fetch https://review.opendev.org/starlingx/integ refs/changes/19/674719/1 && git cherry-pick FETCH_HEAD

# Your cherry-pick might not apply cleanly. If there is a conflict ...
    # Edit the files that report conflicts.
    vi <file>
    # Look for conflict markers '<<<<<<<' and '>>>>>>>' that show the competing changes.
    # Resolve the conflict and remove the conflict markers.
    # Save the changes:
    git add <file>
    # Resume the cherry-pick:
    git cherry-pick --continue

# Test your change

# Submit for review
    git review -s
    git review

From fungi at yuggoth.org Tue Aug 13 16:20:33 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Tue, 13 Aug 2019 16:20:33 +0000
Subject: [Starlingx-discuss] cherry-pick a fix from master to stx.2.0
In-Reply-To: <12f11309-160c-5983-b10b-772098dba643@windriver.com>
References: <12f11309-160c-5983-b10b-772098dba643@windriver.com>
Message-ID: <20190813162033.bqtxxpoovogtk5d6@yuggoth.org>

On 2019-08-13 12:03:56 -0400 (-0400), Scott Little wrote:
[...]
> # b) The commit is still under review in gerrit
>
>     # Point your browser at the review, e.g. https://review.opendev.org/#/c/674719/
>     # In the upper right hand corner, find the 'Download' button to expand the sub-menu.
>     # Find the 'cherry-pick' option. On the right you'll find a button that will copy the cherry-pick commands to your clip-board.
>     # Paste the cherry-pick commands into your shell. It will look something like ...
>     git fetch https://review.opendev.org/starlingx/integ refs/changes/19/674719/1 && git cherry-pick FETCH_HEAD
[...]

You can also simply `git review -x 674719` to cherry-pick a change from Gerrit into your current branch.
--
Jeremy Stanley
-------------- next part --------------
A non-text attachment was scrubbed...
Name: signature.asc
Type: application/pgp-signature
Size: 963 bytes
Desc: not available
URL:
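Putting the two messages above together, a condensed end-to-end pass looks roughly like the sketch below. The change number 674719 and the stx-integ repo come straight from the examples above; the working-branch name is a placeholder you would substitute for your own.

    # one-time setup: branch checkout as described by Scott
    BRANCH=r/stx.2.0
    repo init -u https://opendev.org/starlingx/manifest -b $BRANCH
    repo sync --force-sync
    cd cgcs-root/stx/stx-integ
    repo start my-cherry-pick        # placeholder working-branch name

    # pull the fix straight from Gerrit onto the branch (Jeremy's shortcut)
    git review -x 674719

    # if it applied cleanly, push it back up for review against r/stx.2.0
    git review -s
    git review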
From Kristine.Bujold at windriver.com Tue Aug 13 16:49:21 2019
From: Kristine.Bujold at windriver.com (Bujold, Kristine)
Date: Tue, 13 Aug 2019 16:49:21 +0000
Subject: [Starlingx-discuss] [DOCS] new host-fs CLI commands
Message-ID: <5ECD8395442B0C4FB807F9737625BB6768C578E2@ALA-MBD.corp.ad.wrs.com>

A new CLI command called host-fs has been introduced to enable resizing of filesystems on a specific host. Changes were also made to the controllerfs API.

https://bugs.launchpad.net/starlingx/+bug/1830142

Previously

Previously we allowed resizing of our filesystems through the controllerfs API, which only applied to controller hosts. The controllerfs API modified the following filesystems:

- drbd replicated
    database
    docker-distribution
    etcd
    extension
    glance
- non replicated
    backup
    docker
    scratch

New host-fs API

A new host-fs API has been created to allow a filesystem to be resized on any specific host. The non drbd filesystems from controllerfs (backup, docker and scratch) have been moved to the new host-fs API. Also, a new filesystem called "kubelet" has been created on all hosts with a default size of 10G.

The new API has the following CLI commands:

system host-fs-list <hostname or id>
system host-fs-modify <hostname or id> <fs name>=<size> [<fs name>=<size> ...]
system host-fs-show <hostname or id> <fs uuid>

system host-fs-list controller-1
+--------------------------------------+---------+-------------+----------------+
| UUID                                 | FS Name | Size in GiB | Logical Volume |
+--------------------------------------+---------+-------------+----------------+
| a4d83571-a555-4ba5-999f-af709206ae35 | backup  | 40          | backup-lv      |
| d57652a1-af17-47b8-b941-9ebfeee4a56f | docker  | 30          | docker-lv      |
| a84374c6-8917-4db5-bd34-2a8d244f2bf6 | kubelet | 10          | kubelet-lv     |
| 2c026d6f-5c03-4135-abca-c0047aa7f5a6 | scratch | 8           | scratch-lv     |
+--------------------------------------+---------+-------------+----------------+

system host-fs-list compute-1
+--------------------------------------+---------+-------------+----------------+
| UUID                                 | FS Name | Size in GiB | Logical Volume |
+--------------------------------------+---------+-------------+----------------+
| 32e8b0b2-8a26-4c87-ae2e-71477b276740 | docker  | 30          | docker-lv      |
| 223671f7-aed0-4c6e-b1d2-1327aae74439 | kubelet | 10          | kubelet-lv     |
| 36b96d05-febb-411b-a290-abdc5a2be0ff | scratch | 4           | scratch-lv     |
+--------------------------------------+---------+-------------+----------------+

system host-fs-show controller-1 a4d83571-a555-4ba5-999f-af709206ae35
+----------------+--------------------------------------+
| Property       | Value                                |
+----------------+--------------------------------------+
| uuid           | a4d83571-a555-4ba5-999f-af709206ae35 |
| name           | backup                               |
| size           | 40                                   |
| logical_volume | backup-lv                            |
| created_at     | 2019-08-08T03:05:25.341669+00:00     |
| updated_at     | None                                 |
+----------------+--------------------------------------+

system host-fs-modify controller-1 docker=31 kubelet=11
+--------------------------------------+---------+-------------+----------------+
| UUID                                 | FS Name | Size in GiB | Logical Volume |
+--------------------------------------+---------+-------------+----------------+
| a4d83571-a555-4ba5-999f-af709206ae35 | backup  | 40          | backup-lv      |
| d57652a1-af17-47b8-b941-9ebfeee4a56f | docker  | 31          | docker-lv      |
| a84374c6-8917-4db5-bd34-2a8d244f2bf6 | kubelet | 11          | kubelet-lv     |
| 2c026d6f-5c03-4135-abca-c0047aa7f5a6 | scratch | 8           | scratch-lv     |
+--------------------------------------+---------+-------------+----------------+

Filesystem   Supported Hosts   Folder             Logical Volume   Default Size
backup       controller        /opt/backups       backup-lv        40G
scratch      all hosts         /scratch           scratch-lv       varies depending on system, configured in kickstart
docker       all hosts         /var/lib/docker    docker-lv        30G (new fs to worker and storage hosts)
kubelet      all hosts         /var/lib/kubelet   kubelet-lv       10G (new fs to all hosts)

Changes to controllerfs API

The existing "platform" filesystem is now resizable and added to controllerfs. The "glance" filesystem is now merged into platform. Below are the new filesystems resizable by controllerfs. The backup, docker and scratch filesystems are now resizable under host-fs.

Filesystem            Folder                         Logical Volume          Default Size
database              /var/lib/postgresql            pgsql-lv                40G
docker-distribution   /var/lib/docker-distribution   dockerdistribution-lv   16G
etcd                  /opt/etcd                      etcd-lv                 5G
extension             /opt/extension                 extension-lv            1G
glance                /opt/cgcs                      cgcs-lv                 20G (removed and merged with platform)
platform              /opt/platform                  platform-lv             10G
backup                /opt/backups                   backup-lv               managed by host-fs
scratch               /scratch                       scratch-lv              managed by host-fs
docker                /var/lib/docker                docker-lv               managed by host-fs

Thanks,
Kristine
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
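As a quick orientation for the new split, a typical resize session might look like the sketch below. The host-fs commands are taken from the examples above; the controllerfs-modify call and the vgdisplay check are assumptions based on the existing controllerfs API and the standard cgts-vg volume group, so verify them against your release before relying on them.

    # check free space in the volume group before growing anything
    # (assumes the standard cgts-vg LVM layout)
    sudo vgdisplay cgts-vg | grep -i free

    # grow per-host filesystems with the new host-fs API (from the examples above)
    system host-fs-modify controller-1 docker=31 kubelet=11

    # grow a drbd-replicated controller filesystem with the controllerfs API
    # (assumed syntax, mirroring host-fs-modify's <fs name>=<size> form)
    system controllerfs-modify database=50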
From vm.rod25 at gmail.com Tue Aug 13 16:56:37 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Tue, 13 Aug 2019 11:56:37 -0500
Subject: [Starlingx-discuss] Detect failed VM on Starling X
Message-ID:

Hi Bart/team

After reading this launchpad:
https://bugs.launchpad.net/starlingx/+bug/1835591
I was wondering if you could point me to the proper nova patches for early detection of a failed VM. So far we only have the nova logs (which take too long to detect the failed VM) and some information from the NFV log that Mario, Zhipeng, and the team have been helping us find.

Any information is more than welcome.

Regards

Victor Rodriguez

From yang.liu at windriver.com Tue Aug 13 17:03:42 2019
From: yang.liu at windriver.com (Liu, Yang)
Date: Tue, 13 Aug 2019 17:03:42 +0000
Subject: [Starlingx-discuss] [DOCS] new host-fs CLI commands
In-Reply-To: <5ECD8395442B0C4FB807F9737625BB6768C578E2@ALA-MBD.corp.ad.wrs.com>
References: <5ECD8395442B0C4FB807F9737625BB6768C578E2@ALA-MBD.corp.ad.wrs.com>
Message-ID: <19C65A6E92EA384D809B1772130CD7F869289F59@ALA-MBD.corp.ad.wrs.com>

Hi Kristine,

What is the current expectation for the backup fs when modifying controllerfs? e.g., if we want to increase any replicated controller fs, do we first need to modify the non-replicated backup fs on both controllers to ensure there is enough space for all the replicated and non-replicated file systems?

Thanks,
Yang

From: Bujold, Kristine [mailto:Kristine.Bujold at windriver.com]
Sent: August-13-19 12:49 PM
To: starlingx-discuss at lists.starlingx.io
Cc: Balaraj, Juanita
Subject: [Starlingx-discuss] [DOCS] new host-fs CLI commands

A new CLI command called host-fs has been introduced to enable resizing of filesystems on a specific host. Changes were also made to the controllerfs API.
[...]
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vm.rod25 at gmail.com Tue Aug 13 17:06:14 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Tue, 13 Aug 2019 12:06:14 -0500
Subject: [Starlingx-discuss] Detection of network error on Starling X
Message-ID:

Hi team/Ghada

From a functional perspective, we are not able to detect when the data network is lost. We found the launchpad:
https://bugs.launchpad.net/starlingx/+bug/1834512
and we were wondering if there is any log where we could find the loss of the data network apart from fm alarm-list.

Was this something supported in STX 1.0? I saw that we will discuss if this will be supported by R 3.0, but I was wondering if we could discuss the possibility of raising the priority of the launchpad, since some use cases might want to measure/test network link failure detection.

Thanks

Victor Rodriguez

From jose.perez.carranza at intel.com Tue Aug 13 20:46:11 2019
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Tue, 13 Aug 2019 20:46:11 +0000
Subject: [Starlingx-discuss] [Test] Releasing Automated Test Suite - Robot
Message-ID:

Hi StarlingX community

We have sent a set of patches with the code of the test suite that the Intel Validation Team has been using to perform testing activities on the released ISOs. It is based on python + robot framework. Please review the patches [1] and let us know your feedback.

1. Series:
https://review.opendev.org/#/c/676220/
https://review.opendev.org/#/c/676225/
https://review.opendev.org/#/c/676237/
https://review.opendev.org/#/c/676240/
https://review.opendev.org/#/c/676241/

Regards
Jose
--
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From maria.g.perez.ibarra at intel.com Tue Aug 13 21:36:22 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Tue, 13 Aug 2019 21:36:22 +0000
Subject: [Starlingx-discuss] [RC1] Sanity Test - ISO 20190813
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-13 (link)
Status: GREEN

===========================================
Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 49 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 08 TCs [PASS] / TOTAL: [ 65 TCs PASS ]
Standard - Dedicated Storage (2+2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 09 TCs [PASS] / TOTAL: [ 66 TCs PASS ]

===========================================
Sanity Test is executed in a Containers - Virtual Environment

AIO - Simplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 49 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 08 TCs [PASS] / TOTAL: [ 65 TCs PASS ]
Standard - External Storage (2+2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 08 TCs [PASS] / TOTAL: [ 65 TCs PASS ]

===========================================

Regards
Maria G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From maria.g.perez.ibarra at intel.com Tue Aug 13 22:54:09 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Tue, 13 Aug 2019 22:54:09 +0000
Subject: [Starlingx-discuss] [ Regression testing - stx2.0 ] Report for 8/13/19
Message-ID:

StarlingX 2.0 Release Status: ISO: BUILD_ID="20190809T053000Z" from (link)

----------------------------------------------------------------------
MANUAL EXECUTION
----------------------------------------------------------------------

Overall Results:
Total = 493
Pass = 412
Fail = 25
Blocked = 12
Not Run = 0
Obsolete = 26
Deferred = 18
Total executed = 449
Pass Rate = 94.27%
Formula used: Pass Rate = pass * 100 / (pass + fail)

Results per Domain:
Regression - AIO-SX: 25 PASS | 1 OBSOLETE
Regression - Backup & Restore: 6 DEFERRED
Regression - Distributed Cloud:
Regression - Gnocchi: 15 PASS
Regression - FM: 3 PASS
Regression - HA: 11 PASS | 1 FAIL
Regression - Heat: 12 PASS | 1 OBSOLETE
Regression - Horizon: 4 PASS
Regression - Install and Config: 7 PASS | 1 FAIL
Regression - Maintenance: 8 PASS | 1 FAIL
Regression - Networking: 122 PASS | 1 FAIL | 2 BLOCKED | 19 OBSOLETE
Regression - Nova: 24 PASS | 13 FAIL | 3 OBSOLETE
Regression - Security: 34 PASS | 1 FAIL | 2 BLOCKED | 1 OBSOLETE | 4 DEFERRED
Regression - Storage: 24 PASS | 1 BLOCKED | 2 DEFERRED
Regression - Inventory: 30 PASS
System Test: 20 PASS | 2 FAIL | 7 BLOCKED | 1 OBSOLETE | 6 DEFERRED
Regression - new features: 73 PASS | 5 FAIL

---------------------------------------------------------------------------
AUTOMATED EXECUTION - INTEL
---------------------------------------------------------------------------

Overall Results:
Pass = 203
Fail = 32
Total executed = 235
Pass Rate = 86.38%
Formula used: Pass Rate = pass * 100 / (pass + fail)

Results per Domain:
Fault-Management: 15 PASS
Gnocchi: 12 PASS
HEAT: 6 PASS
High-Availability: 9 PASS | 2 FAIL
Horizon: 2 PASS
Installation-And-Config: 8 PASS
Maintenance: 27 PASS | 2 FAIL
Networking: 45 PASS | 7 FAIL
Nova: 17 PASS | 2 FAIL
Security: 19 PASS | 4 FAIL
Storage: 8 PASS | 8 FAIL
SYSINVENTORY: 27 PASS | 4 FAIL
System: 8 PASS | 3 FAIL

----------------------------------------------------------------------
AUTOMATED EXECUTION - Wind River
----------------------------------------------------------------------
"Pending Results"
----------------------------------------------------------------------

user does not login within configured time(60s) login is aborted
https://bugs.launchpad.net/starlingx/+bug/1833469
After pull data cable on the compute, no alarm has triggered
https://bugs.launchpad.net/starlingx/+bug/1834512
Containers: lock_host failed on a host with config_drive VM
https://bugs.launchpad.net/starlingx/+bug/1821026
200.006 alarm "controller-0 is degraded due to the failure of its 'pci-irq-affinity-agent' process" after reboot
https://bugs.launchpad.net/starlingx/+bug/1832047
stx-openstack apply takes longer time when lock and unlock on standby controller
https://bugs.launchpad.net/starlingx/+bug/1834083
Port list was not showing for some computes during install
https://bugs.launchpad.net/starlingx/+bug/1834245
neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute
https://bugs.launchpad.net/starlingx/+bug/1835807
When creating instance with pci-passthrough port getting error
https://bugs.launchpad.net/starlingx/+bug/1836682
unexpected output when wipe unassigned disk
https://bugs.launchpad.net/starlingx/+bug/1836633
application apply fails after
compute lock and unlock
https://bugs.launchpad.net/starlingx/+bug/1836609
403 error in horizon log when try to update the flavor metadata (and admin user is logged out)
https://bugs.launchpad.net/starlingx/+bug/1821213
instance creating via horizon failed
https://bugs.launchpad.net/starlingx/+bug/1829925
After active controller reboot, VM boot up failed with error "Failed to allocate the network(s), not rescheduling"
https://bugs.launchpad.net/starlingx/+bug/1836928
nova instance remnant left behind after cold migration completes
https://bugs.launchpad.net/starlingx/+bug/1824858
disk_available_least value updates when instance moved but not to the value expected
https://bugs.launchpad.net/nova/+bug/1834527
Containers: vm unreachable for minutes after live migration or vm reboot
https://bugs.launchpad.net/starlingx/+bug/1818118
100.114 NTP alarm not cleared after swact
https://bugs.launchpad.net/starlingx/+bug/1834071
unexpected output when wipe unassigned disk
https://bugs.launchpad.net/starlingx/+bug/1836633
AIO-DX Application apply aborted Unexpected process termination while application-apply was in progress
https://bugs.launchpad.net/starlingx/+bug/1838101
Uncontrolled swact on standard system is slow
https://bugs.launchpad.net/starlingx/+bug/1838411
tenant-mgmt-net not reachable from external network
https://bugs.launchpad.net/starlingx/+bug/1836252
VM filesystem is not RW when attached the 2nd volume
https://bugs.launchpad.net/starlingx/+bug/1838546
dedicated instance on low latency worker node not appearing in C1 state
https://bugs.launchpad.net/starlingx/+bug/1838524
Intermittently the openstack server show indicates that the server does not exist (in live migration tests)
https://bugs.launchpad.net/starlingx/+bug/1838676
Resize to swapless flavor still looking for swap
https://bugs.launchpad.net/nova/+bug/1762423
SSH to VM failed by Permission denied (publickey)
https://bugs.launchpad.net/starlingx/+bug/1824174
vSwitch 1G Hugepage available size cannot be changed
https://bugs.launchpad.net/starlingx/+bug/1834530
hypervisor stays down after force lock and unlock due to pci-irq-affinity-agent process failure
https://bugs.launchpad.net/starlingx/+bug/1839160
Image conversion fails with large qcow2 guest image due to insufficient filesystem size
https://bugs.launchpad.net/starlingx/+bug/1819688
SSH to secure boot VM fails after evacuation
https://bugs.launchpad.net/starlingx/+bug/1839320
platform keystone account lockout feature is not enabled
https://bugs.launchpad.net/starlingx/+bug/1838100
stx-openstack application-applying stuck at osh-openstack-placement
https://bugs.launchpad.net/starlingx/+bug/1837769
after changing a setting of panko stx-openstack failed to reach 'applied' status after 1800 seconds
https://bugs.launchpad.net/starlingx/+bug/1828056
-----------------------------------------------------------------------------

For more detail of the tests:
https://docs.google.com/spreadsheets/d/1MoBroFimeQjsJvCC_N_DSLYMQsNZVMzTwAVc5tPy_t0/edit?pli=1#gid=322455033

Regards!
Maria G
-------------- next part --------------
An HTML attachment was scrubbed...
URL:
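As a quick arithmetic check on the report above, the quoted pass rates follow directly from the stated formula; a small sketch with the standard bc calculator (any shell calculator would do) reproduces them. Note that blocked, obsolete, and deferred cases are excluded from the denominator, which is why "Total executed = 449" (pass + fail + blocked) differs from the 437 used in the manual rate.

    # manual execution: 412 pass, 25 fail
    echo "scale=2; 412 * 100 / (412 + 25)" | bc    # prints 94.27

    # automated execution (Intel): 203 pass, 32 fail
    echo "scale=2; 203 * 100 / (203 + 32)" | bc    # prints 86.38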
From ada.cabrales at intel.com Tue Aug 13 23:51:35 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Tue, 13 Aug 2019 23:51:35 +0000
Subject: [Starlingx-discuss] [ Test ] meeting minutes - 08/13/2019
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CEAE562@FMSMSX114.amr.corp.intel.com>

Agenda for 08/13
Attendees: Elio, Richo, Fernando, Al, Cristopher, Jose, Numan, Yang, JP, Ada, JC

1. Sanity Status - Cristopher
  Two runs - yes
  Release candidate - continues green
  master branch - issue with external storage seen, but got green yesterday
  Virtual sanity - running on both branches.
  Launchpads
  * Openstack commands hold prompt > 30 seconds - https://bugs.launchpad.net/starlingx/+bug/1837686
    Assigned and being worked
  * Multiple Local registry: 500 Server Error cause application-apply errors - https://bugs.launchpad.net/starlingx/+bug/1839696
    Not happening in WR labs. Local registry is slow. This is increasing the deployment times.
    Ada to ask Abraham to assign someone to work on it.
    WR lab - still present - triaged and being worked on
  * Neutron dhcp not coming up after lock unlock compute host - https://bugs.launchpad.net/starlingx/+bug/1836252
    still present - triaged and being worked on
  * neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot - https://bugs.launchpad.net/starlingx/+bug/1835807
    debugging in a system with the failure
  * Configuration out-of-date alarms on storage nodes since fresh install - https://bugs.launchpad.net/starlingx/+bug/1838652
  * After active controller reboot, VM boot up failed with error "Failed to allocate the network(s), not rescheduling" - https://bugs.launchpad.net/starlingx/+bug/1836928
  * RPC timeout error when creating barbican secret on host-bulk-add after controller-0 is up - https://bugs.launchpad.net/starlingx/+bug/1839665
    Is intermittent
    Elio to try to reproduce the issue
  Wind River
  Running sanity on Release branch. Last one run on master was 0807.

2. Regression / Final Regression - Elio, Numan
  Final regression tracker - https://docs.google.com/spreadsheets/d/1FxrwgivQCG3Ksvqm46zhKILJlZtucsNxGYG4a8d0LSs/edit?usp=sharing
  Tests selected for final regression (80)
  Blocked tests - System test
  Networking - vswitch alarm
  SRIOV - instructions ready - not run yet.
  Storage - waiting for the external config to run them
  WR - done (3 test cases marked as obsolete)
  Numan to update the final regression tracker marking the tests to be run during final regression
  For some Launchpads that are marked as fix released, the problem is still there. The launchpads are updated with comments.
  Please make sure of updating all the launchpads with the info on re-runs.

3. Feature testing - Jose
  * Ironic
    Following the documentation we were not able to deploy it. A Launchpad has been created - https://bugs.launchpad.net/starlingx/+bug/1840031
    Mingyuan working in the config failing.
  * Helm overrides
    Still waiting for information for one test. Reminder sent last week.
    Jose to send another reminder and copy Numan.

4. Unified sanity - Ada
  Already started to work on this. We are working on reviewing the two sets of sanities to identify duplicates and missing tests.
  A formal proposal will be done in the following weeks.

5. IPv6 test progress - Numan
  The lab has been setup. Chris has run some basic tests. Some issues identified, Numan to send the list offline.

6. Opens
  * Yong - https://bugs.launchpad.net/starlingx/+bug/1827692
    Elio to verify it. Richo updated. Can be closed.
  * Jose - a couple of patches sent to the repo with the code of the robot suite. Please help with the review.
    Jose to send an email to introduce it to the community.
From maria.g.perez.ibarra at intel.com Wed Aug 14 00:52:22 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Wed, 14 Aug 2019 00:52:22 +0000
Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190813
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-13 (link)
Status: GREEN

===========================================
Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 49 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 08 TCs [PASS] / TOTAL: [ 65 TCs PASS ]
Standard - Dedicated Storage (2+2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 09 TCs [PASS] / TOTAL: [ 66 TCs PASS ]

===========================================
Sanity Test is executed in a Containers - Virtual Environment

AIO - Simplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 49 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 08 TCs [PASS] / TOTAL: [ 65 TCs PASS ]
Standard - External Storage (2+2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 08 TCs [PASS] / TOTAL: [ 65 TCs PASS ]

Regards
Maria G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From chenjie.xu at intel.com Wed Aug 14 01:53:21 2019
From: chenjie.xu at intel.com (Xu, Chenjie)
Date: Wed, 14 Aug 2019 01:53:21 +0000
Subject: [Starlingx-discuss] Detection of network error on Starling X
In-Reply-To:
References:
Message-ID:

Hi Victor,
The feature you want to test is not implemented. The below comment from Ghada explains the reason, which is that "Starlingx does not have the capability to raise alarms when data links are pulled".
https://bugs.launchpad.net/starlingx/+bug/1834512/comments/2

Hi Ghada,
As requested by Victor, can we raise the priority of this feature?

Best Regards,
Xu, Chenjie

-----Original Message-----
From: Victor Rodriguez [mailto:vm.rod25 at gmail.com]
Sent: Wednesday, August 14, 2019 1:06 AM
To: starlingx-discuss at lists.starlingx.io; Khalil, Ghada
Subject: [Starlingx-discuss] Detection of network error on Starling X

Hi team/Ghada

From a functional perspective, we are not able to detect when the data network is lost. We found the launchpad:
https://bugs.launchpad.net/starlingx/+bug/1834512
and we were wondering if there is any log where we could find the loss of the data network apart from fm alarm-list.

Was this something supported in STX 1.0? I saw that we will discuss if this will be supported by R 3.0, but I was wondering if we could discuss the possibility of raising the priority of the launchpad, since some use cases might want to measure/test network link failure detection.
Thanks

Victor Rodriguez

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From vm.rod25 at gmail.com Wed Aug 14 02:27:58 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Tue, 13 Aug 2019 21:27:58 -0500
Subject: [Starlingx-discuss] Detection of network error on Starling X
In-Reply-To:
References:
Message-ID:

On Tue, Aug 13, 2019 at 8:53 PM Xu, Chenjie wrote:
>
> Hi Victor,
> The feature you want to test is not implemented. The below comment from Ghada explains the reason, which is that "Starlingx does not have the capability to raise alarms when data links are pulled".
> https://bugs.launchpad.net/starlingx/+bug/1834512/comments/2
>

I have read all the comments :) My question is whether we can raise the priority of this feature, since some use cases might want to measure/test network link failure detection (like me).

Thanks

> Hi Ghada,
> As requested by Victor, can we raise the priority of this feature?
>
> Best Regards,
> Xu, Chenjie
>
> -----Original Message-----
> From: Victor Rodriguez [mailto:vm.rod25 at gmail.com]
> Sent: Wednesday, August 14, 2019 1:06 AM
> To: starlingx-discuss at lists.starlingx.io; Khalil, Ghada
> Subject: [Starlingx-discuss] Detection of network error on Starling X
>
> Hi team/Ghada
>
> From a functional perspective, we are not able to detect when the data network is lost. We found the launchpad:
> https://bugs.launchpad.net/starlingx/+bug/1834512 and we were wondering if there is any log where we could find the loss of the data network apart from fm alarm-list.
>
> Was this something supported in STX 1.0? I saw that we will discuss if this will be supported by R 3.0, but I was wondering if we could discuss the possibility of raising the priority of the launchpad, since some use cases might want to measure/test network link failure detection.
>
> Thanks
>
> Victor Rodriguez
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From build.starlingx at gmail.com Wed Aug 14 02:36:03 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 13 Aug 2019 22:36:03 -0400 (EDT)
Subject: [Starlingx-discuss] [stable] [build-report] STX_build_wheels - Build # 113 - Still Failing!
In-Reply-To: <714179922.134.1565592360331.JavaMail.javamailuser@localhost>
References: <714179922.134.1565592360331.JavaMail.javamailuser@localhost>
Message-ID: <2073743891.146.1565750164479.JavaMail.javamailuser@localhost>

Project: STX_build_wheels
Build #: 113
Status: Still Failing
Timestamp: 20190814T020549Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190813T233000Z/logs
--------------------------------------------------------------------------------
Parameters
MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190813T233000Z
OS: centos
MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190813T233000Z/logs
OS_VERSION: 7.5.1804
BUILD_STREAM: stable
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190813T233000Z/logs

From build.starlingx at gmail.com Wed Aug 14 02:36:06 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 13 Aug 2019 22:36:06 -0400 (EDT)
Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 113 - Still Failing!
In-Reply-To: <2087070873.137.1565592364484.JavaMail.javamailuser@localhost>
References: <2087070873.137.1565592364484.JavaMail.javamailuser@localhost>
Message-ID: <1410996083.149.1565750167679.JavaMail.javamailuser@localhost>

Project: STX_build_docker_images
Build #: 113
Status: Still Failing
Timestamp: 20190814T020140Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190813T233000Z/logs
--------------------------------------------------------------------------------
Parameters
BRANCH: r/stx.2.0
MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190813T233000Z
OS: centos
MUNGED_BRANCH: rc-2.0
MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190813T233000Z/logs
MASTER_BUILD_NUMBER: 20
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190813T233000Z/logs
MASTER_JOB_NAME: STX_BUILD_2.0
MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/rc/2.0/centos
PUBLISH_TIMESTAMP: 20190813T233000Z
DOCKER_BUILD_ID: jenkins-rc-2.0-20190813T233000Z-builder
TIMESTAMP: 20190813T233000Z
OS_VERSION: 7.5.1804
BUILD_STREAM: stable
PUBLISH_INPUTS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190813T233000Z/inputs
PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190813T233000Z/outputs

From build.starlingx at gmail.com Wed Aug 14 02:36:09 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 13 Aug 2019 22:36:09 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 20 - Failure!
Message-ID: <508284825.152.1565750170822.JavaMail.javamailuser@localhost>

Project: STX_BUILD_2.0
Build #: 20
Status: Failure
Timestamp: 20190813T233000Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190813T233000Z/logs
--------------------------------------------------------------------------------
Parameters
BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From cindy.xie at intel.com Wed Aug 14 13:59:13 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 14 Aug 2019 13:59:13 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 8/14
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F36013F6F@SHSMSX104.ccr.corp.intel.com>

Agenda & Notes for 8/14 meeting:

1.
stx.2.0 bug triage & review
- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage
  10 High/Medium LPs remaining, 7 of them marked as stx.2.0. LPs updated to reflect the latest status.
- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other
  Only 1 LP (1836638) is high and needs to be addressed; Yi & Jim are working on this to determine whether it is a kernel memory leak issue.

2. Ceph containerization plan & demo (Tingjie)
  4 milestones proposed - plan updated in Storyboard: https://storyboard.openstack.org/#!/story/2005527
  Deployment uses rook to automatically deploy the containerized Ceph.
  A demo showed the containerized Ceph deployed by rook under a StarlingX environment: the Ansible config brings up the Rook operator and Ceph cluster, and OSDs are created on both controller-0 and controller-1.
  Call for technical review of the spec: https://review.opendev.org/#/c/656371.

3. Test status for kernel minor upgrade (Shuai)
  Deployment testing passed for both RT and STD kernel.
  Sanity testing: auto-testing for AIO-SX passed.
  Duplex: the same results as the master build.
  Multi-node: the same results as the master build.
  AR: Shuai to provide a test summary covering the failed test cases. Once a clean test report is provided, contact Jim to remove the W -1 from the patch review.

4. Call for contributions: de-brand "Titanium Cloud" words in StarlingX: https://storyboard.openstack.org/#!/story/2006387
  User-visible de-branding to remove "Titanium, tis" from docs, source code, configuration, etc. Keywords: "Titanium", "tis", "WindRiver".
  Should address user-visible conversions; developer-visible changes (file names, parameters, etc.) are not included in this story.
  Focus on documentation, configurations, and passwords, which are user visible.
  Call for contributors who can voluntarily work on the story.

5. Opens (all) - None

-----Original Message-----
From: Xie, Cindy
Sent: Tuesday, August 13, 2019 3:28 PM
To: Wold, Saul ; 'starlingx-discuss at lists.starlingx.io' ; 'Rowsell, Brent'
Subject: Agenda: Weekly StarlingX non-OpenStack distro meeting

Agenda for 8/14 meeting:
1. stx.2.0 bug triage & review
- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage
- https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other
2. Ceph containerization plan & demo (Tingjie)
3. Test status for kernel minor upgrade (Shuai)
4. Opens (all)
Please add more topics as you wish. Thx. - cindy

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; Wold, Saul; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; 'zhaos'
Cc: 'Seiler, Glenn'; Hu, Wei W; Peng Tan; Gomez, Juan P; 'Waines, Greg'; 'Eslimi, Dariush'; Jones, Bruce E; 'Zhi Zhi2 Chang'; Chen, Tingjie; 'Badea, Daniel'; 'Chen, Jacky'; 'Komiyama, Takeo'; Armstrong, Robert H; 'Carlos Cebrian'; Cobbley, David A
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, August 14, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other
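For anyone picking up the de-branding story above, a quick way to get a first inventory of the affected strings is a case-insensitive search over a checked-out repo. This is only a rough sketch using the keywords listed in the notes; the exact paths and any filtering on top of it depend on the repo being scanned.

    # rough first pass: list files carrying the old branding keywords
    grep -rliE 'titanium|windriver' --exclude-dir=.git .

    # 'tis' also matches unrelated words, so restrict it to whole words
    # and expect some manual filtering of the results
    grep -rlwi 'tis' --exclude-dir=.git .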
From build.starlingx at gmail.com Wed Aug 14 13:59:19 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Wed, 14 Aug 2019 09:59:19 -0400 (EDT)
Subject: [Starlingx-discuss] [stable] [build-report] STX_build_wheels - Build # 114 - Still Failing!
In-Reply-To: <1400025521.144.1565750161582.JavaMail.javamailuser@localhost>
References: <1400025521.144.1565750161582.JavaMail.javamailuser@localhost>
Message-ID: <1206224296.155.1565791160908.JavaMail.javamailuser@localhost>

Project: STX_build_wheels
Build #: 114
Status: Still Failing
Timestamp: 20190814T135224Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190813T233000Z/logs
--------------------------------------------------------------------------------
Parameters
MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190813T233000Z
OS: centos
MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190813T233000Z/logs
OS_VERSION: 7.5.1804
BUILD_STREAM: stable
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190813T233000Z/logs

From Bill.Zvonar at windriver.com Wed Aug 14 14:43:58 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 14 Aug 2019 14:43:58 +0000
Subject: [Starlingx-discuss] Community Call (August 14, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ACC26E@ALA-MBD.corp.ad.wrs.com>

Notes from today's Community call...

- 2.0 RC1 declared / gating bugs
  - see release etherpad for details and please continue to focus on High & Medium importance Launchpads!
- cherry-picking from Master to r/2.0
  - Ghada's email: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005653.html
  - Scott's email: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005654.html
  - Commits merged in master: https://review.opendev.org/#/q/projects:starlingx+is:merged+branch:master
  - High/Medium bug fixes as of August 6 need to be cherrypicked to r/stx.2.0
  - No medium bugs should be merged after August 23. Open Medium bugs will be bulk moved to stx.3.0 (submissions
  - Commits in r/stx.2.0 branch: https://review.opendev.org/#/q/branch:r/stx.2.0
  - To date, 9 bug fixes have been cherrypicked/merged in the r/stx.2.0 branch
  - When commits are cherrypicked to the r/stx.2.0 branch, a label will be added to the launchpad: in-r-stx20
  - This is currently broken, but we are working with the opendev team to get it fixed. For now, the label is being added manually.
- Rel 2.0 Communication - re: Ildiko's email http://lists.starlingx.io/pipermail/starlingx-discuss/2019-July/005493.html
- Edge Hacking Days - re: Ildiko's email http://lists.starlingx.io/pipermail/starlingx-discuss/2019-July/005465.html
  - see etherpad https://etherpad.openstack.org/p/osf-edge-hacking-days
- Documentation Branching
  - discussed how the current/latest is managed
  - need to clarify how the documentation, per release, will be managed on https://docs.starlingx.io
- review open ARs

-----Original Message-----
From: Zvonar, Bill
Sent: Tuesday, August 13, 2019 9:38 AM
To: starlingx-discuss at lists.starlingx.io
Subject: Community Call (August 14, 2019)

Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include...

- Rel 2.0 Communication
- stx-zuul-jobs
- Edge Hacking Days
- 2.0 gating bugs

Please feel free to add topics on the etherpad [0].

Bill...
[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190814T1400

From Bill.Zvonar at windriver.com Wed Aug 14 15:05:53 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 14 Aug 2019 15:05:53 +0000
Subject: [Starlingx-discuss] First Contact SIG (Aug 15, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ACC2A6@ALA-MBD.corp.ad.wrs.com>

Hi all - there will be a First Contact SIG call tomorrow at 1330 UTC (see [1] for start time in your timezone).

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-first-contact
[1] meeting start time in various timezones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190815T1330

From maria.g.perez.ibarra at intel.com Wed Aug 14 17:06:06 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Wed, 14 Aug 2019 17:06:06 +0000
Subject: [Starlingx-discuss] [RC1] Sanity Test - ISO 20190814
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-14 (link)
Status: GREEN

===========================================
Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 49 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 08 TCs [PASS] / TOTAL: [ 65 TCs PASS ]
Standard - Dedicated Storage (2+2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 09 TCs [PASS] / TOTAL: [ 66 TCs PASS ]

===========================================
Sanity Test is executed in a Containers - Virtual Environment

AIO - Simplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 49 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 07 TCs [PASS] / TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 08 TCs [PASS] / TOTAL: [ 65 TCs PASS ]
Standard - External Storage (2+2+2): Setup 04 TCs [PASS] / Provisioning 01 TCs [PASS] / Sanity OpenStack 52 TCs [PASS] / Sanity Platform 08 TCs [PASS] / TOTAL: [ 65 TCs PASS ]

===========================================

Regards
Maria G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vm.rod25 at gmail.com Wed Aug 14 18:24:42 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Wed, 14 Aug 2019 13:24:42 -0500
Subject: [Starlingx-discuss] [multios][build] Build flock services with plain mock
Message-ID:

Hello team/Scott

Last week during the build meeting I took the AR to experiment with, and if possible fix, all the missing build requirements for the flock services.

Why am I interested in this? To enable the community to be able to build the core technology of StarlingX using the build system for specs/SRPMs they prefer. The one that I used for this case is plain mock.

As we know, mock is a tool for building RPM packages. You can use mock to build packages for many different versions of CentOS/Red Hat and Fedora.
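For readers who have not used it before, a bare-bones mock invocation looks roughly like the sketch below. This is a generic illustration of standard mock usage rather than a quote from the repo discussed here; the config name local-centos-7-x86_64 and the mtce SRPM come from the examples later in this mail, and the result directory is an arbitrary choice.

    # rebuild a source RPM inside a clean chroot defined by the named config
    mock -r local-centos-7-x86_64 --rebuild mtce-1.0-154.tis.src.rpm \
         --resultdir ./results

    # the chroot can be reset between builds
    mock -r local-centos-7-x86_64 --clean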
The main advantage of using mock to build RPMs instead of rpmbuild is that mock builds RPMs in a cleanroom environment. mock does this by creating a chroot and performing the RPM build in the chroot.

In my case, I don't have a really powerful workstation at home, so I decided to create a solution for my HW limitations. Here is a simple solution to build the SRPMs from the flock services using containers:

https://github.com/VictorRodriguez/stx-packaging/tree/build_w_docker_centos/configs/docker-centos-img

The docker image provided is a plain vanilla centos 7 w/ the necessary packages for mock and rpmbuild. It also adds local-centos-7-x86_64.cfg, which points to the regular vanilla centos 7 yum repo [0] as well as the stx yum input/output repos [1][2].

I am testing this on my regular laptop w/ docker and it works fine. The docker image builds one flock service at a time with the command (using an example):

$ make upstream-pkg SRPM=mtce-1.0-154.tis.src.rpm MOCK_CONFIG=local-centos-7-x86_64

Here is an update of the flock services that I have tested so far and the errors I have found:

https://docs.google.com/spreadsheets/d/1kWrV3A28tTc3xgKiYtbir3ymcI4pew3VosE0jfB9_Fo/edit?usp=sharing

Scott, I have one question: on the IRC channel I asked about why sometimes the
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/RPMS/std/
link shows as forbidden or down. Is this because I caught it in the middle of an image creation?

[0] http://mirror.centos.org/centos/7/extras/x86_64/
[1] http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/RPMS/std/
[2] http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/inputs/RPMS/

TODO: Enable the Makefile to build a flock service when given not the SRPM but the tarball and the spec file. The problem that I have is in the handling of the flock package version when I create the tarballs myself.

I hope that this works for someone else.

Regards

Victor Rodriguez

From Ghada.Khalil at windriver.com Wed Aug 14 18:44:51 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Wed, 14 Aug 2019 18:44:51 +0000
Subject: [Starlingx-discuss] Detection of network error on Starling X
In-Reply-To:
References:
Message-ID: <151EE31B9FCCA54397A757BC674650F0C1591E78@ALA-MBD.corp.ad.wrs.com>

Hi Victor / Chenjie,
I have no issues with increasing the priority of this feature. It's all a matter of resourcing.

Hi Forrest,
Do you have resources in your team to work on this feature? I will not be able to resource this request.

Regards,
Ghada

-----Original Message-----
From: Xu, Chenjie [mailto:chenjie.xu at intel.com]
Sent: Tuesday, August 13, 2019 9:53 PM
To: Victor Rodriguez; starlingx-discuss at lists.starlingx.io; Khalil, Ghada
Subject: RE: [Starlingx-discuss] Detection of network error on Starling X

Hi Victor,
The feature you want to test is not implemented. The below comment from Ghada explains the reason, which is that "Starlingx does not have the capability to raise alarms when data links are pulled".
https://bugs.launchpad.net/starlingx/+bug/1834512/comments/2

Hi Ghada,
As requested by Victor, can we raise the priority of this feature?

Best Regards,
Xu, Chenjie

-----Original Message-----
From: Victor Rodriguez [mailto:vm.rod25 at gmail.com]
Sent: Wednesday, August 14, 2019 1:06 AM
To: starlingx-discuss at lists.starlingx.io; Khalil, Ghada
Subject: [Starlingx-discuss] Detection of network error on Starling X

Hi team/Ghada

From a functional perspective, we are not able to detect when the data network is lost.
We found the launchpad:
https://bugs.launchpad.net/starlingx/+bug/1834512
and we were wondering if there is any log where we could find the loss of the data network apart from fm alarm-list.

Was this something supported in STX 1.0? I saw that we will discuss if this will be supported by R 3.0, but I was wondering if we could discuss the possibility of raising the priority of the launchpad, since some use cases might want to measure/test network link failure detection.

Thanks

Victor Rodriguez

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From scott.little at windriver.com Wed Aug 14 19:15:51 2019
From: scott.little at windriver.com (Scott Little)
Date: Wed, 14 Aug 2019 15:15:51 -0400
Subject: [Starlingx-discuss] [multios][build] Build flock services with plain mock
In-Reply-To:
References:
Message-ID: <8930615a-21c7-3f31-3d23-0615fe852f07@windriver.com>

I've never seen a 404 or 403 myself, outside of the 3 or 4 extended outages attributed to known issues at cengn.

In the file system, latest_build is a symbolic link to one of the timestamped build directories.  I only change it at the end of a successful build, when the timestamped build directory is fully populated.  During a build, the symlink should be pointing you at the previous build.  Deleting the old link and creating the new one should only take a fraction of a second.

How many folks have seen this?  What was the time of the event?  How long did it persist?  Please report events in UTC.

Scott

On 2019-08-14 2:24 p.m., Victor Rodriguez wrote:
> Scott, I have one question: on the IRC channel I asked about why sometimes the
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/RPMS/std/
> link shows as forbidden or down. Is this because I caught it in the
> middle of an image creation?

From michael.l.tullis at intel.com Wed Aug 14 19:59:35 2019
From: michael.l.tullis at intel.com (Tullis, Michael L)
Date: Wed, 14 Aug 2019 19:59:35 +0000
Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 8/14/2019
Message-ID: <3808363B39586544A6839C76CF81445EA1BA30F5@ORSMSX104.amr.corp.intel.com>

For notes and new action items from our docs team meeting today, see our etherpad:
https://etherpad.openstack.org/p/stx-documentation

Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings.

-- Mike
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From michael.l.tullis at intel.com Wed Aug 14 20:00:00 2019
From: michael.l.tullis at intel.com (Tullis, Michael L)
Date: Wed, 14 Aug 2019 20:00:00 +0000
Subject: [Starlingx-discuss] [DOCS] new host-fs CLI commands
In-Reply-To: <19C65A6E92EA384D809B1772130CD7F869289F59@ALA-MBD.corp.ad.wrs.com>
References: <5ECD8395442B0C4FB807F9737625BB6768C578E2@ALA-MBD.corp.ad.wrs.com> <19C65A6E92EA384D809B1772130CD7F869289F59@ALA-MBD.corp.ad.wrs.com>
Message-ID: <3808363B39586544A6839C76CF81445EA1BA3101@ORSMSX104.amr.corp.intel.com>

Thanks Kristine. From the doc update perspective, we've added this as a new task to https://storyboard.openstack.org/#!/story/2006289.
-- Mike and team

From: Liu, Yang
Sent: Tuesday, August 13, 2019 10:04 AM
To: Bujold, Kristine ; starlingx-discuss at lists.starlingx.io
Cc: Balaraj, Juanita
Subject: Re: [Starlingx-discuss] [DOCS] new host-fs CLI commands

Hi Kristine,

What is the current expectation for the backup fs when modifying controllerfs? e.g., if we want to increase any replicated controller fs, do we first need to modify the non-replicated backup fs on both controllers to ensure there is enough space for all the replicated and non-replicated file systems?

Thanks,
Yang

From: Bujold, Kristine [mailto:Kristine.Bujold at windriver.com]
Sent: August-13-19 12:49 PM
To: starlingx-discuss at lists.starlingx.io
Cc: Balaraj, Juanita
Subject: [Starlingx-discuss] [DOCS] new host-fs CLI commands

A new CLI command called host-fs has been introduced to enable resizing of filesystems on a specific host. Changes were also made to the controllerfs API.
https://bugs.launchpad.net/starlingx/+bug/1830142

Previously

Previously we allowed resizing of our filesystems through the controllerfs API, which only applied to controller hosts. The controllerfs API modified the following filesystems:

- drbd replicated
    database
    docker-distribution
    etcd
    extension
    glance
- non replicated
    backup
    docker
    scratch

New host-fs API

A new host-fs API has been created to allow a filesystem to be resized on any specific host. The non-drbd filesystems from controllerfs (backup, docker and scratch) have been moved to the new host-fs API. Also, a new filesystem called "kubelet" has been created on all hosts with a default size of 10G.

The new API has the following CLI commands:

system host-fs-list <hostname>
system host-fs-modify <hostname> <fs name>=<size> [<fs name>=<size> ...]
system host-fs-show <hostname> <fs uuid>

system host-fs-list controller-1
+--------------------------------------+---------+-------------+----------------+
| UUID                                 | FS Name | Size in GiB | Logical Volume |
+--------------------------------------+---------+-------------+----------------+
| a4d83571-a555-4ba5-999f-af709206ae35 | backup  | 40          | backup-lv      |
| d57652a1-af17-47b8-b941-9ebfeee4a56f | docker  | 30          | docker-lv      |
| a84374c6-8917-4db5-bd34-2a8d244f2bf6 | kubelet | 10          | kubelet-lv     |
| 2c026d6f-5c03-4135-abca-c0047aa7f5a6 | scratch | 8           | scratch-lv     |
+--------------------------------------+---------+-------------+----------------+

system host-fs-list compute-1
+--------------------------------------+---------+-------------+----------------+
| UUID                                 | FS Name | Size in GiB | Logical Volume |
+--------------------------------------+---------+-------------+----------------+
| 32e8b0b2-8a26-4c87-ae2e-71477b276740 | docker  | 30          | docker-lv      |
| 223671f7-aed0-4c6e-b1d2-1327aae74439 | kubelet | 10          | kubelet-lv     |
| 36b96d05-febb-411b-a290-abdc5a2be0ff | scratch | 4           | scratch-lv     |
+--------------------------------------+---------+-------------+----------------+

system host-fs-show controller-1 a4d83571-a555-4ba5-999f-af709206ae35
+----------------+--------------------------------------+
| Property       | Value                                |
+----------------+--------------------------------------+
| uuid           | a4d83571-a555-4ba5-999f-af709206ae35 |
| name           | backup                               |
| size           | 40                                   |
| logical_volume | backup-lv                            |
| created_at     | 2019-08-08T03:05:25.341669+00:00     |
| updated_at     | None                                 |
+----------------+--------------------------------------+

system host-fs-modify controller-1 docker=31 kubelet=11
+--------------------------------------+---------+-------------+----------------+
| UUID                                 | FS Name | Size in GiB | Logical Volume |
+--------------------------------------+---------+-------------+----------------+
| a4d83571-a555-4ba5-999f-af709206ae35 | backup  | 40          | backup-lv      |
| d57652a1-af17-47b8-b941-9ebfeee4a56f | docker  | 31          | docker-lv      |
| a84374c6-8917-4db5-bd34-2a8d244f2bf6 | kubelet | 11          | kubelet-lv     |
| 2c026d6f-5c03-4135-abca-c0047aa7f5a6 | scratch | 8           | scratch-lv     |
+--------------------------------------+---------+-------------+----------------+

Filesystem  Supported Hosts  Folder            Logical Volume  Default Size
backup      controller       /opt/backups      backup-lv       40G
scratch     all hosts        /scratch          scratch-lv      varies depending on system, configured in kickstart
docker      all hosts        /var/lib/docker   docker-lv       30G (new fs to worker and storage hosts)
kubelet     all hosts        /var/lib/kubelet  kubelet-lv      10G (new fs to all hosts)

Changes to controllerfs API

The existing "platform" filesystem is now resizable and added to controllerfs. The "glance" filesystem is now merged into platform. Below are the new filesystems resizable by controllerfs. The backup, docker and scratch filesystems are now resizable under host-fs.

Filesystem           Folder                        Logical Volume         Default Size
database             /var/lib/postgresql           pgsql-lv               40G
docker-distribution  /var/lib/docker-distribution  dockerdistribution-lv  16G
etcd                 /opt/etcd                     etcd-lv                5G
extension            /opt/extension                extension-lv           1G
glance               /opt/cgcs                     cgcs-lv                20G (removed and merged with platform)
platform             /opt/platform                 platform-lv            10G
backup               /opt/backups                  backup-lv              managed by host-fs
scratch              /scratch                      scratch-lv             managed by host-fs
docker               /var/lib/docker               docker-lv              managed by host-fs

Thanks,
Kristine

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vm.rod25 at gmail.com Wed Aug 14 20:09:54 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Wed, 14 Aug 2019 15:09:54 -0500
Subject: [Starlingx-discuss] Detection of network error on Starling X
In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C1591E78@ALA-MBD.corp.ad.wrs.com>
References: <151EE31B9FCCA54397A757BC674650F0C1591E78@ALA-MBD.corp.ad.wrs.com>
Message-ID:

On Wed, Aug 14, 2019 at 1:44 PM Khalil, Ghada <Ghada.Khalil at windriver.com> wrote:
>
> Hi Victor / Chenjie,
> I have no issues with increasing the priority of this feature. It's all a matter of resourcing.
>

Thanks !!

> Hi Forrest,
> Do you have resources in your team to work on this feature? I will not be able to resource this request.
>
> Regards,
> Ghada
>
> -----Original Message-----
> From: Xu, Chenjie [mailto:chenjie.xu at intel.com]
> Sent: Tuesday, August 13, 2019 9:53 PM
> To: Victor Rodriguez; starlingx-discuss at lists.starlingx.io; Khalil, Ghada
> Subject: RE: [Starlingx-discuss] Detection of network error on Starling X
>
> Hi Victor,
> The feature you want to test is not implemented. The below comment from Ghada explains the reason, which is "Starlingx does not have the capability to raise alarms when data links are pulled".
> https://bugs.launchpad.net/starlingx/+bug/1834512/comments/2
>
> Hi Ghada,
> As requested by Victor, can we raise the priority of this feature?
>
> Best Regards,
> Xu, Chenjie
>
> -----Original Message-----
> From: Victor Rodriguez [mailto:vm.rod25 at gmail.com]
> Sent: Wednesday, August 14, 2019 1:06 AM
> To: starlingx-discuss at lists.starlingx.io; Khalil, Ghada
> Subject: [Starlingx-discuss] Detection of network error on Starling X
>
> Hi team/Ghada
>
> From a functional perspective, we are not able to detect when the data network is lost. We found the launchpad:
> https://bugs.launchpad.net/starlingx/+bug/1834512 and we were wondering if there is any log where we could find the loss of the data network apart from fm alarm-list.
>
> Was this something supported in STX 1.0? I saw that we will discuss if this will be supported by R 3.0, but I was wondering if we could discuss the possibility to change the priority of the launchpad, since some use cases might want to measure/test the network link failure detection.
>
> Thanks
>
> Victor Rodriguez
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From vm.rod25 at gmail.com Wed Aug 14 20:17:43 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Wed, 14 Aug 2019 15:17:43 -0500
Subject: [Starlingx-discuss] [multios][build] Build flock services with plain mock
In-Reply-To: <8930615a-21c7-3f31-3d23-0615fe852f07@windriver.com>
References: <8930615a-21c7-3f31-3d23-0615fe852f07@windriver.com>
Message-ID:

On Wed, Aug 14, 2019 at 2:16 PM Scott Little <scott.little at windriver.com> wrote:
>
> I've never seen a 404 or 403 myself, outside of the 3 or 4 extended
> outages attributed to known issues at cengn.
>
> In the file system, latest_build is a symbolic link to one of the
> timestamped build directories. I only change it at the end of a
> successful build, when the timestamped build directory is fully
> populated. During a build, the symlink should be pointing you at the
> previous build. Deleting the old link and creating the new one should
> only take a fraction of a second.
>

Ok, the failure I had was:

failure: repodata/repomd.xml from stx-cengn: [Errno 256] No more mirrors to try.
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/RPMS/std/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden

> How many folks have seen this?

So far, only me, and I double tested it

> What was the time of the event?

Around 6 PM in UTC

> How long did it persist?

Less than 10 min

> Please report events in UTC.

Got it, I will do that next time. In the meantime, I will leave my script running to try to build all the output packages and see if everyone can build or if this error comes back; a minimal sketch of the probe is at the bottom of this mail.

Thanks for the help

Victor R

> Scott
>
> On 2019-08-14 2:24 p.m., Victor Rodriguez wrote:
> > Scott, I have one question: on the IRC channel I asked why the
> > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/RPMS/std/
> > link sometimes shows as forbidden or down. Is this because I caught it
> > in the middle of an image creation?
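P.S. The probe mentioned above (a sketch -- the URL, interval and log format are just what I happen to use):

#!/bin/bash
# Poll the CENGN repo once a minute and log every non-200 answer
# with a UTC timestamp, so events can be reported the way Scott asked.
URL=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/RPMS/std/repodata/repomd.xml
while true; do
    code=$(curl -s -o /dev/null -w '%{http_code}' "$URL")
    if [ "$code" != "200" ]; then
        echo "$(date -u '+%Y-%m-%dT%H:%M:%SZ') HTTP $code"
    fi
    sleep 60
done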
> > From maria.g.perez.ibarra at intel.com Wed Aug 14 21:53:53 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 14 Aug 2019 21:53:53 +0000 Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190814 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-14 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From abraham.arce.moreno at intel.com Wed Aug 14 22:06:14 2019 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Wed, 14 Aug 2019 22:06:14 +0000 Subject: [Starlingx-discuss] [stable] [build-report] STX_build_wheels - Build # 113 - Still Failing! In-Reply-To: <2073743891.146.1565750164479.JavaMail.javamailuser@localhost> References: <714179922.134.1565592360331.JavaMail.javamailuser@localhost> <2073743891.146.1565750164479.JavaMail.javamailuser@localhost> Message-ID: > Project: STX_build_wheels > Build #: 113 > Status: Still Failing > Timestamp: 20190814T020549Z Fixes are ready in both master and r/stx2.0 branches: - https://review.opendev.org/#/c/676267/ master - https://review.opendev.org/#/c/676520/ r/stx.2.0 From dtroyer at gmail.com Wed Aug 14 22:42:57 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Wed, 14 Aug 2019 17:42:57 -0500 Subject: [Starlingx-discuss] [multios][build] Build flock services with plan mock In-Reply-To: <8930615a-21c7-3f31-3d23-0615fe852f07@windriver.com> References: <8930615a-21c7-3f31-3d23-0615fe852f07@windriver.com> Message-ID: On Wed, Aug 14, 2019 at 2:18 PM Scott Little wrote: > I've never seen a 404 or 403 myself, outside of the 3 or 4 extended > outages attributed to know issues at cengn. [...] > How many folks have seen this? What was the time of the event? How > long did it persist? Please report events in UTC. 
So I've been poking at this for the last few minutes, so around 2200-2230 UTC.

These links work:

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190811T053000Z/
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190813T033000Z/

These do not:

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033004Z/
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190814T053000Z/

Until I tried them again to write this email, then they swapped.

Is there perchance a load balancer in front of multiple web servers, and one of the backends is having trouble? Even if that isn't the case, that seems to describe the observed behaviour well enough.

dt

--
Dean Troyer
dtroyer at gmail.com

From forrest.zhao at intel.com Thu Aug 15 01:14:01 2019
From: forrest.zhao at intel.com (Zhao, Forrest)
Date: Thu, 15 Aug 2019 01:14:01 +0000
Subject: [Starlingx-discuss] Detection of network error on Starling X
In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C1591E78@ALA-MBD.corp.ad.wrs.com>
References: <151EE31B9FCCA54397A757BC674650F0C1591E78@ALA-MBD.corp.ad.wrs.com>
Message-ID: <6345119E91D5C843A93D64F498ACFA1374EFCB89@shsmsx102.ccr.corp.intel.com>

Hi Victor and Ghada,

We agree that this is a required feature for network testing and failure detection.

We set the priority to high and Chenjie is committed to working on it.

Thanks,
Forrest

-----Original Message-----
From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com]
Sent: Thursday, August 15, 2019 2:45 AM
To: Xu, Chenjie ; Victor Rodriguez ; starlingx-discuss at lists.starlingx.io; Zhao, Forrest
Subject: Re: [Starlingx-discuss] Detection of network error on Starling X

Hi Victor / Chenjie,
I have no issues with increasing the priority of this feature. It's all a matter of resourcing.

Hi Forrest,
Do you have resources in your team to work on this feature? I will not be able to resource this request.

Regards,
Ghada

-----Original Message-----
From: Xu, Chenjie [mailto:chenjie.xu at intel.com]
Sent: Tuesday, August 13, 2019 9:53 PM
To: Victor Rodriguez; starlingx-discuss at lists.starlingx.io; Khalil, Ghada
Subject: RE: [Starlingx-discuss] Detection of network error on Starling X

Hi Victor,
The feature you want to test is not implemented. The below comment from Ghada explains the reason, which is "Starlingx does not have the capability to raise alarms when data links are pulled".
https://bugs.launchpad.net/starlingx/+bug/1834512/comments/2

Hi Ghada,
As requested by Victor, can we raise the priority of this feature?

Best Regards,
Xu, Chenjie

-----Original Message-----
From: Victor Rodriguez [mailto:vm.rod25 at gmail.com]
Sent: Wednesday, August 14, 2019 1:06 AM
To: starlingx-discuss at lists.starlingx.io; Khalil, Ghada
Subject: [Starlingx-discuss] Detection of network error on Starling X

Hi team/Ghada

From a functional perspective, we are not able to detect when the data network is lost. We found the launchpad:
https://bugs.launchpad.net/starlingx/+bug/1834512 and we were wondering if there is any log where we could find the loss of the data network apart from fm alarm-list.

Was this something supported in STX 1.0? I saw that we will discuss if this will be supported by R 3.0, but I was wondering if we could discuss the possibility to change the priority of the launchpad, since some use cases might want to measure/test the network link failure detection.
Thanks

Victor Rodriguez

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From erich.cordoba.malibran at intel.com Thu Aug 15 03:08:31 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Thu, 15 Aug 2019 03:08:31 +0000
Subject: [Starlingx-discuss] [multios][build] Build flock services with plain mock
In-Reply-To: References: <8930615a-21c7-3f31-3d23-0615fe852f07@windriver.com>
Message-ID: <39DC5BEB-6468-41F8-83B6-8CE4920E8D89@intel.com>

I can see it also, and it's easily reproducible with this line:

$ while true; do curl -I -q http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/ && sleep 1; done

HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:46 GMT
Content-Type: text/html
Vary: Accept-Encoding
Via: 1.1 jfdmzpr03, 1.1 jfintpr01
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

HTTP/1.1 403 Forbidden
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:48 GMT
Content-Type: text/html
Content-Length: 153
Via: 1.1 jfdmzpr04, 1.1 jfintpr02
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:49 GMT
Content-Type: text/html
Vary: Accept-Encoding
Via: 1.1 jfdmzpr04, 1.1 jfintpr02
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

HTTP/1.1 403 Forbidden
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:51 GMT
Content-Type: text/html
Content-Length: 153
Via: 1.1 jfdmzpr04, 1.1 jfintpr02
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:52 GMT
Content-Type: text/html
Vary: Accept-Encoding
Via: 1.1 jfdmzpr04, 1.1 jfintpr02
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

HTTP/1.1 403 Forbidden
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:53 GMT
Content-Type: text/html
Content-Length: 153
Via: 1.1 jfdmzpr04, 1.1 jfintpr02
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

On 8/14/19, 5:43 PM, "Dean Troyer" <dtroyer at gmail.com> wrote:

On Wed, Aug 14, 2019 at 2:18 PM Scott Little <scott.little at windriver.com> wrote:
> I've never seen a 404 or 403 myself, outside of the 3 or 4 extended
> outages attributed to known issues at cengn.
[...]
> How many folks have seen this? What was the time of the event? How
> long did it persist? Please report events in UTC.

So I've been poking at this for the last few minutes, so around 2200-2230 UTC

These links work:

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190811T053000Z/
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190813T033000Z/

These do not:

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033004Z/
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190814T053000Z/

Until I tried them again to write this email, then they swapped.

Is there perchance a load balancer in front of multiple web servers and one of the backends is having trouble? Even if that isn't the case that seems to describe the observed behaviour well enough.
dt -- Dean Troyer dtroyer at gmail.com _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Thu Aug 15 13:41:17 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 15 Aug 2019 06:41:17 -0700 Subject: [Starlingx-discuss] Edge Hacking Days - August 16 Message-ID: <7B973AFB-D11A-44BF-9F84-42D8EA097403@gmail.com> Hi, It is a friendly reminder that we are having the second edge hacking days in August this Friday (August 16). The dial-in information is the same, you can find the details here: https://etherpad.openstack.org/p/osf-edge-hacking-days If you’re interested in joining please __add your name and the time period (with time zone) when you will be available__ on these dates. You can also add topics that you would be interested in working on. We will keep on working on two items: * Keystone to Keystone federation testing in DevStack * Building the centralized edge reference architecture on Packet HW using TripleO Please let me know if you have any questions. See you on Friday! :) Thanks, Ildikó From ildiko.vancsa at gmail.com Thu Aug 15 13:51:12 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 15 Aug 2019 06:51:12 -0700 Subject: [Starlingx-discuss] Shanghai Open Infrastructure Summit preparation Message-ID: <1CE683C8-41EB-4876-81F0-AA74871C8C6D@gmail.com> Hi StarlingX Community, The Shanghai Summit is approaching quickly and there are a few items to prepare for: * Project update session: * We have 15 min and 40 min long slots available on a first come first serve basis - Which one would the community prefer? * __Volunteer needed to present__ - reply to this thread or reach out to me if you are interested * Forum * https://wiki.openstack.org/wiki/Forum * Submission period is September 2 - 16 * Etherpad for brainstorming and preparation: https://etherpad.openstack.org/p/PVG-StarlingX-brainstorming * PTG: * Requested 1 room for 1.25 - 1.5 days * Requested space and time for both technical discussions and project onboarding * Preparation etherpad: https://etherpad.openstack.org/p/PVG-StarlingX-PTG Please take a look at the above items and let me know if you have any questions or would be interested in presenting the project update. Thanks and Best Regards, Ildikó From vm.rod25 at gmail.com Thu Aug 15 15:05:04 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Thu, 15 Aug 2019 10:05:04 -0500 Subject: [Starlingx-discuss] Detection of network error on Starling X In-Reply-To: <6345119E91D5C843A93D64F498ACFA1374EFCB89@shsmsx102.ccr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0C1591E78@ALA-MBD.corp.ad.wrs.com> <6345119E91D5C843A93D64F498ACFA1374EFCB89@shsmsx102.ccr.corp.intel.com> Message-ID: That is excellent news, thanks a lot !! On Wed, Aug 14, 2019 at 8:14 PM Zhao, Forrest wrote: > Hi Victor and Ghada, > > We agree that this is a required feature for networking testing and > failure detection. > > We set the priority to high and Chenjie is committed to working on it. 
>
> Thanks,
> Forrest
>
> -----Original Message-----
> From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com]
> Sent: Thursday, August 15, 2019 2:45 AM
> To: Xu, Chenjie <chenjie.xu at intel.com>; Victor Rodriguez <vm.rod25 at gmail.com>; starlingx-discuss at lists.starlingx.io; Zhao, Forrest <forrest.zhao at intel.com>
> Subject: Re: [Starlingx-discuss] Detection of network error on Starling X
>
> Hi Victor / Chenjie,
> I have no issues with increasing the priority of this feature. It's all a matter of resourcing.
>
> Hi Forrest,
> Do you have resources in your team to work on this feature? I will not be able to resource this request.
>
> Regards,
> Ghada
>
> -----Original Message-----
> From: Xu, Chenjie [mailto:chenjie.xu at intel.com]
> Sent: Tuesday, August 13, 2019 9:53 PM
> To: Victor Rodriguez; starlingx-discuss at lists.starlingx.io; Khalil, Ghada
> Subject: RE: [Starlingx-discuss] Detection of network error on Starling X
>
> Hi Victor,
> The feature you want to test is not implemented. The below comment from Ghada explains the reason, which is "Starlingx does not have the capability to raise alarms when data links are pulled".
> https://bugs.launchpad.net/starlingx/+bug/1834512/comments/2
>
> Hi Ghada,
> As requested by Victor, can we raise the priority of this feature?
>
> Best Regards,
> Xu, Chenjie
>
> -----Original Message-----
> From: Victor Rodriguez [mailto:vm.rod25 at gmail.com]
> Sent: Wednesday, August 14, 2019 1:06 AM
> To: starlingx-discuss at lists.starlingx.io; Khalil, Ghada <Ghada.Khalil at windriver.com>
> Subject: [Starlingx-discuss] Detection of network error on Starling X
>
> Hi team/Ghada
>
> From a functional perspective, we are not able to detect when the data network is lost. We found the launchpad:
> https://bugs.launchpad.net/starlingx/+bug/1834512 and we were wondering if there is any log where we could find the loss of the data network apart from fm alarm-list.
>
> Was this something supported in STX 1.0? I saw that we will discuss if this will be supported by R 3.0, but I was wondering if we could discuss the possibility to change the priority of the launchpad, since some use cases might want to measure/test the network link failure detection.
>
> Thanks
>
> Victor Rodriguez
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Al.Bailey at windriver.com Thu Aug 15 15:12:26 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Thu, 15 Aug 2019 15:12:26 +0000
Subject: [Starlingx-discuss] StarlingX devstack jobs randomly failing
Message-ID:

The devstack zuul jobs for config are failing. The error looks to be a database migration failure when initializing the glance DB.

These are the DB components being installed:

psycopg2 2.8.3
SQLAlchemy 1.3.6
Sqlalchemy_migrate 0.12.0

I think this may be due to this upstream commit, which merged this morning:
https://review.opendev.org/#/c/665606/

I don't have any idea on how to fix this. We can disable the jobs, or remove glance from our devstack jobs, but we are blocked until this is resolved. (A reproduction of the failing query, and the PostgreSQL-friendly spelling, is sketched just below.)
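A sketch for anyone who wants to poke at the query directly -- the database name and the psql invocation are assumptions about the devstack environment:

$ sudo -u postgres psql glance -c \
    "SELECT meta_data FROM image_locations WHERE INSTR(meta_data, '\"backend\":') > 0;"
ERROR:  function instr(text, unknown) does not exist

# PostgreSQL has no INSTR(); it spells the same predicate with strpos()
# (or POSITION ... IN):
$ sudo -u postgres psql glance -c \
    "SELECT meta_data FROM image_locations WHERE strpos(meta_data, '\"backend\":') > 0;"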
It may affect other flock repos as well, if they also have devstack jobs that are setting up glance.

Example stacktrace:

  File "/opt/stack/glance/glance/db/sqlalchemy/alembic_migrations/data_migrations/train_migrate01_backend_to_store.py", line 28, in has_migrations
    metadata_backend = con.execute(sql_query)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 982, in execute
    return self._execute_text(object_, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1155, in _execute_text
    parameters,
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1248, in _execute_context
    e, statement, parameters, cursor, context
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1464, in _handle_dbapi_exception
    util.raise_from_cause(newraise, exc_info)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1244, in _execute_context
    cursor, statement, parameters, context
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 552, in do_execute
    cursor.execute(statement, parameters)
DBError: (psycopg2.errors.UndefinedFunction) function instr(text, unknown) does not exist
LINE 1: select meta_data from image_locations where INSTR(meta_data,...
                                                    ^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
[SQL: select meta_data from image_locations where INSTR(meta_data, '"backend":') > 0]
(Background on this error at: http://sqlalche.me/e/f405)

Upgraded database to: train_expand01, current revision(s): train_expand01

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Brent.Rowsell at windriver.com Thu Aug 15 15:30:36 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Thu, 15 Aug 2019 15:30:36 +0000
Subject: [Starlingx-discuss] StarlingX devstack jobs randomly failing
In-Reply-To: References: Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC262365D@ALA-MBD.corp.ad.wrs.com>

Why would config be setting up a glance DB?

Brent

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com]
Sent: Thursday, August 15, 2019 11:12 AM
To: starlingx-discuss
Subject: [Starlingx-discuss] StarlingX devstack jobs randomly failing

The devstack zuul jobs for config are failing. The error looks to be a database migration failure when initializing the glance DB.

These are the DB components being installed:

psycopg2 2.8.3
SQLAlchemy 1.3.6
Sqlalchemy_migrate 0.12.0

I think this may be due to this upstream commit, which merged this morning:
https://review.opendev.org/#/c/665606/

I don't have any idea on how to fix this. We can disable the jobs, or remove glance from our devstack jobs, but we are blocked until this is resolved. It may affect other flock repos as well, if they also have devstack jobs that are setting up glance.
Example stacktrace:

  File "/opt/stack/glance/glance/db/sqlalchemy/alembic_migrations/data_migrations/train_migrate01_backend_to_store.py", line 28, in has_migrations
    metadata_backend = con.execute(sql_query)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 982, in execute
    return self._execute_text(object_, multiparams, params)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1155, in _execute_text
    parameters,
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1248, in _execute_context
    e, statement, parameters, cursor, context
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1464, in _handle_dbapi_exception
    util.raise_from_cause(newraise, exc_info)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause
    reraise(type(exception), exception, tb=exc_tb, cause=cause)
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1244, in _execute_context
    cursor, statement, parameters, context
  File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 552, in do_execute
    cursor.execute(statement, parameters)
DBError: (psycopg2.errors.UndefinedFunction) function instr(text, unknown) does not exist
LINE 1: select meta_data from image_locations where INSTR(meta_data,...
                                                    ^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
[SQL: select meta_data from image_locations where INSTR(meta_data, '"backend":') > 0]
(Background on this error at: http://sqlalche.me/e/f405)

Upgraded database to: train_expand01, current revision(s): train_expand01

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From vm.rod25 at gmail.com Thu Aug 15 15:33:19 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Thu, 15 Aug 2019 10:33:19 -0500
Subject: [Starlingx-discuss] [multios][build] Build flock services with plain mock
In-Reply-To: <39DC5BEB-6468-41F8-83B6-8CE4920E8D89@intel.com>
References: <8930615a-21c7-3f31-3d23-0615fe852f07@windriver.com> <39DC5BEB-6468-41F8-83B6-8CE4920E8D89@intel.com>
Message-ID:

Thanks a lot to everyone who helped us test and verify this issue. During the build meeting, Scott agreed to help us talk with CENGN to fix it.

In the meantime, a local repo with the RPMs from

[1] http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/RPMS/std/
[2] http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/inputs/RPMS/

is the solution. If you download them, you just need to run createrepo over that directory (a sketch is below). This is just a temporary solution, since the idea is that anyone of us can build w/o the need for heavy workstations.
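To build the mirror, something like this is enough (a sketch -- the paths and wget options are just what I happen to use; adjust --cut-dirs to the URL depth):

$ mkdir -p ~/stx-local-repo && cd ~/stx-local-repo
$ wget -r -np -nH --cut-dirs=8 -A '*.rpm' \
      http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/RPMS/std/
$ createrepo .
$ # then point the baseurl in the mock .cfg at file://$HOME/stx-local-repo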
Thanks a lot Scott Regards Victor R On Wed, Aug 14, 2019 at 10:08 PM Cordoba Malibran, Erich < erich.cordoba.malibran at intel.com> wrote: > I can see it also and it's easily reproducible with this line: > > $ while true; do curl -I -q > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/ && sleep > 1; done > > HTTP/1.1 200 OK > Server: nginx/1.15.8 > Date: Thu, 15 Aug 2019 02:03:46 GMT > Content-Type: text/html > Vary: Accept-Encoding > Via: 1.1 jfdmzpr03, 1.1 jfintpr01 > Proxy-Connection: Keep-Alive > Connection: Keep-Alive > > HTTP/1.1 403 Forbidden > Server: nginx/1.15.8 > Date: Thu, 15 Aug 2019 02:03:48 GMT > Content-Type: text/html > Content-Length: 153 > Via: 1.1 jfdmzpr04, 1.1 jfintpr02 > Proxy-Connection: Keep-Alive > Connection: Keep-Alive > > HTTP/1.1 200 OK > Server: nginx/1.15.8 > Date: Thu, 15 Aug 2019 02:03:49 GMT > Content-Type: text/html > Vary: Accept-Encoding > Via: 1.1 jfdmzpr04, 1.1 jfintpr02 > Proxy-Connection: Keep-Alive > Connection: Keep-Alive > > HTTP/1.1 403 Forbidden > Server: nginx/1.15.8 > Date: Thu, 15 Aug 2019 02:03:51 GMT > Content-Type: text/html > Content-Length: 153 > Via: 1.1 jfdmzpr04, 1.1 jfintpr02 > Proxy-Connection: Keep-Alive > Connection: Keep-Alive > > HTTP/1.1 200 OK > Server: nginx/1.15.8 > Date: Thu, 15 Aug 2019 02:03:52 GMT > Content-Type: text/html > Vary: Accept-Encoding > Via: 1.1 jfdmzpr04, 1.1 jfintpr02 > Proxy-Connection: Keep-Alive > Connection: Keep-Alive > > HTTP/1.1 403 Forbidden > Server: nginx/1.15.8 > Date: Thu, 15 Aug 2019 02:03:53 GMT > Content-Type: text/html > Content-Length: 153 > Via: 1.1 jfdmzpr04, 1.1 jfintpr02 > Proxy-Connection: Keep-Alive > Connection: Keep-Alive > > > On 8/14/19, 5:43 PM, "Dean Troyer" wrote: > > On Wed, Aug 14, 2019 at 2:18 PM Scott Little < > scott.little at windriver.com> wrote: > > I've never seen a 404 or 403 myself, outside of the 3 or 4 extended > > outages attributed to know issues at cengn. > [...] > > How many folks have seen this? What was the time of the event? How > > long did it persist? Please report events in UTC. > > So I've been poking at this for the last few minutes, so around > 2200-2230 UTC > > These links work: > > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190811T053000Z/ > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190813T033000Z/ > > These do not: > > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033004Z/ > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190814T053000Z/ > > Until I tried them again to write this email, then they swapped. > > Is there perchance a load balancer in front of multiple web servers > and one of the backends is having trouble? Even if that isn't the > case that seems to describe the observed behaviour well enough. > > dt > > -- > Dean Troyer > dtroyer at gmail.com > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From Al.Bailey at windriver.com Thu Aug 15 15:39:27 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Thu, 15 Aug 2019 15:39:27 +0000
Subject: [Starlingx-discuss] StarlingX devstack jobs randomly failing
In-Reply-To: References: <2588653EBDFFA34B982FAF00F1B4844EC262365D@ALA-MBD.corp.ad.wrs.com>
Message-ID:

I think it's because the devstack job for config requires nova and neutron, and nova pulls in glance. I think those jobs were started before containerized openstack support, so I don't think we need nova and neutron anymore.

I'm trying this review: https://review.opendev.org/#/c/676719/

If that does not work, we likely just need to turn off the devstack job for config. Our env (sysinv) uses postgres, and perhaps postgres support has been dropped in train.

Al

From: Rowsell, Brent
Sent: Thursday, August 15, 2019 11:31 AM
To: Bailey, Henry Albert (Al); starlingx-discuss
Subject: RE: StarlingX devstack jobs randomly failing

Why would config be setting up a glance DB?

Brent

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com]
Sent: Thursday, August 15, 2019 11:12 AM
To: starlingx-discuss
Subject: [Starlingx-discuss] StarlingX devstack jobs randomly failing

The devstack zuul jobs for config are failing. The error looks to be a database migration failure when initializing the glance DB.

These are the DB components being installed:

psycopg2 2.8.3
SQLAlchemy 1.3.6
Sqlalchemy_migrate 0.12.0

I think this may be due to this upstream commit, which merged this morning:
https://review.opendev.org/#/c/665606/

I don't have any idea on how to fix this. We can disable the jobs, or remove glance from our devstack jobs, but we are blocked until this is resolved. It may affect other flock repos as well, if they also have devstack jobs that are setting up glance.
[SQL: select meta_data from image_locations where INSTR(meta_data, '"backend":') > 0] (Background on this error at: http://sqlalche.me/e/f405) Upgraded database to: train_expand01, current revision(s): train_expand01 -------------- next part -------------- An HTML attachment was scrubbed... URL: From Al.Bailey at windriver.com Thu Aug 15 16:22:03 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Thu, 15 Aug 2019 16:22:03 +0000 Subject: [Starlingx-discuss] StarlingX devstack jobs randomly failing In-Reply-To: References: <2588653EBDFFA34B982FAF00F1B4844EC262365D@ALA-MBD.corp.ad.wrs.com> Message-ID: I raised the bug for this https://bugs.launchpad.net/starlingx/+bug/1840292 My attempted change (dropping nova/neutron) did not work, so I have removed devstack as a zuul job from config. https://review.opendev.org/#/c/676660/ If approved, this will almost definitely need to be cherry picked to R2.0, since I suspect any attempted cherry-picks for already committed bugs back to the R2 config branch will also fail zuul. Al From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Thursday, August 15, 2019 11:39 AM To: Rowsell, Brent; starlingx-discuss Subject: Re: [Starlingx-discuss] StarlingX devstack jobs randomly failing So I think that because the devstack job for config is requiring nova and neutron, and nova pulls in glance, that is why. I think those jobs were started before containerized openstack support, so I don't think we need nova and neutron anymore. I'm trying this review https://review.opendev.org/#/c/676719/ If that does not work, we likely just need to turn off devstack job for config. Our env (sysinv) uses postgres, and perhaps postgres support has been dropped in train. Al From: Rowsell, Brent Sent: Thursday, August 15, 2019 11:31 AM To: Bailey, Henry Albert (Al); starlingx-discuss Subject: RE: StarlingX devstack jobs randomly failing Why would config be setting up a glance db ? Brent From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Thursday, August 15, 2019 11:12 AM To: starlingx-discuss Subject: [Starlingx-discuss] StarlingX devstack jobs randomly failing The devstack zuul jobs for config are failing. The error looks to be a database migration failure when initializing the glance DB These are the DB components being installed psycopg2 2.8.3 SQLAlchemy 1.3.6 Sqlalchemy_migrate 0.12.0 I think this may be due to this upstream commit which merged this morning https://review.opendev.org/#/c/665606/ I don't have any idea on how to fix this. We can disable the jobs, or remove glance from our devstack jobs, but we are blocked until this is resolved. It may affect other flock repos as well, if they also have devstack jobs that are setting up glance. 
Example stacktrace: File "/opt/stack/glance/glance/db/sqlalchemy/alembic_migrations/data_migrations/train_migrate01_backend_to_store.py", line 28, in has_migrations metadata_backend = con.execute(sql_query) File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 982, in execute return self._execute_text(object_, multiparams, params) File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1155, in _execute_text parameters, File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1248, in _execute_context e, statement, parameters, cursor, context File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1464, in _handle_dbapi_exception util.raise_from_cause(newraise, exc_info) File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/util/compat.py", line 398, in raise_from_cause reraise(type(exception), exception, tb=exc_tb, cause=cause) File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/base.py", line 1244, in _execute_context cursor, statement, parameters, context File "/usr/local/lib/python2.7/dist-packages/sqlalchemy/engine/default.py", line 552, in do_execute cursor.execute(statement, parameters) DBError: (psycopg2.errors.UndefinedFunction) function instr(text, unknown) does not exist LINE 1: select meta_data from image_locations where INSTR(meta_data,... ^ HINT: No function matches the given name and argument types. You might need to add explicit type casts. [SQL: select meta_data from image_locations where INSTR(meta_data, '"backend":') > 0] (Background on this error at: http://sqlalche.me/e/f405) Upgraded database to: train_expand01, current revision(s): train_expand01 -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Aug 15 17:00:18 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 15 Aug 2019 13:00:18 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 22 - Failure! Message-ID: <2134308674.161.1565888419063.JavaMail.javamailuser@localhost> Project: STX_BUILD_2.0 Build #: 22 Status: Failure Timestamp: 20190815T165739Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190815T165739Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true From Tao.Liu at windriver.com Thu Aug 15 17:05:48 2019 From: Tao.Liu at windriver.com (Liu, Tao) Date: Thu, 15 Aug 2019 17:05:48 +0000 Subject: [Starlingx-discuss] Pending: Support single huge page size for openstack worker node Message-ID: <7242A3DC72E453498E3D783BBB134C3EA4E812A6@ALA-MBD.corp.ad.wrs.com> Hi All, Per story 2006295, we are in the process of supporting single huge page size for openstack worker node. This means we will enforce the provisioning of a single huge page size per worker, which aligns with the non-openstack worker behavior. The automated test cases that attempt to allocate both 2M and 1G huge pages on a worker node should be updated. The code changes are available here: https://review.opendev.org/#/c/676710/ Regards, Tao Liu, Member of Technical Staff, Engineering,, Wind River direct 613.963.1413 fax: 613.492.7870 skype: tao_at_home 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From build.starlingx at gmail.com Thu Aug 15 18:08:37 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 15 Aug 2019 14:08:37 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 23 - Still Failing!
In-Reply-To: <624237049.159.1565888416515.JavaMail.javamailuser@localhost>
References: <624237049.159.1565888416515.JavaMail.javamailuser@localhost>
Message-ID: <193125265.165.1565892518585.JavaMail.javamailuser@localhost>

Project: STX_BUILD_2.0
Build #: 23
Status: Still Failing
Timestamp: 20190815T180628Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190815T180628Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: true

From build.starlingx at gmail.com Thu Aug 15 18:38:43 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 15 Aug 2019 14:38:43 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 24 - Still Failing!
In-Reply-To: <191223008.163.1565892516223.JavaMail.javamailuser@localhost>
References: <191223008.163.1565892516223.JavaMail.javamailuser@localhost>
Message-ID: <1479970648.170.1565894323883.JavaMail.javamailuser@localhost>

Project: STX_BUILD_2.0
Build #: 24
Status: Still Failing
Timestamp: 20190815T183624Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190815T183624Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: true

From abraham.arce.moreno at intel.com Thu Aug 15 19:08:36 2019
From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham)
Date: Thu, 15 Aug 2019 19:08:36 +0000
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 24 - Still Failing!
In-Reply-To: <1479970648.170.1565894323883.JavaMail.javamailuser@localhost>
References: <191223008.163.1565892516223.JavaMail.javamailuser@localhost> <1479970648.170.1565894323883.JavaMail.javamailuser@localhost>
Message-ID:

> Project: STX_BUILD_2.0
> Build #: 24
> Status: Still Failing
> Timestamp: 20190815T183624Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190815T183624Z/logs

The failure is due to the error:
[Errno 14] HTTP Error 403 - Forbidden
during STX_download_mirror [0], the same web server issue as described in another email thread [1].

There is already a ticket submitted with the CENGN infrastructure team.

[0] http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190815T183624Z/logs/jenkins-STX_download_mirror-414.log.html
[1] http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005675.html

From vm.rod25 at gmail.com Thu Aug 15 19:11:21 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Thu, 15 Aug 2019 14:11:21 -0500
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 24 - Still Failing!
In-Reply-To: References: <191223008.163.1565892516223.JavaMail.javamailuser@localhost> <1479970648.170.1565894323883.JavaMail.javamailuser@localhost>
Message-ID:

Glad we checked this topic in the build meeting!
On Thu, Aug 15, 2019 at 2:09 PM Arce Moreno, Abraham < abraham.arce.moreno at intel.com> wrote: > > Project: STX_BUILD_2.0 > > Build #: 24 > > Status: Still Failing > > Timestamp: 20190815T183624Z > > > > Check logs at: > > > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190815T183624Z/logs > > The failure is due to the error: > [Errno 14] HTTP Error 403 - Forbidden > during the STX_download_mirror [0], same web server issue as described in > another email thread [1]. > > There is already a ticket submitted with CENGN infrastructure team. > > [0] > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190815T183624Z/logs/jenkins-STX_download_mirror-414.log.html > [1] > http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005675.html > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Thu Aug 15 19:48:05 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 15 Aug 2019 19:48:05 +0000 Subject: [Starlingx-discuss] [RC1] Sanity Test - ISO 20190815 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-15 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Aug 15 20:23:27 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 15 Aug 2019 16:23:27 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 25 - Still Failing! 
In-Reply-To: <677294009.168.1565894321685.JavaMail.javamailuser@localhost>
References: <677294009.168.1565894321685.JavaMail.javamailuser@localhost>
Message-ID: <987791968.174.1565900608708.JavaMail.javamailuser@localhost>

Project: STX_BUILD_2.0
Build #: 25
Status: Still Failing
Timestamp: 20190815T202106Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190815T202106Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: true

From build.starlingx at gmail.com Thu Aug 15 20:28:23 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 15 Aug 2019 16:28:23 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 26 - Still Failing!
In-Reply-To: <583961482.172.1565900606566.JavaMail.javamailuser@localhost>
References: <583961482.172.1565900606566.JavaMail.javamailuser@localhost>
Message-ID: <948177465.178.1565900904033.JavaMail.javamailuser@localhost>

Project: STX_BUILD_2.0
Build #: 26
Status: Still Failing
Timestamp: 20190815T202631Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190815T202631Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: true

From erich.cordoba.malibran at intel.com Thu Aug 15 23:08:08 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Thu, 15 Aug 2019 23:08:08 +0000
Subject: [Starlingx-discuss] [Containers] Worker nodes pulling from the external registry instead of the internal one
Message-ID: <9bec40650b28680d952b669fced842ff6dbbb48a.camel@intel.com>

Hi.

I'm working on this bug: https://bugs.launchpad.net/starlingx/+bug/1817958 and I have a proposal that I would like to discuss.

In this bug, if an external docker registry is being used, all worker nodes will retrieve containers from the external registry instead of using registry.local. This happens because the download_an_image [0] function has the following logic:

If the container image name has the registry.local prefix, then
    Try to download from registry.local
    If not found in registry.local, then
        Try to download from the public/private registry
        Push the image to registry.local
Else
    Try to download from the public/private registry

Here there are two bugs:

1. If the container image doesn't have the registry.local prefix, then the image won't be pushed into the local registry. This can be easily fixed by adding the client.push call.
2. Worker nodes will still try to download from the external registry even if the image was pushed to the local registry. This is because the images won't have the registry.local prefix.

So, to fix this I would like to propose the following:

1. Ensure that all images have the registry.local prefix, regardless of whether a private registry is defined or not. I'm not sure yet how to do this, and I would appreciate some help pointing me in the right direction.
2. Change the logic in download_an_image to do this:

Try to download the image:
    If not found, then:
        Remove the registry.local prefix
        Try to download from the public/private registry
        Retag and push to the local registry

This way, all nodes will try registry.local first and then the public/private registry if the image is not found; the equivalent docker CLI flow is sketched below.

What do you think about this fix?

Thank you in advance.
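The proposed flow per image, in docker CLI terms (a sketch -- the image name and the registry.local port are assumptions):

IMG=docker.io/starlingx/stx-keystone:master-centos-stable-latest   # example image
LOCAL=registry.local:9001/$IMG                                     # port 9001 assumed

docker pull "$LOCAL" || {
    docker pull "$IMG"            # fall back to the public/private registry
    docker tag  "$IMG" "$LOCAL"   # re-tag with the registry.local prefix
    docker push "$LOCAL"          # so every node finds it locally next time
}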
-Erich

[0] https://opendev.org/starlingx/config/src/branch/master/sysinv/sysinv/sysinv/sysinv/conductor/kube_app.py#L2592

From maria.g.perez.ibarra at intel.com Thu Aug 15 23:10:35 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Thu, 15 Aug 2019 23:10:35 +0000
Subject: [Starlingx-discuss] [ Final Regression - stx2.0 ] Report for 8/15/19
Message-ID:

StarlingX 2.0 Release Status:

A reduction in the total tests is expected once this report is updated.

ISO: BUILD_ID="20190809T053000Z" from (link)

----------------------------------------------------------------------
MANUAL EXECUTION
----------------------------------------------------------------------

Overall Results:

Total = 211
Pass = 43
Fail = 1
Blocked = 0
Not Run = 0
Total executed = 44
Pass Rate = 97.72%
Formula used: Pass Rate = pass * 100 / (pass + fail)

Results per Domain:

Regression - AIO-SX             6 PASS
Regression - Backup & Restore   -
Regression - Distributed Cloud  -
Regression - Gnocchi            2 PASS
Regression - FM
Regression - HA
Regression - Heat               4 PASS
Regression - Horizon            1 PASS
Regression - Install and Config
Regression - Maintenance
Regression - Networking         15 PASS | 1 FAIL
Regression - Nova
Regression - Security           3 PASS
Regression - Storage            4 PASS
Regression - Inventory          5 PASS
System Test                     3 PASS
Regression - new features       2 PASS

----------------------------------------------------------------------

user does not login within configured time(60s) login is aborted
https://bugs.launchpad.net/starlingx/+bug/1833469
After pull data cable on the compute, no alarm has triggered
https://bugs.launchpad.net/starlingx/+bug/1834512
Containers: lock_host failed on a host with config_drive VM
https://bugs.launchpad.net/starlingx/+bug/1821026
3 instances launched in soft anti-affinity server group but unexpectedly ignored the 3rd host
https://bugs.launchpad.net/starlingx/+bug/1834255
stx-openstack apply takes longer time when lock and unlock on standby controller
https://bugs.launchpad.net/starlingx/+bug/1834083
Port list was not showing for some computes during install
https://bugs.launchpad.net/starlingx/+bug/1834245
neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute
https://bugs.launchpad.net/starlingx/+bug/1835807
When creating instance with pci-passthrough port getting error
https://bugs.launchpad.net/starlingx/+bug/1836682
unexpected output when wipe unassigned disk
https://bugs.launchpad.net/starlingx/+bug/1836633
application apply fails after compute lock and unlock
https://bugs.launchpad.net/starlingx/+bug/1836609
403 error in horizon log when try to update the flavor metadata (and admin user is logged out)
https://bugs.launchpad.net/starlingx/+bug/1821213
instance creating via horizon failed
https://bugs.launchpad.net/starlingx/+bug/1829925
After active controller reboot, VM boot up failed with error "Failed to allocate the network(s), not rescheduling"
https://bugs.launchpad.net/starlingx/+bug/1836928
nova instance remnant left behind after cold migration completes
https://bugs.launchpad.net/starlingx/+bug/1824858
disk_available_least value updates when instance moved but not to the value expected
https://bugs.launchpad.net/nova/+bug/1834527
Containers: vm unreachable for minutes after live migration or vm reboot
https://bugs.launchpad.net/starlingx/+bug/1818118
100.114 NTP alarm not cleared after swact
https://bugs.launchpad.net/starlingx/+bug/1834071
unexpected output when wipe unassigned disk
https://bugs.launchpad.net/starlingx/+bug/1836633
AIO-DX Application apply aborted
Unexpected process termination while application-apply was in progress
https://bugs.launchpad.net/starlingx/+bug/1838101
Uncontrolled swact on standard system is slow
https://bugs.launchpad.net/starlingx/+bug/1838411
tenant-mgmt-net not reachable from external network
https://bugs.launchpad.net/starlingx/+bug/1836252
VM filesystem is not RW when attached the 2nd volume
https://bugs.launchpad.net/starlingx/+bug/1838546
dedicated instance on low latency worker node not appearing in C1 state
https://bugs.launchpad.net/starlingx/+bug/1838524
Intermittently the openstack server show indicates that the server does not exist (in live migration tests)
https://bugs.launchpad.net/starlingx/+bug/1838676
Resize to swapless flavor still looking for swap
https://bugs.launchpad.net/nova/+bug/1762423
SSH to VM failed by Permission denied (publickey)
https://bugs.launchpad.net/starlingx/+bug/1824174
vSwitch 1G Hugepage available size cannot be changed
https://bugs.launchpad.net/starlingx/+bug/1834530
hypervisor stays down after force lock and unlock due to pci-irq-affinity-agent process failure
https://bugs.launchpad.net/starlingx/+bug/1839160
Image conversion fails with large qcow2 guest image due to insufficient filesystem size
https://bugs.launchpad.net/starlingx/+bug/1819688
SSH to secure boot VM fails after evacuation
https://bugs.launchpad.net/starlingx/+bug/1839320
platform keystone account lockout feature is not enabled
https://bugs.launchpad.net/starlingx/+bug/1838100
stx-openstack application-applying stuck at osh-openstack-placement
https://bugs.launchpad.net/starlingx/+bug/1837769
after changing a setting of panko stx-openstack failed to reach 'applied' status after 1800 seconds
https://bugs.launchpad.net/starlingx/+bug/1828056

-----------------------------------------------------------------------------
For more detail of the tests: https://docs.google.com/spreadsheets/d/1FxrwgivQCG3Ksvqm46zhKILJlZtucsNxGYG4a8d0LSs/edit#gid=838066175

Regards!
Maria G

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From build.starlingx at gmail.com Fri Aug 16 01:32:11 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 15 Aug 2019 21:32:11 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 27 - Still Failing!
In-Reply-To: <715088537.176.1565900901513.JavaMail.javamailuser@localhost>
References: <715088537.176.1565900901513.JavaMail.javamailuser@localhost>
Message-ID: <919487133.183.1565919132295.JavaMail.javamailuser@localhost>

Project: STX_BUILD_2.0
Build #: 27
Status: Still Failing
Timestamp: 20190816T013000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190816T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From maria.g.perez.ibarra at intel.com Fri Aug 16 02:02:04 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Fri, 16 Aug 2019 02:02:04 +0000
Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190815
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-15 (link)

Status: GREEN

===========================================

Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================

Sanity Test is executed in a Containers - Virtual Environment

AIO - Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Regards
Maria G.

Simplex virtual couldn't be executed due to a problem on CENGN.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scott.little at windriver.com Fri Aug 16 14:14:07 2019
From: scott.little at windriver.com (Scott Little)
Date: Fri, 16 Aug 2019 10:14:07 -0400
Subject: [Starlingx-discuss] Cengn http 403 error fixed
Message-ID: <1e5c09ee-2b45-13d9-645c-2c547a374055@windriver.com>

CENGN has identified and corrected the issue with the webserver at mirror.starlingx.cengn.ca. Hopefully our 403 error issues are a thing of the past.

If there are any new occurrences, please let me know.

Scott

From scott.little at windriver.com Fri Aug 16 14:17:57 2019
From: scott.little at windriver.com (Scott Little)
Date: Fri, 16 Aug 2019 10:17:57 -0400
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 24 - Still Failing!
In-Reply-To: References: <191223008.163.1565892516223.JavaMail.javamailuser@localhost> <1479970648.170.1565894323883.JavaMail.javamailuser@localhost>
Message-ID: <68a75afc-c24c-6b57-8911-b5ff6949bead@windriver.com>

CENGN's 403 errors should be fixed now. I'll start a new build.

Scott

On 2019-08-15 3:11 p.m., Victor Rodriguez wrote:
> Glad we checked this topic in the build meeting!
Unexpected process termination while application-apply was in progress https://bugs.launchpad.net/starlingx/+bug/1838101 Uncontrolled swact on standard system is slow https://bugs.launchpad.net/starlingx/+bug/1838411 tenant-mgmt-net not reachable from external network https://bugs.launchpad.net/starlingx/+bug/1836252 VM filesystem is not RW when attached the 2nd volume https://bugs.launchpad.net/starlingx/+bug/1838546 dedicated instance on low latency worker node not appearing in C1 state https://bugs.launchpad.net/starlingx/+bug/1838524 Intermittently the openstack server show indicates that the server does not exist (in live migration tests) https://bugs.launchpad.net/starlingx/+bug/1838676 Resize to swapless flavor still looking for swap https://bugs.launchpad.net/nova/+bug/1762423 SSH to VM failed by Permission denied (publickey) https://bugs.launchpad.net/starlingx/+bug/1824174 vSwitch 1G Hugepage available size cannot be changed https://bugs.launchpad.net/starlingx/+bug/1834530 hypervisor stays down after force lock and unlock due to pci-irq-affinity-agent process failure https://bugs.launchpad.net/starlingx/+bug/1839160 Image conversion fails with large qcow2 guest image due to insufficient filesystem size https://bugs.launchpad.net/starlingx/+bug/1819688 SSH to secure boot VM fails after evacuation https://bugs.launchpad.net/starlingx/+bug/1839320 platform keystone account lockout feature is not enabled https://bugs.launchpad.net/starlingx/+bug/1838100 stx-openstack application-applying stuck at osh-openstack-placement https://bugs.launchpad.net/starlingx/+bug/1837769 after changing a setting of panko stx-openstack failed to reach 'applied' status after 1800 seconds https://bugs.launchpad.net/starlingx/+bug/1828056 ----------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1FxrwgivQCG3Ksvqm46zhKILJlZtucsNxGYG4a8d0LSs/edit#gid=838066175 Regards! Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Fri Aug 16 01:32:11 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 15 Aug 2019 21:32:11 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 27 - Still Failing! 
In-Reply-To: <715088537.176.1565900901513.JavaMail.javamailuser@localhost>
References: <715088537.176.1565900901513.JavaMail.javamailuser@localhost>
Message-ID: <919487133.183.1565919132295.JavaMail.javamailuser@localhost>

Project: STX_BUILD_2.0
Build #: 27
Status: Still Failing
Timestamp: 20190816T013000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190816T013000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: false

From maria.g.perez.ibarra at intel.com Fri Aug 16 02:02:04 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Fri, 16 Aug 2019 02:02:04 +0000
Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190815
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-15 (link)

Status: GREEN

===========================================

Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================

Sanity Test is executed in a Containers - Virtual Environment

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - External Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Regards Maria G.

Simplex virtual couldn't be executed due to a problem on CENGN.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scott.little at windriver.com Fri Aug 16 14:14:07 2019
From: scott.little at windriver.com (Scott Little)
Date: Fri, 16 Aug 2019 10:14:07 -0400
Subject: [Starlingx-discuss] Cengn http 403 error fixed
Message-ID: <1e5c09ee-2b45-13d9-645c-2c547a374055@windriver.com>

CENGN has identified and corrected the issue with the webserver at mirror.starlingx.cengn.ca.

Hopefully our 403 error issues are a thing of the past.

If there are any new occurrences, please let me know.

Scott

From scott.little at windriver.com Fri Aug 16 14:17:57 2019
From: scott.little at windriver.com (Scott Little)
Date: Fri, 16 Aug 2019 10:17:57 -0400
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 24 - Still Failing!
In-Reply-To:
References: <191223008.163.1565892516223.JavaMail.javamailuser@localhost> <1479970648.170.1565894323883.JavaMail.javamailuser@localhost>
Message-ID: <68a75afc-c24c-6b57-8911-b5ff6949bead@windriver.com>

CENGN's 403 errors should be fixed now.

I'll start a new build.

Scott

On 2019-08-15 3:11 p.m., Victor Rodriguez wrote:
> Glad we checked this topic in the build meeting!
> > On Thu, Aug 15, 2019 at 2:09 PM Arce Moreno, Abraham > > > wrote: > > > Project: STX_BUILD_2.0 > > Build #: 24 > > Status: Still Failing > > Timestamp: 20190815T183624Z > > > > Check logs at: > > > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190815T183624Z/logs > > The failure is due to the error: >   [Errno 14] HTTP Error 403 - Forbidden > during the STX_download_mirror [0], same web server issue as > described in another email thread [1]. > > There is already a ticket submitted with CENGN infrastructure team. > > [0] > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190815T183624Z/logs/jenkins-STX_download_mirror-414.log.html > [1] > http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005675.html > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From dtroyer at gmail.com Fri Aug 16 14:22:25 2019 From: dtroyer at gmail.com (Dean Troyer) Date: Fri, 16 Aug 2019 09:22:25 -0500 Subject: [Starlingx-discuss] Cengn http 403 error fixed In-Reply-To: <1e5c09ee-2b45-13d9-645c-2c547a374055@windriver.com> References: <1e5c09ee-2b45-13d9-645c-2c547a374055@windriver.com> Message-ID: On Fri, Aug 16, 2019 at 9:15 AM Scott Little wrote: > CENGN has identified and corrected the issue with the webserver at > mirror.starlingx.cengn/ca. > Hopefully our 403 error issues are a thing of the past. > If there are any new occurances, please let me know. Looks good from the two systems I was using to extract a copy the last couple of days. Thanks! dt -- Dean Troyer dtroyer at gmail.com From yong.hu at intel.com Fri Aug 16 18:07:35 2019 From: yong.hu at intel.com (Yong Hu) Date: Fri, 16 Aug 2019 11:07:35 -0700 Subject: [Starlingx-discuss] [starlingx/config] ASK for HELP on patch review Message-ID: Hi starlingx/config cores, please review this patch: https://review.opendev.org/#/c/672929/12 if this patch is merged into master, it will be picked into RC branch as follows. Hope it can catch up Monday Sanity. thanks, Yong From Ian.Jolliffe at windriver.com Fri Aug 16 19:22:05 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Fri, 16 Aug 2019 19:22:05 +0000 Subject: [Starlingx-discuss] [TSC] Minutes 8/8 and 8/15 Message-ID: <3708CBF5-DF88-4E70-81EB-1DE8F5B1D438@windriver.com> 8/15/2019: ========== TSC spent half the meeting working through specs in for review. Election date? (ildikov) Combined election (TSC, PL, TL) 3 week interval - (PL/TL 2 weeks) Oct 28 - week of election - proposed and accepted Start nominations Oct 7th - TSC Nomination and voting - start process Oct 14th - PL/TL Approved Shanghai prep (ildikov) PTG - asked for 1 room 1.25 - 1.5 days Technical discussion + onboarding for new contributors/users Preparation etherpad: https://etherpad.openstack.org/p/PVG-StarlingX-PTG We will find out shortly on how much time will be allocated. Forum Submission period September 2 - 16 https://wiki.openstack.org/wiki/Forum Preparation etherpad: https://etherpad.openstack.org/p/PVG-StarlingX-brainstorming Project update session Yes or No? Yes - 15 min or 40 min? 
40 min

8/8 Meeting:
===========
Standing topics

New R3 feature candidates
Kata containers
This was previously flagged as an R4 candidate. Request to add it to R3?
- discussed that R3 is a short release - maybe look at this in the context of prep
Package Versioning Spec - needs reviews

Follow up to Project Maturity discussion last week
Key takeaways and next steps we want to take
1) To attract more developers
2) To attract more users
users around the globe - who from which regions
reviews diversity - recruiting more reviewers
+1's and path to core reviewer
Maybe the TSC can reach out to the First contact SIG
Review governance - top down vs bottom up - spread out responsibilities and decision making
encourage broad participation
Board update presentation mentioned below - Shanghai?
There may be a board meeting over the phone prior to Shanghai
we can start to prep now - more detailed than prior updates.
Delayed to next TSC meeting
Prefer one election for all positions
Need a plan date by end of Month

From maria.g.perez.ibarra at intel.com Fri Aug 16 21:31:37 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Fri, 16 Aug 2019 21:31:37 +0000
Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190816
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-16 (link)

Status: GREEN

===========================================

Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================

Sanity Test is executed in a Containers - Virtual Environment

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - External Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Regards Maria G.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From scott.little at windriver.com Fri Aug 16 22:40:37 2019
From: scott.little at windriver.com (Scott Little)
Date: Fri, 16 Aug 2019 18:40:37 -0400
Subject: [Starlingx-discuss] [build-report] STX_BUILD_2.0 - Build # 27 - Still Failing!
In-Reply-To: <919487133.183.1565919132295.JavaMail.javamailuser@localhost>
References: <715088537.176.1565900901513.JavaMail.javamailuser@localhost> <919487133.183.1565919132295.JavaMail.javamailuser@localhost>
Message-ID:

With CENGN's web server fixed, we now have a successful 2.0 build

Build #: 28
Timestamp: 20190816T140051Z

Docker images were also generated.
Enjoy Scott On 2019-08-15 9:32 p.m., build.starlingx at gmail.com wrote: > Project: STX_BUILD_2.0 > Build #: 27 > Status: Still Failing > Timestamp: 20190816T013000Z > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190816T013000Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bin.yang at intel.com Mon Aug 19 02:27:48 2019 From: bin.yang at intel.com (Yang, Bin) Date: Mon, 19 Aug 2019 02:27:48 +0000 Subject: [Starlingx-discuss] ask for help on patch review Message-ID: Hi stx/upstream cores, Could you please help to review https://review.opendev.org/#/c/676031/? It needs a workflow +1 for merge. Thanks, Bin -------------- next part -------------- An HTML attachment was scrubbed... URL: From ezpeerchen at gmail.com Mon Aug 19 03:29:58 2019 From: ezpeerchen at gmail.com (Ezpeer Chen) Date: Mon, 19 Aug 2019 11:29:58 +0800 Subject: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0) Message-ID: Dear all, How to power off an active controller via VM? >From the STX R1.0 testplan: https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_Testplan_Instructions - An active controller can be power off/on via VM. Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhang.kunpeng at 99cloud.net Mon Aug 19 08:27:51 2019 From: zhang.kunpeng at 99cloud.net (=?utf-8?B?5byg6bKy6bmP?=) Date: Mon, 19 Aug 2019 16:27:51 +0800 Subject: [Starlingx-discuss] ask for help on patch review Message-ID: Hi stx cores, Could you please help to review https://review.opendev.org/#/c/672929? It needs a workflow +1 for merge. Thanks Kunpeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From Frank.Miller at windriver.com Mon Aug 19 13:17:00 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Mon, 19 Aug 2019 13:17:00 +0000 Subject: [Starlingx-discuss] StarlingX Weekly Containerization Meeting Message-ID: This week's agenda: Agenda for August 19 meeting: 1. 
stx2.0 gating bugs (27, down from 28 one week ago)

a) Application apply issues:
- Review timeline and findings for https://bugs.launchpad.net/starlingx/+bug/1836406/ [Bob]
- 1837750 stx-application re-apply strategy requires some changes [Tyler]
- 1837792 stx-openstack application apply aborted [Angie]
- 1839696 Multiple Local registry: 500 Server Error cause application-apply errors [Abraham/Christopher]
- 1840031 [Ironic] Cannot do a re-apply on ironic node deployment [Mingyuan]
- Others:

b) Performance/recovery time issues:
1837426 Very high platform CPU usage on AIO-DX active controller with stx-openstack installed [Al Bailey/Gerry Kopec]
- No single root cause
- Running in steady state, cpu 0&1 are running in the 80-90% range leading to high load average; lots of processes and threads, none that are cpu hogs on their own
- Experimented with liveness probes where all turned off: saw dramatic drop in cpu usage down to 40-50% range
1834796 AIO: Too many rabbit threads [Bin Yang]
1838411 Uncontrolled swact on standard system is slow [Bart Wensley]
1829931 AIO-DX: hypervisor is not up in 5 mins after unlocked standby controller becomes available [Bart Wensley]
- Dependency on 1837426 being addressed 1st

c) Other higher priority issues:
1817936 Periodic message loss seen between VIM and OpenStack REST APIs [Austin Sun]
1837686 Openstack commands hold prompt > 30 seconds [Tao Liu]

2. Collect tool enhancements for containers debug:
- Initial ideas were captured here: https://etherpad.openstack.org/p/stx-containerization-debug
- Additional info required in collect?

Etherpad: https://etherpad.openstack.org/p/stx-containerization
Timeslot: 11am EST / 8am PDT / 1600 UTC

Call details
* Zoom link: https://zoom.us/j/342730236
* Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ

Agenda and meeting minutes
Project notes are at https://etherpad.openstack.org/p/stx-containerization
Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers

-------------- next part --------------
An HTML attachment was scrubbed...
URL:
-------------- next part --------------
A non-text attachment was scrubbed...
Name: not available
Type: text/calendar
Size: 4101 bytes
Desc: not available
URL:

From scott.little at windriver.com Mon Aug 19 14:34:23 2019
From: scott.little at windriver.com (Scott Little)
Date: Mon, 19 Aug 2019 10:34:23 -0400
Subject: [Starlingx-discuss] [multios][build] Build flock services with plan mock
In-Reply-To:
References: <8930615a-21c7-3f31-3d23-0615fe852f07@windriver.com>
Message-ID:

The server is multi-threaded, and only one server thread had lost connectivity to the Ceph back end. It's fixed now.

Scott

On 2019-08-14 6:42 p.m., Dean Troyer wrote:
> On Wed, Aug 14, 2019 at 2:18 PM Scott Little wrote:
>> I've never seen a 404 or 403 myself, outside of the 3 or 4 extended
>> outages attributed to known issues at cengn.
> [...]
>> How many folks have seen this? What was the time of the event? How
>> long did it persist? Please report events in UTC.
> So I've been poking at this for the last few minutes, so around 2200-2230 UTC
>
> These links work:
>
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190811T053000Z/
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190813T033000Z/
>
> These do not:
>
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033004Z/
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190814T053000Z/
>
> Until I tried them again to write this email, then they swapped.
>
> Is there perchance a load balancer in front of multiple web servers
> and one of the backends is having trouble? Even if that isn't the
> case that seems to describe the observed behaviour well enough.
>
> dt
>

From Ghada.Khalil at windriver.com Mon Aug 19 14:46:04 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Mon, 19 Aug 2019 14:46:04 +0000
Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - August 15/2019
Message-ID: <151EE31B9FCCA54397A757BC674650F0C159316E@ALA-MBD.corp.ad.wrs.com>

Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases

Release Team Meeting - August 15 2019

stx.2.0
- Branch creation complete / cherrypicking in progress.
  - Request for PLs to monitor cherrypicks from their team
- Bugs
  - 31 high priority bugs for stx.2.0 are still open
  - High priority bugs that are not addressed for the release date will need to be worked for a future maintenance release
- Test
  - Feature Test
    - Tracker: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1717644237
    - Ironic >> Blocked. Launchpad created. Code in review.
    - helm overrides >> 1 TC remaining. Waiting for info from Bob Church
    - Re-forecast is August 22
  - Regression Test
    - Tracker: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1717644237
    - Complete as of August 9. 5 blocked TCs now unblocked and moved to final regression
  - Final Regression
    - Tracker: https://docs.google.com/spreadsheets/d/1FxrwgivQCG3Ksvqm46zhKILJlZtucsNxGYG4a8d0LSs/edit#gid=838066175
    - Expecting to have 150 TCs
    - Testing in progress. Focusing first on unblocked TCs and bug retest
- For next meeting, discuss plan for first maintenance release
- Need to update the release information on https://www.starlingx.io/faq/
  - Yong volunteered to do the pull request. Ghada to provide some text.

stx.3.0
- Intermediate Milestone (S1 -- no new specs accepted) is scheduled for this week. Specs are already posted for the majority of features targeted for stx.3.0
- Exceptions:
  - R2 >> R3 Upgrades -- Fcst to post: TBD
  - Performance Testing / Measurement Framework -- Fcst to post: Aug 23
- Milestone-2 is fast approaching -- wk of Sept 4
  - Need PLs to start populating their plans

From Ken.Young at windriver.com Mon Aug 19 14:51:53 2019
From: Ken.Young at windriver.com (Young, Ken)
Date: Mon, 19 Aug 2019 14:51:53 +0000
Subject: [Starlingx-discuss] Ken Young is Leaving the StarlingX Security Team
Message-ID:

Team,

I am resigning my position as a member of the security team for StarlingX effective immediately. I am moving to a new position in Wind River which will not allow me to follow the project as closely as required.

To ensure that the security team maintains its momentum, Ghada Khalil will step into my position. We have worked closely together on Security within Wind River. She has a tremendous grasp on what is required and will be a strong contributor to the security team going forward. Thank you Ghada.
I have updated the StarlingX Security Wiki to reflect this change.

I wish you all the best; I will miss working with all of you. I think we have started something great together.

Regards,
Ken Y

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yong.hu at intel.com Mon Aug 19 16:29:42 2019
From: yong.hu at intel.com (Yong Hu)
Date: Mon, 19 Aug 2019 09:29:42 -0700
Subject: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0)
In-Reply-To:
References:
Message-ID: <79a262e4-b0c4-da27-3f04-34c20a5d4296@intel.com>

Did the test steps in this test_plan page answer your question?
Or you tried the steps but saw issues?

On 18/08/2019 8:29 PM, Ezpeer Chen wrote:
> Dear all,
>
> How to power off an active controller via VM?
>
> From the STX R1.0 testplan:
> https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_Testplan_Instructions
>
> - An active controller can be power off/on via VM.
>
> Thanks
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>

From Angie.Wang at windriver.com Mon Aug 19 16:34:48 2019
From: Angie.Wang at windriver.com (Wang, Jing (Angie))
Date: Mon, 19 Aug 2019 16:34:48 +0000
Subject: [Starlingx-discuss] [Containers] Worker nodes pulling from external registry instead from the internal
In-Reply-To: <9bec40650b28680d952b669fced842ff6dbbb48a.camel@intel.com>
References: <9bec40650b28680d952b669fced842ff6dbbb48a.camel@intel.com>
Message-ID:

Hi Erich,

There is no logic problem with the download_an_image function. This function is only used at application time (system application-apply). The issue here is that after unlocking worker nodes, k8s pods (ie.. calico-node, kube-proxy ...) failed to start up because the image download failed. Currently, the k8s base images are pulled from the external registry at bootstrap/puppet time, so this depends on the external network, and the issue happens if there is a networking issue. This LP is to update to pull those base images from the local registry instead to bring up the k8s static and dynamic pods.

Thanks,
-Angie

-----Original Message-----
From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com]
Sent: August-15-19 7:08 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Containers] Worker nodes pulling from external registry instead from the internal

Hi.

I'm working on this bug: https://bugs.launchpad.net/starlingx/+bug/1817958 and I have a proposal that I would like to discuss.

In this bug, if an external docker registry is being used, all worker nodes will retrieve containers from the external registry instead of using registry.local. This happens because the download_an_image[0] function has the following logic.

If container has registry.local, then
  Try to download
  If not found in registry.local, then
    Try to download from public/private registry.
    Push image to registry.local
Else
  Try to download from public/private registry.

Here there are two bugs:

1. If the container doesn't have the registry.local prefix, then the image won't be pushed into the local registry. This can be easily fixed by adding the client.push call.
2. Worker nodes will still try to download from the external registry even if the image was pushed to the local registry. This is because the images won't have the registry.local prefix.

So, to fix this I would like to propose the following:

1. Ensure that all images have the registry.local prefix, regardless of whether a private registry is defined or not. I'm not sure yet how to do this and I would appreciate some help pointing me in the right direction.
2. Change the logic in download_an_image to do this:

Try to download image:
If not found then:
  Remove registry.local prefix
  Try to download from public/private registry
  Retag and push to local registry.

This way all nodes will try registry.local first and then public/private if not found.

What do you think about this fix? Thank you in advance.

-Erich

- [0] https://opendev.org/starlingx/config/src/branch/master/sysinv/sysinv/sysinv/sysinv/conductor/kube_app.py#L2592

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Frank.Miller at windriver.com Mon Aug 19 18:10:40 2019
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Mon, 19 Aug 2019 18:10:40 +0000
Subject: [Starlingx-discuss] [Containers] Worker nodes pulling from external registry instead from the internal
In-Reply-To:
References: <9bec40650b28680d952b669fced842ff6dbbb48a.camel@intel.com>
Message-ID:

Erich - FYI - I re-gated this to stx.3.0 as this is an optimization and not urgent for StarlingX.

Frank

-----Original Message-----
From: Wang, Jing (Angie) [mailto:Angie.Wang at windriver.com]
Sent: Monday, August 19, 2019 12:35 PM
To: Cordoba Malibran, Erich ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Containers] Worker nodes pulling from external registry instead from the internal

Hi Erich,

There is no logic problem with the download_an_image function. This function is only used at application time (system application-apply). The issue here is that after unlocking worker nodes, k8s pods (ie.. calico-node, kube-proxy ...) failed to start up because the image download failed. Currently, the k8s base images are pulled from the external registry at bootstrap/puppet time, so this depends on the external network, and the issue happens if there is a networking issue. This LP is to update to pull those base images from the local registry instead to bring up the k8s static and dynamic pods.

Thanks,
-Angie

-----Original Message-----
From: Cordoba Malibran, Erich [mailto:erich.cordoba.malibran at intel.com]
Sent: August-15-19 7:08 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Containers] Worker nodes pulling from external registry instead from the internal

Hi.

I'm working on this bug: https://bugs.launchpad.net/starlingx/+bug/1817958 and I have a proposal that I would like to discuss.

In this bug, if an external docker registry is being used, all worker nodes will retrieve containers from the external registry instead of using registry.local. This happens because the download_an_image[0] function has the following logic.

If container has registry.local, then
  Try to download
  If not found in registry.local, then
    Try to download from public/private registry.
    Push image to registry.local
Else
  Try to download from public/private registry.

Here there are two bugs:

1. If the container doesn't have the registry.local prefix, then the image won't be pushed into the local registry. This can be easily fixed by adding the client.push call.
2. Worker nodes will still try to download from the external registry even if the image was pushed to the local registry. This is because the images won't have the registry.local prefix.

So, to fix this I would like to propose the following:
1. Ensure that all images have the registry.local prefix, regardless of whether a private registry is defined or not. I'm not sure yet how to do this and I would appreciate some help pointing me in the right direction.
2. Change the logic in download_an_image to do this:

Try to download image:
If not found then:
  Remove registry.local prefix
  Try to download from public/private registry
  Retag and push to local registry.

This way all nodes will try registry.local first and then public/private if not found.

What do you think about this fix? Thank you in advance.

-Erich

- [0] https://opendev.org/starlingx/config/src/branch/master/sysinv/sysinv/sysinv/sysinv/conductor/kube_app.py#L2592

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From vm.rod25 at gmail.com Mon Aug 19 19:03:43 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Mon, 19 Aug 2019 14:03:43 -0500
Subject: [Starlingx-discuss] [multios][build] Build flock services with plan mock
In-Reply-To:
References: <8930615a-21c7-3f31-3d23-0615fe852f07@windriver.com>
Message-ID:

Awesome, thanks!

On Mon, Aug 19, 2019 at 9:35 AM Scott Little wrote:
>
> The server is multi-threaded, and only one server thread had lost
> connectivity to the Ceph back end. It's fixed now.
>
> Scott
>
> On 2019-08-14 6:42 p.m., Dean Troyer wrote:
> > On Wed, Aug 14, 2019 at 2:18 PM Scott Little wrote:
> >> I've never seen a 404 or 403 myself, outside of the 3 or 4 extended
> >> outages attributed to known issues at cengn.
> > [...]
> >> How many folks have seen this? What was the time of the event? How
> >> long did it persist? Please report events in UTC.
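To make the retag-and-push flow from the registry thread above concrete, here is a minimal sketch of the proposed download logic. It assumes a docker-py client and a registry.local endpoint; the names and the port are illustrative assumptions, not the actual kube_app.py code.

  import docker

  LOCAL_REGISTRY = 'registry.local:9001'   # assumed local registry endpoint
  client = docker.from_env()

  def download_an_image(image):
      # Proposed flow: try registry.local first; on a miss, pull from the
      # public/private registry, retag with the local prefix and push, so
      # every node finds the image locally on the next attempt.
      local_ref = image if image.startswith(LOCAL_REGISTRY) \
          else '%s/%s' % (LOCAL_REGISTRY, image)
      try:
          return client.images.pull(local_ref)        # registry.local first
      except docker.errors.ImageNotFound:
          pass                                        # not mirrored locally yet
      public_ref = local_ref[len(LOCAL_REGISTRY) + 1:]  # strip the local prefix
      img = client.images.pull(public_ref)            # public/private registry
      img.tag(local_ref)                              # retag with the registry.local prefix
      client.images.push(local_ref)                   # push so registry.local serves it
      return img

The retag/push pair is what keeps worker nodes from reaching out to the external registry again: once a single pull has fallen through, registry.local can answer every subsequent request for that image.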
> > > > dt > > > From maria.g.perez.ibarra at intel.com Mon Aug 19 19:57:58 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 19 Aug 2019 19:57:58 +0000 Subject: [Starlingx-discuss] [RC1] Sanity Test - ISO 20190819 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-19 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From maria.g.perez.ibarra at intel.com Tue Aug 20 02:26:25 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Tue, 20 Aug 2019 02:26:25 +0000
Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190819
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-19 (link)

Status: GREEN

===========================================

Sanity Test is executed in a Containers - Bare Metal Environment

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================

Sanity Test is executed in a Containers - Virtual Environment

AIO - Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 49 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO - Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - External Storage (2+2+2)
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity OpenStack 52 TCs [PASS]
Sanity Platform 08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Regards Maria G.

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ezpeerchen at gmail.com Tue Aug 20 03:52:58 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Tue, 20 Aug 2019 11:52:58 +0800
Subject: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0)
In-Reply-To: <79a262e4-b0c4-da27-3f04-34c20a5d4296@intel.com>
References: <79a262e4-b0c4-da27-3f04-34c20a5d4296@intel.com>
Message-ID:

Dear Yong,

=============================================================
[wrsroot at controller-0 ~(keystone_admin)]$ system host-power-off controller-0
Can not 'Power-Off' an 'unlocked' host controller-0; Please 'Lock' first
[wrsroot at controller-0 ~(keystone_admin)]$ system host-lock controller-0
controller-0 : Rejected: Can not lock an active controller.
[wrsroot at controller-0 ~(keystone_admin)]$
[wrsroot at controller-0 ~(keystone_admin)]$
=============================================================

I don't know how to power off an active controller via VM.

There are no instructions about the VM.

Thanks

Yong Hu 於 2019年8月20日 週二 上午12:32寫道:
> Did the test steps in this test_plan page answer your question?
> Or you tried the steps but saw issues?
>
> On 18/08/2019 8:29 PM, Ezpeer Chen wrote:
> > Dear all,
> >
> > How to power off an active controller via VM?
> >
> > From the STX R1.0 testplan:
> > https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_Testplan_Instructions
> >
> > - An active controller can be power off/on via VM.
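For reference, the usual way past that rejection on a two-controller system is to swact first, so the peer becomes active, and only then lock and power off the original controller. A sketch of that sequence; the host names, BMC credentials, and libvirt domain name are placeholders, not values from this thread:

  system host-swact controller-0        # hand the active role to controller-1
  system host-lock controller-0         # run from controller-1, now the active one
  system host-power-off controller-0    # needs board management (BMC) provisioned

  # without BMC, power off out-of-band instead:
  ipmitool -I lanplus -H <bmc-ip> -U <user> -P <password> chassis power off
  # or, when the controller is a libvirt guest in a virtual environment:
  virsh destroy controller-0

On a simplex setup there is no peer to swact to, so out-of-band power control (or a plain shutdown) is the only option.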
> > > > > > > > > > Thanks > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Aug 20 12:46:48 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 20 Aug 2019 12:46:48 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack distro meeting, 8/21 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601919B@SHSMSX104.ccr.corp.intel.com> Agenda for 8/21 meeting: 1. stx.2.0 bug triage & review (Cindy) 2. call for contribution: de-brand "Titanium Cloud" 3. package version spec opens (Brent/Bin) 4. Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; Wold, Saul; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; 'zhaos' Cc: 'Seiler, Glenn'; Hu, Wei W; Peng Tan; Gomez, Juan P; 'Waines, Greg'; 'Eslimi, Dariush'; Jones, Bruce E; 'Zhi Zhi2 Chang'; Chen, Tingjie; 'Badea, Daniel'; 'Chen, Jacky'; 'Komiyama, Takeo'; Armstrong, Robert H; 'Carlos Cebrian'; Cobbley, David A Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, August 21, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From cindy.xie at intel.com Tue Aug 20 12:52:08 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 20 Aug 2019 12:52:08 +0000 Subject: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0) In-Reply-To: References: <79a262e4-b0c4-da27-3f04-34c20a5d4296@intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F360191FD@SHSMSX104.ccr.corp.intel.com> Hi, Ezpeer, I am adding Elio, who may know how to execute those test steps. Thanks. - cindy From: Ezpeer Chen [mailto:ezpeerchen at gmail.com] Sent: Tuesday, August 20, 2019 11:53 AM To: Hu, Yong Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0) Dear Yong, ============================================================= [wrsroot at controller-0 ~(keystone_admin)]$ system host-power-off controller-0 Can not 'Power-Off' an 'unlocked' host controller-0; Please 'Lock' first [wrsroot at controller-0 ~(keystone_admin)]$ system host-lock controller-0 controller-0 : Rejected: Can not lock an active controller. [wrsroot at controller-0 ~(keystone_admin)]$ [wrsroot at controller-0 ~(keystone_admin)]$ ============================================================= I don't know how to power off an active controller via VM? There's no Instructions about VM.? Thanks Yong Hu > 於 2019年8月20日 週二 上午12:32寫道: Did the test steps in this test_plan page answer your question? Or you tried the steps but saw issues? 
On 18/08/2019 8:29 PM, Ezpeer Chen wrote:
> Dear all,
>
> How to power off an active controller via VM?
>
> From the STX R1.0 testplan:
> https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_Testplan_Instructions
>
> - An active controller can be power off/on via VM.
>
> Thanks
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Jerry.Sun at windriver.com Tue Aug 20 15:49:44 2019
From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry))
Date: Tue, 20 Aug 2019 15:49:44 +0000
Subject: [Starlingx-discuss] Changing ansible docker_registries structure
Message-ID:

Hi All,

I am making changes that change the accepted format for docker registries from something like

docker_registries:
  k8s.gcr.io: url

to something like

docker_registries:
  k8s.gcr.io:
    url: url

This affects anyone with a setup that specifies alternate docker registries, as they will need to change their ansible localhost.yml files after this commit. The change is currently under review: https://review.opendev.org/#/c/677005/

Please let me know if you have any concerns.

Thanks,
Jerry

-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From elio.martinez.monroy at intel.com Tue Aug 20 15:58:09 2019
From: elio.martinez.monroy at intel.com (Martinez Monroy, Elio)
Date: Tue, 20 Aug 2019 15:58:09 +0000
Subject: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0)
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F360191FD@SHSMSX104.ccr.corp.intel.com>
References: <79a262e4-b0c4-da27-3f04-34c20a5d4296@intel.com> <2FD5DDB5A04D264C80D42CA35194914F360191FD@SHSMSX104.ccr.corp.intel.com>
Message-ID: <1466AF2176E6F040BD63860D0A241BBD495C2030@FMSMSX109.amr.corp.intel.com>

Hi Chen, it seems that you want to turn off the active controller, and the system is protecting against that. That is why it says you need to lock it first.

My suggestion would be to swact controllers in case you have more than one controller node. If you don't, then you can try to power it off using your BMC IP with ipmitool.

Besides that, I'm guessing that when you say VM, it is a virtual environment. Then you can use virsh.

Please correct me if I'm making wrong assumptions.

BR

Elio

From: Xie, Cindy
Sent: Tuesday, August 20, 2019 7:52 AM
To: Ezpeer Chen ; Hu, Yong ; Martinez Monroy, Elio
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0)

Hi, Ezpeer,

I am adding Elio, who may know how to execute those test steps.

Thanks. - cindy

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Tuesday, August 20, 2019 11:53 AM
To: Hu, Yong >
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0)

Dear Yong,

=============================================================
[wrsroot at controller-0 ~(keystone_admin)]$ system host-power-off controller-0
Can not 'Power-Off' an 'unlocked' host controller-0; Please 'Lock' first
[wrsroot at controller-0 ~(keystone_admin)]$ system host-lock controller-0
controller-0 : Rejected: Can not lock an active controller.
[wrsroot at controller-0 ~(keystone_admin)]$ [wrsroot at controller-0 ~(keystone_admin)]$ ============================================================= I don't know how to power off an active controller via VM? There's no Instructions about VM.? Thanks Yong Hu > 於 2019年8月20日 週二 上午12:32寫道: Did the test steps in this test_plan page answer your question? Or you tried the steps but saw issues? On 18/08/2019 8:29 PM, Ezpeer Chen wrote: > Dear all, > > > How to power off an active controller via VM? > > From the STX R1.0 testplan: > https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_Testplan_Instructions > > - An active controller can be power off/on via VM. > > > > > Thanks > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Aug 20 17:58:57 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 20 Aug 2019 12:58:57 -0500 Subject: [Starlingx-discuss] Ken Young is Leaving the StarlingX Security Team In-Reply-To: References: Message-ID: <6286AD80-45D9-4E89-9229-AA52A5F6A7E1@gmail.com> Hi Ken, I’m sorry to see you leave. Thank you for all the great work in and support towards the StarlingX community. I wish you all the best in your new role and hoping to see you again in the community some time in the future. :) Best Regards, Ildikó > On 2019. Aug 19., at 9:51, Young, Ken wrote: > > Team, > > I am resigning my position as a member of the security team for StarlingX effective immediately. I am moving to a new position in Wind River which will not allow me to follow the project as closely as required. > > To ensure that the security team maintains its momentum, Ghada Khalil will step into my position. We have worked closely together on Security within Wind River. She has a tremendous grasp on what is required and will be a strong contributor to the security team going forward. Thank you Ghada. > > I have updated the Starling X Security Wiki to reflect this change. > > I wish you all the best, I will miss working with all of you. I think we have started something great together. > > Regards, > Ken Y > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Joseph.Richard at windriver.com Tue Aug 20 18:28:55 2019 From: Joseph.Richard at windriver.com (Richard, Joseph) Date: Tue, 20 Aug 2019 18:28:55 +0000 Subject: [Starlingx-discuss] rabbitmq dead bindings Message-ID: <7B6A2AE64F40F245AE81F245059F0C3672F1C764@ALA-MBD.corp.ad.wrs.com> I'm looking at an issue (https://bugs.launchpad.net/starlingx/+bug/1835807) where bindings in rabbit (e.g. neutron q-l3-plugin) intermittently stop working during host failover (via reboot -f). Once this happens, any RPCs on the topic where the binding has died start timing out and being dropped, which renders the stx-openstack application nearly unusable once this happens. This can be recovered by deleting the queues those topics are bound to, or by restarting rabbit. 
It looks like messages get into rabbit, which still has the bindings(seen by running rabbitmqctl -p neutron list_bindings), but rabbit ignores the bindings and treats the message as unroutable. When I set the alternative exchange for that exchange (e.g. neutron) to the fanout for that topic, the RPCs start going through again. I am testing turning off ha mode in rabbit to see if that stops this from occurring, but would prefer a proper solution to prevent or fix this. Has anyone seen this before, and has a better solution for this? Does anyone have any suggestions on debugging this, given that it clears up when rabbit restarts? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Jerry.Sun at windriver.com Tue Aug 20 20:18:27 2019 From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry)) Date: Tue, 20 Aug 2019 20:18:27 +0000 Subject: [Starlingx-discuss] [docs] Changing ansible docker_registries structure Message-ID: Hi all, The change have been merged into master. Please make any needed documentation changes. Thanks, Jerry From: Sun, Yicheng (Jerry) Sent: August-20-19 11:50 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Changing ansible docker_registries structure Hi All, I am making changes that change the accepted format for docker registries from something like docker_registries: k8s.gcr.io: url to something like docker_registries: k8s.gcr.io: url: url This affects anyone with a setup that specifies alternate docker registries as they will need to change their ansible localhost.yml files after this commit. the change is currently under review: https://review.opendev.org/#/c/677005/ please let me know if you have any concerns Thanks, Jerry -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Tue Aug 20 21:23:05 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 20 Aug 2019 21:23:05 +0000 Subject: [Starlingx-discuss] [RC1] Sanity Test - ISO 20190820 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-20 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Aug 20 21:40:41 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 20 Aug 2019 16:40:41 -0500 Subject: [Starlingx-discuss] Community Marketing Planning call prepping for Release 2.0 tomorrow Message-ID: Hi StarlingX Community, We have our next Community Marketing Planning call tomorrow to finalize preparations for communications around the upcoming 2.0 release of StarlingX. The call will be at a slightly different time at 9am Pacific Time / 1600 UTC tomorrow. You can find dial in information and meeting agenda here: https://etherpad.openstack.org/p/2019_StarlingX_Marketing_Plans Please feel free to add items to the agenda for tomorrow. Add your name next to your item so we know who to give the floor to. Please let me know if you have questions. 
Thanks and Best Regards, Ildikó From maria.g.perez.ibarra at intel.com Tue Aug 20 21:56:51 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 20 Aug 2019 21:56:51 +0000 Subject: [Starlingx-discuss] [ Final Regression - stx2.0 ] Report for 8/20/19 Message-ID: StarlingX 2.0 Release Status: ISO: BUILD_ID=" 20190815T053000Z" from (link) ---------------------------------------------------------------------- MANUAL EXECUTION ---------------------------------------------------------------------- Overall Results: Total = 211 Pass = 75 Fail = 1 Blocked = 0 Not Run = 0 Total executed = 76 Pass Rate = 98.68% Formula used : Pass Rate = pass * 100 / (pass + fail) Results per Domain: Regression - AIO-SX 6 PASS Regression - Backup & Restore - Regression - Distributed Cloud - Regression - Gnoochi 2 PASS Regression - FM Regression - HA Regression - Heat 4 PASS Regression - Horizon 1 PASS Regression - Install and Config Regression - Maintenance Regression - Networking 21 PASS | 1 FAIL Regression - Nova Regression - Security 6 PASS Regression - Storage 9 PASS Regression - Inventory 7 PASS System Test 7 PASS Regression - new features 12 PASS ---------------------------------------------------------------------- user does not login within configured time(60s) login is aborted https://bugs.launchpad.net/starlingx/+bug/1833469 After pull data cable on the compute, no alarm has triggered https://bugs.launchpad.net/starlingx/+bug/1834512 Containers: lock_host failed on a host with config_drive VM https://bugs.launchpad.net/starlingx/+bug/1821026 3 instances launched in soft anti-affinity server group but unexpectedly ignored the 3rd host https://bugs.launchpad.net/starlingx/+bug/1834255 stx-openstack apply takes longer time when lock and unlock on standby controller https://bugs.launchpad.net/starlingx/+bug/1834083 Port list was not showing for some computes during install https://bugs.launchpad.net/starlingx/+bug/1834245 neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute https://bugs.launchpad.net/starlingx/+bug/1835807 When creating instance with pci-passthrough port getting error https://bugs.launchpad.net/starlingx/+bug/1836682 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 application apply fails after compute lock and unlock https://bugs.launchpad.net/starlingx/+bug/1836609 403 error in horizon log when try to update the flavor metadata (and admin user is logged out) https://bugs.launchpad.net/starlingx/+bug/1821213 instance creating via horizon failed https://bugs.launchpad.net/starlingx/+bug/1829925 After active controller reboot, VM boot up failed with error "Failed to allocate the network(s), not rescheduling" https://bugs.launchpad.net/starlingx/+bug/1836928 nova instance remnant left behind after cold migration completes https://bugs.launchpad.net/starlingx/+bug/1824858 disk_available_least value updates when instance moved but not to the value expected https://bugs.launchpad.net/nova/+bug/1834527 Containers: vm unreachable for minutes after live migration or vm reboot https://bugs.launchpad.net/starlingx/+bug/1818118 100.114 NTP alarm not cleared after swact https://bugs.launchpad.net/starlingx/+bug/1834071 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 AIO-DX Application apply aborted Unexpected process termination while application-apply was in progress https://bugs.launchpad.net/starlingx/+bug/1838101 Uncontrolled swact on standard system is 
slow https://bugs.launchpad.net/starlingx/+bug/1838411 tenant-mgmt-net not reachable from external network https://bugs.launchpad.net/starlingx/+bug/1836252 VM filesystem is not RW when attached the 2nd volume https://bugs.launchpad.net/starlingx/+bug/1838546 dedicated instance on low latency worker node not appearing in C1 state https://bugs.launchpad.net/starlingx/+bug/1838524 Intermittently the openstack server show indicates that the server does not exist (in live migration tests) https://bugs.launchpad.net/starlingx/+bug/1838676 Resize to swapless flavor still looking for swap https://bugs.launchpad.net/nova/+bug/1762423 SSH to VM failed by Permission denied (publickey) https://bugs.launchpad.net/starlingx/+bug/1824174 vSwitch 1G Hugepage available size cannot be changed https://bugs.launchpad.net/starlingx/+bug/1834530 hypervisor stays down after force lock and unlock due to pci-irq-affinity-agent process failure https://bugs.launchpad.net/starlingx/+bug/1839160 Image conversion fails with large qcow2 guest image due to insufficient filesystem size https://bugs.launchpad.net/starlingx/+bug/1819688 SSH to secure boot VM fails after evacuation https://bugs.launchpad.net/starlingx/+bug/1839320 platform keystone account lockout feature is not enabled https://bugs.launchpad.net/starlingx/+bug/1838100 stx-openstack application-applying stuck at osh-openstack-placement https://bugs.launchpad.net/starlingx/+bug/1837769 after changing a setting of panko stx-openstack failed to reach 'applied' status after 1800 seconds https://bugs.launchpad.net/starlingx/+bug/1828056 ----------------------------------------------------------------------------- For more detail of the tests: https://docs.google.com/spreadsheets/d/1FxrwgivQCG3Ksvqm46zhKILJlZtucsNxGYG4a8d0LSs/edit#gid=838066175 Regards! Maria G -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Tue Aug 20 22:17:55 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 20 Aug 2019 22:17:55 +0000 Subject: [Starlingx-discuss] [ Test ] Meeting notes - 08/20/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CEB213E@FMSMSX114.amr.corp.intel.com> Agenda for 08/20 Attendees: Elio, Richo, Al, JC, Jose, JP, Maria P, Numan, Cristopher, Fernando, Yang, Yong 1. Final regression status - Elio, Numan https://docs.google.com/spreadsheets/d/1FxrwgivQCG3Ksvqm46zhKILJlZtucsNxGYG4a8d0LSs/edit?pli=1#gid=838066175 Elio - 6 test cases missing. Unblocking tests cases from regular regression. Networking - one test case failed. Fixed pushed to stx.3.0. Numan - 13 tests remaining, to be finished by EOW 13 launchapds to be verified Send an email if any help is required. 2. Feature testing - Jose Helm overrides - completed. Ironic - Mingyuan send a patch, but it's not in the RC ISO. The config is set, but we are not able to deploy VMs. Working with Mingyuan on this. 3. Sanity status - Cristopher Still green. In Today's RC sanity for standard config, a swact was executed and this is not expected. Happened on Virtual and Bare metal. Re-running in order to verify it. Master - will be run after we finish RC sanity. 
The failure - Multiple "Local registry: 500 Server Error" caused application-apply errors - was because of our infrastructure (DNS).
Platform sanity is normally green - issues remaining:
Neutron dhcp not coming up after lock unlock compute host
neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot
After active controller reboot, VM boot up failed with error "Failed to allocate the network(s), not rescheduling"
RPC timeout error when creating barbican secret on host-bulk-add after controller-0 is up - is not a blocker.
simplex subsequent unlock failed (traceback KubeAppApplyFailure) - https://bugs.launchpad.net/starlingx/+bug/1840351 (to be fixed in 3.0)
Fixed - Configuration out-of-date alarms on storage nodes since fresh install

4. Testing frameworks -
Running Pytest suite - Elio
Having problems with local environment - could be related to a mix of python versions. A mail to Yang will be sent to the mailing list.
Robot suite in public repo - Jose
Please help with the review - https://review.opendev.org/#/q/project:starlingx/test
Added Al to the reviewers.

5. Opens
Ada - unified sanity. Still working on the comparison of both sanities (WR and Intel). No big progress on this one.
Yong - suggests scheduling a meeting between Mingyuan and Jose in order to finish the Ironic debug.
Numan - for stx.3.0 - dates look tight. The timeline is very aggressive. Let's raise the concern in the release meeting (Thursday)

Regards
Ada

From Bill.Zvonar at windriver.com Tue Aug 20 23:21:50 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Tue, 20 Aug 2019 23:21:50 +0000
Subject: [Starlingx-discuss] Community Call (August 21, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ACE507@ALA-MBD.corp.ad.wrs.com>

Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include...
- preparing for the 2.0 declaration next week!
Please feel free to add topics on the etherpad [0].
Bill...
[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190821T1400
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Tao.Liu at windriver.com Wed Aug 21 00:39:22 2019
From: Tao.Liu at windriver.com (Liu, Tao)
Date: Wed, 21 Aug 2019 00:39:22 +0000
Subject: [Starlingx-discuss] Pending: Support single huge page size for openstack worker node
Message-ID: <7242A3DC72E453498E3D783BBB134C3EA4E82CA6@ALA-MBD.corp.ad.wrs.com>

Hi All,

The changes to support single huge page size have been merged into master. In this first update, the auto-provision of VM huge pages has been changed from 2M to 1G. A subsequent update will remove the auto-provisioning of huge pages for both VM and vSwitch.

If any test case is dependent on the default VM huge pages, a new step will be required to allocate huge pages for VM on an openstack worker node. Prior to unlocking a worker node, user provisioning of the vSwitch memory would be required if vswitch_type is set to OVS-DPDK.
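(For illustration only, a sketch of what such a step could look like; the host name, processor number and page counts below are placeholders, and the exact host-memory-modify syntax is an assumption to be verified against the release:)

system host-lock worker-0
# allocate 10 x 1G huge pages for VMs (application function) on processor 0
system host-memory-modify -f application -1G 10 worker-0 0
# provision vSwitch huge pages when vswitch_type is set to OVS-DPDK
system host-memory-modify -f vswitch -1G 1 worker-0 0
system host-unlock worker-0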
Regards,
Tao

From: Liu, Tao
Sent: Thursday, August 15, 2019 1:06 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Pending: Support single huge page size for openstack worker node

Hi All,

Per story 2006295, we are in the process of supporting single huge page size for openstack worker node. This means we will enforce the provisioning of a single huge page size per worker, which aligns with the non-openstack worker behavior. The automated test cases that attempt to allocate both 2M and 1G huge pages on a worker node should be updated.

The code changes are available here: https://review.opendev.org/#/c/676710/

Regards,
Tao Liu, Member of Technical Staff, Engineering, Wind River
direct 613.963.1413 fax: 613.492.7870 skype: tao_at_home
350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ezpeerchen at gmail.com Wed Aug 21 02:12:47 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Wed, 21 Aug 2019 10:12:47 +0800
Subject: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0)
In-Reply-To: <1466AF2176E6F040BD63860D0A241BBD495C2030@FMSMSX109.amr.corp.intel.com>
References: <79a262e4-b0c4-da27-3f04-34c20a5d4296@intel.com> <2FD5DDB5A04D264C80D42CA35194914F360191FD@SHSMSX104.ccr.corp.intel.com> <1466AF2176E6F040BD63860D0A241BBD495C2030@FMSMSX109.amr.corp.intel.com>
Message-ID:

Dear Elio,

From the STX R1.0 testplan: https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_Testplan_Instructions
Testplan: - An active controller can be power off/on via VM.
I want to know how to power off an active controller via VM? I can't find any command about this.

Thanks

Martinez Monroy, Elio wrote on Tue, Aug 20, 2019 at 11:58 PM:
> Hi Chen, it seems that you want to turn off the active controller; the
> system is protecting against doing that. That is why it is saying that you
> need to lock it first.
>
> My suggestion would be to swact controllers, in case you have more
> than one controller node. If you don't, then you can try to power it off
> using your BMC IP with ipmitool.
>
> Besides that, I'm guessing that when you say VM, it is a virtual
> environment. Then you can use virsh.
>
> Please correct me if I'm making wrong assumptions.
>
> BR
>
> Elio
>
> *From:* Xie, Cindy
> *Sent:* Tuesday, August 20, 2019 7:52 AM
> *To:* Ezpeer Chen ; Hu, Yong ;
> Martinez Monroy, Elio
> *Cc:* starlingx-discuss at lists.starlingx.io
> *Subject:* RE: [Starlingx-discuss] How to power off an active controller
> via VM? (STX R1.0)
>
> Hi, Ezpeer,
>
> I am adding Elio, who may know how to execute those test steps.
>
> Thanks. - cindy
>
> *From:* Ezpeer Chen [mailto:ezpeerchen at gmail.com ]
> *Sent:* Tuesday, August 20, 2019 11:53 AM
> *To:* Hu, Yong
> *Cc:* starlingx-discuss at lists.starlingx.io
> *Subject:* Re: [Starlingx-discuss] How to power off an active controller
> via VM? (STX R1.0)
>
> Dear Yong,
>
> =============================================================
> [wrsroot at controller-0 ~(keystone_admin)]$ system host-power-off controller-0
> Can not 'Power-Off' an 'unlocked' host controller-0; Please 'Lock' first
> [wrsroot at controller-0 ~(keystone_admin)]$ system host-lock controller-0
> controller-0 : Rejected: Can not lock an active controller.
> [wrsroot at controller-0 ~(keystone_admin)]$
> =============================================================
>
> I don't know how to power off an active controller via VM?
>
> There are no instructions about VM?
>
> Thanks
>
> Yong Hu wrote on Tue, Aug 20, 2019 at 12:32 AM:
>
> Did the test steps in this test_plan page answer your question?
> Or you tried the steps but saw issues?
> > On 18/08/2019 8:29 PM, Ezpeer Chen wrote:
> > Dear all,
> >
> > How to power off an active controller via VM?
> >
> > From the STX R1.0 testplan:
> > https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_Testplan_Instructions
> >
> > - An active controller can be power off/on via VM.
> >
> > Thanks
> >
> > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From maria.g.perez.ibarra at intel.com Wed Aug 21 02:33:54 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Wed, 21 Aug 2019 02:33:54 +0000
Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190820
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-20 (link)
Status: GREEN
===========================================
Sanity Test is executed in a Containers - Bare Metal Environment
AIO - Simplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ]
===========================================
Sanity Test is executed in a Containers - Virtual Environment
AIO - Simplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ]
Standard - External Storage (2+2+2): Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ]

Regards Maria G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From yatindrax.shashi at intel.com Wed Aug 21 08:38:32 2019
From: yatindrax.shashi at intel.com (Shashi, YatindraX)
Date: Wed, 21 Aug 2019 08:38:32 +0000
Subject: [Starlingx-discuss] Commands for dns resolution
In-Reply-To:
References:
Message-ID:

Hi Ivan,

I think you can do it using helm-override. See the page https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ. Create a values yml file with the change that you want to do with the DNS.
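(A minimal sketch of such an override; the value-file key paths and the helm-override-update argument order below are assumptions that should be checked against the FAQ page and your release, and example.com is a placeholder domain:)

cat > dns-override.yml <<EOF
conf:
  neutron:
    DEFAULT:
      dns_domain: example.com
  plugins:
    ml2_conf:
      ml2:
        extension_drivers: port_security,dns
EOF
system helm-override-update neutron openstack --values dns-override.yml
system application-apply stx-openstack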
With best regards,
Yatindra Shashi
On behalf of Developer Relations Division, Intel Corporation
Munich, Germany

Save Paper, Go Digital :)

From: Hector Ivan, Ramos EscobarX [mailto:ramos.escobarx.hector.ivan at intel.com]
Sent: Wednesday, July 31, 2019 8:33 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Commands for dns resolution

Hi, currently I'm working on a TC that uses the following commands:
>> system service-parameter-add network ml2 extension_drivers=dns
>> system service-parameter-add network default dns_domain=wrs_dns.com
Both show the following error:
>> Invalid service name network.
Can someone provide the current command used to add these parameters?
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ezpeerchen at gmail.com Wed Aug 21 09:51:22 2019
From: ezpeerchen at gmail.com (Ezpeer Chen)
Date: Wed, 21 Aug 2019 17:51:22 +0800
Subject: [Starlingx-discuss] How could i get the latest STXR1.0 ISO?
Message-ID:

Dear all,

How could I get the latest STX R1.0 ISO with bugs fixed before STX R2.0?
For example: I want a STX R1.0 ISO with bugs fixed between 2018.11 and 2019.01.
I don't know how to get the latest STX R1.0 ISO.

Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cindy.xie at intel.com Wed Aug 21 10:36:42 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 21 Aug 2019 10:36:42 +0000
Subject: [Starlingx-discuss] How could i get the latest STXR1.0 ISO?
In-Reply-To:
References:
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601A244@SHSMSX104.ccr.corp.intel.com>

Hi, Ezpeer,
I don't think the community keeps building ISOs from the 2018.10 branch. Can you consider switching your deployment from 1.0 to 2.0?

Thanks. - cindy

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Wednesday, August 21, 2019 5:51 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] How could i get the latest STXR1.0 ISO?

Dear all,
How could I get the latest STX R1.0 ISO with bugs fixed before STX R2.0?
For example: I want a STX R1.0 ISO with bugs fixed between 2018.11 and 2019.01.
I don't know how to get the latest STX R1.0 ISO.
Thanks
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From serverascode at gmail.com Wed Aug 21 11:28:51 2019
From: serverascode at gmail.com (Curtis)
Date: Wed, 21 Aug 2019 07:28:51 -0400
Subject: [Starlingx-discuss] Moving on from the STX TSC
Message-ID:

Hi All,

Unfortunately I'm moving on from the STX TSC. About a month ago I changed jobs and I'm unable to dedicate the proper amount of time to the STX project.

I really appreciate having been part of the TSC and the project overall--it's a great group of people and organizations. I wish the project all the best and look forward to reading about its accomplishments in the future. :)

Thanks,
Curtis
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Bill.Zvonar at windriver.com Wed Aug 21 11:35:01 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Wed, 21 Aug 2019 11:35:01 +0000
Subject: [Starlingx-discuss] Issue in running ansible playbook
In-Reply-To: <03ab01d55416$8fe4fc20$afaef460$@calsoftinc.com>
References: <03ab01d55416$8fe4fc20$afaef460$@calsoftinc.com>
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007ACE73F@ALA-MBD.corp.ad.wrs.com>

Hi folks - anyone have some insight for Pavan on this? Thanks, Bill...
From: Pavan Gupta
Sent: Friday, August 16, 2019 5:40 AM
To: Zvonar, Bill
Cc: saichandu.behara at calsoftinc.com
Subject: Issue in running ansible playbook

Hi Bill,

With Stx2.0, we are looking into the following issue after running this command: 'ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml'. If you have any pointers, kindly let us know.

TASK [persist-config : debug] *******************************************************************************
ok: [localhost] => {
    "populate_result": {
        "changed": true,
        "failed": false,
        "failed_when_result": false,
        "msg": "non-zero return code",
        "rc": 1,
        "stderr": "No handlers could be found for logger \"controllerconfig.common.rest_api_utils\"\nTraceback (most recent call last):\n File \"/tmp/.ansible-sysadmin/tmp/ansible-tmp-1565850999.9-200492702943299/populate_initial_config.py\", line 950, in \n with openstack.OpenStack() as client:\n File \"/usr/lib64/python2.7/site-packages/controllerconfig/openstack.py\", line 62, in __enter__\n raise Exception('Failed to connect')\nException: Failed to connect\n",
        "stderr_lines": [
            "No handlers could be found for logger \"controllerconfig.common.rest_api_utils\"",
            "Traceback (most recent call last):",
            " File \"/tmp/.ansible-sysadmin/tmp/ansible-tmp-1565850999.9-200492702943299/populate_initial_config.py\", line 950, in ",
            " with openstack.OpenStack() as client:",
            " File \"/usr/lib64/python2.7/site-packages/controllerconfig/openstack.py\", line 62, in __enter__",
            " raise Exception('Failed to connect')",
            "Exception: Failed to connect"
        ],
        "stdout": "Failed to provision the initial system config.\n",
        "stdout_lines": [
            "Failed to provision the initial system config."
        ]
    }
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From cindy.xie at intel.com Wed Aug 21 13:40:31 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 21 Aug 2019 13:40:31 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 8/21
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601A3AF@SHSMSX104.ccr.corp.intel.com>

Agenda & notes for 8/21 meeting:

1. stx.2.0 bug triage & review (Cindy)
stx.distro.others: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other
1836638 : Jim found 3 upstream patches that may be related to the memory leak issue. Testing in progress to verify the findings. Yi has tested one patch: on controller-0 there is no leak, on controller-1 memory still grows. The next action is to build with all 3 patches together and run the testing again.
stx.storage: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage
1830736: reproduced in SH, debug in progress;
1839181: cannot be reproduced; the reporter said that it cannot be reproduced in WR's lab either. Status is now "incomplete".
1826886 / 1827063 : Fang Liang is out on vacation and will be back by Friday.
1827080: patch under review. Daniel to review it: https://review.opendev.org/#/c/677196/
1827119: this is a re-open. 2 issues from the reporter. The 1st issue has a patch uploaded for review: https://review.opendev.org/#/c/677424/; need to double check w/ reporter regarding issue 2, which Tingjie could not reproduce. Daniel to review the patch as well.
1831635: new info uploaded by reporter; need Fang Liang to investigate it after he is back.
1836800: Chris Winnicki to try the command and see if this can be reproduced.
1837242: reporter to try it on NVMe system.
2. call for contribution: de-brand "Titanium Cloud" https://storyboard.openstack.org/#!/story/2006387
Clarified the scope: no replacement in comments, copyrights, commit messages. Limit the scope to what is visible to the user (people who deploy the StarlingX image). Things like alarms, logs, error messages and console messages need to be replaced.
Anybody who wants to own this, please assign the task to yourself.

3. package version spec opens (Brent/Bin)
Waiting for Ian to provide his review comments on this spec: https://review.opendev.org/#/c/671644/

4. systemd story (Saul): https://storyboard.openstack.org/#!/story/2006192
StarlingX uses systemd to launch flock services, though there are some services that use sysvinit instead. It is required to standardize the use of systemd and move away from hybrid mode. This is related to multi-OS but the change is going to be in master code.
Saul will create several examples in stx-integ to show how to do it. We will use other resources to do the rest and the story will be tracked in our project meeting. This is not going to be 3.0 blocking.

5. Opens (all)
Tingjie: spec review for the Ceph containerization. Comments and suggestions provided by Brent and Ian. TSC this week shall review and approve the spec.

-----Original Message-----
From: Xie, Cindy
Sent: Tuesday, August 20, 2019 8:47 PM
To: Wold, Saul ; 'starlingx-discuss at lists.starlingx.io' ; 'Rowsell, Brent'
Subject: Agenda: Weekly StarlingX non-OpenStack distro meeting, 8/21

Agenda for 8/21 meeting:
1. stx.2.0 bug triage & review (Cindy)
2. call for contribution: de-brand "Titanium Cloud"
3. package version spec opens (Brent/Bin)
4. Opens (all)

-----Original Appointment-----
From: Xie, Cindy
Sent: Thursday, April 25, 2019 5:42 PM
To: Xie, Cindy; Wold, Saul; 'starlingx-discuss at lists.starlingx.io'; 'Rowsell, Brent'; 'zhaos'
Cc: 'Seiler, Glenn'; Hu, Wei W; Peng Tan; Gomez, Juan P; 'Waines, Greg'; 'Eslimi, Dariush'; Jones, Bruce E; 'Zhi Zhi2 Chang'; Chen, Tingjie; 'Badea, Daniel'; 'Chen, Jacky'; 'Komiyama, Takeo'; Armstrong, Robert H; 'Carlos Cebrian'; Cobbley, David A
Subject: Weekly StarlingX non-OpenStack distro meeting
When: Wednesday, August 21, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi.
Where: https://zoom.us/j/342730236

. Cadence and time slot:
o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
o Zoom link: https://zoom.us/j/342730236
o Dialing in from phone:
o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
o Meeting ID: 342 730 236
o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
o https://etherpad.openstack.org/p/stx-distro-other

From elio.martinez.monroy at intel.com Wed Aug 21 14:46:35 2019
From: elio.martinez.monroy at intel.com (Martinez Monroy, Elio)
Date: Wed, 21 Aug 2019 14:46:35 +0000
Subject: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0)
In-Reply-To:
References: <79a262e4-b0c4-da27-3f04-34c20a5d4296@intel.com> <2FD5DDB5A04D264C80D42CA35194914F360191FD@SHSMSX104.ccr.corp.intel.com> <1466AF2176E6F040BD63860D0A241BBD495C2030@FMSMSX109.amr.corp.intel.com>
Message-ID: <1466AF2176E6F040BD63860D0A241BBD495C25F6@FMSMSX109.amr.corp.intel.com>

So, my recommendation would be to configure the BMC in order to reach the host in another way, like this:

~(keystone_admin)$ system host-update bm_username=user_name bm_password= bm_type=bmc
~(keystone_admin)$ system host-update bm_ip=.
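(Once the BMC is configured, the power-off itself can be done with standard ipmitool, and the virtual case with standard virsh. Illustrative only - the IP, credentials and domain name below are placeholders:)

ipmitool -I lanplus -H <bm_ip> -U <bm_username> -P <bm_password> chassis power off
ipmitool -I lanplus -H <bm_ip> -U <bm_username> -P <bm_password> chassis power on
# in a libvirt-based virtual deployment, from the host running the VMs:
virsh destroy controller-0
virsh start controller-0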
Just remember that you should have a BMC network on your infra.

BR

Elio

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Tuesday, August 20, 2019 9:13 PM
To: Martinez Monroy, Elio
Cc: Xie, Cindy ; Hu, Yong ; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0)

Dear Elio,

From the STX R1.0 testplan: https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_Testplan_Instructions
Testplan: - An active controller can be power off/on via VM.
I want to know how to power off an active controller via VM? I can't find any command about this.

Thanks

Martinez Monroy, Elio wrote on Tue, Aug 20, 2019 at 11:58 PM:
Hi Chen, it seems that you want to turn off the active controller; the system is protecting against doing that. That is why it is saying that you need to lock it first.
My suggestion would be to swact controllers, in case you have more than one controller node. If you don't, then you can try to power it off using your BMC IP with ipmitool.
Besides that, I'm guessing that when you say VM, it is a virtual environment. Then you can use virsh.
Please correct me if I'm making wrong assumptions.
BR
Elio

From: Xie, Cindy
Sent: Tuesday, August 20, 2019 7:52 AM
To: Ezpeer Chen; Hu, Yong; Martinez Monroy, Elio
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0)

Hi, Ezpeer,
I am adding Elio, who may know how to execute those test steps.
Thanks. - cindy

From: Ezpeer Chen [mailto:ezpeerchen at gmail.com]
Sent: Tuesday, August 20, 2019 11:53 AM
To: Hu, Yong
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] How to power off an active controller via VM? (STX R1.0)

Dear Yong,

=============================================================
[wrsroot at controller-0 ~(keystone_admin)]$ system host-power-off controller-0
Can not 'Power-Off' an 'unlocked' host controller-0; Please 'Lock' first
[wrsroot at controller-0 ~(keystone_admin)]$ system host-lock controller-0
controller-0 : Rejected: Can not lock an active controller.
[wrsroot at controller-0 ~(keystone_admin)]$
=============================================================

I don't know how to power off an active controller via VM?

There are no instructions about VM?

Thanks

Yong Hu wrote on Tue, Aug 20, 2019 at 12:32 AM:

Did the test steps in this test_plan page answer your question?
Or you tried the steps but saw issues?

On 18/08/2019 8:29 PM, Ezpeer Chen wrote:
> Dear all,
>
> How to power off an active controller via VM?
>
> From the STX R1.0 testplan:
> https://wiki.openstack.org/wiki/StarlingX/stx.2018.10_Testplan_Instructions
>
> - An active controller can be power off/on via VM.
>
> Thanks
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Tee.Ngo at windriver.com Wed Aug 21 15:17:08 2019
From: Tee.Ngo at windriver.com (Ngo, Tee)
Date: Wed, 21 Aug 2019 15:17:08 +0000
Subject: [Starlingx-discuss] Issue in running ansible playbook
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007ACE73F@ALA-MBD.corp.ad.wrs.com>
References: <03ab01d55416$8fe4fc20$afaef460$@calsoftinc.com> <586E8B730EA0DA4A9D6A80A10E486BC007ACE73F@ALA-MBD.corp.ad.wrs.com>
Message-ID: <80ED4CE81E3D8F4099306648E95DAFE453AAA234@ALA-MBD.corp.ad.wrs.com>

Odd. Test teams have not encountered a bootstrap issue in stx2.0.

Pavan, is sysinv-api up? You may want to check the sysinv and puppet logs for a clue. Can you also share the ansible.log (should be under /home/sysadmin) and the content of the host override file (localhost.yml)?

Tee
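(A few quick checks along those lines, using standard tools and the paths mentioned above:)

ps -ef | grep sysinv-api
tail -n 100 /var/log/sysinv.log
tail -n 100 /home/sysadmin/ansible.log
cat /home/sysadmin/localhost.yml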
From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com]
Sent: August-21-19 7:35 AM
To: Pavan Gupta; starlingx-discuss at lists.starlingx.io
Cc: saichandu.behara at calsoftinc.com
Subject: Re: [Starlingx-discuss] Issue in running ansible playbook

Hi folks - anyone have some insight for Pavan on this? Thanks, Bill...

From: Pavan Gupta
Sent: Friday, August 16, 2019 5:40 AM
To: Zvonar, Bill
Cc: saichandu.behara at calsoftinc.com
Subject: Issue in running ansible playbook

Hi Bill,
With Stx2.0, we are looking into the following issue after running this command: 'ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml'. If you have any pointers, kindly let us know.

TASK [persist-config : debug] *******************************************************************************
ok: [localhost] => {
    "populate_result": {
        "changed": true,
        "failed": false,
        "failed_when_result": false,
        "msg": "non-zero return code",
        "rc": 1,
        "stderr": "No handlers could be found for logger \"controllerconfig.common.rest_api_utils\"\nTraceback (most recent call last):\n File \"/tmp/.ansible-sysadmin/tmp/ansible-tmp-1565850999.9-200492702943299/populate_initial_config.py\", line 950, in \n with openstack.OpenStack() as client:\n File \"/usr/lib64/python2.7/site-packages/controllerconfig/openstack.py\", line 62, in __enter__\n raise Exception('Failed to connect')\nException: Failed to connect\n",
        "stderr_lines": [
            "No handlers could be found for logger \"controllerconfig.common.rest_api_utils\"",
            "Traceback (most recent call last):",
            " File \"/tmp/.ansible-sysadmin/tmp/ansible-tmp-1565850999.9-200492702943299/populate_initial_config.py\", line 950, in ",
            " with openstack.OpenStack() as client:",
            " File \"/usr/lib64/python2.7/site-packages/controllerconfig/openstack.py\", line 62, in __enter__",
            " raise Exception('Failed to connect')",
            "Exception: Failed to connect"
        ],
        "stdout": "Failed to provision the initial system config.\n",
        "stdout_lines": [
            "Failed to provision the initial system config."
        ]
    }
}
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From ildiko.vancsa at gmail.com Wed Aug 21 16:56:29 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Wed, 21 Aug 2019 11:56:29 -0500
Subject: [Starlingx-discuss] Community Marketing Planning call prepping for Release 2.0 tomorrow - Notes
In-Reply-To:
References:
Message-ID: <9D1818B9-94B8-4900-AD4B-4D830556C44A@gmail.com>

Hi StarlingX Community,

During our call today we mainly talked about the starlingx.io website updates and the timeline for the communications. As the release is planned for August 30, which is the Friday of a long weekend (Monday is a holiday in the US), __the press release and website updates will go live on September 3rd__.

We are finalizing the website updates on this pull request: https://github.com/StarlingXWeb/starlingx-website/pull/42

There will be further pull requests for blog posts to highlight new features and functionality in the 2.0 release.

You can also see the new overview slide deck proposal here, which is still in progress: https://github.com/StarlingXWeb/starlingx-website/issues/39

Please reply to this mail thread or leave notes on the GitHub items if you have any questions or comments on any of the above.

Thanks and Best Regards,
Ildikó

> On 2019. Aug 20., at 16:40, Ildiko Vancsa wrote:
>
> Hi StarlingX Community,
>
> We have our next Community Marketing Planning call tomorrow to finalize preparations for communications around the upcoming 2.0 release of StarlingX.
>
> The call will be at a slightly different time at 9am Pacific Time / 1600 UTC tomorrow.
>
> You can find dial in information and meeting agenda here: https://etherpad.openstack.org/p/2019_StarlingX_Marketing_Plans
>
> Please feel free to add items to the agenda for tomorrow. Add your name next to your item so we know who to give the floor to.
>
> Please let me know if you have questions.
>
> Thanks and Best Regards,
> Ildikó

From ildiko.vancsa at gmail.com Wed Aug 21 17:46:52 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Wed, 21 Aug 2019 12:46:52 -0500
Subject: [Starlingx-discuss] Distributed Cloud in the overview slide deck update
Message-ID: <65571BB7-2D0C-4F46-8ED7-0261D123E1B3@gmail.com>

Hi,

I've been going through the new overview slide deck that is proposed to describe the enhanced platform: https://github.com/StarlingXWeb/starlingx-website/files/3392060/StarlingX.Onboarding.Deck.for.Web.July.2019.pdf

I came across a section on Distributed Cloud which got me a bit confused, as, if I remember correctly, we said that this functionality was delayed until the 3.0 release.
To make sure communications around 2.0 are accurate, my question would be: does the functionality in the slides cover only a subset of the features that were planned for 2.0, did the feature fully make it into 2.0, or do the slides contain information about 2.0 + 3.0?

Thanks and Best Regards,
Ildikó

From Dariush.Eslimi at windriver.com Wed Aug 21 17:58:29 2019
From: Dariush.Eslimi at windriver.com (Eslimi, Dariush)
Date: Wed, 21 Aug 2019 17:58:29 +0000
Subject: [Starlingx-discuss] Distributed Cloud in the overview slide deck update
In-Reply-To: <65571BB7-2D0C-4F46-8ED7-0261D123E1B3@gmail.com>
References: <65571BB7-2D0C-4F46-8ED7-0261D123E1B3@gmail.com>
Message-ID:

Hi,

The code for the feature is partly in the 2.0 (not all) and no testing has been done on it by verification teams.
It was officially voted out of 2.0 by TSC and it will be part of 3.0.

Thanks,
Dariush

-----Original Message-----
From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com]
Sent: August-21-19 1:47 PM
To: starlingx
Cc: Waines, Greg
Subject: [Starlingx-discuss] Distributed Cloud in the overview slide deck update

Hi,

I've been going through the new overview slide deck that is proposed to describe the enhanced platform: https://github.com/StarlingXWeb/starlingx-website/files/3392060/StarlingX.Onboarding.Deck.for.Web.July.2019.pdf

I came across a section on Distributed Cloud which got me a bit confused, as, if I remember correctly, we said that this functionality was delayed until the 3.0 release. To make sure communications around 2.0 are accurate, my question would be: does the functionality in the slides cover only a subset of the features that were planned for 2.0, did the feature fully make it into 2.0, or do the slides contain information about 2.0 + 3.0?

Thanks and Best Regards,
Ildikó

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Jerry.Sun at windriver.com Wed Aug 21 19:00:17 2019
From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry))
Date: Wed, 21 Aug 2019 19:00:17 +0000
Subject: [Starlingx-discuss] [docs] Changing ansible docker_registries structure
Message-ID:

Hi All,

The change has been merged into the stx2.0 branch. Please make any needed documentation changes.

Thanks,
Jerry

From: Sun, Yicheng (Jerry)
Sent: August-20-19 4:18 PM
To: 'starlingx-discuss at lists.starlingx.io'
Subject: [docs] Changing ansible docker_registries structure

Hi all,

The change has been merged into master. Please make any needed documentation changes.

Thanks,
Jerry

From: Sun, Yicheng (Jerry)
Sent: August-20-19 11:50 AM
To: 'starlingx-discuss at lists.starlingx.io'
Subject: Changing ansible docker_registries structure

Hi All,

I am making changes that change the accepted format for docker registries from something like

docker_registries:
  k8s.gcr.io: url

to something like

docker_registries:
  k8s.gcr.io:
    url: url

This affects anyone with a setup that specifies alternate docker registries, as they will need to change their ansible localhost.yml files after this commit.
The change is currently under review: https://review.opendev.org/#/c/677005/
Please let me know if you have any concerns.

Thanks,
Jerry
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From jimmy at openstack.org Wed Aug 21 19:18:40 2019
From: jimmy at openstack.org (Jimmy McArthur)
Date: Wed, 21 Aug 2019 14:18:40 -0500
Subject: [Starlingx-discuss] Shanghai Forum Selection Committee
Message-ID: <5D5D9910.9050506@openstack.org>

Hello Starling X!

The Forum in Shanghai is coming up. We would love 1 volunteer from StarlingX for the Forum Selection Committee. Ideally, the volunteer would already be serving in some capacity in a governance role for your project.

For information on the Summit in Shanghai: https://www.openstack.org/summit/shanghai-2019/
For more information on the Forum, please see: https://wiki.openstack.org/wiki/Forum

Please reach out to myself or knelson at openstack.org if you're interested. Volunteers should respond on or before September 2, 2019.

Cheers,
Jimmy

From ildiko.vancsa at gmail.com Wed Aug 21 19:38:08 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Wed, 21 Aug 2019 14:38:08 -0500
Subject: [Starlingx-discuss] Distributed Cloud in the overview slide deck update
In-Reply-To:
References: <65571BB7-2D0C-4F46-8ED7-0261D123E1B3@gmail.com>
Message-ID:

Hi Dariush,

Sounds good, thank you for clarifying!

Best Regards,
Ildikó

> On 2019. Aug 21., at 12:58, Eslimi, Dariush wrote:
>
> Hi,
>
> The code for the feature is partly in the 2.0 (not all) and no testing has been done on it by verification teams.
> It was officially voted out of 2.0 by TSC and it will be part of 3.0.
> > Thanks, > Dariush > > -----Original Message----- > From: Ildiko Vancsa [mailto:ildiko.vancsa at gmail.com] > Sent: August-21-19 1:47 PM > To: starlingx > Cc: Waines, Greg > Subject: [Starlingx-discuss] Distributed Cloud in the overview slide deck update > > Hi, > > I’ve been going through the new overview slide deck that is proposed to describe the enhanced platform: https://github.com/StarlingXWeb/starlingx-website/files/3392060/StarlingX.Onboarding.Deck.for.Web.July.2019.pdf > > I came across a section on Distributed Cloud which got me a bit confused as if I remember correctly we said that this functionality was delayed until the 3.0 release. To make sure communications around 2.0 are accurate, my question would be if the functionality in the slides covers only a subset of features that was planned for 2.0, did the feature fully make it into 2.0 or the slides are containing information about 2.0 + 3.0? > > Thanks and Best Regards, > Ildikó > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yong.hu at intel.com Wed Aug 21 19:40:57 2019 From: yong.hu at intel.com (Yong Hu) Date: Wed, 21 Aug 2019 12:40:57 -0700 Subject: [Starlingx-discuss] Shanghai Forum Selection Committee In-Reply-To: <5D5D9910.9050506@openstack.org> References: <5D5D9910.9050506@openstack.org> Message-ID: Hi Jimmy, I might be one of volunteers/candidates for this. I've been working on StarlingX from the beginning, and recently I am covering Bruce on some program/project management assignments. I am based in SH so travel won't be an issue to me :-) regards, Yong On 21/08/2019 12:18 PM, Jimmy McArthur wrote: > Hello Starling X! > > The Forum in Shanghai is coming up.  We would love 1 volunteer from > StarlingX for the Forum Selection Committee. Ideally, the volunteer > would already be serving in some capacity in a governance role for your > project. > > For information on the Summit in Shanghai: > https://www.openstack.org/summit/shanghai-2019/ > For more information on the Forum, please > see:https://wiki.openstack.org/wiki/Forum > > Please reach out to myself orknelson at openstack.org  if you're > interested. Volunteers should respond on or before September 2, 2019. > > Cheers, > Jimmy > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From jimmy at openstack.org Wed Aug 21 19:55:37 2019 From: jimmy at openstack.org (Jimmy McArthur) Date: Wed, 21 Aug 2019 14:55:37 -0500 Subject: [Starlingx-discuss] Shanghai Forum Selection Committee In-Reply-To: References: <5D5D9910.9050506@openstack.org> Message-ID: <5D5DA1B9.2080200@openstack.org> Hi Yong, Saul Wold beat you by about 15 minutes :) For now, I think we're set on volunteers, but I appreciate your enthusiasm. Thanks for jumping on this, StarlingXers! Cheers, Jimmy > Yong Hu > August 21, 2019 at 2:40 PM > Hi Jimmy, > I might be one of volunteers/candidates for this. > I've been working on StarlingX from the beginning, and recently I am > covering Bruce on some program/project management assignments. > I am based in SH so travel won't be an issue to me :-) > > regards, > Yong > > > Jimmy McArthur > August 21, 2019 at 2:18 PM > Hello Starling X! > > The Forum in Shanghai is coming up. We would love 1 volunteer from > StarlingX for the Forum Selection Committee. 
Ideally, the volunteer > would already be serving in some capacity in a governance role for > your project. > > For information on the Summit in Shanghai: > https://www.openstack.org/summit/shanghai-2019/ > For more information on the Forum, please > see:https://wiki.openstack.org/wiki/Forum > > Please reach out to myself orknelson at openstack.org if you're > interested. Volunteers should respond on or before September 2, 2019. > > Cheers, > Jimmy > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Wed Aug 21 20:34:32 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Wed, 21 Aug 2019 20:34:32 +0000 Subject: [Starlingx-discuss] [RC] Sanity Test - ISO 20190821 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-August-21 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers – Bare Metal Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers – Virtual Environment AIO – Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO – Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Best Regards, Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Wed Aug 21 20:38:20 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 21 Aug 2019 20:38:20 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 8/21/2019 Message-ID: <3808363B39586544A6839C76CF81445EA1BB2943@ORSMSX104.amr.corp.intel.com> For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings. -- Mike -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From cristopher.j.lemus.contreras at intel.com Thu Aug 22 01:26:37 2019
From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J)
Date: Thu, 22 Aug 2019 01:26:37 +0000
Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190821
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-August-21 (link)
Status: GREEN
===========================================
Sanity Test is executed in a Containers - Bare Metal Environment
AIO - Simplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ]
Standard - Dedicated Storage (2+2+2): Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ]
===========================================
Sanity Test is executed in a Containers - Virtual Environment
AIO - Simplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ]
Standard - Dedicated Storage (2+2+2): Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ]
===========================================

Best Regards,
Cristopher Lemus
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From zhipengs.liu at intel.com Thu Aug 22 03:21:16 2019
From: zhipengs.liu at intel.com (Liu, ZhipengS)
Date: Thu, 22 Aug 2019 03:21:16 +0000
Subject: [Starlingx-discuss] [Bug]About GPU passthrough issue.
Message-ID: <93814834B4855241994F290E959305C7530BF5F5@SHSMSX104.ccr.corp.intel.com>

Hi all,

About the bug of being unable to create a VM with GPU/Crypto passthrough devices:
https://bugs.launchpad.net/starlingx/+bug/1824831
The question is clear now for GPU passthrough.
According to the OpenStack doc, we need to modify nova.conf to add alias info.
https://docs.openstack.org/nova/pike/admin/pci-passthrough.html

For QAT, we have added some frequently used QAT items by hardcoding, like below:
alias = {"vendor_id": "8086", "product_id": "0435", "name": "qat-dh895xcc-pf"}
alias = {"vendor_id": "8086", "product_id": "0443", "name": "qat-dh895xcc-vf"}
alias = {"vendor_id": "8086", "product_id": "37c8", "name": "qat-c62x-pf"}
alias = {"vendor_id": "8086", "product_id": "37c9", "name": "qat-c62x-vf"}

Can we do the same for GPU and add GPU items by hardcoding?
As far as I know, you used [102b:0522] [vendorid:productid]. I also saw the GPU products below:
[1a03:2000]
[8086:3e92]

BTW, can we create a port and attach it to the VM, so that there is no need to add this alias?
The perfect solution would be to detect GPU info automatically and add it to the alias list after power-on.
From my point of view, this is a new feature requirement.

Any comment? Thanks!
Zhipeng
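(As an illustration, following the same pattern as the QAT entries above, a GPU alias plus the matching flavor property might look like the sketch below; the device IDs are the ones quoted above, while the alias name "gpu-example", the flavor name and the device_type are made-up assumptions for the example:)

[pci]
alias = {"vendor_id": "102b", "product_id": "0522", "name": "gpu-example", "device_type": "type-PCI"}

openstack flavor set gpu-flavor --property "pci_passthrough:alias"="gpu-example:1"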
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From abraham.arce.moreno at intel.com Thu Aug 22 14:19:40 2019
From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham)
Date: Thu, 22 Aug 2019 14:19:40 +0000
Subject: [Starlingx-discuss] Service Parameter Bug 1818887
Message-ID:

Hi,

I need your advice on how to interpret my findings related to this bug:
https://bugs.launchpad.net/starlingx/+bug/1818887 Failed to apply run-time manifest after the http port was modified

I will take this review as my initial learning source:
https://review.opendev.org/#/c/634237 Configurable Host HTTP/HTTPS Port Binding

The test steps are:
1. system service-parameter-modify http config http_port=6666
2. system service-parameter-apply http

The expected behavior: After modifying the http service-parameter, the Horizon dashboard should be accessible on http port 6666.

[ Fail ]

The first time service-parameter-apply is executed, there is a process error:

/var/log/sysinv.log
2019-08-21 21:48:55.208 1727887 INFO sysinv.agent.manager ... _apply_runtime_manifest with hieradata_path = '/opt/platform/puppet/19.01/hieradata'
2019-08-21 21:48:55.209 1727887 WARNING sysinv.puppet.common ... /opt/platform/puppet/19.01/hieradata 192.168.204.3 controller runtime
2019-08-21 21:49:06.543 1727887 ERROR sysinv.puppet.common [req-20042efd-3d18-4231-85a8-fb38bdcb3a56 admin admin] Failed to execute runtime manifest for host 192.168.204.3
2019-08-21 21:49:06.543 1727887 TRACE sysinv.puppet.common Traceback (most recent call last):
2019-08-21 21:49:06.543 1727887 TRACE sysinv.puppet.common CalledProcessError: Command '['/usr/local/bin/puppet-manifest-apply.sh', '/opt/platform/puppet/19.01/hieradata', '192.168.204.3', 'controller', 'runtime', '/tmp/tmpothK1t.yaml']' returned non-zero exit status 1

/var/log/puppet/latest/puppet.log
2019-08-21T21:49:04.805 ... Exec[Adding StarlingX helm repo: stx-platform](provider=posix): Executing 'helm repo add stx-platform http://127.0.0.1:8083/helm_charts/stx-platform'
2019-08-21T21:49:04.810 ... Executing with uid=sysadmin gid=sys_protected: 'helm repo add stx-platform http://127.0.0.1:8083/helm_charts/stx-platform'
2019-08-21T21:49:04.840 ... /Stage[main]/Platform::Helm::Repositories/Platform::Helm::Repository[stx-platform]/Exec[Adding StarlingX helm repo: stx-platform]/returns: Error: Looks like "http://127.0.0.1:8083/helm_charts/stx-platform" is not a valid chart repository or cannot be reached: Failed to fetch http://127.0.0.1:8083/helm_charts/stx-platform/index.yaml : 404 Not Found
2019-08-21T21:49:04.843 Error: 2019-08-21 21:49:04 +0000 helm repo add stx-platform http://127.0.0.1:8083/helm_charts/stx-platform returned 1 instead of one of [0]
2019-08-21T21:49:04.983 ... Platform::Helm::Repository[stx-platform]: Resource is being skipped, unscheduling all events
2019-08-21T21:49:04.985 ... Exec[Adding StarlingX helm repo: starlingx](provider=posix): Executing 'helm repo add starlingx http://127.0.0.1:8083/helm_charts/starlingx'
2019-08-21T21:49:04.987 ... Executing with uid=sysadmin gid=sys_protected: 'helm repo add starlingx http://127.0.0.1:8083/helm_charts/starlingx'
2019-08-21T21:49:04.990 ...
/Stage[main]/Platform::Helm::Repositories/Platform::Helm::Repository[starlingx]/Exec[Adding StarlingX helm repo: starlingx]/returns: Error: Looks like "http://127.0.0.1:8083/helm_charts/starlingx" is not a valid chart repository or cannot be reached: Failed to fetch http://127.0.0.1:8083/helm_charts/starlingx/index.yaml : 404 Not Found
2019-08-21T21:49:04.992 Error: 2019-08-21 21:49:04 +0000 helm repo add starlingx http://127.0.0.1:8083/helm_charts/starlingx returned 1 instead of one of [0]

[ Success ]

The second time service-parameter-apply is executed, there is no process error, and the fault alarm is cleared.

[ Questions ]

Are we missing something under puppet-manifests/src/modules/platform/manifests/helm.pp? A restart of "systemctl restart lighttpd.service" so the new port value can be properly configured before trying to access http://127.0.0.1:8083/helm_charts/starlingx/?
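(One quick way to observe the suspected race, using standard curl to check when the chart index becomes reachable again after the port change - illustrative only:)

curl -sf http://127.0.0.1:8083/helm_charts/starlingx/index.yaml > /dev/null \
  && echo "helm repo reachable" || echo "helm repo not reachable yet"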
Thanks for your advice!

Best Regards
Abraham

From Kristine.Bujold at windriver.com Thu Aug 22 15:00:27 2019
From: Kristine.Bujold at windriver.com (Bujold, Kristine)
Date: Thu, 22 Aug 2019 15:00:27 +0000
Subject: [Starlingx-discuss] documentation missing
Message-ID: <5ECD8395442B0C4FB807F9737625BB6768C5BCB0@ALA-MBD.corp.ad.wrs.com>

Hi

The installation instructions for StarlingX in VBox have been removed from these sites:
https://wiki.openstack.org/wiki/StarlingX/Containers/Installation
https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnAIODX
https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnStandard
https://wiki.openstack.org/wiki/StarlingX/Containers/InstallationOnStandardStorage

We should keep these instructions until we have a proper substitute, which we do not have right now. https://docs.starlingx.io/deploy_install_guides/index.html is not ready.

Thank you,
Kristine
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From maria.g.perez.ibarra at intel.com Thu Aug 22 16:49:18 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Thu, 22 Aug 2019 16:49:18 +0000
Subject: [Starlingx-discuss] [RC1] Sanity Test - ISO 20190822
Message-ID:

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-22 (link)
Status: GREEN
===========================================
Sanity Test is executed in a Containers - Bare Metal Environment
AIO - Simplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ]
Standard - Dedicated Storage (2+2+2): Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ]
===========================================
Sanity Test is executed in a Containers - Virtual Environment
AIO - Simplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ]
AIO - Duplex: Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ]
Standard - Local Storage (2+2): Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ]
Standard - External Storage (2+2+2): Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ]
===========================================

Regards
Maria G.
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Al.Bailey at windriver.com Thu Aug 22 18:05:58 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Thu, 22 Aug 2019 18:05:58 +0000
Subject: [Starlingx-discuss] STX upversioned to kubernetes 1.15.3
Message-ID:

The commits related to upversioning kubernetes from 1.13 to 1.15 have merged. If anyone encounters issues or odd behavior, let us know.

For anyone who uses a private docker registry, you may need to resynchronize your docker images. One image in particular to be aware of is ceph-config-helper, since we are now using a newer version.
Previously it was originating from docker.io/port/ceph-config-helper:v1.10.3
A newer version is now being used and is hosted in docker's starlingx account at: docker.io/starlingx/ceph-config-helper:v1.15.0

Al
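(For a private registry, the re-synchronization follows the usual pull/tag/push pattern; the registry host below is a placeholder:)

docker pull docker.io/starlingx/ceph-config-helper:v1.15.0
docker tag docker.io/starlingx/ceph-config-helper:v1.15.0 my-registry.example.com/starlingx/ceph-config-helper:v1.15.0
docker push my-registry.example.com/starlingx/ceph-config-helper:v1.15.0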
- for "Generate configuration option to enable numa-aware-vswitches", see http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005717.html - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005660.html - from Jose, for the Robot automated test suite - reviews for bugs that 99Cloud's working on... - LP 1792999 : The fix patch was proposed but only have few review: https://review.opendev.org/#/c/621646/ - LP 1820882 : The fix patch was proposed but only have few review: https://review.opendev.org/#/c/651969/ - sanity - any reds since last call? - no, all green since last week - prep for 2.0 declaration next week - logistics of the branch - Ghada to move all open/Medium stx.2.0 bugs to stx.3.0 - Friday August 23 - Final Candidate Build for r/stx.2.0 is planned for Monday August 26 pm ET - We will halt any cherrypicks to r/stx.2.0 for the rest of the week to finish off the build sanity/mini-regression/labeling/posting on CENGN, etc. - Email will be sent out the week after when cherrypicks can continue. - Only high priority stx.2.0 bugs are targeted for cherrypicking for the first maintenance release. - Need to work the details in the release planning meeting - final check on what needs to be cherry-picked - on Monday (26th) - feature test - ironic, etc. - final regression - documentation - review open ARs - First Contact SIG - update from last week's meeting (Bill, Yong): https://etherpad.openstack.org/p/stx-first-contact - zuul-jobs repo (Saul) - part of the multi-OS effort - will work for both CentOS and Open SUSE - per Dean, this repo will also host other common test stuff, like DevStack - much of this stuff is in stx-integ now, the idea is to move it to a good common location that all repos can use - as Saul enables it, it'll become a required project in the .yaml files of other repos ---------------------------------------------------------------------------------------------------------------- From: Zvonar, Bill Sent: Tuesday, August 20, 2019 7:22 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: Community Call (August 21, 2019) Hi everyone, reminder of the Community Call tomorrow. Topics on the agenda include... - preparing for the 2.0 declaration next week! Please feel free to add topics on the etherpad [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190821T1400 From jim.somerville at windriver.com Thu Aug 22 20:58:43 2019 From: jim.somerville at windriver.com (Jim Somerville) Date: Thu, 22 Aug 2019 16:58:43 -0400 Subject: [Starlingx-discuss] Opinion wanted on STX 2.0 kernel memory leak bug 1836638 Message-ID: Hi Folks, I have identified 3 kernel patches to fix the observed memory leak in the RT kernel. Namely listed here in comment 53: https://bugs.launchpad.net/starlingx/+bug/1836638/comments/53 While the leak is only really seen in RT, the fixes are not RT specific. The RT kernel has ferreted out many linux bugs in the past mainly due to its different scheduling points, and running irq handlers as kernel threads, causing things to run in a different order from std. So should these patches be applied to both of our kernels, even though we only see the leaking on RT? I would say yes, but don't want to waste my, and possibly Yi's, time if approvers disagree. 
Thanks,

-Jim

From Brent.Rowsell at windriver.com Thu Aug 22 21:53:32 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Thu, 22 Aug 2019 21:53:32 +0000
Subject: [Starlingx-discuss] Opinion wanted on STX 2.0 kernel memory leak bug 1836638
In-Reply-To:
References:
Message-ID: <7EE828B6-3A03-4496-89B1-202AB8BB39D0@windriver.com>

Nice work ! +1 on doing both kernels.

Brent

Sent from my iPhone

> On Aug 23, 2019, at 5:58 AM, Jim Somerville wrote:
>
> Hi Folks,
>
> I have identified 3 kernel patches to fix the observed memory leak in the RT kernel. Namely listed here in comment 53: https://bugs.launchpad.net/starlingx/+bug/1836638/comments/53
>
> While the leak is only really seen in RT, the fixes are not RT specific. The RT kernel has ferreted out many linux bugs in the past mainly due to its different scheduling points, and running irq handlers as kernel threads, causing things to run in a different order from std.
>
> So should these patches be applied to both of our kernels, even though we only see the leaking on RT? I would say yes, but don't want to waste my, and possibly Yi's, time if approvers disagree.
>
> Thanks,
>
> -Jim

From maria.g.perez.ibarra at intel.com Thu Aug 22 22:57:04 2019
From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G)
Date: Thu, 22 Aug 2019 22:57:04 +0000
Subject: [Starlingx-discuss] [ Final Regression - stx2.0 ] Report for 8/22/19
Message-ID:

StarlingX 2.0 Release Status:
ISO: BUILD_ID=" 20190815T053000Z" from (link)
----------------------------------------------------------------------
MANUAL EXECUTION
----------------------------------------------------------------------
Overall Results: Total = 211 Pass = 78 Fail = 1 Blocked = 0 Not Run = 132 Total executed = 79 Pass Rate = 98.73%
Formula used: Pass Rate = pass * 100 / (pass + fail)
Results per Domain: Regression - AIO-SX 6 PASS Regression - Backup & Restore - Regression - Distributed Cloud - Regression - Gnocchi 2 PASS Regression - FM Regression - HA Regression - Heat 4 PASS Regression - Horizon 1 PASS Regression - Install and Config Regression - Maintenance Regression - Networking 21 PASS | 1 FAIL Regression - Nova Regression - Security 6 PASS Regression - Storage 9 PASS Regression - Inventory 7 PASS System Test 7 PASS Regression - new features 15 PASS
After pull data cable on the compute, no alarm has triggered https://bugs.launchpad.net/starlingx/+bug/1834512
Containers: lock_host failed on a host with config_drive VM https://bugs.launchpad.net/starlingx/+bug/1821026
stx-openstack apply takes longer time when lock and unlock on standby controller https://bugs.launchpad.net/starlingx/+bug/1834083
neutron-l3-agent and neutron-dhcp-agent never recovered after force reboot on compute https://bugs.launchpad.net/starlingx/+bug/1835807
When creating instance with pci-passthrough port getting error https://bugs.launchpad.net/starlingx/+bug/1836682
unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633
application apply fails after compute lock and unlock https://bugs.launchpad.net/starlingx/+bug/1836609
403 error in horizon log when try to update the flavor metadata (and admin user is logged out) https://bugs.launchpad.net/starlingx/+bug/1821213
instance creating via horizon failed https://bugs.launchpad.net/starlingx/+bug/1829925
After active controller reboot, VM boot up failed with error "Failed to allocate the network(s), not rescheduling" https://bugs.launchpad.net/starlingx/+bug/1836928
nova instance remnant left behind after cold migration
completes https://bugs.launchpad.net/starlingx/+bug/1824858 disk_available_least value updates when instance moved but not to the value expected https://bugs.launchpad.net/nova/+bug/1834527 Containers: vm unreachable for minutes after live migration or vm reboot https://bugs.launchpad.net/starlingx/+bug/1818118 unexpected output when wipe unassigned disk https://bugs.launchpad.net/starlingx/+bug/1836633 AIO-DX Application apply aborted Unexpected process termination while application-apply was in progress https://bugs.launchpad.net/starlingx/+bug/1838101 Uncontrolled swact on standard system is slow https://bugs.launchpad.net/starlingx/+bug/1838411 tenant-mgmt-net not reachable from external network https://bugs.launchpad.net/starlingx/+bug/1836252 dedicated instance on low latency worker node not appearing in C1 state https://bugs.launchpad.net/starlingx/+bug/1838524 Intermittently the openstack server show indicates that the server does not exist (in live migration tests) https://bugs.launchpad.net/starlingx/+bug/1838676 Resize to swapless flavor still looking for swap https://bugs.launchpad.net/nova/+bug/1762423 SSH to VM failed by Permission denied (publickey) https://bugs.launchpad.net/starlingx/+bug/1824174 vSwitch 1G Hugepage available size cannot be changed https://bugs.launchpad.net/starlingx/+bug/1834530 hypervisor stays down after force lock and unlock due to pci-irq-affinity-agent process failure https://bugs.launchpad.net/starlingx/+bug/1839160 Image conversion fails with large qcow2 guest image due to insufficient filesystem size https://bugs.launchpad.net/starlingx/+bug/1819688 platform keystone account lockout feature is not enabled https://bugs.launchpad.net/starlingx/+bug/1838100 stx-openstack application-applying stuck at osh-openstack-placement https://bugs.launchpad.net/starlingx/+bug/1837769 Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From maria.g.perez.ibarra at intel.com Thu Aug 22 23:58:53 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 22 Aug 2019 23:58:53 +0000 Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190822 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-22(link) Status: Yellow =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] 11 TCs | FAIL Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Unlock after force lock enabled the worker according to maintenance but hypervisor remained down https://bugs.launchpad.net/starlingx/+bug/1824881 Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Fri Aug 23 01:17:04 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Fri, 23 Aug 2019 01:17:04 +0000 Subject: [Starlingx-discuss] Opinion wanted on STX 2.0 kernel memory leak bug 1836638 In-Reply-To: <7EE828B6-3A03-4496-89B1-202AB8BB39D0@windriver.com> References: <7EE828B6-3A03-4496-89B1-202AB8BB39D0@windriver.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601BC3E@SHSMSX104.ccr.corp.intel.com> Yes, I think we should do on both kernels, the chances that we didn't see std kernel leak might due to our test cases coverage, but as we now have root-causes, we shall get it fixed. -----Original Message----- From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com] Sent: Friday, August 23, 2019 5:54 AM To: Somerville, Jim Cc: starlingx ; Troyer, Dean ; Saul Wold ; Wang, Yi C Subject: Re: [Starlingx-discuss] Opinion wanted on STX 2.0 kernel memory leak bug 1836638 Nice work ! +1 on doing both kernels. Brent Sent from my iPhone > On Aug 23, 2019, at 5:58 AM, Jim Somerville wrote: > > Hi Folks, > > I have identified 3 kernel patches to fix the observed memory leak in the RT kernel. Namely listed here in comment 53: https://bugs.launchpad.net/starlingx/+bug/1836638/comments/53 > > While the leak is only really seen in RT, the fixes are not RT specific. The RT kernel has ferreted out many linux bugs in the past mainly due to its different scheduling points, and running irq handlers as kernel threads, causing things to run in a different order from std. 
> > So should these patches be applied to both of our kernels, even though we only see the leaking on RT? I would say yes, but don't want to waste my, and possibly Yi's, time if approvers disagree.
> >
> > Thanks,
> >
> > -Jim

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From haochuan.z.chen at intel.com Fri Aug 23 02:00:49 2019
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Fri, 23 Aug 2019 02:00:49 +0000
Subject: [Starlingx-discuss] question about ceph.pp
Message-ID: <56829C2A36C2E542B0CCB9854828E4D8562581D9@CDSMSX102.ccr.corp.intel.com>

Hi

I am checking LP 1830736: https://bugs.launchpad.net/starlingx/+bug/1830736

I find that if I provision a storage node and add a dedicated journal disk with the following commands, StarlingX will assign the journal to all OSDs on this storage node:

system host-stor-add storage-0 journal 7cbc9885-476c-4ad2-9058-466f1e0f9667
system host-stor-add storage-0 osd 46393030-acbf-43f4-8ca9-f705f65bf457 --tier-uuid 4c672ca9-7c4b-472a-b049-eac115c8aef9

But after unlock, because the journal path has not been passed to platform::ceph::osds in osd.pp, each OSD is created with its journal on the same disk. For example, /dev/sdc will use /dev/sdc1 or /dev/sdc2 as its journal, not the journal from /dev/sdb, which is the dedicated journal disk added with host-stor-add. This causes the bug.

class platform::ceph::osds(
  $osd_config = {},
  $journal_config = {},
) inherits ::platform::ceph::params {

  # skip_osds_during_restore is set to true when the default primary
  # ceph backend "ceph-store" has "restore" as its task and it is
  # not an AIO system.
  if ! $skip_osds_during_restore {
    file { '/var/lib/ceph/osd':
      ensure => 'directory',
      path   => '/var/lib/ceph/osd',
      owner  => 'root',
      group  => 'root',
      mode   => '0755',
    }

    # Ensure ceph.conf is complete before configuring OSDs
    Class['::ceph'] -> Platform_ceph_osd <| |>

    # Journal disks need to be prepared before the OSDs are configured
    Platform_ceph_journal <| |> -> Platform_ceph_osd <| |>

    # Crush locations in ceph.conf need to be set before the OSDs are configured
    Osd_crush_location <| |> -> Platform_ceph_osd <| |>

    # default configuration for all ceph object resources
    Ceph::Osd {
      cluster      => $cluster_name,
      cluster_uuid => $cluster_uuid,
      journal      => "missing journal disk path"  # which makes this issue
    }

    create_resources('osd_crush_location', $osd_config)
    create_resources('platform_ceph_osd', $osd_config)
    create_resources('platform_ceph_journal', $journal_config)
  }
}

My question is: how do I set the journal path on class ceph::osd? Requesting advice from a puppet expert. The journal path is in /opt/platform/puppet/19.01/hieradata/.yaml:

platform::ceph::osds::osd_config:
  stor-10:
    data_path: !!python/unicode '/dev/disk/by-path/pci-0000:00:17.0-ata-6.0-part1'
    disk_path: !!python/unicode '/dev/disk/by-path/pci-0000:00:17.0-ata-6.0'
    journal_path: !!python/unicode '/dev/disk/by-path/pci-0000:00:17.0-ata-2.0-part1'
    osd_id: 0
    osd_uuid: !!python/unicode 'a5a0c4c6-207e-408e-a0dd-a7b385f8bab1'
    tier_name: !!python/unicode 'storage'

Thanks!

Martin, Chen
SSP, Software Engineer
021-61164330

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Frank.Miller at windriver.com Fri Aug 23 14:55:03 2019
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Fri, 23 Aug 2019 14:55:03 +0000
Subject: [Starlingx-discuss] Containerization Meeting cancelled for Aug 26
Message-ID: 

FYI - The weekly meeting for containerization will not be held on Monday Aug 26.
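A note on Martin's ceph.pp question above: the hieradata already carries a per-OSD journal_path, and create_resources() forwards every key of each inner hash to the platform_ceph_osd resource, so the gap is that the manifest pins a single Ceph::Osd default of journal => "missing journal disk path" instead of deriving the journal from that data. As a sketch of the direction only, written against the upstream puppet-ceph ceph::osd interface (whether platform_ceph_osd accepts an equivalent parameter would need to be confirmed in the type's source):

# Sketch only: set the journal per OSD from the hieradata shown above,
# e.g. the /dev/disk/by-path/...-ata-2.0-part1 partition carved out of
# the dedicated journal disk, instead of one hardcoded resource default.
$osd_config.each |$stor, $config| {
  ceph::osd { $config['disk_path']:
    journal => $config['journal_path'],
  }
}

In other words, the fix is less about hiera plumbing and more about making the OSD resource honor the journal_path it is already being handed.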
-------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Fri Aug 23 15:36:52 2019 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 23 Aug 2019 08:36:52 -0700 Subject: [Starlingx-discuss] Opinion wanted on STX 2.0 kernel memory leak bug 1836638 In-Reply-To: References: Message-ID: Jim: Great work on tracking down these issues, I am curious are they addressed in the newer 3.10 series from CentOS or only from other upstream sources? If they are from other uptream sources, has it been reported to CentOS/Redhat? More curious than anything else, not a requirement. I do agree that we should make the changes to both kernel. Sau! On 8/22/19 1:58 PM, Jim Somerville wrote: > Hi Folks, > > I have identified 3 kernel patches to fix the observed memory leak in > the RT kernel.  Namely listed here in comment 53: > https://bugs.launchpad.net/starlingx/+bug/1836638/comments/53 > > While the leak is only really seen in RT, the fixes are not RT specific. >  The RT kernel has ferreted out many linux bugs in the past mainly due > to its different scheduling points, and running irq handlers as kernel > threads, causing things to run in a different order from std. > > So should these patches be applied to both of our kernels, even though > we only see the leaking on RT?  I would say yes, but don't want to waste > my, and possibly Yi's, time if approvers disagree. > > Thanks, > > -Jim > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From maria.g.perez.ibarra at intel.com Fri Aug 23 16:21:56 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Fri, 23 Aug 2019 16:21:56 +0000 Subject: [Starlingx-discuss] [RC1] Sanity Test - ISO 20190823 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-23 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - ExternalStorage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. 
-------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri Aug 23 17:07:16 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 23 Aug 2019 17:07:16 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Networking Meeting - 08/22 Message-ID: <151EE31B9FCCA54397A757BC674650F0C1599DC4@ALA-MBD.corp.ad.wrs.com> Meeting minutes/agenda are captured at: https://etherpad.openstack.org/p/stx-networking Team Meeting Agenda/Notes - Aug 22/2019 Bugs - stx.2.0 gating High: 6 - https://bugs.launchpad.net/starlingx/+bug/1835807 - Joseph, This is being seen in other queues (including neuton plugin queue and nova queue. Still no actual fix. testing disabling ha, as well as upversioning rabbit - https://bugs.launchpad.net/starlingx/+bug/1836252 - Joseph, looks to be from dhcp agent taking too long, and dhcp agent doesn't retry. don't have a fix yet. Right now the neutron agent retries up to 60 seconds (hard-coded), Joseph doesn't think proposing to make the timeout configurable will be accepted in neutron as this shouldn't be taking that long. - https://bugs.launchpad.net/starlingx/+bug/1836682 - Two ways provided by Chenjie and the way configuring PCI alias is confirmed by Sathish and Paulina. Is it necessary to test the way binding a port with vnic_type direct_physical? Not expecting to have a software fix for this. Follow-up actions are: (1) update the networking wiki with the steps, (2) Test a NIC that doesn't support sr-iov (example: I210 - which is being tested with containers: https://bugs.launchpad.net/starlingx/+bug/1838744). Chenje will check with XuYizhou to try the test on his system - https://bugs.launchpad.net/starlingx/+bug/1817936 - Under investigation by Austin and Matt. Code proposed, but Matt is on vacation this week. Will need to follow-up next week. - https://bugs.launchpad.net/starlingx/+bug/1834245 - Chenjie: Closed as the reporter confirmed that the issue is not reproducible. Medium: 4 -- Bugs that aren't fixed by August 23 will be moved to stx.3.0 - https://bugs.launchpad.net/starlingx/+bug/1818118 - Joseph, no update - https://bugs.launchpad.net/starlingx/+bug/1836969 - Teresa - In progress, but won't make the stx.2.0 cut-off of Aug 23. - https://bugs.launchpad.net/starlingx/+bug/1832892 - Steve - Code review in progress. Should make it before the stx.2.0 cut-off of Aug 23. - https://bugs.launchpad.net/starlingx/+bug/1834234 - Teresa - Not started. - https://bugs.launchpad.net/starlingx/+bug/1821026 - fpixe - From the notes, it looks like the developer cannot reproduce and he's working with the reporters to get the steps. - https://bugs.launchpad.net/starlingx/+bug/1830082 - Teresa - Not started Low: 0 none that are targeted for stx.2.0 Undecided / New: none - stx.2.0 Networking Test Status -- Elio - All good :) - Finished with final regression; no new issues found. stx.3.0 - OVS-DPDK Containerization - Prime: Cheng - Spec is under review - has got two +2 and still lacks one workflow +1. https://review.opendev.org/#/c/655830/ - The spec needs a majority vote from the TSC members - Ghada to follow up with Matt when he's back from vacation next week. Will also see if Ian is ok to give a +2 - TSN - Prime: Huifeng - Open on review options: what's the process to submit the document for TSN deployment in STX? - Agreed to put the information on a wiki linked to the Networking wiki. Then send to the networking primes (cc starlingx-discuss) for internal review. 
- To List: Matt Peters, Ruijin , Brent Rowsell - Once feedback is incorporated, we can ask the documentation team to include in the stx.3.0 (or include it following the process they provide) From Ghada.Khalil at windriver.com Fri Aug 23 17:36:15 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 23 Aug 2019 17:36:15 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - August 22/2019 Message-ID: <151EE31B9FCCA54397A757BC674650F0C1599E3F@ALA-MBD.corp.ad.wrs.com> Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases Release Team Meeting - August 22 2019 stx.2.0 - Test Status - Feature Test - Tracker: https://docs.google.com/spreadsheets/d/15us6HWgcb0dmHHZe2SyOR_UI5BvVpoU8TCST0rQd4zg/edit#gid=1717644237 - Ironic >> Made good progress thi week. Fix submitted. Tested with a hot fix; able to deploy ironic. Having trouble launching VMs; test team is working with Mingyuan to investigate. - Final Regression - Tracker: https://docs.google.com/spreadsheets/d/1FxrwgivQCG3Ksvqm46zhKILJlZtucsNxGYG4a8d0LSs/edit#gid=838066175 - Tracker needs updating - Only 7 TCs left to execute - Final Candidate Build - Final Candidate Build for r/stx.2.0 is planned for Monday August 26 pm ET - This will also include a docker image build - at or near 11:30 PM UTC https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190826T2330 - Sanity / Test Plan - Agreed to have one day of testing on August 27 - Run sanity on all configurations - Run some additional TCs based on the discretion of the test teams - Example looking at recent commits: https://review.opendev.org/#/q/branch:+r/stx.2.0 - Ada to send the go-ahead once testing is done - What would make us rebuild? - Load is broken / RED sanity - Bugs that fail re-test will be re-opened and handled in the upcoming mtce release (if high priority) -- they will not result in a rebuild - Tagging / Posting on CENGN / Announcement - The build will be posted at: /export/mirror/starlingx/release/2.0.0/ - This is the same build done on Monday pm -- just copied over to the release directory and labeled properly stx.2.0 Maintenance Release - Agreed that monthly maintenance releases make sense. - We will target the week of Sept 30 as a tentative target for now - There is no infrastructure in place to generate binary updates against the release ISO - This should be noted to users of maintenance releases. Action: Dean to discuss with Ildiko regarding marketing information. Bill to add as a topic on the community call. - r/stx.2.0 branch expected to re-open for cherrypicks on Sept 3 - Builds will be on demand. Would like to target a weekly build on Monday -- assuming there is new content. - Sanity Schedule: Next day after the build - every Tuesday - Regression Schedule: Ada/Numan to work on proposal stx.3.0 - Milestone-2 -- wk of Sept 2 - Milestone-2 Criteria: - Spec freeze - Specs are in good shape. - Performance Framework spec is posted. R2 >> R3 spec will continue to be an exception. - Feature plans defined and feature development well underway - Release test plan defined - including test automation deliverables - Documentation plan defined - Test team is raising concerns about the scope of features for stx.3.0 and the release timeline - Regression is supposed to start on Sept 23. - Only 5 weeks for feature testing - Given the tight timeline, we need PLs to provide a risk indicator for landing their feature content. Ideally, a rough plan would help the test team plan accordingly. 
- If a feature is too big to land in stx.3.0, knowing this now helps the test team focus their efforts on the features that will make it. - Agreed that storyboard will have the list of features being worked for the release. - Bill will follow up with the PLs to ensure any missing story boards are created. From Ghada.Khalil at windriver.com Fri Aug 23 19:26:28 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 23 Aug 2019 19:26:28 +0000 Subject: [Starlingx-discuss] Release Plan - stx.2.0 Message-ID: <151EE31B9FCCA54397A757BC674650F0C1599F21@ALA-MBD.corp.ad.wrs.com> Hello all, The stx.2.0 final candidate build is scheduled on Monday August 26 at 11:30 PM UTC. We are currently waiting for one commit to merge: https://review.opendev.org/#/c/678167/ Please do not cherry-pick any other commits to the r/stx.2.0 branch until further notice. The logistics for next week are as follows: - Developer and core reviewers merge the remaining commit. - Final candidate build run on the r/stx.2.0 branch on Monday August 26 - The test team executes one day of testing on the final candidate build -- sanity + additional testing based on the discretion of the test teams - Ada will send a report by EOD Tuesday -- either with the go-ahead for the release OR identifying any RED sanity/blocking issues - If there is a RED sanity, the bug will need to be resolved and the load will need to be re-built. And sanity will need to be re-executed. - If all is good, Scott and Dean will follow the steps to tag the branch and post the build artifacts in the release directory on CENGN. Other things to note: - All stx.2.0 unresolved medium priority bugs have been moved to stx.3.0 as per community agreement - This means fixes are only required in master from this point forward. - PLs/TLs have the discretion to raise the priority of a medium bug and bring it back as a high priority in stx.2.0 - All stx.3.0 unresolved high priority bugs remain gating stx.2.0 and will need to be resolved in an upcoming maintenance release. - Developers are encouraged to continue working on those as a top priority and to post reviews on master - For more information on the stx.2.0 upcoming maintenance release, please read the minutes from the Release Meeting on 2019-08-22 If you have any questions/concerns, please reach out to Bill Zvonar as I am on vacation next week. And, finally, a big thank you for all community members who have contributed to StarlingX 2.0. Best Regards, Ghada From Ghada.Khalil at windriver.com Fri Aug 23 19:30:56 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 23 Aug 2019 19:30:56 +0000 Subject: [Starlingx-discuss] Release Plan - stx.2.0 Message-ID: <151EE31B9FCCA54397A757BC674650F0C1599F37@ALA-MBD.corp.ad.wrs.com> My apologies for the typo below... The second last bullet should read: - All stx.2.0 unresolved high priority bugs remain gating stx.2.0 and will need to be resolved in an upcoming maintenance release. -----Original Message----- From: Khalil, Ghada Sent: Friday, August 23, 2019 3:26 PM To: starlingx-discuss at lists.starlingx.io Cc: Zvonar, Bill Subject: Release Plan - stx.2.0 Hello all, The stx.2.0 final candidate build is scheduled on Monday August 26 at 11:30 PM UTC. We are currently waiting for one commit to merge: https://review.opendev.org/#/c/678167/ Please do not cherry-pick any other commits to the r/stx.2.0 branch until further notice. The logistics for next week are as follows: - Developer and core reviewers merge the remaining commit. 
- Final candidate build run on the r/stx.2.0 branch on Monday August 26 - The test team executes one day of testing on the final candidate build -- sanity + additional testing based on the discretion of the test teams - Ada will send a report by EOD Tuesday -- either with the go-ahead for the release OR identifying any RED sanity/blocking issues - If there is a RED sanity, the bug will need to be resolved and the load will need to be re-built. And sanity will need to be re-executed. - If all is good, Scott and Dean will follow the steps to tag the branch and post the build artifacts in the release directory on CENGN. Other things to note: - All stx.2.0 unresolved medium priority bugs have been moved to stx.3.0 as per community agreement - This means fixes are only required in master from this point forward. - PLs/TLs have the discretion to raise the priority of a medium bug and bring it back as a high priority in stx.2.0 - All stx.3.0 unresolved high priority bugs remain gating stx.2.0 and will need to be resolved in an upcoming maintenance release. - Developers are encouraged to continue working on those as a top priority and to post reviews on master - For more information on the stx.2.0 upcoming maintenance release, please read the minutes from the Release Meeting on 2019-08-22 If you have any questions/concerns, please reach out to Bill Zvonar as I am on vacation next week. And, finally, a big thank you for all community members who have contributed to StarlingX 2.0. Best Regards, Ghada From Ghada.Khalil at windriver.com Fri Aug 23 20:23:34 2019 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 23 Aug 2019 20:23:34 +0000 Subject: [Starlingx-discuss] Release Plan - stx.3.0 Message-ID: <151EE31B9FCCA54397A757BC674650F0C1599F8F@ALA-MBD.corp.ad.wrs.com> Hello all, The stx.3.0 milestone-2 is planned for the week of Sept 3. This is a call to the various primes (dev, test, doc) to help close the milestone criteria. For a list of features targeted for stx.3.0 is available at: https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 The criteria for the milestone are as follows: - Spec freeze - Specs are in good shape. - Performance Framework spec is posted. R2 >> R3 spec will continue to be an exception. - Feature plans defined and feature development well underway - To date, we do not have concrete plans from the feature PLs on risk, status, and expected code merge dates - PLs, please update your plans on the spreadsheet above. Please also create the corresponding stories in StoryBoard and tag them with the stx.3.0 label - Release test plan defined - including test automation deliverables - Test team is raising concerns about the scope of features for stx.3.0 and the release timeline. There are only 5wks left for feature testing and regression is supposed to start on Sept 23. - Given the tight timeline, we need PLs to provide a risk indicator for landing their feature content. Ideally, a rough plan would help the test team plan accordingly. If a feature is too big to land in stx.3.0, knowing this now helps the test team focus their efforts on the features that will make it. If a feature is delivering in chunks, please engage the test team so that they can test as content is delivered. 
- Documentation plan defined - Need confirmation from the doc team on what they have planned Regards, Ghada From maria.g.perez.ibarra at intel.com Sat Aug 24 00:49:18 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Sat, 24 Aug 2019 00:49:18 +0000 Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190823 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-23(link) Status: Green =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From pavan.gupta at calsoftinc.com Wed Aug 21 11:44:08 2019 From: pavan.gupta at calsoftinc.com (Pavan Gupta) Date: Wed, 21 Aug 2019 17:14:08 +0530 Subject: [Starlingx-discuss] Issue in running ansible playbook In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007ACE73F@ALA-MBD.corp.ad.wrs.com> References: <03ab01d55416$8fe4fc20$afaef460$@calsoftinc.com> <586E8B730EA0DA4A9D6A80A10E486BC007ACE73F@ALA-MBD.corp.ad.wrs.com> Message-ID: <049201d55815$bee27f90$3ca77eb0$@calsoftinc.com> Hi Bill, We could resolve this issue, it was related to Openstack authentication. Instead of using standard username/password (St8rlingX*/ St8rlingX*), we were using our own credentials. At present, we are resolving issue with docker version, K8s doesn't accept latest version of docker, it is asking for version 18.06. Pavan From: Zvonar, Bill Sent: 21 August 2019 17:05 To: Pavan Gupta ; starlingx-discuss at lists.starlingx.io Cc: saichandu.behara at calsoftinc.com Subject: RE: Issue in running ansible playbook Hi folks - anyone have some insight for Pavan on this? Thanks, Bill... From: Pavan Gupta > Sent: Friday, August 16, 2019 5:40 AM To: Zvonar, Bill > Cc: saichandu.behara at calsoftinc.com Subject: Issue in running ansible playbook Hi Bill, With Stx2.0, we are looking in to the following issue after running this command: 'ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap/bootstrap.yml'. If you have any pointers, kindly let us know. 
TASK [persist-config : debug] ****************************************************************************
ok: [localhost] => {
    "populate_result": {
        "changed": true,
        "failed": false,
        "failed_when_result": false,
        "msg": "non-zero return code",
        "rc": 1,
        "stderr": "No handlers could be found for logger \"controllerconfig.common.rest_api_utils\"\nTraceback (most recent call last):\n  File \"/tmp/.ansible-sysadmin/tmp/ansible-tmp-1565850999.9-200492702943299/populate_initial_config.py\", line 950, in \n    with openstack.OpenStack() as client:\n  File \"/usr/lib64/python2.7/site-packages/controllerconfig/openstack.py\", line 62, in __enter__\n    raise Exception('Failed to connect')\nException: Failed to connect\n",
        "stderr_lines": [
            "No handlers could be found for logger \"controllerconfig.common.rest_api_utils\"",
            "Traceback (most recent call last):",
            "  File \"/tmp/.ansible-sysadmin/tmp/ansible-tmp-1565850999.9-200492702943299/populate_initial_config.py\", line 950, in ",
            "    with openstack.OpenStack() as client:",
            "  File \"/usr/lib64/python2.7/site-packages/controllerconfig/openstack.py\", line 62, in __enter__",
            "    raise Exception('Failed to connect')",
            "Exception: Failed to connect"
        ],
        "stdout": "Failed to provision the initial system config.\n",
        "stdout_lines": [
            "Failed to provision the initial system config."
        ]
    }
}

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From kennelson11 at gmail.com Sat Aug 24 03:46:40 2019
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Fri, 23 Aug 2019 20:46:40 -0700
Subject: [Starlingx-discuss] PTG Attendance
Message-ID: 

Hello Everyone!

This is the list of groups that are planning on attending the PTG. Thank you PTLs/Chairs/Leads that responded on time :) I hesitate to say it's the 'final' list since I had a few groups respond with a 'Maybe', but it's probably pretty close. Please also note that the tentative activities at the PTG vary by group. Some groups are planning on only doing onboarding for example. Without further ado, here is the list!

Airship, Auto-Scaling SIG, Barbican, Blazar, Cinder, Cyborg, Edge Computing Group, Fenix, First Contact SIG, Gitea, Glance, Heat, Horizon, I18n, Ironic, K8s SIG, Karbor, Kata Containers, Keystone, Loci, Manila, Meta SIG, Monasca, Neutron, Nova, Octavia, OpenStack Charms, OpenStack Infra / OpenDev, OpenStack TC, Openstack-helm, OpenStack Operators (Ops Docs SIG and Meetup group), Oslo, Public Cloud SIG, Quality Assurance, Release Management, Scientific SIG, Self-healing SIG, Storlets, StoryBoard, Swift, Tacker, StarlingX

If your team is missing from the list, please let me know ASAP as we are starting to put together a draft schedule.

-Kendall Nelson (diablo_rojo)

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From vm.rod25 at gmail.com Sun Aug 25 01:38:43 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Sat, 24 Aug 2019 18:38:43 -0700
Subject: [Starlingx-discuss] [multios][build] Build flock services with plan mock
In-Reply-To: 
References: <8930615a-21c7-3f31-3d23-0615fe852f07@windriver.com>
Message-ID: 

Hi team/Marcela

Following this experiment, here are the results of building the stx SRPMs with a simple mock build system:

https://docs.google.com/spreadsheets/d/1kWrV3A28tTc3xgKiYtbir3ymcI4pew3VosE0jfB9_Fo/edit?usp=sharing

Marcela, we can work on fixing them one by one.
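For anyone who wants to reproduce one of the spreadsheet failures locally, the experiment above boils down to rebuilding a single StarlingX SRPM inside a stock mock chroot. A minimal sketch, assuming a CentOS 7 mock config; the SRPM filename is illustrative, not the exact one from the spreadsheet:

# one-time setup of a clean CentOS 7 buildroot
sudo yum install -y mock
mock -r epel-7-x86_64 --init

# rebuild one flock SRPM in the chroot; the resulting RPMs and
# build.log land under /var/lib/mock/epel-7-x86_64/result/
mock -r epel-7-x86_64 --rebuild fm-common-1.0-0.src.rpm

The build.log in the result directory is usually enough to classify a failure as a missing BuildRequires versus something that genuinely depends on the full StarlingX build tooling.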
If I am missing something on the list of SRPMs that need to be built, please let me know I also updated the script and Makefile based on the patch from Dean ( thanks ) Regards Victor Rodriguez On Mon, Aug 19, 2019 at 12:03 PM Victor Rodriguez wrote: > Awesome, thanks! > > On Mon, Aug 19, 2019 at 9:35 AM Scott Little > wrote: > > > > The server multi-thread, and only one server thread had lost > > connectivity of the ceph back end. It's fixed now. > > > > Scott > > > > On 2019-08-14 6:42 p.m., Dean Troyer wrote: > > > On Wed, Aug 14, 2019 at 2:18 PM Scott Little < > scott.little at windriver.com> wrote: > > >> I've never seen a 404 or 403 myself, outside of the 3 or 4 extended > > >> outages attributed to know issues at cengn. > > > [...] > > >> How many folks have seen this? What was the time of the event? How > > >> long did it persist? Please report events in UTC. > > > So I've been poking at this for the last few minutes, so around > 2200-2230 UTC > > > > > > These links work: > > > > > > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190811T053000Z/ > > > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190813T033000Z/ > > > > > > These do not: > > > > > > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033004Z/ > > > > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190814T053000Z/ > > > > > > Until I tried them again to write this email, then they swapped. > > > > > > Is there perchance a load balancer in front of multiple web servers > > > and one of the backends is having trouble? Even if that isn't the > > > case that seems to describe the observed behaviour well enough. > > > > > > dt > > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Sun Aug 25 03:57:11 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Sat, 24 Aug 2019 20:57:11 -0700 Subject: [Starlingx-discuss] [multi-os] Weekly meeting notes Message-ID: Hi team These are the latest meeting notes from the multi os meeting: https://etherpad.openstack.org/p/stx-multios Multi-OS team meeting Opens: - What's the best approach for configuration and run time dependencies of the flock services? - Zuul jobs for the flock services - Open Suse: - Erich: There are some patches for compilation fixes (These were on hold since the R2 freeze ) - Abraham: Is doing a deep analysis of high availability source code and how to generate the best testing in Open Suse Yocto: - Yocto team did a great presentation of where they are and in what phase they are: - link: CentOS: - Build with plan mock for centos srpm - currently being able to build any srpm: - There are some failures https://docs.google.com/spreadsheets/d/1kWrV3A28tTc3xgKiYtbir3ymcI4pew3VosE0jfB9_Fo/edit?usp=sharing An important notice to mention is that since next week Marcela Rosales will be leading the multi-OS meeting. I will keep working on multi os tasks as well as any other tasks that the project needs. Thanks a lot Victor Rodriguez -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From huifeng.le at intel.com Sun Aug 25 06:38:39 2019 From: huifeng.le at intel.com (Le, Huifeng) Date: Sun, 25 Aug 2019 06:38:39 +0000 Subject: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking Message-ID: <76647BD697F40748B1FA4F56DA02AA0B4D60F245@SHSMSX104.ccr.corp.intel.com> Brent, Matt, Ruijing, As following up on story: [Feature] Time Sensitive Networking (https://storyboard.openstack.org/#!/story/2005516) and approved spec https://review.opendev.org/#/c/666768/, we had done the POC to deploy and run TSN application on STX environment, the detail process and learning are summarized at Wiki: https://wiki.openstack.org/wiki/StarlingX/Networking/TSN which can be served as deliverable of task "StarlingX user guide on how to deploy TSN in VM". Could you please help to review the Wiki and let me know if you have any comments. Thanks much! Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Mon Aug 26 04:37:42 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Mon, 26 Aug 2019 04:37:42 +0000 Subject: [Starlingx-discuss] Guest OS support Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601DB4D@SHSMSX104.ccr.corp.intel.com> Hi, Numan & Ada, Do we have test cases to cover what kind of guest OS can be supported for stx.2.0? For example, Windows (version?), Linux (flavor?) or VxWorks? I am wondering if we already have such test cases in stx-test but I didn't find them. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Mon Aug 26 04:49:23 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Mon, 26 Aug 2019 04:49:23 +0000 Subject: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking In-Reply-To: <76647BD697F40748B1FA4F56DA02AA0B4D60F245@SHSMSX104.ccr.corp.intel.com> References: <76647BD697F40748B1FA4F56DA02AA0B4D60F245@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601DBEE@SHSMSX104.ccr.corp.intel.com> Huifeng, Seems like TSN support is based on Nova PCI pass-through for i210 network adaptor. Do you have any Nova patches or StarlingX patches pending merge (and need cherry pick) to allow the procedures in wiki page to be successful? If there are no patches pending, can we claim that TSN feature is already supported in stx.2.0? Thx. - cindy From: Le, Huifeng [mailto:huifeng.le at intel.com] Sent: Sunday, August 25, 2019 2:39 PM To: Rowsell, Brent ; Peters, Matt ; Guo, Ruijing Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking Brent, Matt, Ruijing, As following up on story: [Feature] Time Sensitive Networking (https://storyboard.openstack.org/#!/story/2005516) and approved spec https://review.opendev.org/#/c/666768/, we had done the POC to deploy and run TSN application on STX environment, the detail process and learning are summarized at Wiki: https://wiki.openstack.org/wiki/StarlingX/Networking/TSN which can be served as deliverable of task "StarlingX user guide on how to deploy TSN in VM". Could you please help to review the Wiki and let me know if you have any comments. Thanks much! Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From huifeng.le at intel.com Mon Aug 26 06:21:50 2019 From: huifeng.le at intel.com (Le, Huifeng) Date: Mon, 26 Aug 2019 06:21:50 +0000 Subject: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F3601DBEE@SHSMSX104.ccr.corp.intel.com> References: <76647BD697F40748B1FA4F56DA02AA0B4D60F245@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F3601DBEE@SHSMSX104.ccr.corp.intel.com> Message-ID: <76647BD697F40748B1FA4F56DA02AA0B4D60F56B@SHSMSX104.ccr.corp.intel.com> Cindy, There is no patches pending for merging. TSN PTP feature had already been supported in STX 1.0, and the wiki verified how other TSN features can be deployed in STX through PCI-Passthrough (But there may still be confliction with STX PTP feature, e.g. STX PTP feature requires Nic be available in host, but due to the Nic e.g. Intel i210 does not support SRIOV, in case it is pass-through into VM to support TSN application, it will be not available in host then the STX PTP feature will not work). I have no concern and Core/TSC team can review and determine how to claim if no more concern about the process, thanks much! Best Regards, Le, Huifeng From: Xie, Cindy Sent: Monday, August 26, 2019 12:49 PM To: Le, Huifeng ; Rowsell, Brent ; Peters, Matt ; Guo, Ruijing Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking Huifeng, Seems like TSN support is based on Nova PCI pass-through for i210 network adaptor. Do you have any Nova patches or StarlingX patches pending merge (and need cherry pick) to allow the procedures in wiki page to be successful? If there are no patches pending, can we claim that TSN feature is already supported in stx.2.0? Thx. - cindy From: Le, Huifeng [mailto:huifeng.le at intel.com] Sent: Sunday, August 25, 2019 2:39 PM To: Rowsell, Brent >; Peters, Matt >; Guo, Ruijing > Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking Brent, Matt, Ruijing, As following up on story: [Feature] Time Sensitive Networking (https://storyboard.openstack.org/#!/story/2005516) and approved spec https://review.opendev.org/#/c/666768/, we had done the POC to deploy and run TSN application on STX environment, the detail process and learning are summarized at Wiki: https://wiki.openstack.org/wiki/StarlingX/Networking/TSN which can be served as deliverable of task "StarlingX user guide on how to deploy TSN in VM". Could you please help to review the Wiki and let me know if you have any comments. Thanks much! Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Mon Aug 26 07:42:23 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Mon, 26 Aug 2019 07:42:23 +0000 Subject: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking In-Reply-To: <76647BD697F40748B1FA4F56DA02AA0B4D60F56B@SHSMSX104.ccr.corp.intel.com> References: <76647BD697F40748B1FA4F56DA02AA0B4D60F245@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F3601DBEE@SHSMSX104.ccr.corp.intel.com> <76647BD697F40748B1FA4F56DA02AA0B4D60F56B@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601DEA7@SHSMSX104.ccr.corp.intel.com> + Ada for feature testing. It's so nice to know that basic TSN support shall be already in 2.0 (even in 1.0). 
This is one of the critical features to support Industrial use case. As TSN spec got approved as 3.0 feature, I guess we may have to run full feature testing according to your wiki pages before we can claim the full support. @Ada, do you have engineer assigned to work w/ network team on this important feature? Thx. - cindy From: Le, Huifeng Sent: Monday, August 26, 2019 2:22 PM To: Xie, Cindy ; Rowsell, Brent ; Peters, Matt ; Guo, Ruijing Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking Cindy, There is no patches pending for merging. TSN PTP feature had already been supported in STX 1.0, and the wiki verified how other TSN features can be deployed in STX through PCI-Passthrough (But there may still be confliction with STX PTP feature, e.g. STX PTP feature requires Nic be available in host, but due to the Nic e.g. Intel i210 does not support SRIOV, in case it is pass-through into VM to support TSN application, it will be not available in host then the STX PTP feature will not work). I have no concern and Core/TSC team can review and determine how to claim if no more concern about the process, thanks much! Best Regards, Le, Huifeng From: Xie, Cindy Sent: Monday, August 26, 2019 12:49 PM To: Le, Huifeng >; Rowsell, Brent >; Peters, Matt >; Guo, Ruijing > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking Huifeng, Seems like TSN support is based on Nova PCI pass-through for i210 network adaptor. Do you have any Nova patches or StarlingX patches pending merge (and need cherry pick) to allow the procedures in wiki page to be successful? If there are no patches pending, can we claim that TSN feature is already supported in stx.2.0? Thx. - cindy From: Le, Huifeng [mailto:huifeng.le at intel.com] Sent: Sunday, August 25, 2019 2:39 PM To: Rowsell, Brent >; Peters, Matt >; Guo, Ruijing > Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking Brent, Matt, Ruijing, As following up on story: [Feature] Time Sensitive Networking (https://storyboard.openstack.org/#!/story/2005516) and approved spec https://review.opendev.org/#/c/666768/, we had done the POC to deploy and run TSN application on STX environment, the detail process and learning are summarized at Wiki: https://wiki.openstack.org/wiki/StarlingX/Networking/TSN which can be served as deliverable of task "StarlingX user guide on how to deploy TSN in VM". Could you please help to review the Wiki and let me know if you have any comments. Thanks much! Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Mon Aug 26 12:25:20 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Mon, 26 Aug 2019 08:25:20 -0400 Subject: [Starlingx-discuss] Guest OS support In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F3601DB4D@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F3601DB4D@SHSMSX104.ccr.corp.intel.com> Message-ID: On Mon, Aug 26, 2019 at 12:39 AM Xie, Cindy wrote: > > Hi, Numan & Ada, > > Do we have test cases to cover what kind of guest OS can be supported for stx.2.0? For example, Windows (version?), Linux (flavor?) or VxWorks? 
>

Something like this :

import os
import platform

system = (platform.system())
release = (platform.release())

print("System: %s " % (system))
print("Release: %s " % (release))
print("OS name: %s " % (os.name))

if ("Linux" in system):
    print("PASS")
else:
    print("FAIL")

?

>
> I am wondering if we already have such test cases in stx-test but I didn't find them.
>
> Thx. - cindy
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From cindy.xie at intel.com Mon Aug 26 12:34:38 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Mon, 26 Aug 2019 12:34:38 +0000
Subject: [Starlingx-discuss] Guest OS support
In-Reply-To: 
References: <2FD5DDB5A04D264C80D42CA35194914F3601DB4D@SHSMSX104.ccr.corp.intel.com>
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601E1FF@SHSMSX104.ccr.corp.intel.com>

I mean the real functional testing to boot Windows or VxWorks or another OS as the guest OS. I know Linux (Ubuntu or CentOS) is supported, but we encountered an issue booting Android as a guest OS, which leads me to wonder whether we've tested Windows or VxWorks or any other OSes.

Thx. - cindy

-----Original Message-----
From: Victor Rodriguez [mailto:vm.rod25 at gmail.com]
Sent: Monday, August 26, 2019 8:25 PM
To: Xie, Cindy
Cc: starlingx
Subject: Re: [Starlingx-discuss] Guest OS support

On Mon, Aug 26, 2019 at 12:39 AM Xie, Cindy wrote:
>
> Hi, Numan & Ada,
>
> Do we have test cases to cover what kind of guest OS can be supported for stx.2.0? For example, Windows (version?), Linux (flavor?) or VxWorks?
>

Something like this :

import os
import platform

system = (platform.system())
release = (platform.release())

print("System: %s " % (system))
print("Release: %s " % (release))
print("OS name: %s " % (os.name))

if ("Linux" in system):
    print("PASS")
else:
    print("FAIL")

?

>
> I am wondering if we already have such test cases in stx-test but I didn't find them.
>
> Thx. - cindy
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From vm.rod25 at gmail.com Mon Aug 26 13:44:35 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Mon, 26 Aug 2019 09:44:35 -0400
Subject: [Starlingx-discuss] Guest OS support
In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F3601E1FF@SHSMSX104.ccr.corp.intel.com>
References: <2FD5DDB5A04D264C80D42CA35194914F3601DB4D@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F3601E1FF@SHSMSX104.ccr.corp.intel.com>
Message-ID: 

In that case, yes, a more extensive test is needed. One question: in the documentation, what kinds of guest OS are supported?

Regards
Victor Rodriguez

On Mon, Aug 26, 2019 at 8:34 AM Xie, Cindy wrote:
>
> I mean the real functional testing to boot Windows or VxWorks or other OS as guestOS. I know Linux (Ubuntu or CentOS) got supported but we encountered issue to boot Android as GuestOS, which leads me to wondering if we've tested Windows or VxWorks or any other OSes.
>
> Thx. - cindy
>
> -----Original Message-----
> From: Victor Rodriguez [mailto:vm.rod25 at gmail.com]
> Sent: Monday, August 26, 2019 8:25 PM
> To: Xie, Cindy
> Cc: starlingx
> Subject: Re: [Starlingx-discuss] Guest OS support
>
> On Mon, Aug 26, 2019 at 12:39 AM Xie, Cindy wrote:
> >
> > Hi, Numan & Ada,
> >
> > Do we have test cases to cover what kind of guest OS can be supported for stx.2.0?
For example, Windows (version?), Linux (flavor?) or VxWorks? > > > > Something like this : > > import os > import platform > > system = (platform.system()) > release = (platform.release()) > > print("System: %s " % (system)) > print("Release: %s " % (release)) > print("OS name: %s "% (os.name)) > > if ("Linux" in system): > print("PASS") > else: > print("FAIL") > > ? > > > > > > > I am wondering if we already have such test cases in stx-test but I didn’t find them. > > > > > > > > Thx. - cindy > > > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcela.a.rosales.jimenez at intel.com Mon Aug 26 17:20:11 2019 From: marcela.a.rosales.jimenez at intel.com (Rosales Jimenez, Marcela A) Date: Mon, 26 Aug 2019 17:20:11 +0000 Subject: [Starlingx-discuss] [MultiOS ] Team meeting 8/26/19 Message-ID: <23176896-7480-46B3-8FEC-8C6CFC5B4CE7@intel.com> Hi team, These are the notes from today’s MultiOS meeting. They’re already on the etherpad https://etherpad.openstack.org/p/stx-multios Multi-OS team meeting - 8/26/19 openSUSE to-do: * Write _service (to generate tar.gz) * Push openSUSE patches to gerrit * opensuse directory need to have: changelog, _service and spec file CentOS systemd * Maintenance: working on first package (pmon). systemctl start pmon is failing. Will continue debugging. Yocto * It’s expected to complete packaging this week. * In september they’ll be configuring the system. Let me know of any topic you’d like to discuss in this meeting. Thanks, Marcela -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Mon Aug 26 17:34:35 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 26 Aug 2019 17:34:35 +0000 Subject: [Starlingx-discuss] [RC1] Sanity Test - ISO 20190826 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-26 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] 
=========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Mon Aug 26 20:33:04 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 26 Aug 2019 20:33:04 +0000 Subject: [Starlingx-discuss] [ Test ] meeting - not happening on 08/27 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CEB47ED@FMSMSX114.amr.corp.intel.com> Hello, I hadn't noticed there was no meeting occurrence for tomorrow. As several other meetings came their way and landed in that spot, tomorrow we won't have testing meeting, but will resume on Sept 3rd. If you have concerns or update, let's handle those through email. Thanks Ada From ada.cabrales at intel.com Mon Aug 26 20:37:17 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Mon, 26 Aug 2019 20:37:17 +0000 Subject: [Starlingx-discuss] Weekly StarlingX Test meeting - 9:00 Pacific time Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CEB4817@FMSMSX114.amr.corp.intel.com> Weekly meetings on Tuesdays at 9am Pacific * Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes: https://etherpad.openstack.org/p/stx-test -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2073 bytes Desc: not available URL: From maria.g.perez.ibarra at intel.com Mon Aug 26 22:45:39 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Mon, 26 Aug 2019 22:45:39 +0000 Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190826 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-26(link) Status: Green =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Bill.Zvonar at windriver.com Tue Aug 27 10:10:01 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 27 Aug 2019 10:10:01 +0000 Subject: [Starlingx-discuss] Reg. Weekly call details In-Reply-To: <056401d55cba$57b834a0$07289de0$@calsoftinc.com> References: <816bc48c-86a2-61ed-d32a-e27aa81186cd@calsoftinc.com> <586E8B730EA0DA4A9D6A80A10E486BC007AED180@ALA-MBD.corp.ad.wrs.com> <056401d55cba$57b834a0$07289de0$@calsoftinc.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AEF8F0@ALA-MBD.corp.ad.wrs.com> Hi Pavan, thanks for confirming. Can I ask you to send the details of the issue you're seeing to the mailing list (starlingx-discuss at lists.starlingx.io). I'd do it, but better if it comes directly from you : ). Thanks, Bill... -----Original Message----- From: Pavan Gupta Sent: Tuesday, August 27, 2019 5:32 AM To: Zvonar, Bill ; 'Saichandu Behara' Subject: RE: Reg. Weekly call details Hi Bill, We got this email. We plan to attend weekly technical meetings. We have hit the following issue on the latest green build: keystone:log 2019-08-15 14:16:09.805 1402296 WARNING keystone.access_rules_config.backends.json [-] No config file found for access rules, application credential access rules will be unavailable.: IOError: [Errno 2] No such file or directory: '/etc/keystone/access_rules.json' This could be some issue with Keystone service. I am wondering if we should move to recent build that may have fixed this problem. Kindly let me know if you have faced this issue. Pavan -----Original Message----- From: Zvonar, Bill Sent: 26 August 2019 18:00 To: Saichandu Behara Cc: pavan.gupta at calsoftinc.com Subject: RE: Reg. Weekly call details Hi Saichandu - here they are - the first 2 links are constants, the 3rd is specifically for this week's call. Btw, my last email to you bounced - can you (or Pavan) confirm that you receive this email? Thanks, Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190828T1400 -----Original Message----- From: Saichandu Behara Sent: Monday, August 26, 2019 8:17 AM To: Zvonar, Bill Cc: pavan.gupta at calsoftinc.com Subject: Reg. Weekly call details Hi Bill, As we Discussed over the call, Can you please share the Wednesday Call Details. So, we can attend and discuss the queries. Thanks & Regards Sai Chandu Behara From Bill.Zvonar at windriver.com Tue Aug 27 10:15:46 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 27 Aug 2019 10:15:46 +0000 Subject: [Starlingx-discuss] Reg. Weekly call details In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AEF8F0@ALA-MBD.corp.ad.wrs.com> References: <816bc48c-86a2-61ed-d32a-e27aa81186cd@calsoftinc.com> <586E8B730EA0DA4A9D6A80A10E486BC007AED180@ALA-MBD.corp.ad.wrs.com> <056401d55cba$57b834a0$07289de0$@calsoftinc.com> <586E8B730EA0DA4A9D6A80A10E486BC007AEF8F0@ALA-MBD.corp.ad.wrs.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AEF913@ALA-MBD.corp.ad.wrs.com> Ha, put the email address in the copy list & forgot to remove it before sending. So, while I'm here, can anyone advise on the issue below re: keystone? -----Original Message----- From: Zvonar, Bill Sent: Tuesday, August 27, 2019 6:10 AM To: Pavan Gupta ; 'Saichandu Behara' Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Reg. Weekly call details Hi Pavan, thanks for confirming. 
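One observation on the keystone message quoted in this thread: it is logged at WARNING level, and the text itself says that application credential access rules simply become unavailable, so on its own it should not be fatal unless access rules are actually in use. A minimal sketch of the behaviour that log line describes (this mirrors the observed behaviour only; it is not keystone's actual code, and the schema of a real access_rules.json is not shown here):

    import json
    import os.path

    ACCESS_RULES_FILE = "/etc/keystone/access_rules.json"  # path from the quoted log

    def load_access_rules(path=ACCESS_RULES_FILE):
        # A missing file produces a WARNING and access rules are
        # treated as unconfigured, rather than raising an error.
        if not os.path.exists(path):
            print("WARNING: no config file found for access rules; "
                  "application credential access rules will be unavailable")
            return None
        with open(path) as f:
            return json.load(f)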
Can I ask you to send the details of the issue you're seeing to the mailing list (starlingx-discuss at lists.starlingx.io). I'd do it, but better if it comes directly from you : ). Thanks, Bill... -----Original Message----- From: Pavan Gupta Sent: Tuesday, August 27, 2019 5:32 AM To: Zvonar, Bill ; 'Saichandu Behara' Subject: RE: Reg. Weekly call details Hi Bill, We got this email. We plan to attend weekly technical meetings. We have hit the following issue on the latest green build: keystone:log 2019-08-15 14:16:09.805 1402296 WARNING keystone.access_rules_config.backends.json [-] No config file found for access rules, application credential access rules will be unavailable.: IOError: [Errno 2] No such file or directory: '/etc/keystone/access_rules.json' This could be some issue with Keystone service. I am wondering if we should move to recent build that may have fixed this problem. Kindly let me know if you have faced this issue. Pavan -----Original Message----- From: Zvonar, Bill Sent: 26 August 2019 18:00 To: Saichandu Behara Cc: pavan.gupta at calsoftinc.com Subject: RE: Reg. Weekly call details Hi Saichandu - here they are - the first 2 links are constants, the 3rd is specifically for this week's call. Btw, my last email to you bounced - can you (or Pavan) confirm that you receive this email? Thanks, Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190828T1400 -----Original Message----- From: Saichandu Behara Sent: Monday, August 26, 2019 8:17 AM To: Zvonar, Bill Cc: pavan.gupta at calsoftinc.com Subject: Reg. Weekly call details Hi Bill, As we Discussed over the call, Can you please share the Wednesday Call Details. So, we can attend and discuss the queries. Thanks & Regards Sai Chandu Behara _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Tue Aug 27 11:24:36 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 27 Aug 2019 11:24:36 +0000 Subject: [Starlingx-discuss] Top Level Stories for 3.0 Features Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> Hi folks, I'm trying to identify a 'top level' story for each of the features in release 3. Most are already present in the release spreadsheet [0], but some are not, some because the spec is in progress. I'd like to complete the set, and am also trying to see how we might use a StoryBoard board [1] to track our progress. Please check for yours below. Thanks, Bill... Cindy: - Intel QAT support for K8s - Containerize Ceph - I assume no story yet since spec in progress Forrest: - Containerize OVS-DPDK - I assume no story yet since spec in progress Saul: - Multi-OS - I used https://storyboard.openstack.org/#!/story/2006191, is there a more appropriate story? 
- Sysvinit >> systemd Conversion/Cleanup - Flock Versioning Victor: - Performance Testing/Measurement Framework Yong: - Upversion OpenStack to Train [0] https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 (the "Release Candidates" tab) [1] https://storyboard.openstack.org/#!/board/186 From cindy.xie at intel.com Tue Aug 27 12:52:23 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 27 Aug 2019 12:52:23 +0000 Subject: [Starlingx-discuss] StarlingX distro.openstack weekly meeting In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35E6D264@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35E6D264@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601F5F2@SHSMSX104.ccr.corp.intel.com> All, Sorry for the short notice, but be noted that today's distro.openstack call is cancelled. Thanks. - cindy -----Original Appointment----- From: Jones, Bruce E Sent: Tuesday, January 29, 2019 10:31 AM To: Jones, Bruce E; Xie, Cindy; Chen, Yan; He, Yongli Subject: FW: StarlingX distro.openstack weekly meeting When: Tuesday, August 27, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 -----Original Appointment----- From: Jones, Bruce E Sent: Wednesday, December 12, 2018 10:26 AM To: Jones, Bruce E; Chen, Yan; He, Yongli Subject: StarlingX distro.openstack weekly meeting When: Occurs every Tuesday effective 12/18/2018 from 6:00 AM to 7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Aug 27 12:52:39 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 27 Aug 2019 12:52:39 +0000 Subject: [Starlingx-discuss] Top Level Stories for 3.0 Features In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601F61A@SHSMSX104.ccr.corp.intel.com> - Intel QAT support for K8s: https://storyboard.openstack.org/#!/story/2005514 - Containerize Ceph - I assume no story yet since spec in progress : https://storyboard.openstack.org/#!/story/2005527 - SysInit >> system conversion/cleanup: https://storyboard.openstack.org/#!/story/2006192 - flock versioning: the spec is still pending review, thus no storyboard yet. -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, August 27, 2019 7:25 PM To: Xie, Cindy ; Zhao, Forrest ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: Top Level Stories for 3.0 Features Hi folks, I'm trying to identify a 'top level' story for each of the features in release 3. Most are already present in the release spreadsheet [0], but some are not, some because the spec is in progress. I'd like to complete the set, and am also trying to see how we might use a StoryBoard board [1] to track our progress. Please check for yours below. Thanks, Bill... Cindy: - Intel QAT support for K8s - Containerize Ceph - I assume no story yet since spec in progress Forrest: - Containerize OVS-DPDK - I assume no story yet since spec in progress Saul: - Multi-OS - I used https://storyboard.openstack.org/#!/story/2006191, is there a more appropriate story? 
- Sysvinit >> systemd Conversion/Cleanup - Flock Versioning Victor: - Performance Testing/Measurement Framework Yong: - Upversion OpenStack to Train [0] https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 (the "Release Candidates" tab) [1] https://storyboard.openstack.org/#!/board/186 From Bill.Zvonar at windriver.com Tue Aug 27 12:55:24 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 27 Aug 2019 12:55:24 +0000 Subject: [Starlingx-discuss] Top Level Stories for 3.0 Features In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F3601F61A@SHSMSX104.ccr.corp.intel.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F3601F61A@SHSMSX104.ccr.corp.intel.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AEFA8E@ALA-MBD.corp.ad.wrs.com> Great, thanks Cindy. -----Original Message----- From: Xie, Cindy Sent: Tuesday, August 27, 2019 8:53 AM To: Zvonar, Bill ; Zhao, Forrest ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: RE: Top Level Stories for 3.0 Features - Intel QAT support for K8s: https://storyboard.openstack.org/#!/story/2005514 - Containerize Ceph - I assume no story yet since spec in progress : https://storyboard.openstack.org/#!/story/2005527 - SysInit >> system conversion/cleanup: https://storyboard.openstack.org/#!/story/2006192 - flock versioning: the spec is still pending review, thus no storyboard yet. -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, August 27, 2019 7:25 PM To: Xie, Cindy ; Zhao, Forrest ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: Top Level Stories for 3.0 Features Hi folks, I'm trying to identify a 'top level' story for each of the features in release 3. Most are already present in the release spreadsheet [0], but some are not, some because the spec is in progress. I'd like to complete the set, and am also trying to see how we might use a StoryBoard board [1] to track our progress. Please check for yours below. Thanks, Bill... Cindy: - Intel QAT support for K8s - Containerize Ceph - I assume no story yet since spec in progress Forrest: - Containerize OVS-DPDK - I assume no story yet since spec in progress Saul: - Multi-OS - I used https://storyboard.openstack.org/#!/story/2006191, is there a more appropriate story? 
- Sysvinit >> systemd Conversion/Cleanup - Flock Versioning Victor: - Performance Testing/Measurement Framework Yong: - Upversion OpenStack to Train [0] https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 (the "Release Candidates" tab) [1] https://storyboard.openstack.org/#!/board/186 From Bill.Zvonar at windriver.com Tue Aug 27 13:14:06 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 27 Aug 2019 13:14:06 +0000 Subject: [Starlingx-discuss] Community activity dashboard References: <469af93d-2d15-0043-1931-81a66be2278e@openstack.org> <7dbd4c93-af10-5cca-bc99-d6b204a83e8c@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007A851F3@ALA-MBD.corp.ad.wrs.com> <049c4da2-f6af-0146-43d4-a4c3b5d3b432@openstack.org> <47f1eb6a-20cb-8859-0e84-ceeb5663c437@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007A87657@ALA-MBD.corp.ad.wrs.com> <8e26b1c5-71e7-5f51-cec2-cf245784fb0e@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007AB1EF3@ALA-MBD.corp.ad.wrs.com> <4bdc69a5-0779-1283-f689-2ce12cbb38c5@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007AB26A0@ALA-MBD.corp.ad.wrs.com> <89d14867-8583-d74a-1d25-74bd2f4be15a@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007ACA290@ALA-MBD.corp.ad.wrs.com> <434c68e9-77e5-66ff-3948-aedc8c76727f@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007ACB3B9@ALA-MBD.corp.ad.wrs.com> <9bbc5256-bd1d-dcb6-8c94-4dcafb76ebb8@openstack.org> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AEFADC@ALA-MBD.corp.ad.wrs.com> Hi folks, more updates from Thierry on the Community activity dashboard (thanks Thierry). Please check the etherpad [0] for more info, and feel free to add comments/requests there. Bill... [0] https://etherpad.openstack.org/p/stx-bitergia -----Original Message----- From: Thierry Carrez Sent: Tuesday, August 27, 2019 9:00 AM To: Zvonar, Bill Subject: Re: [Starlingx-discuss] Community activity dashboard Zvonar, Bill wrote: > Thanks Thierry... OK I updated the etherpad with recent work... including: Individual contribution dashboard now includes reviews, as well as ability to filter per organization: https://starlingx.biterg.io/goto/f350492d69c8161873e687de2c57ad52 Submitted ticket to make links clickable: https://gitlab.com/Bitergia/c/OSF/support/issues/31 -- Thierry From cindy.xie at intel.com Tue Aug 27 13:18:11 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 27 Aug 2019 13:18:11 +0000 Subject: [Starlingx-discuss] Weekly StarlingX non-OpenStack distro meeting In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F35F28711@SHSMSX104.ccr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601F6C5@SHSMSX104.ccr.corp.intel.com> Agenda for 8/28 meeting: 1. stx.2.0 bug review (Cindy) - stx.distro.others: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other+stx.2.0&field.tags_combinator=ALL - stx.storage: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage+stx.2.0&field.tags_combinator=ALL 2. stx.3.0 top level storyboards for non-openstack-distr (Cindy/Saul) - https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&tags=stx.3.0&project_group_id=86 3. 
Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; 'starlingx-discuss at lists.starlingx.io'; Wold, Saul; 'Rowsell, Brent'; 'zhaos' Cc: Jones, Bruce E; 'Waines, Greg'; Cobbley, David A; Armstrong, Robert H; 'Badea, Daniel'; Hu, Wei W; 'Zhi Zhi2 Chang'; 'Seiler, Glenn'; Chen, Tingjie; 'Carlos Cebrian'; 'Chen, Jacky'; Gomez, Juan P; 'Peng Tan'; 'Eslimi, Dariush'; 'Komiyama, Takeo' Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, August 28, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From Bill.Zvonar at windriver.com Tue Aug 27 13:46:38 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 27 Aug 2019 13:46:38 +0000 Subject: [Starlingx-discuss] Release Plan - stx.2.0 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C1599F37@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C1599F37@ALA-MBD.corp.ad.wrs.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AEFC2E@ALA-MBD.corp.ad.wrs.com> Quick update - the build went fine last night, so we can unleash the test team on sanity + additional testing. Ada - unleash! -----Original Message----- From: Khalil, Ghada Sent: Friday, August 23, 2019 3:31 PM To: starlingx-discuss at lists.starlingx.io Cc: Zvonar, Bill Subject: RE: Release Plan - stx.2.0 My apologies for the typo below... The second last bullet should read: - All stx.2.0 unresolved high priority bugs remain gating stx.2.0 and will need to be resolved in an upcoming maintenance release. -----Original Message----- From: Khalil, Ghada Sent: Friday, August 23, 2019 3:26 PM To: starlingx-discuss at lists.starlingx.io Cc: Zvonar, Bill Subject: Release Plan - stx.2.0 Hello all, The stx.2.0 final candidate build is scheduled on Monday August 26 at 11:30 PM UTC. We are currently waiting for one commit to merge: https://review.opendev.org/#/c/678167/ Please do not cherry-pick any other commits to the r/stx.2.0 branch until further notice. The logistics for next week are as follows: - Developer and core reviewers merge the remaining commit. - Final candidate build run on the r/stx.2.0 branch on Monday August 26 - The test team executes one day of testing on the final candidate build -- sanity + additional testing based on the discretion of the test teams - Ada will send a report by EOD Tuesday -- either with the go-ahead for the release OR identifying any RED sanity/blocking issues - If there is a RED sanity, the bug will need to be resolved and the load will need to be re-built. And sanity will need to be re-executed. - If all is good, Scott and Dean will follow the steps to tag the branch and post the build artifacts in the release directory on CENGN. Other things to note: - All stx.2.0 unresolved medium priority bugs have been moved to stx.3.0 as per community agreement - This means fixes are only required in master from this point forward. 
- PLs/TLs have the discretion to raise the priority of a medium bug and bring it back as a high priority in stx.2.0 - All stx.3.0 unresolved high priority bugs remain gating stx.2.0 and will need to be resolved in an upcoming maintenance release. - Developers are encouraged to continue working on those as a top priority and to post reviews on master - For more information on the stx.2.0 upcoming maintenance release, please read the minutes from the Release Meeting on 2019-08-22 If you have any questions/concerns, please reach out to Bill Zvonar as I am on vacation next week. And, finally, a big thank you for all community members who have contributed to StarlingX 2.0. Best Regards, Ghada From forrest.zhao at intel.com Tue Aug 27 14:15:23 2019 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Tue, 27 Aug 2019 14:15:23 +0000 Subject: [Starlingx-discuss] Top Level Stories for 3.0 Features In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> Message-ID: <6345119E91D5C843A93D64F498ACFA1374F00846@shsmsx102.ccr.corp.intel.com> Hi Bill, The spec for OVS-DPDK containerization already got 3 +2: https://review.opendev.org/#/c/655830/. Do you know who can help +1 to workflow? We'll create 'top level' story for it tomorrow and send you the link. Thanks, Forrest -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, August 27, 2019 7:25 PM To: Xie, Cindy ; Zhao, Forrest ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: Top Level Stories for 3.0 Features Hi folks, I'm trying to identify a 'top level' story for each of the features in release 3. Most are already present in the release spreadsheet [0], but some are not, some because the spec is in progress. I'd like to complete the set, and am also trying to see how we might use a StoryBoard board [1] to track our progress. Please check for yours below. Thanks, Bill... Cindy: - Intel QAT support for K8s - Containerize Ceph - I assume no story yet since spec in progress Forrest: - Containerize OVS-DPDK - I assume no story yet since spec in progress Saul: - Multi-OS - I used https://storyboard.openstack.org/#!/story/2006191, is there a more appropriate story? - Sysvinit >> systemd Conversion/Cleanup - Flock Versioning Victor: - Performance Testing/Measurement Framework Yong: - Upversion OpenStack to Train [0] https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 (the "Release Candidates" tab) [1] https://storyboard.openstack.org/#!/board/186 From forrest.zhao at intel.com Tue Aug 27 14:23:54 2019 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Tue, 27 Aug 2019 14:23:54 +0000 Subject: [Starlingx-discuss] Top Level Stories for 3.0 Features In-Reply-To: <6345119E91D5C843A93D64F498ACFA1374F00846@shsmsx102.ccr.corp.intel.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> <6345119E91D5C843A93D64F498ACFA1374F00846@shsmsx102.ccr.corp.intel.com> Message-ID: <6345119E91D5C843A93D64F498ACFA1374F0086D@shsmsx102.ccr.corp.intel.com> Hi Bill, TSN enabling and its 'top level' story: https://storyboard.openstack.org/#!/story/2005516. It's being worked on by my team. 
Thanks, Forrest -----Original Message----- From: Zhao, Forrest [mailto:forrest.zhao at intel.com] Sent: Tuesday, August 27, 2019 10:15 PM To: Zvonar, Bill ; Xie, Cindy ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Top Level Stories for 3.0 Features Hi Bill, The spec for OVS-DPDK containerization already got 3 +2: https://review.opendev.org/#/c/655830/. Do you know who can help +1 to workflow? We'll create 'top level' story for it tomorrow and send you the link. Thanks, Forrest -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, August 27, 2019 7:25 PM To: Xie, Cindy ; Zhao, Forrest ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: Top Level Stories for 3.0 Features Hi folks, I'm trying to identify a 'top level' story for each of the features in release 3. Most are already present in the release spreadsheet [0], but some are not, some because the spec is in progress. I'd like to complete the set, and am also trying to see how we might use a StoryBoard board [1] to track our progress. Please check for yours below. Thanks, Bill... Cindy: - Intel QAT support for K8s - Containerize Ceph - I assume no story yet since spec in progress Forrest: - Containerize OVS-DPDK - I assume no story yet since spec in progress Saul: - Multi-OS - I used https://storyboard.openstack.org/#!/story/2006191, is there a more appropriate story? - Sysvinit >> systemd Conversion/Cleanup - Flock Versioning Victor: - Performance Testing/Measurement Framework Yong: - Upversion OpenStack to Train [0] https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 (the "Release Candidates" tab) [1] https://storyboard.openstack.org/#!/board/186 _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Tue Aug 27 15:07:51 2019 From: scott.little at windriver.com (Scott Little) Date: Tue, 27 Aug 2019 11:07:51 -0400 Subject: [Starlingx-discuss] [Build] Changes to timing of builds Message-ID: r/stx.2.0    - The last daily build was timestamped 20190826T233000Z    - Future builds will be on demand.  Please send such requires to this mailing list with '[Build]' in the title. master    - Daily build will move back to it's original time slot (23:00 UTC with images, 1:30 UTC without images)    - The goal is to have images ready for test by 4:00 UTC From ada.cabrales at intel.com Tue Aug 27 15:17:55 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 27 Aug 2019 15:17:55 +0000 Subject: [Starlingx-discuss] Release Plan - stx.2.0 In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AEFC2E@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C1599F37@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AEFC2E@ALA-MBD.corp.ad.wrs.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CEB4DE7@FMSMSX114.amr.corp.intel.com> Work in Progress A. > -----Original Message----- > From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] > Sent: Tuesday, August 27, 2019 8:47 AM > To: starlingx-discuss at lists.starlingx.io; Cabrales, Ada > > Cc: Khalil, Ghada > Subject: RE: Release Plan - stx.2.0 > > Quick update - the build went fine last night, so we can unleash the test team > on sanity + additional testing. > > Ada - unleash! 
> > -----Original Message----- > From: Khalil, Ghada > Sent: Friday, August 23, 2019 3:31 PM > To: starlingx-discuss at lists.starlingx.io > Cc: Zvonar, Bill > Subject: RE: Release Plan - stx.2.0 > > My apologies for the typo below... > > The second last bullet should read: > - All stx.2.0 unresolved high priority bugs remain gating stx.2.0 and will need > to be resolved in an upcoming maintenance release. > > -----Original Message----- > From: Khalil, Ghada > Sent: Friday, August 23, 2019 3:26 PM > To: starlingx-discuss at lists.starlingx.io > Cc: Zvonar, Bill > Subject: Release Plan - stx.2.0 > > Hello all, > The stx.2.0 final candidate build is scheduled on Monday August 26 at 11:30 > PM UTC. > We are currently waiting for one commit to merge: > https://review.opendev.org/#/c/678167/ > > Please do not cherry-pick any other commits to the r/stx.2.0 branch until > further notice. > > The logistics for next week are as follows: > - Developer and core reviewers merge the remaining commit. > - Final candidate build run on the r/stx.2.0 branch on Monday August 26 > - The test team executes one day of testing on the final candidate build -- > sanity + additional testing based on the discretion of the test teams > - Ada will send a report by EOD Tuesday -- either with the go-ahead for the > release OR identifying any RED sanity/blocking issues > - If there is a RED sanity, the bug will need to be resolved and the load > will need to be re-built. And sanity will need to be re-executed. > - If all is good, Scott and Dean will follow the steps to tag the branch and post > the build artifacts in the release directory on CENGN. > > Other things to note: > - All stx.2.0 unresolved medium priority bugs have been moved to stx.3.0 as > per community agreement > - This means fixes are only required in master from this point forward. > - PLs/TLs have the discretion to raise the priority of a medium bug and > bring it back as a high priority in stx.2.0 > - All stx.3.0 unresolved high priority bugs remain gating stx.2.0 and will need > to be resolved in an upcoming maintenance release. > - Developers are encouraged to continue working on those as a top > priority and to post reviews on master > - For more information on the stx.2.0 upcoming maintenance release, please > read the minutes from the Release Meeting on 2019-08-22 > > If you have any questions/concerns, please reach out to Bill Zvonar as I am on > vacation next week. > > And, finally, a big thank you for all community members who have > contributed to StarlingX 2.0. > > > Best Regards, > Ghada > > From sgw at linux.intel.com Tue Aug 27 15:23:46 2019 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 27 Aug 2019 08:23:46 -0700 Subject: [Starlingx-discuss] Top Level Stories for 3.0 Features In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> Message-ID: <76158b8f-9ac5-9812-d0ba-89ec5d2ceb5c@linux.intel.com> On 8/27/19 4:24 AM, Zvonar, Bill wrote: > Hi folks, I'm trying to identify a 'top level' story for each of the features in release 3. > > Most are already present in the release spreadsheet [0], but some are not, some because the spec is in progress. > > I'd like to complete the set, and am also trying to see how we might use a StoryBoard board [1] to track our progress. > > Please check for yours below. > > Thanks, Bill... 
> > Cindy: > - Intel QAT support for K8s > - Containerize Ceph - I assume no story yet since spec in progress > > Forrest: > - Containerize OVS-DPDK - I assume no story yet since spec in progress > > Saul: > - Multi-OS - I used https://storyboard.openstack.org/#!/story/2006191, is there a more appropriate story? That's the one for now, there may be other stories as we go. > - Sysvinit >> systemd Conversion/Cleanup Cindy gave you > - Flock Versioning > Spec is still ongoing, there will be stories one we settle on the specification. > Victor: > - Performance Testing/Measurement Framework > I think the specification is still ongoing here. > Yong: > - Upversion OpenStack to Train > > [0] https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 (the "Release Candidates" tab) > [1] https://storyboard.openstack.org/#!/board/186 > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From sgw at linux.intel.com Tue Aug 27 15:26:21 2019 From: sgw at linux.intel.com (Saul Wold) Date: Tue, 27 Aug 2019 08:26:21 -0700 Subject: [Starlingx-discuss] systemd/sysvinit configuration scripts in config Message-ID: <6e121661-c6d4-dd1d-34f8-00d21da5b61a@linux.intel.com> Tee, I know you have done alot of the work on the ansible playbook. Is there work being planned for converting the various configuration scripts in the config repo such as storageconfig and workconfig? We have been reviewing the various systemd services that call other scripts rather than calling the actual service daemon directly and found that many of the config related scripts are just that, extended configuration. Thanks Sau! From Brent.Rowsell at windriver.com Tue Aug 27 15:28:21 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Tue, 27 Aug 2019 15:28:21 +0000 Subject: [Starlingx-discuss] systemd/sysvinit configuration scripts in config In-Reply-To: <6e121661-c6d4-dd1d-34f8-00d21da5b61a@linux.intel.com> References: <6e121661-c6d4-dd1d-34f8-00d21da5b61a@linux.intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EC2660C02@ALA-MBD.corp.ad.wrs.com> There is no plan to change storageconfig and workconfig etc. Brent -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Tuesday, August 27, 2019 11:26 AM To: starlingx-discuss at lists.starlingx.io; Ngo, Tee Subject: [Starlingx-discuss] systemd/sysvinit configuration scripts in config Tee, I know you have done alot of the work on the ansible playbook. Is there work being planned for converting the various configuration scripts in the config repo such as storageconfig and workconfig? We have been reviewing the various systemd services that call other scripts rather than calling the actual service daemon directly and found that many of the config related scripts are just that, extended configuration. Thanks Sau! 
_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ran1.an at intel.com Tue Aug 27 15:48:12 2019 From: ran1.an at intel.com (An, Ran1) Date: Tue, 27 Aug 2019 15:48:12 +0000 Subject: [Starlingx-discuss] Unable to generate distroless docker image Message-ID: <9BAB5B7CAF57C3459E4636391F1071CE05321DD1@shsmsx102.ccr.corp.intel.com> Hi core reviewers: With patch[1] has been merged, a docker image "starlingx/intel-gpu-plugin docker" required by story/2005937 [2] should be generated and pushed to docker hub by Cengn automatically. However, there are no new repository "starlingx/intel-gpu-plugin docker" on docker hub after the latest docker images weekly built. I'm not sure if there is anything I missed or some other configurations are required. Could you help and check it? By the way, "starlingx/intel-gpu-plugin docker" is an image based on distro-less system, listed in file "starlingx/integ /distroless_stable_docker_images.inc". It can be built by following command. " WHEELS=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels/stx-centos-${BUILD_STREAM}-wheels.tar DOCKER_USER={my_docker_user_name} time $MY_REPO/build-tools/build-docker-images/build-stx-images.sh \ --os distroless \ --stream stable \ --base gcr.io/distroless/base \ --wheels ${WHEELS} \ --user ${DOCKER_USER} \ --push --latest \ " [1] https://review.opendev.org/#/c/668808 [2] https://storyboard.openstack.org/#!/story/2005937 Thanks Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Tue Aug 27 16:08:20 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 27 Aug 2019 16:08:20 +0000 Subject: [Starlingx-discuss] Guest OS support In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F3601DB4D@SHSMSX104.ccr.corp.intel.com> References: <2FD5DDB5A04D264C80D42CA35194914F3601DB4D@SHSMSX104.ccr.corp.intel.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CEB4EAB@FMSMSX114.amr.corp.intel.com> Hello Cindy, We have tests with centos, ubuntu and cirros as guest OS. Nothing for Windows or VxWorks (yet). Regards Ada From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Sunday, August 25, 2019 11:38 PM To: starlingx Subject: [Starlingx-discuss] Guest OS support Hi, Numan & Ada, Do we have test cases to cover what kind of guest OS can be supported for stx.2.0? For example, Windows (version?), Linux (flavor?) or VxWorks? I am wondering if we already have such test cases in stx-test but I didn't find them. Thx. - cindy -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Tue Aug 27 16:42:54 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Tue, 27 Aug 2019 16:42:54 +0000 Subject: [Starlingx-discuss] Unable to generate distroless docker image In-Reply-To: <9BAB5B7CAF57C3459E4636391F1071CE05321DD1@shsmsx102.ccr.corp.intel.com> References: <9BAB5B7CAF57C3459E4636391F1071CE05321DD1@shsmsx102.ccr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FC1578268@ALA-MBD.corp.ad.wrs.com> That requires an update to the CENGN build scripts, as it would currently just be building --os centos. 
From: An, Ran1 [mailto:ran1.an at intel.com] Sent: Tuesday, August 27, 2019 11:48 AM To: starlingx-discuss at lists.starlingx.io Cc: Penney, Don; Little, Scott Subject: [Starlingx-discuss]Unable to generate distroless docker image Hi core reviewers: With patch[1] has been merged, a docker image "starlingx/intel-gpu-plugin docker" required by story/2005937 [2] should be generated and pushed to docker hub by Cengn automatically. However, there are no new repository "starlingx/intel-gpu-plugin docker" on docker hub after the latest docker images weekly built. I'm not sure if there is anything I missed or some other configurations are required. Could you help and check it? By the way, "starlingx/intel-gpu-plugin docker" is an image based on distro-less system, listed in file "starlingx/integ /distroless_stable_docker_images.inc". It can be built by following command. " WHEELS=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels/stx-centos-${BUILD_STREAM}-wheels.tar DOCKER_USER={my_docker_user_name} time $MY_REPO/build-tools/build-docker-images/build-stx-images.sh \ --os distroless \ --stream stable \ --base gcr.io/distroless/base \ --wheels ${WHEELS} \ --user ${DOCKER_USER} \ --push --latest \ " [1] https://review.opendev.org/#/c/668808 [2] https://storyboard.openstack.org/#!/story/2005937 Thanks Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From maria.g.perez.ibarra at intel.com Tue Aug 27 16:50:01 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 27 Aug 2019 16:50:01 +0000 Subject: [Starlingx-discuss] [RC1] Sanity Test - ISO 20190827 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-27 (link) Status: GREEN =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] =========================================== Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From vm.rod25 at gmail.com Tue Aug 27 16:55:31 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Tue, 27 Aug 2019 12:55:31 -0400 Subject: [Starlingx-discuss] Top Level Stories for 3.0 Features In-Reply-To: <76158b8f-9ac5-9812-d0ba-89ec5d2ceb5c@linux.intel.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> <76158b8f-9ac5-9812-d0ba-89ec5d2ceb5c@linux.intel.com> Message-ID: On Tue, Aug 27, 2019 at 11:24 AM Saul Wold wrote: > > > > On 8/27/19 4:24 AM, Zvonar, Bill wrote: > > Hi folks, I'm trying to identify a 'top level' story for each of the features in release 3. > > > > Most are already present in the release spreadsheet [0], but some are not, some because the spec is in progress. > > > > I'd like to complete the set, and am also trying to see how we might use a StoryBoard board [1] to track our progress. > > > > Please check for yours below. > > > > Thanks, Bill... > > > > Cindy: > > - Intel QAT support for K8s > > - Containerize Ceph - I assume no story yet since spec in progress > > > > Forrest: > > - Containerize OVS-DPDK - I assume no story yet since spec in progress > > > > Saul: > > - Multi-OS - I used https://storyboard.openstack.org/#!/story/2006191, is there a more appropriate story? > That's the one for now, there may be other stories as we go. > > > - Sysvinit >> systemd Conversion/Cleanup > Cindy gave you > > > - Flock Versioning > > > Spec is still ongoing, there will be stories one we settle on the > specification. > > > Victor: > > - Performance Testing/Measurement Framework > > > I think the specification is still ongoing here. > It is: https://review.opendev.org/#/c/677287/ waiting for approval, all the comments have been addressed Regards > > Yong: > > - Upversion OpenStack to Train > > > > [0] https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 (the "Release Candidates" tab) > > [1] https://storyboard.openstack.org/#!/board/186 > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Bill.Zvonar at windriver.com Tue Aug 27 17:07:59 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 27 Aug 2019 17:07:59 +0000 Subject: [Starlingx-discuss] Top Level Stories for 3.0 Features In-Reply-To: <6345119E91D5C843A93D64F498ACFA1374F00846@shsmsx102.ccr.corp.intel.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> <6345119E91D5C843A93D64F498ACFA1374F00846@shsmsx102.ccr.corp.intel.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AEFEE3@ALA-MBD.corp.ad.wrs.com> Thanks Forrest... I'm not sure off the top of my head, but assume it's the usual suspects for that sub-project: https://wiki.openstack.org/wiki/StarlingX/Networking. -----Original Message----- From: Zhao, Forrest Sent: Tuesday, August 27, 2019 10:15 AM To: Zvonar, Bill ; Xie, Cindy ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: RE: Top Level Stories for 3.0 Features Hi Bill, The spec for OVS-DPDK containerization already got 3 +2: https://review.opendev.org/#/c/655830/. Do you know who can help +1 to workflow? 
We'll create 'top level' story for it tomorrow and send you the link. Thanks, Forrest -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, August 27, 2019 7:25 PM To: Xie, Cindy ; Zhao, Forrest ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: Top Level Stories for 3.0 Features Hi folks, I'm trying to identify a 'top level' story for each of the features in release 3. Most are already present in the release spreadsheet [0], but some are not, some because the spec is in progress. I'd like to complete the set, and am also trying to see how we might use a StoryBoard board [1] to track our progress. Please check for yours below. Thanks, Bill... Cindy: - Intel QAT support for K8s - Containerize Ceph - I assume no story yet since spec in progress Forrest: - Containerize OVS-DPDK - I assume no story yet since spec in progress Saul: - Multi-OS - I used https://storyboard.openstack.org/#!/story/2006191, is there a more appropriate story? - Sysvinit >> systemd Conversion/Cleanup - Flock Versioning Victor: - Performance Testing/Measurement Framework Yong: - Upversion OpenStack to Train [0] https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 (the "Release Candidates" tab) [1] https://storyboard.openstack.org/#!/board/186 From maria.g.perez.ibarra at intel.com Tue Aug 27 23:14:00 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Tue, 27 Aug 2019 23:14:00 +0000 Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190827 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-27(link) Status: Green =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From cindy.xie at intel.com Wed Aug 28 00:35:20 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 28 Aug 2019 00:35:20 +0000 Subject: [Starlingx-discuss] Community activity dashboard In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AEFADC@ALA-MBD.corp.ad.wrs.com> References: <469af93d-2d15-0043-1931-81a66be2278e@openstack.org> <7dbd4c93-af10-5cca-bc99-d6b204a83e8c@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007A851F3@ALA-MBD.corp.ad.wrs.com> <049c4da2-f6af-0146-43d4-a4c3b5d3b432@openstack.org> <47f1eb6a-20cb-8859-0e84-ceeb5663c437@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007A87657@ALA-MBD.corp.ad.wrs.com> <8e26b1c5-71e7-5f51-cec2-cf245784fb0e@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007AB1EF3@ALA-MBD.corp.ad.wrs.com> <4bdc69a5-0779-1283-f689-2ce12cbb38c5@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007AB26A0@ALA-MBD.corp.ad.wrs.com> <89d14867-8583-d74a-1d25-74bd2f4be15a@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007ACA290@ALA-MBD.corp.ad.wrs.com> <434c68e9-77e5-66ff-3948-aedc8c76727f@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007ACB3B9@ALA-MBD.corp.ad.wrs.com> <9bbc5256-bd1d-dcb6-8c94-4dcafb76ebb8@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007AEFADC@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F3601FF9B@SHSMSX104.ccr.corp.intel.com> Just curious, in the "key metrics" panel, are the "Changes" only track the certain period of time? As I was seeing 31xx last month, and today it is 3056. Thx. - cindy -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, August 27, 2019 9:14 PM To: starlingx-discuss at lists.starlingx.io Cc: Thierry Carrez Subject: Re: [Starlingx-discuss] Community activity dashboard Hi folks, more updates from Thierry on the Community activity dashboard (thanks Thierry). Please check the etherpad [0] for more info, and feel free to add comments/requests there. Bill... [0] https://etherpad.openstack.org/p/stx-bitergia -----Original Message----- From: Thierry Carrez Sent: Tuesday, August 27, 2019 9:00 AM To: Zvonar, Bill Subject: Re: [Starlingx-discuss] Community activity dashboard Zvonar, Bill wrote: > Thanks Thierry... OK I updated the etherpad with recent work... including: Individual contribution dashboard now includes reviews, as well as ability to filter per organization: https://starlingx.biterg.io/goto/f350492d69c8161873e687de2c57ad52 Submitted ticket to make links clickable: https://gitlab.com/Bitergia/c/OSF/support/issues/31 -- Thierry _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From elio.martinez.monroy at intel.com Wed Aug 28 01:58:35 2019 From: elio.martinez.monroy at intel.com (Martinez Monroy, Elio) Date: Wed, 28 Aug 2019 01:58:35 +0000 Subject: [Starlingx-discuss] Release Plan - stx.2.0 In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AEFC2E@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C1599F37@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AEFC2E@ALA-MBD.corp.ad.wrs.com> Message-ID: <1466AF2176E6F040BD63860D0A241BBD495C3D48@FMSMSX109.amr.corp.intel.com> Hello guys! Just to let you know that we just finished our "additional testing". Objective: Verify the latest changes included in today's release in order to check that those changes are not going to affect other features. 
Testplan and results location: https://docs.google.com/spreadsheets/d/1qq4CuoWuKZcHbau84YKKcpckzFpzgci9anj1Ey18B3k/edit?usp=sharing Configuration: a) Iso http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190826T233000Z/outputs/iso/ b) Bare metal configuration according with patches and/or bugs related patches. Results Overview: Total Test Cases included according with patches: 25 Test Cases Passed: 18 All related bugs to each TC is updated as well Failed: 1 " Use ntpq refid to tell if peer controller reaches reliable time source" . Related bug: https://launchpad.net/bugs/1834071 Priority: Medium, Alarm Related. Deferred: 1 "Rework Ansible Docker Registry Structure" Reason: we need to modify our docker registry host in order to request user and password. Not executed: 5 Reasons: 2 not merged, 1 related to ipv6 (no infra ready), Ironic is tested separately , and one abandoned patch. Conclusions: According with these results, we don't consider that the patches included during last week can compromise or affect Stx 2.0 release. Even the failed TC has some progress showing a different result better than the original failure. BR Elio Martinez QA Engineer -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, August 27, 2019 8:47 AM To: starlingx-discuss at lists.starlingx.io; Cabrales, Ada Cc: Khalil, Ghada Subject: Re: [Starlingx-discuss] Release Plan - stx.2.0 Quick update - the build went fine last night, so we can unleash the test team on sanity + additional testing. Ada - unleash! -----Original Message----- From: Khalil, Ghada Sent: Friday, August 23, 2019 3:31 PM To: starlingx-discuss at lists.starlingx.io Cc: Zvonar, Bill Subject: RE: Release Plan - stx.2.0 My apologies for the typo below... The second last bullet should read: - All stx.2.0 unresolved high priority bugs remain gating stx.2.0 and will need to be resolved in an upcoming maintenance release. -----Original Message----- From: Khalil, Ghada Sent: Friday, August 23, 2019 3:26 PM To: starlingx-discuss at lists.starlingx.io Cc: Zvonar, Bill Subject: Release Plan - stx.2.0 Hello all, The stx.2.0 final candidate build is scheduled on Monday August 26 at 11:30 PM UTC. We are currently waiting for one commit to merge: https://review.opendev.org/#/c/678167/ Please do not cherry-pick any other commits to the r/stx.2.0 branch until further notice. The logistics for next week are as follows: - Developer and core reviewers merge the remaining commit. - Final candidate build run on the r/stx.2.0 branch on Monday August 26 - The test team executes one day of testing on the final candidate build -- sanity + additional testing based on the discretion of the test teams - Ada will send a report by EOD Tuesday -- either with the go-ahead for the release OR identifying any RED sanity/blocking issues - If there is a RED sanity, the bug will need to be resolved and the load will need to be re-built. And sanity will need to be re-executed. - If all is good, Scott and Dean will follow the steps to tag the branch and post the build artifacts in the release directory on CENGN. Other things to note: - All stx.2.0 unresolved medium priority bugs have been moved to stx.3.0 as per community agreement - This means fixes are only required in master from this point forward. 
- PLs/TLs have the discretion to raise the priority of a medium bug and bring it back as a high priority in stx.2.0 - All stx.3.0 unresolved high priority bugs remain gating stx.2.0 and will need to be resolved in an upcoming maintenance release. - Developers are encouraged to continue working on those as a top priority and to post reviews on master - For more information on the stx.2.0 upcoming maintenance release, please read the minutes from the Release Meeting on 2019-08-22 If you have any questions/concerns, please reach out to Bill Zvonar as I am on vacation next week. And, finally, a big thank you for all community members who have contributed to StarlingX 2.0. Best Regards, Ghada _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From forrest.zhao at intel.com Wed Aug 28 03:30:37 2019 From: forrest.zhao at intel.com (Zhao, Forrest) Date: Wed, 28 Aug 2019 03:30:37 +0000 Subject: [Starlingx-discuss] Top Level Stories for 3.0 Features In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC007AEFEE3@ALA-MBD.corp.ad.wrs.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> <6345119E91D5C843A93D64F498ACFA1374F00846@shsmsx102.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AEFEE3@ALA-MBD.corp.ad.wrs.com> Message-ID: <6345119E91D5C843A93D64F498ACFA1374F00BB6@shsmsx102.ccr.corp.intel.com> Hi Bill, The 'top level' story for OVS-DPDK containerization was created and updated at https://storyboard.openstack.org/#!/story/2005496. Thanks, Forrest -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Wednesday, August 28, 2019 1:08 AM To: Zhao, Forrest ; Xie, Cindy ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: RE: Top Level Stories for 3.0 Features Thanks Forrest... I'm not sure off the top of my head, but assume it's the usual suspects for that sub-project: https://wiki.openstack.org/wiki/StarlingX/Networking. -----Original Message----- From: Zhao, Forrest Sent: Tuesday, August 27, 2019 10:15 AM To: Zvonar, Bill ; Xie, Cindy ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: RE: Top Level Stories for 3.0 Features Hi Bill, The spec for OVS-DPDK containerization already got 3 +2: https://review.opendev.org/#/c/655830/. Do you know who can help +1 to workflow? We'll create 'top level' story for it tomorrow and send you the link. Thanks, Forrest -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, August 27, 2019 7:25 PM To: Xie, Cindy ; Zhao, Forrest ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: Top Level Stories for 3.0 Features Hi folks, I'm trying to identify a 'top level' story for each of the features in release 3. Most are already present in the release spreadsheet [0], but some are not, some because the spec is in progress. I'd like to complete the set, and am also trying to see how we might use a StoryBoard board [1] to track our progress. Please check for yours below. Thanks, Bill... 
Cindy: - Intel QAT support for K8s - Containerize Ceph - I assume no story yet since spec in progress Forrest: - Containerize OVS-DPDK - I assume no story yet since spec in progress Saul: - Multi-OS - I used https://storyboard.openstack.org/#!/story/2006191, is there a more appropriate story? - Sysvinit >> systemd Conversion/Cleanup - Flock Versioning Victor: - Performance Testing/Measurement Framework Yong: - Upversion OpenStack to Train [0] https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 (the "Release Candidates" tab) [1] https://storyboard.openstack.org/#!/board/186 From thierry at openstack.org Wed Aug 28 08:25:00 2019 From: thierry at openstack.org (Thierry Carrez) Date: Wed, 28 Aug 2019 10:25:00 +0200 Subject: [Starlingx-discuss] Community activity dashboard In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F3601FF9B@SHSMSX104.ccr.corp.intel.com> References: <469af93d-2d15-0043-1931-81a66be2278e@openstack.org> <049c4da2-f6af-0146-43d4-a4c3b5d3b432@openstack.org> <47f1eb6a-20cb-8859-0e84-ceeb5663c437@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007A87657@ALA-MBD.corp.ad.wrs.com> <8e26b1c5-71e7-5f51-cec2-cf245784fb0e@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007AB1EF3@ALA-MBD.corp.ad.wrs.com> <4bdc69a5-0779-1283-f689-2ce12cbb38c5@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007AB26A0@ALA-MBD.corp.ad.wrs.com> <89d14867-8583-d74a-1d25-74bd2f4be15a@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007ACA290@ALA-MBD.corp.ad.wrs.com> <434c68e9-77e5-66ff-3948-aedc8c76727f@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007ACB3B9@ALA-MBD.corp.ad.wrs.com> <9bbc5256-bd1d-dcb6-8c94-4dcafb76ebb8@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007AEFADC@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F3601FF9B@SHSMSX104.ccr.corp.intel.com> Message-ID: <070c2bed-f9bf-cd1e-9147-6312c662b4e3@openstack.org> Xie, Cindy wrote: > Just curious, in the "key metrics" panel, are the "Changes" only track the certain period of time? As I was seeing 31xx last month, and today it is 3056. You can set the timeframe using a widget on the top right. By default it shows the "last 1 year". -- Thierry Carrez From cindy.xie at intel.com Wed Aug 28 09:10:19 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 28 Aug 2019 09:10:19 +0000 Subject: [Starlingx-discuss] Community activity dashboard In-Reply-To: <070c2bed-f9bf-cd1e-9147-6312c662b4e3@openstack.org> References: <469af93d-2d15-0043-1931-81a66be2278e@openstack.org> <049c4da2-f6af-0146-43d4-a4c3b5d3b432@openstack.org> <47f1eb6a-20cb-8859-0e84-ceeb5663c437@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007A87657@ALA-MBD.corp.ad.wrs.com> <8e26b1c5-71e7-5f51-cec2-cf245784fb0e@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007AB1EF3@ALA-MBD.corp.ad.wrs.com> <4bdc69a5-0779-1283-f689-2ce12cbb38c5@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007AB26A0@ALA-MBD.corp.ad.wrs.com> <89d14867-8583-d74a-1d25-74bd2f4be15a@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007ACA290@ALA-MBD.corp.ad.wrs.com> <434c68e9-77e5-66ff-3948-aedc8c76727f@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007ACB3B9@ALA-MBD.corp.ad.wrs.com> <9bbc5256-bd1d-dcb6-8c94-4dcafb76ebb8@openstack.org> <586E8B730EA0DA4A9D6A80A10E486BC007AEFADC@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F3601FF9B@SHSMSX104.ccr.corp.intel.com> <070c2bed-f9bf-cd1e-9147-6312c662b4e3@openstack.org> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F36020893@SHSMSX104.ccr.corp.intel.com> Thanks Thierry, it works for me! 
:-) - cindy -----Original Message----- From: Thierry Carrez [mailto:thierry at openstack.org] Sent: Wednesday, August 28, 2019 4:25 PM To: starlingx-discuss at lists.starlingx.io Cc: Xie, Cindy Subject: Re: [Starlingx-discuss] Community activity dashboard Xie, Cindy wrote: > Just curious, in the "key metrics" panel, are the "Changes" only track the certain period of time? As I was seeing 31xx last month, and today it is 3056. You can set the timeframe using a widget on the top right. By default it shows the "last 1 year". -- Thierry Carrez From Bill.Zvonar at windriver.com Wed Aug 28 10:33:48 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 28 Aug 2019 10:33:48 +0000 Subject: [Starlingx-discuss] Top Level Stories for 3.0 Features In-Reply-To: <6345119E91D5C843A93D64F498ACFA1374F00BB6@shsmsx102.ccr.corp.intel.com> References: <586E8B730EA0DA4A9D6A80A10E486BC007AEF94A@ALA-MBD.corp.ad.wrs.com> <6345119E91D5C843A93D64F498ACFA1374F00846@shsmsx102.ccr.corp.intel.com> <586E8B730EA0DA4A9D6A80A10E486BC007AEFEE3@ALA-MBD.corp.ad.wrs.com> <6345119E91D5C843A93D64F498ACFA1374F00BB6@shsmsx102.ccr.corp.intel.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AF02E9@ALA-MBD.corp.ad.wrs.com> Thanks Forrest. -----Original Message----- From: Zhao, Forrest Sent: Tuesday, August 27, 2019 11:31 PM To: Zvonar, Bill ; Xie, Cindy ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: RE: Top Level Stories for 3.0 Features Hi Bill, The 'top level' story for OVS-DPDK containerization was created and updated at https://storyboard.openstack.org/#!/story/2005496. Thanks, Forrest -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Wednesday, August 28, 2019 1:08 AM To: Zhao, Forrest ; Xie, Cindy ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: RE: Top Level Stories for 3.0 Features Thanks Forrest... I'm not sure off the top of my head, but assume it's the usual suspects for that sub-project: https://wiki.openstack.org/wiki/StarlingX/Networking. -----Original Message----- From: Zhao, Forrest Sent: Tuesday, August 27, 2019 10:15 AM To: Zvonar, Bill ; Xie, Cindy ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: RE: Top Level Stories for 3.0 Features Hi Bill, The spec for OVS-DPDK containerization already got 3 +2: https://review.opendev.org/#/c/655830/. Do you know who can help +1 to workflow? We'll create 'top level' story for it tomorrow and send you the link. Thanks, Forrest -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, August 27, 2019 7:25 PM To: Xie, Cindy ; Zhao, Forrest ; Saul Wold ; Victor Rodriguez ; Hu, Yong Cc: Khalil, Ghada ; starlingx-discuss at lists.starlingx.io Subject: Top Level Stories for 3.0 Features Hi folks, I'm trying to identify a 'top level' story for each of the features in release 3. Most are already present in the release spreadsheet [0], but some are not, some because the spec is in progress. I'd like to complete the set, and am also trying to see how we might use a StoryBoard board [1] to track our progress. Please check for yours below. Thanks, Bill... 
Cindy: - Intel QAT support for K8s - Containerize Ceph - I assume no story yet since spec in progress Forrest: - Containerize OVS-DPDK - I assume no story yet since spec in progress Saul: - Multi-OS - I used https://storyboard.openstack.org/#!/story/2006191, is there a more appropriate story? - Sysvinit >> systemd Conversion/Cleanup - Flock Versioning Victor: - Performance Testing/Measurement Framework Yong: - Upversion OpenStack to Train [0] https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 (the "Release Candidates" tab) [1] https://storyboard.openstack.org/#!/board/186 From Bill.Zvonar at windriver.com Wed Aug 28 10:59:13 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 28 Aug 2019 10:59:13 +0000 Subject: [Starlingx-discuss] Community Call (August 28, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AF0358@ALA-MBD.corp.ad.wrs.com> Hi all - reminder of the Community Call later today. Topics on the agenda include... - 2.0 declaration imminent - 3.0 planning - Milestone 3 is next week Please feel free to add topics on the etherpad [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190828T1400 From Bill.Zvonar at windriver.com Wed Aug 28 11:25:38 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 28 Aug 2019 11:25:38 +0000 Subject: [Starlingx-discuss] Release Plan - stx.2.0 In-Reply-To: <1466AF2176E6F040BD63860D0A241BBD495C3D48@FMSMSX109.amr.corp.intel.com> References: <151EE31B9FCCA54397A757BC674650F0C1599F37@ALA-MBD.corp.ad.wrs.com> <586E8B730EA0DA4A9D6A80A10E486BC007AEFC2E@ALA-MBD.corp.ad.wrs.com> <1466AF2176E6F040BD63860D0A241BBD495C3D48@FMSMSX109.amr.corp.intel.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AF03B8@ALA-MBD.corp.ad.wrs.com> Excellent, thanks for the update Elio, good news. -----Original Message----- From: Martinez Monroy, Elio Sent: Tuesday, August 27, 2019 9:59 PM To: Zvonar, Bill ; starlingx-discuss at lists.starlingx.io; Cabrales, Ada Cc: Khalil, Ghada Subject: RE: Release Plan - stx.2.0 Hello guys! Just to let you know that we just finished our "additional testing". Objective: Verify the latest changes included in today's release in order to check that those changes are not going to affect other features. Testplan and results location: https://docs.google.com/spreadsheets/d/1qq4CuoWuKZcHbau84YKKcpckzFpzgci9anj1Ey18B3k/edit?usp=sharing Configuration: a) Iso http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190826T233000Z/outputs/iso/ b) Bare metal configuration according with patches and/or bugs related patches. Results Overview: Total Test Cases included according with patches: 25 Test Cases Passed: 18 All related bugs to each TC is updated as well Failed: 1 " Use ntpq refid to tell if peer controller reaches reliable time source" . Related bug: https://launchpad.net/bugs/1834071 Priority: Medium, Alarm Related. Deferred: 1 "Rework Ansible Docker Registry Structure" Reason: we need to modify our docker registry host in order to request user and password. Not executed: 5 Reasons: 2 not merged, 1 related to ipv6 (no infra ready), Ironic is tested separately , and one abandoned patch. Conclusions: According with these results, we don't consider that the patches included during last week can compromise or affect Stx 2.0 release. 
Even the failed TC has some progress showing a different result better than the original failure. BR Elio Martinez QA Engineer -----Original Message----- From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com] Sent: Tuesday, August 27, 2019 8:47 AM To: starlingx-discuss at lists.starlingx.io; Cabrales, Ada Cc: Khalil, Ghada Subject: Re: [Starlingx-discuss] Release Plan - stx.2.0 Quick update - the build went fine last night, so we can unleash the test team on sanity + additional testing. Ada - unleash! -----Original Message----- From: Khalil, Ghada Sent: Friday, August 23, 2019 3:31 PM To: starlingx-discuss at lists.starlingx.io Cc: Zvonar, Bill Subject: RE: Release Plan - stx.2.0 My apologies for the typo below... The second last bullet should read: - All stx.2.0 unresolved high priority bugs remain gating stx.2.0 and will need to be resolved in an upcoming maintenance release. -----Original Message----- From: Khalil, Ghada Sent: Friday, August 23, 2019 3:26 PM To: starlingx-discuss at lists.starlingx.io Cc: Zvonar, Bill Subject: Release Plan - stx.2.0 Hello all, The stx.2.0 final candidate build is scheduled on Monday August 26 at 11:30 PM UTC. We are currently waiting for one commit to merge: https://review.opendev.org/#/c/678167/ Please do not cherry-pick any other commits to the r/stx.2.0 branch until further notice. The logistics for next week are as follows: - Developer and core reviewers merge the remaining commit. - Final candidate build run on the r/stx.2.0 branch on Monday August 26 - The test team executes one day of testing on the final candidate build -- sanity + additional testing based on the discretion of the test teams - Ada will send a report by EOD Tuesday -- either with the go-ahead for the release OR identifying any RED sanity/blocking issues - If there is a RED sanity, the bug will need to be resolved and the load will need to be re-built. And sanity will need to be re-executed. - If all is good, Scott and Dean will follow the steps to tag the branch and post the build artifacts in the release directory on CENGN. Other things to note: - All stx.2.0 unresolved medium priority bugs have been moved to stx.3.0 as per community agreement - This means fixes are only required in master from this point forward. - PLs/TLs have the discretion to raise the priority of a medium bug and bring it back as a high priority in stx.2.0 - All stx.3.0 unresolved high priority bugs remain gating stx.2.0 and will need to be resolved in an upcoming maintenance release. - Developers are encouraged to continue working on those as a top priority and to post reviews on master - For more information on the stx.2.0 upcoming maintenance release, please read the minutes from the Release Meeting on 2019-08-22 If you have any questions/concerns, please reach out to Bill Zvonar as I am on vacation next week. And, finally, a big thank you for all community members who have contributed to StarlingX 2.0. Best Regards, Ghada _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Wed Aug 28 13:37:39 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 28 Aug 2019 13:37:39 +0000 Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack distro meeting, 8/28 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F36020C96@SHSMSX104.ccr.corp.intel.com> Agenda & Notes for 8/28 meeting: 1. 
stx.2.0 bug review (Cindy) stx.distro.others: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other+stx.2.0&field.tags_combinator=ALL 1840838: Shuai to work w/ Shuicheng on the log analysis. 1836638: Jim Somerville has identified an upstream patch, which is now under Gerrit review. stx.storage: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage+stx.2.0&field.tags_combinator=ALL 1830736: Ovidiu to help Martin on the puppet config 1839181: Shuicheng to repro on bare-metal and analyze the log 2. stx.3.0 top level storyboards for non-openstack-distro (Cindy/Saul) https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&tags=stx.3.0&project_group_id=86 2006192: not blocking, but contributions are welcome. 2006387: community review is welcome. 2005527: spec under review, not a blocking feature for 3.0. Tingjie & Martin are signed up. Pushing the stories for patch reduction to stx.4.0 as we will not commit the work until the CentOS 8.0 upgrade. 3. Opens (all) kernel minor upgrade to 3.10.0-957.27.2: according to Jim Somerville, kernel 3.10.0-957.21.3 already has the fix. Recommendation is not to upgrade this minor version; stx.3.0 needs to do a kernel minor upgrade so that the .27 content can be included. The curl upgrade patch is already uploaded for review: https://review.opendev.org/678980 -----Original Message----- From: Xie, Cindy Sent: Tuesday, August 27, 2019 9:18 PM To: 'starlingx-discuss at lists.starlingx.io' ; Wold, Saul ; 'Rowsell, Brent' Subject: RE: Weekly StarlingX non-OpenStack distro meeting Agenda for 8/28 meeting: 1. stx.2.0 bug review (Cindy) - stx.distro.others: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.distro.other+stx.2.0&field.tags_combinator=ALL - stx.storage: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.storage+stx.2.0&field.tags_combinator=ALL 2. stx.3.0 top level storyboards for non-openstack-distro (Cindy/Saul) - https://storyboard.openstack.org/#!/story/list?status=active&tags=stx.distro.other&tags=stx.3.0&project_group_id=86 3. Opens (all) -----Original Appointment----- From: Xie, Cindy Sent: Thursday, April 25, 2019 5:42 PM To: Xie, Cindy; 'starlingx-discuss at lists.starlingx.io'; Wold, Saul; 'Rowsell, Brent'; 'zhaos' Cc: Jones, Bruce E; 'Waines, Greg'; Cobbley, David A; Armstrong, Robert H; 'Badea, Daniel'; Hu, Wei W; 'Zhi Zhi2 Chang'; 'Seiler, Glenn'; Chen, Tingjie; 'Carlos Cebrian'; 'Chen, Jacky'; Gomez, Juan P; 'Peng Tan'; 'Eslimi, Dariush'; 'Komiyama, Takeo' Subject: Weekly StarlingX non-OpenStack distro meeting When: Wednesday, August 28, 2019 9:00 PM-10:00 PM (UTC+08:00) Beijing, Chongqing, Hong Kong, Urumqi. Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From Bill.Zvonar at windriver.com Wed Aug 28 15:28:27 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 28 Aug 2019 15:28:27 +0000 Subject: [Starlingx-discuss] Community Call (August 28, 2019) Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AF06DE@ALA-MBD.corp.ad.wrs.com> Notes from today's call... standing topics - reviews that need attention - none - sanity - any reds since last call?
- nope - calls for assistance on the mailing list - issues with keystone from Sai Chandu at Calsoftinc: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005802.html 2.0 release build/sanity - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005781.html - got through the build & sanity / additional test ok, on to Scott/Dean doing their thing re: tagging the branch - also, Scott's email about build timing: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005812.html - 2.0 announcement - the point about not being able to 'upgrade' from 2.0 to a 2.0 mtc release (per last week's release team meeting) - release notes - landing page - high level summary: https://docs.starlingx.io/releasenotes/index.html - per Mike, they'll update this to differentiate the 2.0 material from 1.0 - if anyone has any requests/updates, please use this review to request/comment: https://review.opendev.org/#/c/677805/ - sub-project specific release notes - linked from the landing page, e.g. https://docs.starlingx.io/releasenotes/stx-metal/index.html - per Dean, these will be organized correctly when we have the pbr stuff (tagging, some scripting) sorted out in each repo - contributor guides - for the main page: https://docs.starlingx.io/contributor/doc_contribute_guide.html - for the sub-project pages: https://docs.starlingx.io/contributor/release_note_contribute_guide.html - general consensus (silence?) on having the 2.0 release notes as a work in progress at the announcement date - but, for 3.0, we'll endeavour to have them updated as we go --- Bill to capture an AR on this - communication - Ildiko reiterated that the press releases, web site updates & presentation materials will go out on the 3rd 3.0 plans - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-August/005782.html - PLs need to assess the status of their 3.0 feature work - what's the plan for completion & feature test readiness - release plan: https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit?usp=sharing - milestone definitions: https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Milestones - 3.0 board in StoryBoard: https://storyboard.openstack.org/#!/board/186 -----Original Message----- From: Zvonar, Bill Sent: Wednesday, August 28, 2019 6:59 AM To: starlingx-discuss at lists.starlingx.io Subject: Community Call (August 28, 2019) Hi all - reminder of the Community Call later today. Topics on the agenda include... - 2.0 declaration imminent - 3.0 planning - Milestone 3 is next week Please feel free to add topics on the etherpad [0]. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_PDT_.2F_1400_UTC_-_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20190828T1400 From Tao.Liu at windriver.com Wed Aug 28 16:50:59 2019 From: Tao.Liu at windriver.com (Liu, Tao) Date: Wed, 28 Aug 2019 16:50:59 +0000 Subject: [Starlingx-discuss] Support single huge page size for openstack worker node Message-ID: <7242A3DC72E453498E3D783BBB134C3EA4EA7770@ALA-MBD.corp.ad.wrs.com> Hi All, The change that removes auto-provision of huge pages has been merged into the master branch. A new provisioning step is required prior to unlocking an AIO controller or a worker node, if vswitch_type is set to OVS-DPDK. Configure vSwitch memory per NUMA node: system host-memory-modify -f -1G <1G hugepages number> i.e. 
system host-memory-modify -f vswitch -1G 1 compute-0 0 If you plan to create VMs using the huge pages, you would need to configure VM huge pages as well. Please make any necessary documentation changes. Regards, Tao From: Liu, Tao Sent: Tuesday, August 20, 2019 8:39 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: Re: Pending: Support single huge page size for openstack worker node Hi All, The changes to support single huge page size have been merged into master. In this first update, the auto-provision of VM huge pages has been changed from 2M to 1G. A subsequent update, will remove the auto-provisioning of huge pages for both VM and vSwitch. If any test case is dependent on the default VM huge pages, a new step will be required to allocate huge pages for VM on a openstack worker node. Prior to unlocking a worker node , user provisioning of the vSwitch memory would be required, if vswitch_type is set to OVS-DPDK. Regards, Tao From: Liu, Tao Sent: Thursday, August 15, 2019 1:06 PM To: starlingx-discuss at lists.starlingx.io Subject: Pending: Support single huge page size for openstack worker node Hi All, Per story 2006295, we are in the process of supporting single huge page size for openstack worker node. This means we will enforce the provisioning of a single huge page size per worker, which aligns with the non-openstack worker behavior. The automated test cases that attempt to allocate both 2M and 1G huge pages on a worker node should be updated. The code changes are available here: https://review.opendev.org/#/c/676710/ Regards, Tao Liu, Member of Technical Staff, Engineering,, Wind River direct 613.963.1413 fax: 613.492.7870 skype: tao_at_home 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From vm.rod25 at gmail.com Wed Aug 28 19:05:52 2019 From: vm.rod25 at gmail.com (Victor Rodriguez) Date: Wed, 28 Aug 2019 15:05:52 -0400 Subject: [Starlingx-discuss] Support single huge page size for openstack worker node In-Reply-To: <7242A3DC72E453498E3D783BBB134C3EA4EA7770@ALA-MBD.corp.ad.wrs.com> References: <7242A3DC72E453498E3D783BBB134C3EA4EA7770@ALA-MBD.corp.ad.wrs.com> Message-ID: Hi Tao Thanks for sharing the information Cristopher, I was wondering if this change with huge pages of 1 GB affected the testing environment. Last time you, Erich and myself were debugging the lack of memory during sanity and was mainly because of huge pages reservations On Wed, Aug 28, 2019 at 12:51 PM Liu, Tao wrote: > > Hi All, > > > > The change that removes auto-provision of huge pages has been merged into the master branch. > > > > A new provisioning step is required prior to unlocking an AIO controller or a worker node, if vswitch_type is set to OVS-DPDK. > > > > Configure vSwitch memory per NUMA node: > > system host-memory-modify -f -1G <1G hugepages number> > > i.e. system host-memory-modify -f vswitch -1G 1 compute-0 0 > Thanks, this enables the users to choose accordingly to their needs and performance requirements > > > If you plan to create VMs using the huge pages, you would need to configure VM huge pages as well. > > > > Please make any necessary documentation changes. > I will encourage that important code changes like this came with the same patch to documentation Regards Victor R. 
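(For readers provisioning under the new behaviour: a minimal end-to-end sketch of the vSwitch huge page step described above, assuming an OVS-DPDK worker named compute-0 with two NUMA nodes. The host name and node count are illustrative, and the lock/unlock steps follow standard StarlingX host maintenance practice rather than anything specific to this change.)

# Lock the node before changing its memory configuration
system host-lock compute-0
# Allocate one 1G huge page per NUMA node for vSwitch memory
# (required before unlock when vswitch_type is set to OVS-DPDK)
system host-memory-modify -f vswitch -1G 1 compute-0 0
system host-memory-modify -f vswitch -1G 1 compute-0 1
# Unlock to apply the new configuration
system host-unlock compute-0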
> > > > > Regards, > > Tao > > > > From: Liu, Tao > Sent: Tuesday, August 20, 2019 8:39 PM > To: 'starlingx-discuss at lists.starlingx.io' > Subject: Re: Pending: Support single huge page size for openstack worker node > > > > Hi All, > > > > The changes to support single huge page size have been merged into master. > > > > In this first update, the auto-provision of VM huge pages has been changed from 2M to 1G. > > > > A subsequent update, will remove the auto-provisioning of huge pages for both VM and vSwitch. > > If any test case is dependent on the default VM huge pages, a new step will be required to allocate huge pages for VM on a openstack worker node. > > Prior to unlocking a worker node , user provisioning of the vSwitch memory would be required, if vswitch_type is set to OVS-DPDK. > > > > Regards, > > Tao > > > > From: Liu, Tao > Sent: Thursday, August 15, 2019 1:06 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Pending: Support single huge page size for openstack worker node > > > > Hi All, > > > > Per story 2006295, we are in the process of supporting single huge page size for openstack worker node. This means we will enforce the provisioning of a single huge page size > > per worker, which aligns with the non-openstack worker behavior. The automated test cases that attempt to allocate both 2M and 1G huge pages on a worker node should be updated. > > > > The code changes are available here: > > https://review.opendev.org/#/c/676710/ > > > > > > Regards, > > > > Tao Liu, Member of Technical Staff, Engineering,, Wind River > > direct 613.963.1413 fax: 613.492.7870 skype: tao_at_home > > 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cristopher.j.lemus.contreras at intel.com Wed Aug 28 19:27:38 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Wed, 28 Aug 2019 19:27:38 +0000 Subject: [Starlingx-discuss] Support single huge page size for openstack worker node In-Reply-To: References: <7242A3DC72E453498E3D783BBB134C3EA4EA7770@ALA-MBD.corp.ad.wrs.com> Message-ID: Hello, This change will break the automation that we have to do the setup. We'll adapt it, seems to be a quick change, tomorrow, with the build that contains this change, we can do the required testing (not sure, but most likely, we won't have sanity results tomorrow for BareMetal). The issue that we faced last time, was on virtual environments. We don't change the vswitch on virtual environments, I expect that this won't have an impact. Tao, could you please point us to the documentation to properly use " system host-memory-modify -f -1G <1G hugepages number> " ? We'd need to know which values are adequate for these two values: <1G hugepages number> I'm thinking that these are related to the actual amount of memory and cpus on each baremetal server. Are there best practices? Recommended values? Should we stick to <1G hugepages number>=1 and =0 no matter our hardware specs? Thanks in advance. Cristopher Lemus On 8/28/19, 2:06 PM, "Victor Rodriguez" wrote: Hi Tao Thanks for sharing the information Cristopher, I was wondering if this change with huge pages of 1 GB affected the testing environment. 
Last time you, Erich and myself were debugging the lack of memory during sanity and was mainly because of huge pages reservations On Wed, Aug 28, 2019 at 12:51 PM Liu, Tao wrote: > > Hi All, > > > > The change that removes auto-provision of huge pages has been merged into the master branch. > > > > A new provisioning step is required prior to unlocking an AIO controller or a worker node, if vswitch_type is set to OVS-DPDK. > > > > Configure vSwitch memory per NUMA node: > > system host-memory-modify -f -1G <1G hugepages number> > > i.e. system host-memory-modify -f vswitch -1G 1 compute-0 0 > Thanks, this enables the users to choose accordingly to their needs and performance requirements > > > If you plan to create VMs using the huge pages, you would need to configure VM huge pages as well. > > > > Please make any necessary documentation changes. > I will encourage that important code changes like this came with the same patch to documentation Regards Victor R. > > > > > Regards, > > Tao > > > > From: Liu, Tao > Sent: Tuesday, August 20, 2019 8:39 PM > To: 'starlingx-discuss at lists.starlingx.io' > Subject: Re: Pending: Support single huge page size for openstack worker node > > > > Hi All, > > > > The changes to support single huge page size have been merged into master. > > > > In this first update, the auto-provision of VM huge pages has been changed from 2M to 1G. > > > > A subsequent update, will remove the auto-provisioning of huge pages for both VM and vSwitch. > > If any test case is dependent on the default VM huge pages, a new step will be required to allocate huge pages for VM on a openstack worker node. > > Prior to unlocking a worker node , user provisioning of the vSwitch memory would be required, if vswitch_type is set to OVS-DPDK. > > > > Regards, > > Tao > > > > From: Liu, Tao > Sent: Thursday, August 15, 2019 1:06 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Pending: Support single huge page size for openstack worker node > > > > Hi All, > > > > Per story 2006295, we are in the process of supporting single huge page size for openstack worker node. This means we will enforce the provisioning of a single huge page size > > per worker, which aligns with the non-openstack worker behavior. The automated test cases that attempt to allocate both 2M and 1G huge pages on a worker node should be updated. > > > > The code changes are available here: > > https://review.opendev.org/#/c/676710/ > > > > > > Regards, > > > > Tao Liu, Member of Technical Staff, Engineering,, Wind River > > direct 613.963.1413 fax: 613.492.7870 skype: tao_at_home > > 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Tao.Liu at windriver.com Wed Aug 28 20:26:36 2019 From: Tao.Liu at windriver.com (Liu, Tao) Date: Wed, 28 Aug 2019 20:26:36 +0000 Subject: [Starlingx-discuss] Support single huge page size for openstack worker node In-Reply-To: References: <7242A3DC72E453498E3D783BBB134C3EA4EA7770@ALA-MBD.corp.ad.wrs.com> Message-ID: <7242A3DC72E453498E3D783BBB134C3EA4EA87F5@ALA-MBD.corp.ad.wrs.com> Hi Cristopher, OVS-DPDK is NOT supported on virtual environment, and VM huge pages were not auto-provisioned previously, so there is no impact for virtual environment. 
For Bare Metal, you would need to allocate 1 1G huge page per NUMA node for vSwitch memory, if vswitch_type is set to OVS-DPDK (assuming all Bare Metal servers support 1G nowadays). Use 'system host-memory-list <hostname>' to discover how many processors are supported on the host. Then allocate one 1G huge page per processor for vswitch, for example: system host-memory-modify -f vswitch -1G 1 compute-0 0 system host-memory-modify -f vswitch -1G 1 compute-0 1 For VM huge pages, we used to auto-provision the possible VM huge pages as 2M pages, i.e. VM possible = (node total memory - platform reserved) * 0.9 - vswitch. With the single huge page size support, you would need to allocate X number of 1G huge pages for VMs to satisfy the automated test cases (if the test cases launch VMs using the huge pages). This depends on how many VMs are launched during the test. I think 6 to 10 1G huge pages should be enough, and that is safe for small Bare Metal servers. For example: system host-memory-modify -1G 6 compute-0 0 system host-memory-modify -1G 6 compute-0 1 Regards, Tao -----Original Message----- From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Wednesday, August 28, 2019 3:28 PM To: Victor Rodriguez ; Liu, Tao Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Support single huge page size for openstack worker node Hello, This change will break the automation that we have to do the setup. We'll adapt it, seems to be a quick change, tomorrow, with the build that contains this change, we can do the required testing (not sure, but most likely, we won't have sanity results tomorrow for BareMetal). The issue that we faced last time, was on virtual environments. We don't change the vswitch on virtual environments, I expect that this won't have an impact. Tao, could you please point us to the documentation to properly use " system host-memory-modify -f -1G <1G hugepages number> " ? We'd need to know which values are adequate for these two values: <1G hugepages number> I'm thinking that these are related to the actual amount of memory and cpus on each baremetal server. Are there best practices? Recommended values? Should we stick to <1G hugepages number>=1 and =0 no matter our hardware specs? Thanks in advance. Cristopher Lemus On 8/28/19, 2:06 PM, "Victor Rodriguez" wrote: Hi Tao Thanks for sharing the information Cristopher, I was wondering if this change with huge pages of 1 GB affected the testing environment. Last time you, Erich and myself were debugging the lack of memory during sanity and was mainly because of huge pages reservations On Wed, Aug 28, 2019 at 12:51 PM Liu, Tao wrote: > > Hi All, > > > > The change that removes auto-provision of huge pages has been merged into the master branch. > > > > A new provisioning step is required prior to unlocking an AIO controller or a worker node, if vswitch_type is set to OVS-DPDK. > > > > Configure vSwitch memory per NUMA node: > > system host-memory-modify -f -1G <1G hugepages number> > > i.e. system host-memory-modify -f vswitch -1G 1 compute-0 0 > Thanks, this enables the users to choose accordingly to their needs and performance requirements > > > If you plan to create VMs using the huge pages, you would need to configure VM huge pages as well. > > > > Please make any necessary documentation changes. > I will encourage that important code changes like this came with the same patch to documentation Regards Victor R.
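(A worked sketch of the discover-then-allocate flow Tao describes, assuming a worker named compute-0 whose host-memory-list output shows two processors; the page counts follow the 6-per-node suggestion above and are a starting point, not a hard rule.)

# Discover how many processors (NUMA nodes) the host reports,
# and the current huge page allocations
system host-memory-list compute-0
# On the locked host, allocate 1G huge pages for VM use on each NUMA node
system host-memory-modify -1G 6 compute-0 0
system host-memory-modify -1G 6 compute-0 1
# VMs only consume these pages if their flavor requests huge pages;
# this is the standard upstream nova extra spec (flavor name is illustrative,
# and the spec is assumed unchanged in stx-openstack)
openstack flavor set m1.small --property hw:mem_page_size=1GB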
> > > > > Regards, > > Tao > > > > From: Liu, Tao > Sent: Tuesday, August 20, 2019 8:39 PM > To: 'starlingx-discuss at lists.starlingx.io' > Subject: Re: Pending: Support single huge page size for openstack worker node > > > > Hi All, > > > > The changes to support single huge page size have been merged into master. > > > > In this first update, the auto-provision of VM huge pages has been changed from 2M to 1G. > > > > A subsequent update, will remove the auto-provisioning of huge pages for both VM and vSwitch. > > If any test case is dependent on the default VM huge pages, a new step will be required to allocate huge pages for VM on a openstack worker node. > > Prior to unlocking a worker node , user provisioning of the vSwitch memory would be required, if vswitch_type is set to OVS-DPDK. > > > > Regards, > > Tao > > > > From: Liu, Tao > Sent: Thursday, August 15, 2019 1:06 PM > To: starlingx-discuss at lists.starlingx.io > Subject: Pending: Support single huge page size for openstack worker node > > > > Hi All, > > > > Per story 2006295, we are in the process of supporting single huge page size for openstack worker node. This means we will enforce the provisioning of a single huge page size > > per worker, which aligns with the non-openstack worker behavior. The automated test cases that attempt to allocate both 2M and 1G huge pages on a worker node should be updated. > > > > The code changes are available here: > > https://review.opendev.org/#/c/676710/ > > > > > > Regards, > > > > Tao Liu, Member of Technical Staff, Engineering,, Wind River > > direct 613.963.1413 fax: 613.492.7870 skype: tao_at_home > > 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ada.cabrales at intel.com Wed Aug 28 20:26:40 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Wed, 28 Aug 2019 20:26:40 +0000 Subject: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F3601DEA7@SHSMSX104.ccr.corp.intel.com> References: <76647BD697F40748B1FA4F56DA02AA0B4D60F245@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F3601DBEE@SHSMSX104.ccr.corp.intel.com> <76647BD697F40748B1FA4F56DA02AA0B4D60F56B@SHSMSX104.ccr.corp.intel.com> <2FD5DDB5A04D264C80D42CA35194914F3601DEA7@SHSMSX104.ccr.corp.intel.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CEB5B1F@FMSMSX114.amr.corp.intel.com> Hello Cindy Yes, Elio will be the one. Thank you Ada From: Xie, Cindy Sent: Monday, August 26, 2019 2:42 AM To: Le, Huifeng ; Rowsell, Brent ; Peters, Matt ; Guo, Ruijing ; Cabrales, Ada Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking + Ada for feature testing. It's so nice to know that basic TSN support shall be already in 2.0 (even in 1.0). This is one of the critical features to support Industrial use case. As TSN spec got approved as 3.0 feature, I guess we may have to run full feature testing according to your wiki pages before we can claim the full support. @Ada, do you have engineer assigned to work w/ network team on this important feature? Thx. 
- cindy From: Le, Huifeng Sent: Monday, August 26, 2019 2:22 PM To: Xie, Cindy >; Rowsell, Brent >; Peters, Matt >; Guo, Ruijing > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking Cindy, There are no patches pending for merging. The TSN PTP feature has been supported since STX 1.0, and the wiki verified how the other TSN features can be deployed in STX through PCI-Passthrough. (There may still be a conflict with the STX PTP feature: STX PTP requires the NIC to be available in the host, but because a NIC such as the Intel i210 does not support SR-IOV, once it is passed through into a VM to support a TSN application it is no longer available in the host, and the STX PTP feature will not work.) I have no concerns; the Core/TSC team can review and determine how to claim support if there are no further concerns about the process, thanks much! Best Regards, Le, Huifeng From: Xie, Cindy Sent: Monday, August 26, 2019 12:49 PM To: Le, Huifeng >; Rowsell, Brent >; Peters, Matt >; Guo, Ruijing > Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking Huifeng, It seems that TSN support is based on Nova PCI pass-through for the i210 network adaptor. Do you have any Nova patches or StarlingX patches pending merge (and needing cherry pick) to allow the procedures in the wiki page to be successful? If there are no patches pending, can we claim that the TSN feature is already supported in stx.2.0? Thx. - cindy From: Le, Huifeng [mailto:huifeng.le at intel.com] Sent: Sunday, August 25, 2019 2:39 PM To: Rowsell, Brent >; Peters, Matt >; Guo, Ruijing > Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Please help to review STX Wiki for Time Sensitive Networking Brent, Matt, Ruijing, As a follow-up on story: [Feature] Time Sensitive Networking (https://storyboard.openstack.org/#!/story/2005516) and the approved spec https://review.opendev.org/#/c/666768/, we have done the POC to deploy and run a TSN application on a STX environment; the detailed process and learnings are summarized at the Wiki: https://wiki.openstack.org/wiki/StarlingX/Networking/TSN which can serve as the deliverable of the task "StarlingX user guide on how to deploy TSN in VM". Could you please help to review the Wiki and let me know if you have any comments. Thanks much! Best Regards, Le, Huifeng -------------- next part -------------- An HTML attachment was scrubbed...
URL: From maria.g.perez.ibarra at intel.com Wed Aug 28 23:04:56 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Wed, 28 Aug 2019 23:04:56 +0000 Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190828 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-28(link) Status: Green =========================================== Sanity Test is executed in a Containers - Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs PASS ] =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS] Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From michael.l.tullis at intel.com Wed Aug 28 23:24:19 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Wed, 28 Aug 2019 23:24:19 +0000 Subject: [Starlingx-discuss] Release notes for R2 Message-ID: <3808363B39586544A6839C76CF81445EA1BB905F@ORSMSX104.amr.corp.intel.com> As we discussed in the community and docs meetings today, we need to deliver release notes for R2. Please contribute! For the release notes high-level summary (crossing all projects), we’ve created the R2 stub page in our ongoing review at: https://review.opendev.org/#/c/677805/ This will go live Monday end of day at Central US time to give it some time to merge for the Tuesday announcement. To add content to the R2 release notes stub page, add a new patch set (commit) to the existing review above. For context, you can preview the updated release notes landing page here from the Zuul build for patch set 15: https://c260d403358ae79c8281-b484a2b89d5d2c358f068133dfb2fa14.ssl.cf1.rackcdn.com/677805/15/check/openstack-tox-docs/80f74da/docs/releasenotes/index.html And the stub page for the R2 summary is here: https://c260d403358ae79c8281-b484a2b89d5d2c358f068133dfb2fa14.ssl.cf1.rackcdn.com/677805/15/check/openstack-tox-docs/80f74da/docs/releasenotes/r2_release.html That will take care of the summary notes. For contributing to the detailed, project-specific release notes, see: https://docs.starlingx.io/contributor/release_note_contribute_guide.html Thx. -- Docs team -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ran1.an at intel.com Thu Aug 29 01:43:56 2019 From: ran1.an at intel.com (An, Ran1) Date: Thu, 29 Aug 2019 01:43:56 +0000 Subject: [Starlingx-discuss] Unable to generate distroless docker image In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FC1578268@ALA-MBD.corp.ad.wrs.com> References: <9BAB5B7CAF57C3459E4636391F1071CE05321DD1@shsmsx102.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC1578268@ALA-MBD.corp.ad.wrs.com> Message-ID: <9BAB5B7CAF57C3459E4636391F1071CE0532222A@shsmsx102.ccr.corp.intel.com> Is the CENGN build script in our starlingX project? Or it require an admin to updated? Thanks Ran From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Wednesday, August 28, 2019 12:43 AM To: An, Ran1 ; starlingx-discuss at lists.starlingx.io Cc: Little, Scott Subject: RE: [Starlingx-discuss]Unable to generate distroless docker image That requires an update to the CENGN build scripts, as it would currently just be building --os centos. From: An, Ran1 [mailto:ran1.an at intel.com] Sent: Tuesday, August 27, 2019 11:48 AM To: starlingx-discuss at lists.starlingx.io Cc: Penney, Don; Little, Scott Subject: [Starlingx-discuss]Unable to generate distroless docker image Hi core reviewers: With patch[1] has been merged, a docker image "starlingx/intel-gpu-plugin docker" required by story/2005937 [2] should be generated and pushed to docker hub by Cengn automatically. However, there are no new repository "starlingx/intel-gpu-plugin docker" on docker hub after the latest docker images weekly built. I'm not sure if there is anything I missed or some other configurations are required. Could you help and check it? By the way, "starlingx/intel-gpu-plugin docker" is an image based on distro-less system, listed in file "starlingx/integ /distroless_stable_docker_images.inc". It can be built by following command. " WHEELS=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels/stx-centos-${BUILD_STREAM}-wheels.tar DOCKER_USER={my_docker_user_name} time $MY_REPO/build-tools/build-docker-images/build-stx-images.sh \ --os distroless \ --stream stable \ --base gcr.io/distroless/base \ --wheels ${WHEELS} \ --user ${DOCKER_USER} \ --push --latest \ " [1] https://review.opendev.org/#/c/668808 [2] https://storyboard.openstack.org/#!/story/2005937 Thanks Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Thu Aug 29 11:59:30 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Thu, 29 Aug 2019 11:59:30 +0000 Subject: [Starlingx-discuss] Release Plan - stx.3.0 In-Reply-To: <151EE31B9FCCA54397A757BC674650F0C1599F8F@ALA-MBD.corp.ad.wrs.com> References: <151EE31B9FCCA54397A757BC674650F0C1599F8F@ALA-MBD.corp.ad.wrs.com> Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AF0D68@ALA-MBD.corp.ad.wrs.com> Hello stx.3.0 feature primes... As discussed in the Community Call yesterday, we're looking for your forecast for dev/test completion. Per below, the list of features targeted for stx.3.0 is available at: https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020. We've also started a board on StoryBoard to track the plans for the features by top-level story: https://storyboard.openstack.org/#!/board/186. Please provide an update for your feature(s) as to when you plan to complete your development & testing, and can hand off to the test team for feature testing. Thanks, Bill... 
-----Original Message----- From: Khalil, Ghada Sent: Friday, August 23, 2019 4:24 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Release Plan - stx.3.0 Hello all, The stx.3.0 milestone-2 is planned for the week of Sept 3. This is a call to the various primes (dev, test, doc) to help close the milestone criteria. For a list of features targeted for stx.3.0 is available at: https://docs.google.com/spreadsheets/d/1ZFR-9-riwhIwiBYBmWmi1qOVyHtgJ8Xk_FaF2jp5rY0/edit#gid=1010323020 The criteria for the milestone are as follows: - Spec freeze - Specs are in good shape. - Performance Framework spec is posted. R2 >> R3 spec will continue to be an exception. - Feature plans defined and feature development well underway - To date, we do not have concrete plans from the feature PLs on risk, status, and expected code merge dates - PLs, please update your plans on the spreadsheet above. Please also create the corresponding stories in StoryBoard and tag them with the stx.3.0 label - Release test plan defined - including test automation deliverables - Test team is raising concerns about the scope of features for stx.3.0 and the release timeline. There are only 5wks left for feature testing and regression is supposed to start on Sept 23. - Given the tight timeline, we need PLs to provide a risk indicator for landing their feature content. Ideally, a rough plan would help the test team plan accordingly. If a feature is too big to land in stx.3.0, knowing this now helps the test team focus their efforts on the features that will make it. If a feature is delivering in chunks, please engage the test team so that they can test as content is delivered. - Documentation plan defined - Need confirmation from the doc team on what they have planned Regards, Ghada _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From abraham.arce.moreno at intel.com Thu Aug 29 14:26:07 2019 From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham) Date: Thu, 29 Aug 2019 14:26:07 +0000 Subject: [Starlingx-discuss] Unable to generate distroless docker image In-Reply-To: <9BAB5B7CAF57C3459E4636391F1071CE0532222A@shsmsx102.ccr.corp.intel.com> References: <9BAB5B7CAF57C3459E4636391F1071CE05321DD1@shsmsx102.ccr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FC1578268@ALA-MBD.corp.ad.wrs.com> <9BAB5B7CAF57C3459E4636391F1071CE0532222A@shsmsx102.ccr.corp.intel.com> Message-ID: Hi Ran, We will bring your request in today's build meeting. Best Regards Abraham From: An, Ran1 [mailto:ran1.an at intel.com] Sent: Wednesday, August 28, 2019 8:44 PM To: 'Penney, Don' ; starlingx-discuss at lists.starlingx.io Cc: Little, Scott Subject: Re: [Starlingx-discuss] Unable to generate distroless docker image Is the CENGN build script in our starlingX project? Or it require an admin to updated? Thanks Ran From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Wednesday, August 28, 2019 12:43 AM To: An, Ran1 >; starlingx-discuss at lists.starlingx.io Cc: Little, Scott > Subject: RE: [Starlingx-discuss]Unable to generate distroless docker image That requires an update to the CENGN build scripts, as it would currently just be building --os centos. 
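(Until the CENGN scripts gain a distroless pass, the image can be built and pushed from a local workspace with the command quoted earlier in this thread; a condensed sketch follows. The docker user is a placeholder, and the resulting tag name is an assumption based on the usual build-stx-images.sh <branch>-<os>-<stream> tagging convention.)

# Build and push the distroless stable images from a local build environment
WHEELS=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels/stx-centos-stable-wheels.tar
$MY_REPO/build-tools/build-docker-images/build-stx-images.sh \
    --os distroless \
    --stream stable \
    --base gcr.io/distroless/base \
    --wheels ${WHEELS} \
    --user my_docker_user \
    --push --latest
# Sanity-check that the image was published (tag name is an assumption)
docker pull my_docker_user/intel-gpu-plugin:master-distroless-stable-latest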
From: An, Ran1 [mailto:ran1.an at intel.com] Sent: Tuesday, August 27, 2019 11:48 AM To: starlingx-discuss at lists.starlingx.io Cc: Penney, Don; Little, Scott Subject: [Starlingx-discuss]Unable to generate distroless docker image Hi core reviewers: With patch[1] has been merged, a docker image "starlingx/intel-gpu-plugin docker" required by story/2005937 [2] should be generated and pushed to docker hub by Cengn automatically. However, there are no new repository "starlingx/intel-gpu-plugin docker" on docker hub after the latest docker images weekly built. I'm not sure if there is anything I missed or some other configurations are required. Could you help and check it? By the way, "starlingx/intel-gpu-plugin docker" is an image based on distro-less system, listed in file "starlingx/integ /distroless_stable_docker_images.inc". It can be built by following command. " WHEELS=http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels/stx-centos-${BUILD_STREAM}-wheels.tar DOCKER_USER={my_docker_user_name} time $MY_REPO/build-tools/build-docker-images/build-stx-images.sh \ --os distroless \ --stream stable \ --base gcr.io/distroless/base \ --wheels ${WHEELS} \ --user ${DOCKER_USER} \ --push --latest \ " [1] https://review.opendev.org/#/c/668808 [2] https://storyboard.openstack.org/#!/story/2005937 Thanks Ran -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Thu Aug 29 20:08:49 2019 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Thu, 29 Aug 2019 20:08:49 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - August 29/2019 Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC007AF15D6@ALA-MBD.corp.ad.wrs.com> Notes from today's release team meeting below and at [0] - we're finalizing the release branch work for 2.0, and are otherwise good to go; for 3.0, we will be chasing for feature completion plans. Bill. Release Team Meeting - August 28 2019 . stx.2.0 o branch logistics . Scott has delivered release/2.0.0 on CENGN & tidied the directory and updated the latest_release symlink: http://mirror.starlingx.cengn.ca/mirror/starlingx/release/ . tagging to follow shortly  o documentation . release notes landing page is ready and we can start making contents by updating: https://review.opendev.org/#/c/677805/ . Bill to ask Ildiko about adding the overview deck to the release notes (per https://github.com/StarlingXWeb/starlingx-website/issues/39) o testing . regression testing has finished; pass rate is 95.2% . feature testing: the ironic test missing has been completed and passed; we can close feature testing, too . 
stx.3.0 o dashboard: https://storyboard.openstack.org/#!/board/186 - looking for forecast dates from feature owners  o testing: started to engage with feature owners in order to scope the testing & will be paying attention to their updates on the stx.3.0 dashboard o documentation: the 2 top-level stories that are on the board now cover the Dist Cloud documentation and the other miscellaneous config topics  [0] https://etherpad.openstack.org/p/stx-releases From maria.g.perez.ibarra at intel.com Thu Aug 29 22:17:28 2019 From: maria.g.perez.ibarra at intel.com (Perez Ibarra, Maria G) Date: Thu, 29 Aug 2019 22:17:28 +0000 Subject: [Starlingx-discuss] [MASTER] Sanity Test - ISO 20190829 Message-ID: Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-AUG-29(link) Status: Green =========================================== Sanity Test is executed in a Containers - Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs PASS ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs PASS ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS] Standard - External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs PASS ] Regards Maria G. -------------- next part -------------- An HTML attachment was scrubbed... URL: From chenjie.xu at intel.com Fri Aug 30 02:48:46 2019 From: chenjie.xu at intel.com (Xu, Chenjie) Date: Fri, 30 Aug 2019 02:48:46 +0000 Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs In-Reply-To: <8CB39FEB-D0AC-46B4-97B8-60CEA4E95E24@windriver.com> References: <35423D55-043D-4F11-A2D7-86B8D4E4CCDB@windriver.com> <2D45BE88-FE4A-4B96-9934-48E24B0AAB7C@windriver.com> <8CB39FEB-D0AC-46B4-97B8-60CEA4E95E24@windriver.com> Message-ID: Hi Matt, I have finished my testing with I210 NIC and the result shows that this way should be only used for SR-IOV physical function. And the blueprint "SR-IOV physical functions assignment with Neutron port" also indicates the same thing. https://blueprints.launchpad.net/nova/+spec/sriov-pf-passthrough-neutron-port https://bugs.launchpad.net/starlingx/+bug/1836682 For this limitation, which way do you suggest? 1. Record as a known limitation and document how to pass through NICs which don't support SR-IOV. Like below: users need to override helm with PCI alias like following: cat > nova-overrides.yaml <; Webster, Steven ; Kopec, Gerald (Gerry) Cc: Khalil, Ghada ; Zhao, Forrest ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs Hi Chenjie, The latest openstack release should support using the port vnic_type for type-PCI devices. The only device I know of that doesn’t support PF/VF is the i210, which is what I think was used in reporting the bug. I think your approach of having them retest with this method is the correct thing to do given you don’t have access to the hardware. 
-Matt From: "Xu, Chenjie" > Date: Tuesday, August 6, 2019 at 8:17 PM To: "Peters, Matt" >, "Webster, Steven" >, "Kopec, Gerald (Gerry)" > Cc: Ghada Khalil >, "Zhao, Forrest" >, "starlingx-discuss at lists.starlingx.io" > Subject: RE: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs Hi Matt, Yes, I mean the port that does not report itself as a PF. Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Wednesday, August 7, 2019 12:55 AM To: Xu, Chenjie >; Webster, Steven >; Kopec, Gerald (Gerry) > Cc: Khalil, Ghada >; Zhao, Forrest >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs Hello Chenjie, We typically configure the PCI-PT devices using the vnic-type option for manually created ports. This replaces the older mechanism of being able to specify the vif-type (which was a StarlingX specific extension that was dropped). For your question about the NIC type that does not support SR-IOV, do you mean a port that does not report itself as a PF (from a libvirt/nova perspective that would be device with Type-PCI vs Type-PF)? -Matt From: "Xu, Chenjie" > Date: Tuesday, August 6, 2019 at 11:14 AM To: "Peters, Matt" >, "Webster, Steven" >, "Kopec, Gerald (Gerry)" > Cc: Ghada Khalil >, "Zhao, Forrest" >, "starlingx-discuss at lists.starlingx.io" > Subject: RE: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs Hi Matt, I find another way to pass through SR-IOV capable physical NIC to VM. This new way doesn't require to configure "PCI alias". The key point is to create a port whose vnic_type is direct-physical. The following link can be referenced: https://access.redhat.com/documentation/en-us/red_hat_openstack_platform/10/html/networking_guide/sr-iov-support-for-virtual-networking However if we try to pass through a physical NIC which doesn't support SR-IOV, we may still need to configure "PCI alias". Because I don't have a physical NIC which doesn't support SR-IOV on my server, I can't test passing such NIC to VM by creating port whose vnic_type is direct-physical. Do you think StarlingX needs to configure “PCI alias” automatically for physical NIC which doesn’t support SR-IOV or not? Best Regards, Xu, Chenjie From: Peters, Matt [mailto:Matt.Peters at windriver.com] Sent: Thursday, August 1, 2019 6:34 PM To: Xu, Chenjie >; Webster, Steven >; Kopec, Gerald (Gerry) > Cc: Khalil, Ghada >; Zhao, Forrest >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs +Steve +Gerry Do you have any additional information to add here? I don’t believe we had to setup an alias in the past to do PCI-PT, so is this something that is new to the latest OpenStack nova release? Did we drop some functionality to align with upstream nova (that use to be in starlingx-staging)? -Matt From: "Xu, Chenjie" > Date: Thursday, August 1, 2019 at 3:36 AM To: "Peters, Matt" > Cc: Ghada Khalil >, "Zhao, Forrest" >, "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs Hi Matt, Based on my testing, the flavor with property “pci_passthrough:alias” should be required for passing a physical NIC to the VM. But it should not be required for passing a VF to the VM. 
From Matt.Peters at windriver.com  Fri Aug 30 11:40:11 2019
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Fri, 30 Aug 2019 11:40:11 +0000
Subject: [Starlingx-discuss] [Bug] flavor "pci_passthrough:alias" should be required to passthrough physical NICs
In-Reply-To: 
References: <35423D55-043D-4F11-A2D7-86B8D4E4CCDB@windriver.com> <2D45BE88-FE4A-4B96-9934-48E24B0AAB7C@windriver.com> <8CB39FEB-D0AC-46B4-97B8-60CEA4E95E24@windriver.com>
Message-ID: <85FFB864-A407-4590-917E-10CBD4735D10@windriver.com>

Hi Chenjie,
What were the issues you encountered that prevented it from working with the port vnic_type?

Thanks,
Matt

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
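For reference, the port-based flow Matt is asking about looks like this from the CLI; the network, image, flavor, and instance names are placeholders (assuming a provider network bound to the passthrough data network), not commands taken from the thread:

    # 'pf-net' is an assumed provider network on the passthrough datanetwork.
    openstack port create --network pf-net --vnic-type direct-physical pf-port0
    openstack server create --image cirros --flavor m1.small \
      --nic port-id=pf-port0 vm-with-pf

Nova then schedules the instance to a host with a free physical function and attaches the whole device, which is why no PCI alias is needed in this path.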
From build.starlingx at gmail.com  Fri Aug 30 17:10:49 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Fri, 30 Aug 2019 13:10:49 -0400 (EDT)
Subject: [Starlingx-discuss] [build-report] STX_publish_docker_images - Build # 103 - Failure!
Message-ID: <422177191.192.1567185050822.JavaMail.javamailuser@localhost>

Project: STX_publish_docker_images
Build #: 103
Status: Failure
Timestamp: 20190830T171043Z
Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190826T233000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190826T233000Z
OS: centos
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190826T233000Z/logs
BUILD_STREAM: stable
TIMESTAMP: 20190826T233000Z
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190826T233000Z/logs
PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190826T233000Z/outputs
MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/rc/2.0/centos

From build.starlingx at gmail.com  Fri Aug 30 17:10:53 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Fri, 30 Aug 2019 13:10:53 -0400 (EDT)
Subject: [Starlingx-discuss] [stable] [build-report] STX_build_docker_images - Build # 123 - Failure!
Message-ID: <1186007546.195.1567185053974.JavaMail.javamailuser@localhost>

Project: STX_build_docker_images
Build #: 123
Status: Failure
Timestamp: 20190830T150530Z
Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190826T233000Z/logs
--------------------------------------------------------------------------------
Parameters

BRANCH: r/stx.2.0
MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-2.0/20190826T233000Z
OS: centos
MUNGED_BRANCH: rc-2.0
MY_REPO: /localdisk/designer/jenkins/rc-2.0/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/2.0/centos/20190826T233000Z/logs
MASTER_BUILD_NUMBER: 40
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190826T233000Z/logs
MASTER_JOB_NAME: STX_BUILD_2.0
MY_REPO_ROOT: /localdisk/designer/jenkins/rc-2.0
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/rc/2.0/centos
PUBLISH_TIMESTAMP: 20190826T233000Z
DOCKER_BUILD_ID: jenkins-rc-2.0-20190826T233000Z-builder
TIMESTAMP: 20190826T233000Z
OS_VERSION: 7.5.1804
BUILD_STREAM: stable
PUBLISH_INPUTS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190826T233000Z/inputs
PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/rc/2.0/centos/20190826T233000Z/outputs

From Frank.Miller at windriver.com  Fri Aug 30 17:56:26 2019
From: Frank.Miller at windriver.com (Miller, Frank)
Date: Fri, 30 Aug 2019 17:56:26 +0000
Subject: [Starlingx-discuss] Containerization Meeting cancelled for Sep 2
Message-ID: 

FYI - The weekly meeting for containerization will not be held on Monday Sept 2nd as this is a holiday in Canada and the US. Our next meeting will be on Sept 9th.

Frank

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
From scott.little at windriver.com  Fri Aug 30 20:09:45 2019
From: scott.little at windriver.com (Scott Little)
Date: Fri, 30 Aug 2019 16:09:45 -0400
Subject: [Starlingx-discuss] repo restructuring
Message-ID: <8b8d87c3-495d-0267-14b8-dfee39b8955f@windriver.com>

The layered build feature is getting ready for its initial required changes [1] [2].

The first phase is a restructuring of the StarlingX git repos to enable layered builds in the next phase. In light of new package additions in the last few weeks, there have been a few modifications and additions to the spreadsheet [3] documenting all the intended moves; edits are in blue text. The intent is that all package relocations will be history preserving.

We plan to implement the git restructuring during the week of September 3-6.

My initial ask of the StarlingX community is that we *temporarily freeze the addition of any new packages* while we make a final test run. This means that any update that touches a centos_pkgs_dir file should not receive a WF+1 until the relocation is complete. After the relocation, you may need to re-issue your code review.

Thanks for your co-operation.

Scott Little

[1] https://review.opendev.org/#/c/672288/
[2] https://storyboard.openstack.org/#!/story/2006166
[3] https://docs.google.com/spreadsheets/d/1zURL1UlDST8lnvw3dMlNWN6pkLX6EVF6TDBwNR9TQik/edit#gid=1697053891

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
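As background on how history-preserving moves like this are typically done (a generic sketch, not Scott's actual tooling; the repo and package names below are placeholders, not entries from the spreadsheet), each moving path is filtered out of its source repo with its history intact and merged into the destination repo:

    # Illustrative only -- names are placeholders.
    git clone https://opendev.org/starlingx/source-repo.git filtered
    cd filtered
    # Keep only the packages that are moving; their commit history survives.
    git filter-repo --path some-package/ --path another-package/
    cd ../destination-repo
    git remote add filtered ../filtered
    git fetch filtered
    git merge --allow-unrelated-histories filtered/master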
From cristopher.j.lemus.contreras at intel.com  Fri Aug 30 20:59:32 2019
From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J)
Date: Fri, 30 Aug 2019 20:59:32 +0000
Subject: [Starlingx-discuss] Sanity Test - ISO 20190830
Message-ID: <169E61F6-8CAD-4810-9D88-B9794E12B873@intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-August-30 (link)
Status: GREEN

===========================================

Sanity Test is executed in a Containers – Bare Metal Environment

AIO – Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO – Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================

Sanity Test is executed in a Containers – Virtual Environment

AIO – Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO – Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================

Best Regards,
Cristopher Lemus

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ildiko.vancsa at gmail.com  Fri Aug 30 23:56:28 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Fri, 30 Aug 2019 18:56:28 -0500
Subject: [Starlingx-discuss] Community Marketing Planning call prepping for Release 2.0 tomorrow - Notes
In-Reply-To: <9D1818B9-94B8-4900-AD4B-4D830556C44A@gmail.com>
References: <9D1818B9-94B8-4900-AD4B-4D830556C44A@gmail.com>
Message-ID: <6599A7A5-0C96-4718-95A7-473ADA999B0D@gmail.com>

Hi StarlingX Community,

I’m reaching out to give an update on the release 2.0 communication timeline.

Due to a pending item in the press release we moved the announcement date to September 4. This also gives us a day to make sure the website updates and the blog post about the release are all thoroughly reviewed and all the relevant pieces are up to date.

We are working on updating the overview slide deck. We are planning to review and work on the proposed version during the community marketing planning call, to polish some of the content and diagrams before we publish it to the website. The call is on __Wednesday (September 4) at 8am PST / 1500 UTC__. Please join if you’re interested in working on the slide deck and diagrams.

Please let me know if you have any questions.

Thanks,
Ildikó

> On 2019. Aug 21., at 11:56, Ildiko Vancsa wrote:
>
> Hi StarlingX Community,
>
> During our call today we mainly talked about the starlingx.io website updates and the timeline for the communications.
>
> As the release is planned for August 30, which is the Friday of a long weekend (Monday is a holiday in the US), __the press release and website updates will go live on September 3rd__.
>
> We are finalizing the website updates on this pull request: https://github.com/StarlingXWeb/starlingx-website/pull/42
>
> There will be further pull requests for blog posts to highlight new features and functionality in the 2.0 release.
>
> You can also see the new overview slide deck proposal here, which is still in progress: https://github.com/StarlingXWeb/starlingx-website/issues/39
>
> Please reply to this mail thread or leave notes on the GitHub items if you have any questions or comments on any of the above.
>
> Thanks and Best Regards,
> Ildikó
>
>
>> On 2019. Aug 20., at 16:40, Ildiko Vancsa wrote:
>>
>> Hi StarlingX Community,
>>
>> We have our next Community Marketing Planning call tomorrow to finalize preparations for communications around the upcoming 2.0 release of StarlingX.
>>
>> The call will be at a slightly different time, 9am Pacific Time / 1600 UTC, tomorrow.
>>
>> You can find dial-in information and the meeting agenda here: https://etherpad.openstack.org/p/2019_StarlingX_Marketing_Plans
>>
>> Please feel free to add items to the agenda for tomorrow. Add your name next to your item so we know who to give the floor to.
>>
>> Please let me know if you have questions.
>>
>> Thanks and Best Regards,
>> Ildikó
>>
>

From saul.wold at intel.com  Tue Aug 27 00:38:40 2019
From: saul.wold at intel.com (Saul Wold)
Date: Tue, 27 Aug 2019 00:38:40 -0000
Subject: [Starlingx-discuss] systemd/sysvinit configuration scripts in config
Message-ID: <9912d4e7-cedb-c238-5967-9d19c7548a1f@intel.com>

Tee,

I know you have done a lot of the work on the ansible playbook. Is there work being planned for converting the various configuration scripts in the config repo, such as storageconfig and workconfig?

We have been reviewing the various systemd services that call other scripts rather than calling the actual service daemon directly, and found that many of the config-related scripts are just that: extended configuration.

Thanks
Sau!
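To make the distinction Saul describes concrete, here is a hypothetical pair of unit fragments (unit, script, and daemon names are invented for illustration, not taken from the config repo). The first is the wrapper pattern in question, where ExecStart runs a one-shot configuration script rather than a daemon; the second starts the service binary directly so systemd supervises the real process:

    # Wrapper pattern: a oneshot unit whose ExecStart is really just
    # extended configuration, not a long-lived daemon.
    [Unit]
    Description=Example storage configuration script
    After=network.target

    [Service]
    Type=oneshot
    RemainAfterExit=yes
    ExecStart=/etc/init.d/examplestorageconfig start

    # Daemon pattern: systemd starts and monitors the service itself.
    [Service]
    Type=simple
    ExecStart=/usr/bin/exampled --config /etc/example/example.conf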
From pavan.gupta at calsoftinc.com  Tue Aug 27 10:18:05 2019
From: pavan.gupta at calsoftinc.com (Pavan Gupta)
Date: Tue, 27 Aug 2019 10:18:05 -0000
Subject: [Starlingx-discuss] Keystone access rule error
Message-ID: <056d01d55cc0$b3073d50$1915b7f0$@calsoftinc.com>

Hi,

We are seeing the following error on running the bootstrap.yml file for the green build:

keystone:log 2019-08-15 14:16:09.805 1402296 WARNING keystone.access_rules_config.backends.json [-] No config file found for access rules, application credential access rules will be unavailable.: IOError: [Errno 2] No such file or directory: '/etc/keystone/access_rules.json'

I am wondering whether to use the latest build (27th August), which may have a fix for the above issue. If anyone is aware of this issue, please let us know.

Pavan

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
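For context, the keystone message above is logged as a WARNING by the Stein-era JSON access-rules backend rather than as a hard failure. One unverified workaround (an assumption, not a confirmed StarlingX fix) is to give the backend an empty rules document at the path named in the log:

    # Assumption: an empty JSON document satisfies the access-rules backend;
    # the path comes straight from the warning text.
    sudo tee /etc/keystone/access_rules.json <<'EOF'
    {}
    EOF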