From build.starlingx at gmail.com Fri Feb 1 06:00:37 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Fri, 1 Feb 2019 01:00:37 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 149 - Failure!
Message-ID: <1220202081.354.1549000838276.JavaMail.javamailuser@localhost>

Project: STX_DL_container_setup
Build #: 149
Status: Failure
Timestamp: 20190201T060034Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190201T060000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190201T060000Z
DOCKER_DL_ID: jenkins-master-20190201T060000Z-downloader
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190201T060000Z/logs
DOCKER_DL_TAG: master-20190201T060000Z-downloader-image
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190201T060000Z/logs
MY_REPO_ROOT: /localdisk/designer/jenkins/master

From build.starlingx at gmail.com Fri Feb 1 06:00:40 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Fri, 1 Feb 2019 01:00:40 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 124 - Failure!
Message-ID: <729108415.357.1549000841725.JavaMail.javamailuser@localhost>

Project: STX_build_master_pike
Build #: 124
Status: Failure
Timestamp: 20190201T060000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190201T060000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS: false

From mingyuan.qi at intel.com Fri Feb 1 07:10:50 2019
From: mingyuan.qi at intel.com (Qi, Mingyuan)
Date: Fri, 1 Feb 2019 07:10:50 +0000
Subject: [Starlingx-discuss] [Containers] Deployment status
In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A936853@fmsmsx101.amr.corp.intel.com>
References: <0A5D9A624DF90343892F8F3FE7DE525A2A936603@fmsmsx101.amr.corp.intel.com> <0A5D9A624DF90343892F8F3FE7DE525A2A936853@fmsmsx101.amr.corp.intel.com>
Message-ID: 

Jose,

Looking into your log, it seems that coredns is not ready. Here is the workaround for it:
1. kubectl -n kube-system edit configmap coredns
2. remove "loop" line and save
3. kubectl -n kube-system delete rs {your-coredns-replicaset-name}

Bart & Al,

This seems to be a real issue for users behind a proxy: I suspect coredns is not able to reach the external nameserver and falls into a loop. Could coredns be updated with the user-specified nameservers?

Mingyuan
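For reference, the three workaround steps above map to concrete commands along the following lines. The Corefile layout and the k8s-app=kube-dns label are assumptions based on a stock kubeadm CoreDNS deployment, so adjust them if StarlingX deviates:

    # 1) Edit the Corefile and delete the line that reads just "loop":
    kubectl -n kube-system edit configmap coredns
    #    .:53 {
    #        errors
    #        health
    #        loop        <-- remove this line, then save
    #        ...
    #    }

    # 2) Delete the current ReplicaSet; its Deployment recreates the pods,
    #    which then start with the edited ConfigMap:
    kubectl -n kube-system get rs -l k8s-app=kube-dns
    kubectl -n kube-system delete rs <your-coredns-replicaset-name>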
-----Original Message-----
From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
Sent: Friday, February 1, 2019 1:31
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Containers] Deployment status

Hi,

I did some tests with the most recent ISO (01/31) that has already the changes to add a proxy on config_controller, still facing the below mentioned issues, I created a Launchpad [1] to track this.

https://bugs.launchpad.net/starlingx/+bug/1814142

Regards,
José

> -----Original Message-----
> From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
> Sent: Wednesday, January 30, 2019 4:31 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] [Containers] Deployment status
>
> Hi All
>
> ** Virtual Environment **
>
> I'm working on setting up a StarlingX deployment with containers as
> described on [1] using proxies but I have faces some issues described below:
>
> - I have successfully completed all the steps until "system
> application-apply stx-openstack" when the system becomes unstable, the
> application-apply gets freezed in random pods until timeout of 30 min
> is reached, I discovered that killing armada process and start again the "system application-apply"
> the application advance until again is freezed in a random pod.
> After some cycles the application is successfully completed.
>
> Here is a log [2] of one point when the process is freeze, the error
> is always the same, the only element that changes is the pod and its
> random, not always is the same pod.
>
> =====
> 019-01-30 17:24:13.364 11824 ERROR sysinv.conductor.kube_app [-]
> Received a false positive response from Docker/Armada. Failed to apply
> application manifest /manifests/stx-openstack-manifest-no-tests.yaml:
> 2019-
> 01-30 16:54:07.367 20554 DEBUG armada.handlers.document [-] Resolving
> reference /manifests/stx-openstack-manifest-no-tests.yaml.
> resolve_reference /usr/local/lib/python3.5/site-
> packages/armada/handlers/document.py:49
> =====
>
> -After is completed correctly I'm receiving failures on "Verify the
> cluster endpoints" when running `openstack endpoint list`, I updated
> the .yml file with the correct password but connection is not
> established correctly. Here is the output [3].
>
> ======
> controller-0:~$ openstack endpoint list
> Failed to discover available identity versions when contacting http://keystone.openstack.svc.cluster.local/v3.
> Attempting to parse version from URL.
> Unable to establish connection to
> http://keystone.openstack.svc.cluster.local/v3/auth/tokens:
> HTTPConnectionPool(host='keystone.openstack.svc.cluster.local', port=80):
> Max retries exceeded with url: /v3/auth/tokens (Caused by
> NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe65ffb90d0>: Failed to establish a new connection:
> [Errno -3] Temporary failure in name resolution',))
> ======
>
> 1. https://wiki.openstack.org/wiki/StarlingX/Containers/Installation
> 2. http://paste.openstack.org/show/744277
> 3. http://paste.openstack.org/show/744281
>
> Regards,
> José
>
>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
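The name-resolution failure in the endpoint output above is consistent with the coredns problem discussed in this thread: if coredns is crash-looping, nothing under *.svc.cluster.local resolves. Two quick checks, again assuming the stock kubeadm label:

    # coredns pods should be Running and Ready, not CrashLoopBackOff
    kubectl -n kube-system get pods -l k8s-app=kube-dns

    # when the "loop" plugin trips, the pod log states the detected loop explicitly
    kubectl -n kube-system logs -l k8s-app=kube-dns

    # from a host configured to use the cluster DNS, service names should then resolve
    nslookup keystone.openstack.svc.cluster.local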
From build.starlingx at gmail.com Fri Feb 1 09:00:36 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Fri, 1 Feb 2019 04:00:36 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 150 - Still Failing!
In-Reply-To: <1630216181.352.1549000834781.JavaMail.javamailuser@localhost>
References: <1630216181.352.1549000834781.JavaMail.javamailuser@localhost>
Message-ID: <1680742667.360.1549011637136.JavaMail.javamailuser@localhost>

Project: STX_DL_container_setup
Build #: 150
Status: Still Failing
Timestamp: 20190201T090033Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190201T090000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190201T090000Z
DOCKER_DL_ID: jenkins-f-stein-20190201T090000Z-downloader
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190201T090000Z/logs
DOCKER_DL_TAG: f-stein-20190201T090000Z-downloader-image
PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/stein/centos/20190201T090000Z/logs
MY_REPO_ROOT: /localdisk/designer/jenkins/f-stein

From build.starlingx at gmail.com Fri Feb 1 09:00:40 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Fri, 1 Feb 2019 04:00:40 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 38 - Failure!
Message-ID: <1617447486.363.1549011641029.JavaMail.javamailuser@localhost>

Project: STX_build_stein_master
Build #: 38
Status: Failure
Timestamp: 20190201T090000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190201T090000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS: false

From Barton.Wensley at windriver.com Fri Feb 1 13:31:53 2019
From: Barton.Wensley at windriver.com (Wensley, Barton)
Date: Fri, 1 Feb 2019 13:31:53 +0000
Subject: [Starlingx-discuss] [Containers] Deployment status
In-Reply-To: 
References: <0A5D9A624DF90343892F8F3FE7DE525A2A936603@fmsmsx101.amr.corp.intel.com> <0A5D9A624DF90343892F8F3FE7DE525A2A936853@fmsmsx101.amr.corp.intel.com>
Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA60E38@ALA-MBD.corp.ad.wrs.com>

This is likely happening due to a bug in coredns when an upstream DNS server is not reachable (see https://github.com/coredns/coredns/pull/2255). We don't have this fix yet, so for now, the user must only configure nameservers that are reachable from the system being installed.

Bart

-----Original Message-----
From: Qi, Mingyuan [mailto:mingyuan.qi at intel.com]
Sent: February 1, 2019 2:11 AM
To: Perez Carranza, Jose; starlingx-discuss at lists.starlingx.io; Bailey, Henry Albert (Al); Wensley, Barton
Subject: RE: [Starlingx-discuss] [Containers] Deployment status

Jose,

Looking into your log, it seems that coredns is not ready. Here is the workaround for it:
1. kubectl -n kube-system edit configmap coredns
2. remove "loop" line and save
3. kubectl -n kube-system delete rs {your-coredns-replicaset-name}

Bart & Al,

It's seems to be a real issue for users behind a proxy which I guess coredns is not able to access external nameserver and falls in loop.
Could user specified nameserver to be updated to coredns?

Mingyuan

-----Original Message-----
From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
Sent: Friday, February 1, 2019 1:31
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Containers] Deployment status

Hi,

I did some tests with the most recent ISO (01/31) that has already the changes to add a proxy on config_controller, still facing the below mentioned issues, I created a Launchpad [1] to track this.
https://bugs.launchpad.net/starlingx/+bug/1814142

Regards,
José

> -----Original Message-----
> From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
> Sent: Wednesday, January 30, 2019 4:31 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] [Containers] Deployment status
>
> Hi All
>
> ** Virtual Environment **
>
> I'm working on setting up a StarlingX deployment with containers as
> described on [1] using proxies but I have faces some issues described below:
>
> - I have successfully completed all the steps until "system
> application-apply stx-openstack" when the system becomes unstable, the
> application-apply gets freezed in random pods until timeout of 30 min
> is reached, I discovered that killing armada process and start again the "system application-apply"
> the application advance until again is freezed in a random pod.
> After some cycles the application is successfully completed.
>
> Here is a log [2] of one point when the process is freeze, the error
> is always the same, the only element that changes is the pod and its
> random, not always is the same pod.
>
> =====
> 019-01-30 17:24:13.364 11824 ERROR sysinv.conductor.kube_app [-]
> Received a false positive response from Docker/Armada. Failed to apply
> application manifest /manifests/stx-openstack-manifest-no-tests.yaml:
> 2019-
> 01-30 16:54:07.367 20554 DEBUG armada.handlers.document [-] Resolving
> reference /manifests/stx-openstack-manifest-no-tests.yaml.
> resolve_reference /usr/local/lib/python3.5/site-
> packages/armada/handlers/document.py:49
> =====
>
> -After is completed correctly I'm receiving failures on "Verify the
> cluster endpoints" when running `openstack endpoint list`, I updated
> the .yml file with the correct password but connection is not
> established correctly. Here is the output [3].
>
> ======
> controller-0:~$ openstack endpoint list
> Failed to discover available identity versions when contacting http://keystone.openstack.svc.cluster.local/v3.
> Attempting to parse version from URL.
> Unable to establish connection to
> http://keystone.openstack.svc.cluster.local/v3/auth/tokens:
> HTTPConnectionPool(host='keystone.openstack.svc.cluster.local', port=80):
> Max retries exceeded with url: /v3/auth/tokens (Caused by
> NewConnectionError('<urllib3.connection.HTTPConnection object at 0x7fe65ffb90d0>: Failed to establish a new connection:
> [Errno -3] Temporary failure in name resolution',))
> ======
>
> 1. https://wiki.openstack.org/wiki/StarlingX/Containers/Installation
> 2. http://paste.openstack.org/show/744277
> 3. http://paste.openstack.org/show/744281
>
> Regards,
> José
>
>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
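Until that coredns fix is picked up, a practical pre-check is to confirm from the target system that every nameserver you intend to configure actually answers. A sketch, assuming bind-utils (nslookup/dig) is installed; <nameserver-ip> is a placeholder:

    nslookup www.starlingx.io <nameserver-ip>

    # or with explicit timeout/retry limits:
    dig @<nameserver-ip> www.starlingx.io +time=2 +tries=1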
From scott.little at windriver.com Fri Feb 1 14:49:32 2019
From: scott.little at windriver.com (Scott Little)
Date: Fri, 1 Feb 2019 09:49:32 -0500
Subject: Re: [Starlingx-discuss] [build-report] STX_retag_docker_images - Build # 1 - Failure!
In-Reply-To: <2011582452.350.1548970828277.JavaMail.javamailuser@localhost>
References: <2011582452.350.1548970828277.JavaMail.javamailuser@localhost>
Message-ID: <04a5aeab-17e5-eb7b-2cb7-70997c725c62@windriver.com>

script fixed and rerun successfully.

f-stein build at timestamp 20190131T182613Z is now complete.

Scott

On 2019-01-31 4:40 p.m., build.starlingx at gmail.com wrote:
> Project: STX_retag_docker_images
> Build #: 1
> Status: Failure
> Timestamp: 20190131T214007Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190131T182613Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> HOST_PORT: 80
> OLD_LATEST_PREFIX: dev
> MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190131T182613Z
> OS: centos
> MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root
> BASE_VERSION: f-stein-20190131T182613Z
> PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190131T182613Z/logs
> REGISTRY_USERID: slittlewrs
> HOST: build.starlingx.cengn.ca
> LATEST_PREFIX: f-stein
> PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/stein/centos/20190131T182613Z/logs
> RETAG_IMAGE_LIST: stx-libvirt
> FLOCK_VERSION: f-stein-centos-master-20190131T182613Z
> PREFIX: f-stein
> OPENSTACK_RELEASE: master
> TIMESTAMP: 20190131T182613Z
> REGISTRY_ORG: starlingx
> OLD_OPENSTACK_RELEASE: pike
> PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190131T182613Z/outputs
> REGISTRY: docker.io
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From scott.little at windriver.com Fri Feb 1 14:55:43 2019
From: scott.little at windriver.com (Scott Little)
Date: Fri, 1 Feb 2019 09:55:43 -0500
Subject: Re: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 149 - Failure!
In-Reply-To: <1220202081.354.1549000838276.JavaMail.javamailuser@localhost>
References: <1220202081.354.1549000838276.JavaMail.javamailuser@localhost>
Message-ID: 

Apologies for the noise.  This was a result of the previously mentioned experiment to double the retries and timeouts used by yum during docker build on cengn.

Somehow a '}' became ')' during the cut-n-paste from my test script into jenkins.

Scott

On 2019-02-01 1:00 a.m., build.starlingx at gmail.com wrote:
> Project: STX_DL_container_setup
> Build #: 149
> Status: Failure
> Timestamp: 20190201T060034Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190201T060000Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190201T060000Z
> DOCKER_DL_ID: jenkins-master-20190201T060000Z-downloader
> PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190201T060000Z/logs
> DOCKER_DL_TAG: master-20190201T060000Z-downloader-image
> PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190201T060000Z/logs
> MY_REPO_ROOT: /localdisk/designer/jenkins/master
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
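For anyone reproducing the retry experiment Scott describes: yum's documented defaults are retries=10 and timeout=30, so doubling them amounts to settings like the ones below. The exact values used on CENGN are an assumption here, not taken from the message.

    # in /etc/yum.conf of the build container, under [main]:
    #   retries=20
    #   timeout=60
    # or per invocation:
    yum --setopt=retries=20 --setopt=timeout=60 makecache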
From marcel at schaible-consulting.de Fri Feb 1 15:31:35 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Fri, 1 Feb 2019 16:31:35 +0100 (CET)
Subject: [Starlingx-discuss] Installation problem: Configuration failed: Failed to update hiera configuration
Message-ID: <1484346064.187533.1549035095571@communicator.strato.com>

Hi,

during installation, when applying the configuration, I get an error message from hiera:

Configuration failed: Failed to update hiera configuration

The configuration log and apply_manifest.log are attached below.

Any idea or hint?

Thanks

Marcel


localhost:~# config_controller
System Configuration
====================
Enter Q at any prompt to abort...

System date and time:
---------------------

The system date and time must be set now. Note that UTC time must be used and
that the date and time must be set as accurately as possible, even if NTP/PTP is
to be configured later.

Current system date and time (UTC): 2019-02-01 15:15:25

Is the current date and time correct? [y/n]: y
Current system date and time will be used.

System timezone:
----------------

The system timezone must be set now. The timezone must be a valid timezone from
/usr/share/zoneinfo (e.g. UTC, Asia/Hong_Kong, etc...)

Please input the timezone[UTC]:Europe/Berlin

System Configuration:
---------------------

System mode. Available options are:

1) duplex-direct - two node redundant configuration. Management and
infrastructure networks are directly connected to peer ports
2) duplex - two node redundant configuration.
3) simplex - single node non-redundant configuration.
System mode [duplex-direct]:

PXEBoot Network:
----------------

The PXEBoot network is used for initial booting and installation of each node.
IP addresses on this network are reachable only within the data center.

The default configuration combines the PXEBoot network and the management
network. If a separate PXEBoot network is used, it will share the management
interface, which requires the management network to be placed on a VLAN.

Configure a separate PXEBoot network [y/N]:
Aborting configuration
localhost:~# config_controller
System Configuration
====================
Enter Q at any prompt to abort...

System date and time:
---------------------

The system date and time must be set now. Note that UTC time must be used and
that the date and time must be set as accurately as possible, even if NTP/PTP is
to be configured later.

Current system date and time (UTC): 2019-02-01 15:15:40

Is the current date and time correct? [y/n]: y
Current system date and time will be used.

System timezone:
----------------

The system timezone must be set now. The timezone must be a valid timezone from
/usr/share/zoneinfo (e.g. UTC, Asia/Hong_Kong, etc...)

Please input the timezone[UTC]:Europe/Berlin

System Configuration:
---------------------

System mode. Available options are:

1) duplex-direct - two node redundant configuration. Management and
infrastructure networks are directly connected to peer ports
2) duplex - two node redundant configuration.
3) simplex - single node non-redundant configuration.
System mode [duplex-direct]: 1

PXEBoot Network:
----------------

The PXEBoot network is used for initial booting and installation of each node.
IP addresses on this network are reachable only within the data center.

The default configuration combines the PXEBoot network and the management
network. If a separate PXEBoot network is used, it will share the management
interface, which requires the management network to be placed on a VLAN.
Configure a separate PXEBoot network [y/N]: N Management Network: ------------------- The management network is used for internal communication between platform components. IP addresses on this network are reachable only within the data center. A management bond interface provides redundant connections for the management network. It is strongly recommended to configure Management interface link aggregation, for All-in-one duplex-direct. Management interface link aggregation [y/N]: y Management interface [bond0]: Management interface MTU [1500]: Specify one of the bonding policies. Possible values are: 1) 802.3ad (LACP) policy 2) Active-backup policy Management interface bonding policy [802.3ad]: A maximum of 2 physical interfaces can be attached to the management interface. First management interface member [enp3s0f1]: enp8s20f4 Second management interface member []: enp8s20f5 Management subnet [192.168.204.0/24]: 172.27.1.0/24 Use entire management subnet [Y/n]: Y IP addresses can be assigned to hosts dynamically or a static IP address can be specified for each host. This choice applies to both the management network and infrastructure network (if configured). Warning: Selecting 'N', or static IP address allocation, disables automatic provisioning of new hosts in System Inventory, requiring the user to manually provision using the 'system host-add' command. Dynamic IP address allocation [Y/n]: Y Management Network Multicast subnet [239.1.1.0/28]: Infrastructure Network: ----------------------- The infrastructure network is used for internal communication between platform components to offload the management network of high bandwidth services. IP addresses on this network are reachable only within the data center. If a separate infrastructure interface is not configured the management network will be used. It is NOT recommended to configure infrastructure network for All-in- one duplex-direct. Configure an infrastructure interface [y/N]: N External OAM Network: --------------------- The external OAM network is used for management of the cloud. It also provides access to the platform APIs. IP addresses on this network are reachable outside the data center. An external OAM bond interface provides redundant connections for the OAM network. External OAM interface link aggregation [y/N]: y External OAM interface [bond1]: Configure an external OAM VLAN [y/N]: External OAM interface MTU [1500]: Specify one of the bonding policies. Possible values are: 1) Active-backup policy 2) Balanced XOR policy 3) 802.3ad (LACP) policy External OAM interface bonding policy [active-backup]: A maximum of 2 physical interfaces can be attached to the external OAM interface. 
First external OAM interface member [enp3s0f0]: enp7s16f1 Second external oam interface member []: enp7s16f7 External OAM subnet [10.10.10.0/24]: 10.62.150.0/24 External OAM gateway address [10.62.150.1]: External OAM floating address [10.62.150.2]: External OAM address for first controller node [10.62.150.3]: 10.62.150.210 External OAM address for second controller node [10.62.150.211]: Cloud Authentication: ------------------------------- Configure a password for the Cloud admin user The Password must have a minimum length of 7 character, and conform to password complexity rules Create admin user password: Repeat admin user password: The following configuration will be applied: System Configuration -------------------- Time Zone: Europe/Berlin System mode: duplex-direct PXEBoot Network Configuration ----------------------------- Separate PXEBoot network not configured PXEBoot Controller floating hostname: pxecontroller Management Network Configuration -------------------------------- Management interface name: bond0 Management interface: bond0 Management interface MTU: 1500 Management ae member 0: enp8s20f4 Management ae member 1: enp8s20f5 Management ae policy : 802.3ad Management subnet: 172.27.1.0/24 Controller floating address: 172.27.1.2 Controller 0 address: 172.27.1.3 Controller 1 address: 172.27.1.4 NFS Management Address 1: 172.27.1.5 NFS Management Address 2: 172.27.1.6 Controller floating hostname: controller Controller hostname prefix: controller- OAM Controller floating hostname: oamcontroller Dynamic IP address allocation is selected Management multicast subnet: 239.1.1.0/28 Infrastructure Network Configuration ------------------------------------ Infrastructure interface not configured External OAM Network Configuration ---------------------------------- External OAM interface name: bond1 External OAM interface: bond1 External OAM interface MTU: 1500 External OAM ae member 0: enp7s16f1 External OAM ae member 1: enp7s16f7 External OAM ae policy : active-backup External OAM subnet: 10.62.150.0/24 External OAM gateway address: 10.62.150.1 External OAM floating address: 10.62.150.2 External OAM 0 address: 10.62.150.210 External OAM 1 address: 10.62.150.211 Apply the above configuration? [y/n]: y Applying configuration (this will take several minutes): 01/08: Creating bootstrap configuration ... DONE 02/08: Applying bootstrap manifest ... DONE 03/08: Persisting local configuration ... DONE 04/08: Populating initial system inventory ... DONE 05/08: Creating system configuration ... 
sysinv 2019-02-01 15:20:44.053 25508 CRITICAL sysinv [-] 24
2019-02-01 15:20:44.053 25508 TRACE sysinv Traceback (most recent call last):
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/bin/sysinv-puppet", line 10, in
2019-02-01 15:20:44.053 25508 TRACE sysinv     sys.exit(main())
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/cmd/puppet.py", line 75, in main
2019-02-01 15:20:44.053 25508 TRACE sysinv     CONF.action.func(CONF.action.path, CONF.action.hostname)
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/cmd/puppet.py", line 47, in create_host_config_action
2019-02-01 15:20:44.053 25508 TRACE sysinv     operator.update_host_config(host)
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 30, in _wrapper
2019-02-01 15:20:44.053 25508 TRACE sysinv     func(self, *args, **kwargs)
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 145, in update_host_config
2019-02-01 15:20:44.053 25508 TRACE sysinv     config.update(puppet_plugin.obj.get_host_config(host))
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 99, in get_host_config
2019-02-01 15:20:44.053 25508 TRACE sysinv     generate_interface_configs(context, config)
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1029, in generate_interface_configs
2019-02-01 15:20:44.053 25508 TRACE sysinv     generate_network_config(context, config, iface)
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 924, in generate_network_config
2019-02-01 15:20:44.053 25508 TRACE sysinv     network_config = get_interface_network_config(context, iface)
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 896, in get_interface_network_config
2019-02-01 15:20:44.053 25508 TRACE sysinv     os_ifname = get_interface_os_ifname(context, iface)
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 507, in get_interface_os_ifname
2019-02-01 15:20:44.053 25508 TRACE sysinv     os_ifname = get_interface_port_name(context, iface)
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 472, in get_interface_port_name
2019-02-01 15:20:44.053 25508 TRACE sysinv     port = get_interface_port(context, iface)
2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 464, in get_interface_port
2019-02-01 15:20:44.053 25508 TRACE sysinv     return context['ports'][iface['id']]
2019-02-01 15:20:44.053 25508 TRACE sysinv KeyError: 24
2019-02-01 15:20:44.053 25508 TRACE sysinv

Failed to update puppet hiera host config

Configuration failed: Failed to update hiera configuration
localhost:~#

/tmp/apply_manifest.log:
========================

cp: cannot stat ‘/tmp/hieradata/172.27.1.3.yaml’: No such file or directory
cp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory
cp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory
Applying puppet bootstrap manifest...
[DONE]

ls /tmp/puppet/hieradata/
=========================
global.yaml personality.yaml secure_static.yaml static.yaml
localhost:~#
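A note on the traceback above: the KeyError is raised in get_interface_port(), where sysinv resolves each configured interface to a physical port discovered during inventory, so interface id 24 (one of the interfaces entered in the dialog) has no matching port record. One plausible cause is a member name that does not match what the kernel actually enumerated; before re-running config_controller it is worth confirming the names. The grep pattern below simply matches the four members from the dialog:

    ip -o link | grep -E 'enp8s20f[45]|enp7s16f[17]'
    # all four NICs (enp8s20f4, enp8s20f5, enp7s16f1, enp7s16f7) should be listed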
From serverascode at gmail.com Fri Feb 1 15:32:20 2019
From: serverascode at gmail.com (Curtis)
Date: Fri, 1 Feb 2019 10:32:20 -0500
Subject: Re: [Starlingx-discuss] Contribution to the project.
In-Reply-To: 
References: <9A85D2917C58154C960D95352B22818BBFD18EB2@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BBFD18F0D@fmsmsx123.amr.corp.intel.com>
Message-ID: 

Hi Javier,

This doesn't help you much right now, but just to let you know we are in the process of trying to set up some tagging for our issues and work items that will show if they are good for new contributors and such, like what many other open source projects do. Watch the list for updates related to that project.

Thanks,
Curtis

On Thu, Jan 31, 2019 at 6:04 PM Javier Romero wrote:

> Hi Bruce,
>
> Thanks for your information.
>
> Will join the weekly call next Wednesday, now I'm checking the wiki and
> docs of the project.
>
> I can use a VM with KVM hypervisor to run StarlingX, it has 8 cores and
> 16 GB of RAM, but seens that 32 GB are needed at least. Will have to find
> something else...
>
>
> On Thursday, 31 January 2019, Jones, Bruce E
> wrote:
>
>> Javier, thank you. I’m going to suggest a couple of things for you to
>> try out, to get started.
>>
>> You can join our weekly community call. It’s on Wednesdays at 1400 UTC.
>> Details are here: https://wiki.openstack.org/wiki/Starlingx/Meetings.
>> StarlingX is a big project so we’ve divided it up into sub-projects – each
>> has a call of their own, listed on that page, that you could join as well.
>>
>> You should check out our wiki: https://wiki.openstack.org/wiki/StarlingX
>> and our documentation https://docs.starlingx.io/
>>
>> You can download and run a pre-built StarlingX image from here:
>> http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/
>>
>> You can find some instructions on how to install the image here:
>> https://docs.starlingx.io/installation_guide/index.html. The document
>> needs some changes, which we are working on as part of the Documentation
>> sub-project. You will need a VM on a machine with lots of memory and/or a
>> dedicated system to boot the image on – the image installs an OS and all of
>> StarlingX.
>>
>> Myself and the community will be happy to answer any questions you have.
>>
>> brucej
>>
>> *From:* Javier Romero [mailto:xavinux at gmail.com]
>> *Sent:* Thursday, January 31, 2019 1:41 PM
>> *To:* Jones, Bruce E
>> *Cc:* starlingx-discuss at lists.starlingx.io
>> *Subject:* Re: Contribution to the project.
>>
>> Thanks for your answer.
>>
>> Well maybe testing but please, let me know those parts where you need
>> more help to see if I can be useful in any of them.
>>
>> Best Regards,
>>
>> On Thursday, 31 January 2019, Jones, Bruce E
>> wrote:
>>
>> We’d love to have your help! What part of the project are you interested
>> in?
>>
>> brucej
>>
>> *From:* Javier Romero [mailto:xavinux at gmail.com]
>> *Sent:* Thursday, January 31, 2019 12:42 PM
>> *To:* starlingx-discuss at lists.starlingx.io
>> *Subject:* [Starlingx-discuss] Contribution to the project.
>>
>> Hi Team,
>>
>> My name is Javier and live in Buenos Aires, Argentina.
>> Work in the Network Operations Center of an Internet Service Provider
>> where I deploy and manage mission critical Linux Servers.
>>
>> Would like to know if there is something I can help with on the StarlingX
>> project.
>>
>> Thanks for your attention.
>>
>> Best Regards,
>>
>> --
>> *Javier Romero*
>>
>> --
>> *Javier Romero*
>
> --
> *Javier Romero*
>
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>

-- 
Blog: serverascode.com

From Matt.Peters at windriver.com Fri Feb 1 16:07:31 2019
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Fri, 1 Feb 2019 16:07:31 +0000
Subject: Re: [Starlingx-discuss] Installation problem: Configuration failed: Failed to update hiera configuration
In-Reply-To: <1484346064.187533.1549035095571@communicator.strato.com>
References: <1484346064.187533.1549035095571@communicator.strato.com>
Message-ID: <4430FF2D-2A7C-4C5F-85C9-D3EEBC250697@windriver.com>

Hello Marcel,

If your system is still in that state, can you run the following commands?

source /etc/nova/openrc
system host-ethernet-port-list controller-0
system host-if-list -a controller-0

On 2019-02-01, 10:32 AM, "Marcel Schaible" wrote:

Hi,

during installation when appling the configuration I'll get an error message from hiera:

Configuration failed: Failed to update hiera configuration

Configuration log and apply_manifet.log is attached below.

Any idea or hint?

Thanks

Marcel

localhost:~# config_controller
System Configuration
====================
Enter Q at any prompt to abort...

System date and time:
---------------------

The system date and time must be set now.
Note that UTC time must be used and that the date and time must be set as accurately as possible, even if NTP/PTP is to be configured later. Current system date and time (UTC): 2019-02-01 15:15:40 Is the current date and time correct? [y/n]: y Current system date and time will be used. System timezone: ---------------- The system timezone must be set now. The timezone must be a valid timezone from /usr/share/zoneinfo (e.g. UTC, Asia/Hong_Kong, etc...) Please input the timezone[UTC]:Europe/Berlin System Configuration: --------------------- System mode. Available options are: 1) duplex-direct - two node redundant configuration. Management and infrastructure networks are directly connected to peer ports 2) duplex - two node redundant configuration. 3) simplex - single node non-redundant configuration. System mode [duplex-direct]: 1 PXEBoot Network: ---------------- The PXEBoot network is used for initial booting and installation of each node. IP addresses on this network are reachable only within the data center. The default configuration combines the PXEBoot network and the management network. If a separate PXEBoot network is used, it will share the management interface, which requires the management network to be placed on a VLAN. Configure a separate PXEBoot network [y/N]: N Management Network: ------------------- The management network is used for internal communication between platform components. IP addresses on this network are reachable only within the data center. A management bond interface provides redundant connections for the management network. It is strongly recommended to configure Management interface link aggregation, for All-in-one duplex-direct. Management interface link aggregation [y/N]: y Management interface [bond0]: Management interface MTU [1500]: Specify one of the bonding policies. Possible values are: 1) 802.3ad (LACP) policy 2) Active-backup policy Management interface bonding policy [802.3ad]: A maximum of 2 physical interfaces can be attached to the management interface. First management interface member [enp3s0f1]: enp8s20f4 Second management interface member []: enp8s20f5 Management subnet [192.168.204.0/24]: 172.27.1.0/24 Use entire management subnet [Y/n]: Y IP addresses can be assigned to hosts dynamically or a static IP address can be specified for each host. This choice applies to both the management network and infrastructure network (if configured). Warning: Selecting 'N', or static IP address allocation, disables automatic provisioning of new hosts in System Inventory, requiring the user to manually provision using the 'system host-add' command. Dynamic IP address allocation [Y/n]: Y Management Network Multicast subnet [239.1.1.0/28]: Infrastructure Network: ----------------------- The infrastructure network is used for internal communication between platform components to offload the management network of high bandwidth services. IP addresses on this network are reachable only within the data center. If a separate infrastructure interface is not configured the management network will be used. It is NOT recommended to configure infrastructure network for All-in- one duplex-direct. Configure an infrastructure interface [y/N]: N External OAM Network: --------------------- The external OAM network is used for management of the cloud. It also provides access to the platform APIs. IP addresses on this network are reachable outside the data center. An external OAM bond interface provides redundant connections for the OAM network. 
External OAM interface link aggregation [y/N]: y External OAM interface [bond1]: Configure an external OAM VLAN [y/N]: External OAM interface MTU [1500]: Specify one of the bonding policies. Possible values are: 1) Active-backup policy 2) Balanced XOR policy 3) 802.3ad (LACP) policy External OAM interface bonding policy [active-backup]: A maximum of 2 physical interfaces can be attached to the external OAM interface. First external OAM interface member [enp3s0f0]: enp7s16f1 Second external oam interface member []: enp7s16f7 External OAM subnet [10.10.10.0/24]: 10.62.150.0/24 External OAM gateway address [10.62.150.1]: External OAM floating address [10.62.150.2]: External OAM address for first controller node [10.62.150.3]: 10.62.150.210 External OAM address for second controller node [10.62.150.211]: Cloud Authentication: ------------------------------- Configure a password for the Cloud admin user The Password must have a minimum length of 7 character, and conform to password complexity rules Create admin user password: Repeat admin user password: The following configuration will be applied: System Configuration -------------------- Time Zone: Europe/Berlin System mode: duplex-direct PXEBoot Network Configuration ----------------------------- Separate PXEBoot network not configured PXEBoot Controller floating hostname: pxecontroller Management Network Configuration -------------------------------- Management interface name: bond0 Management interface: bond0 Management interface MTU: 1500 Management ae member 0: enp8s20f4 Management ae member 1: enp8s20f5 Management ae policy : 802.3ad Management subnet: 172.27.1.0/24 Controller floating address: 172.27.1.2 Controller 0 address: 172.27.1.3 Controller 1 address: 172.27.1.4 NFS Management Address 1: 172.27.1.5 NFS Management Address 2: 172.27.1.6 Controller floating hostname: controller Controller hostname prefix: controller- OAM Controller floating hostname: oamcontroller Dynamic IP address allocation is selected Management multicast subnet: 239.1.1.0/28 Infrastructure Network Configuration ------------------------------------ Infrastructure interface not configured External OAM Network Configuration ---------------------------------- External OAM interface name: bond1 External OAM interface: bond1 External OAM interface MTU: 1500 External OAM ae member 0: enp7s16f1 External OAM ae member 1: enp7s16f7 External OAM ae policy : active-backup External OAM subnet: 10.62.150.0/24 External OAM gateway address: 10.62.150.1 External OAM floating address: 10.62.150.2 External OAM 0 address: 10.62.150.210 External OAM 1 address: 10.62.150.211 Apply the above configuration? [y/n]: y Applying configuration (this will take several minutes): 01/08: Creating bootstrap configuration ... DONE 02/08: Applying bootstrap manifest ... DONE 03/08: Persisting local configuration ... DONE 04/08: Populating initial system inventory ... DONE 05/08: Creating system configuration ... 
sysinv 2019-02-01 15:20:44.053 25508 CRITICAL sysinv [-] 24 2019-02-01 15:20:44.053 25508 TRACE sysinv Traceback (most recent call last): 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/bin/sysinv-puppet", line 10, in 2019-02-01 15:20:44.053 25508 TRACE sysinv sys.exit(main()) 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/cmd/puppet.py", line 75, in main 2019-02-01 15:20:44.053 25508 TRACE sysinv CONF.action.func(CONF.action.path, CONF.action.hostname) 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/cmd/puppet.py", line 47, in create_host_config_action 2019-02-01 15:20:44.053 25508 TRACE sysinv operator.update_host_config(host) 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 30, in _wrapper 2019-02-01 15:20:44.053 25508 TRACE sysinv func(self, *args, **kwargs) 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 145, in update_host_config 2019-02-01 15:20:44.053 25508 TRACE sysinv config.update(puppet_plugin.obj.get_host_config(host)) 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 99, in get_host_config 2019-02-01 15:20:44.053 25508 TRACE sysinv generate_interface_configs(context, config) 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1029, in generate_interface_configs 2019-02-01 15:20:44.053 25508 TRACE sysinv generate_network_config(context, config, iface) 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 924, in generate_network_config 2019-02-01 15:20:44.053 25508 TRACE sysinv network_config = get_interface_network_config(context, iface) 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 896, in get_interface_network_config 2019-02-01 15:20:44.053 25508 TRACE sysinv os_ifname = get_interface_os_ifname(context, iface) 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 507, in get_interface_os_ifname 2019-02-01 15:20:44.053 25508 TRACE sysinv os_ifname = get_interface_port_name(context, iface) 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 472, in get_interface_port_name 2019-02-01 15:20:44.053 25508 TRACE sysinv port = get_interface_port(context, iface) 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 464, in get_interface_port 2019-02-01 15:20:44.053 25508 TRACE sysinv return context['ports'][iface['id']] 2019-02-01 15:20:44.053 25508 TRACE sysinv KeyError: 24 2019-02-01 15:20:44.053 25508 TRACE sysinv Failed to update puppet hiera host config Configuration failed: Failed to update hiera configuration localhost:~# /tmp/apply_manifest.log: ======================== cp: cannot stat ‘/tmp/hieradata/172.27.1.3.yaml’: No such file or directory cp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory cp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory Applying puppet bootstrap manifest... 
[DONE] ls /tmp/puppet/hieradata/ ========================= global.yaml personality.yaml secure_static.yaml static.yaml localhost:~# _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcel at schaible-consulting.de Fri Feb 1 16:13:26 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Fri, 1 Feb 2019 17:13:26 +0100 (CET) Subject: [Starlingx-discuss] Installation problem: Configuration failed: Failed to update hiera configuration In-Reply-To: References: Message-ID: <373798617.189988.1549037606760@communicator.strato.com> Additional information: We are using the following iso: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190109T162801Z/outputs/iso/bootimage.iso Marcel > ------------------------------ > > Message: 2 > Date: Fri, 1 Feb 2019 16:31:35 +0100 (CET) > From: Marcel Schaible > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Installation problem: Configuration > failed: Failed to update hiera configuration > Message-ID: <1484346064.187533.1549035095571 at communicator.strato.com> > Content-Type: text/plain; charset=UTF-8 > > Hi, > > during installation when appling the configuration I'll get an error message from hiera: > > Configuration failed: Failed to update hiera configuration > > Configuration log and apply_manifet.log is attached below. > > Any idea or hint? > > Thanks > > Marcel > > > > localhost:~# config_controller > System Configuration > ==================== > Enter Q at any prompt to abort... > > System date and time: > --------------------- > > The system date and time must be set now. Note that UTC time must be used and > that the date and time must be set as accurately as possible, even if NTP/PTP is > to be configured later. > > Current system date and time (UTC): 2019-02-01 15:15:25 > > Is the current date and time correct? [y/n]: y > Current system date and time will be used. > > System timezone: > ---------------- > > The system timezone must be set now. The timezone must be a valid timezone from > /usr/share/zoneinfo (e.g. UTC, Asia/Hong_Kong, etc...) > > Please input the timezone[UTC]:Europe/Berlin > > System Configuration: > --------------------- > > System mode. Available options are: > > 1) duplex-direct - two node redundant configuration. Management and > infrastructure networks are directly connected to peer ports > 2) duplex - two node redundant configuration. > 3) simplex - single node non-redundant configuration. > System mode [duplex-direct]: > > PXEBoot Network: > ---------------- > > The PXEBoot network is used for initial booting and installation of each node. > IP addresses on this network are reachable only within the data center. > > The default configuration combines the PXEBoot network and the management > network. If a separate PXEBoot network is used, it will share the management > interface, which requires the management network to be placed on a VLAN. > > Configure a separate PXEBoot network [y/N]: > Aborting configuration > localhost:~# config_controller > System Configuration > ==================== > Enter Q at any prompt to abort... > > System date and time: > --------------------- > > The system date and time must be set now. Note that UTC time must be used and > that the date and time must be set as accurately as possible, even if NTP/PTP is > to be configured later. 
> > Current system date and time (UTC): 2019-02-01 15:15:40 > > Is the current date and time correct? [y/n]: y > Current system date and time will be used. > > System timezone: > ---------------- > > The system timezone must be set now. The timezone must be a valid timezone from > /usr/share/zoneinfo (e.g. UTC, Asia/Hong_Kong, etc...) > > Please input the timezone[UTC]:Europe/Berlin > > System Configuration: > --------------------- > > System mode. Available options are: > > 1) duplex-direct - two node redundant configuration. Management and > infrastructure networks are directly connected to peer ports > 2) duplex - two node redundant configuration. > 3) simplex - single node non-redundant configuration. > System mode [duplex-direct]: 1 > > PXEBoot Network: > ---------------- > > The PXEBoot network is used for initial booting and installation of each node. > IP addresses on this network are reachable only within the data center. > > The default configuration combines the PXEBoot network and the management > network. If a separate PXEBoot network is used, it will share the management > interface, which requires the management network to be placed on a VLAN. > > Configure a separate PXEBoot network [y/N]: N > > Management Network: > ------------------- > > The management network is used for internal communication between platform > components. IP addresses on this network are reachable only within the data > center. > > A management bond interface provides redundant connections for the management > network. > It is strongly recommended to configure Management interface link > aggregation, for All-in-one duplex-direct. > > Management interface link aggregation [y/N]: y > Management interface [bond0]: > Management interface MTU [1500]: > > Specify one of the bonding policies. Possible values are: > 1) 802.3ad (LACP) policy > 2) Active-backup policy > > Management interface bonding policy [802.3ad]: > A maximum of 2 physical interfaces can be attached to the management interface. > > First management interface member [enp3s0f1]: enp8s20f4 > Second management interface member []: enp8s20f5 > Management subnet [192.168.204.0/24]: 172.27.1.0/24 > Use entire management subnet [Y/n]: Y > > IP addresses can be assigned to hosts dynamically or a static IP address can be > specified for each host. This choice applies to both the management network and > infrastructure network (if configured). > Warning: Selecting 'N', or static IP address allocation, disables automatic > provisioning of new hosts in System Inventory, requiring the user to manually > provision using the 'system host-add' command. > Dynamic IP address allocation [Y/n]: Y > Management Network Multicast subnet [239.1.1.0/28]: > > Infrastructure Network: > ----------------------- > > The infrastructure network is used for internal communication between platform > components to offload the management network of high bandwidth services. IP > addresses on this network are reachable only within the data center. > > If a separate infrastructure interface is not configured the management network > will be used. > > It is NOT recommended to configure infrastructure network for All-in- > one duplex-direct. > Configure an infrastructure interface [y/N]: N > > External OAM Network: > --------------------- > > The external OAM network is used for management of the cloud. It also provides > access to the platform APIs. IP addresses on this network are reachable outside > the data center. 
> > An external OAM bond interface provides redundant connections for the OAM > network. > > External OAM interface link aggregation [y/N]: y > External OAM interface [bond1]: > Configure an external OAM VLAN [y/N]: > External OAM interface MTU [1500]: > > Specify one of the bonding policies. Possible values are: > 1) Active-backup policy > 2) Balanced XOR policy > 3) 802.3ad (LACP) policy > > External OAM interface bonding policy [active-backup]: > A maximum of 2 physical interfaces can be attached to the external OAM > interface. > > First external OAM interface member [enp3s0f0]: enp7s16f1 > Second external oam interface member []: enp7s16f7 > External OAM subnet [10.10.10.0/24]: 10.62.150.0/24 > External OAM gateway address [10.62.150.1]: > External OAM floating address [10.62.150.2]: > External OAM address for first controller node [10.62.150.3]: 10.62.150.210 > External OAM address for second controller node [10.62.150.211]: > > Cloud Authentication: > ------------------------------- > > Configure a password for the Cloud admin user The Password must have a minimum > length of 7 character, and conform to password complexity rules > Create admin user password: > Repeat admin user password: > > > > The following configuration will be applied: > > System Configuration > -------------------- > Time Zone: Europe/Berlin > System mode: duplex-direct > > PXEBoot Network Configuration > ----------------------------- > Separate PXEBoot network not configured > PXEBoot Controller floating hostname: pxecontroller > > Management Network Configuration > -------------------------------- > Management interface name: bond0 > Management interface: bond0 > Management interface MTU: 1500 > Management ae member 0: enp8s20f4 > Management ae member 1: enp8s20f5 > Management ae policy : 802.3ad > Management subnet: 172.27.1.0/24 > Controller floating address: 172.27.1.2 > Controller 0 address: 172.27.1.3 > Controller 1 address: 172.27.1.4 > NFS Management Address 1: 172.27.1.5 > NFS Management Address 2: 172.27.1.6 > Controller floating hostname: controller > Controller hostname prefix: controller- > OAM Controller floating hostname: oamcontroller > Dynamic IP address allocation is selected > Management multicast subnet: 239.1.1.0/28 > > Infrastructure Network Configuration > ------------------------------------ > Infrastructure interface not configured > > External OAM Network Configuration > ---------------------------------- > External OAM interface name: bond1 > External OAM interface: bond1 > External OAM interface MTU: 1500 > External OAM ae member 0: enp7s16f1 > External OAM ae member 1: enp7s16f7 > External OAM ae policy : active-backup > External OAM subnet: 10.62.150.0/24 > External OAM gateway address: 10.62.150.1 > External OAM floating address: 10.62.150.2 > External OAM 0 address: 10.62.150.210 > External OAM 1 address: 10.62.150.211 > > Apply the above configuration? [y/n]: y > > Applying configuration (this will take several minutes): > > 01/08: Creating bootstrap configuration ... DONE > 02/08: Applying bootstrap manifest ... DONE > 03/08: Persisting local configuration ... DONE > 04/08: Populating initial system inventory ... DONE > 05/08: Creating system configuration ... 
sysinv 2019-02-01 15:20:44.053 25508 CRITICAL sysinv [-] 24 > 2019-02-01 15:20:44.053 25508 TRACE sysinv Traceback (most recent call last): > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/bin/sysinv-puppet", line 10, in > 2019-02-01 15:20:44.053 25508 TRACE sysinv sys.exit(main()) > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/cmd/puppet.py", line 75, in main > 2019-02-01 15:20:44.053 25508 TRACE sysinv CONF.action.func(CONF.action.path, CONF.action.hostname) > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/cmd/puppet.py", line 47, in create_host_config_action > 2019-02-01 15:20:44.053 25508 TRACE sysinv operator.update_host_config(host) > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 30, in _wrapper > 2019-02-01 15:20:44.053 25508 TRACE sysinv func(self, *args, **kwargs) > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 145, in update_host_config > 2019-02-01 15:20:44.053 25508 TRACE sysinv config.update(puppet_plugin.obj.get_host_config(host)) > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 99, in get_host_config > 2019-02-01 15:20:44.053 25508 TRACE sysinv generate_interface_configs(context, config) > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1029, in generate_interface_configs > 2019-02-01 15:20:44.053 25508 TRACE sysinv generate_network_config(context, config, iface) > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 924, in generate_network_config > 2019-02-01 15:20:44.053 25508 TRACE sysinv network_config = get_interface_network_config(context, iface) > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 896, in get_interface_network_config > 2019-02-01 15:20:44.053 25508 TRACE sysinv os_ifname = get_interface_os_ifname(context, iface) > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 507, in get_interface_os_ifname > 2019-02-01 15:20:44.053 25508 TRACE sysinv os_ifname = get_interface_port_name(context, iface) > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 472, in get_interface_port_name > 2019-02-01 15:20:44.053 25508 TRACE sysinv port = get_interface_port(context, iface) > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 464, in get_interface_port > 2019-02-01 15:20:44.053 25508 TRACE sysinv return context['ports'][iface['id']] > 2019-02-01 15:20:44.053 25508 TRACE sysinv KeyError: 24 > 2019-02-01 15:20:44.053 25508 TRACE sysinv > Failed to update puppet hiera host config > > Configuration failed: Failed to update hiera configuration > localhost:~# > > /tmp/apply_manifest.log: > ======================== > > cp: cannot stat ‘/tmp/hieradata/172.27.1.3.yaml’: No such file or directory > cp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory > cp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory > Applying puppet bootstrap manifest... 
From marcel at schaible-consulting.de  Fri Feb  1 16:19:21 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Fri, 1 Feb 2019 17:19:21 +0100 (CET)
Subject: [Starlingx-discuss] Installation problem: Configuration failed: Failed to update hiera configuration
In-Reply-To: <4430FF2D-2A7C-4C5F-85C9-D3EEBC250697@windriver.com>
References: <1484346064.187533.1549035095571@communicator.strato.com> <4430FF2D-2A7C-4C5F-85C9-D3EEBC250697@windriver.com>
Message-ID: <1883615762.190291.1549037961751@communicator.strato.com>

Hi Matt,

thanks for your response.

controller-0:~# source /etc/nova/openrc
-0oot at controller-0 ~(keystone_admin)]# system host-ethernet-port-list controller
internalURL endpoint for smapi service in RegionOne region not found
[root at controller-0 ~(keystone_admin)]# system host-if-list -a controller-0
internalURL endpoint for smapi service in RegionOne region not found
[root at controller-0 ~(keystone_admin)]#

Any idea what "RegionOne" means?

Thanks

Marcel

> "Peters, Matt" wrote on 1 February 2019 at 17:07:
>
>
> Hello Marcel,
> If your system is still in that state, can you run the following commands?
>
> source /etc/nova/openrc
> system host-ethernet-port-list controller-0
> system host-if-list -a controller-0
>
>
> On 2019-02-01, 10:32 AM, "Marcel Schaible" wrote:
>
> Hi,
>
> during installation when applying the configuration I get an error message from hiera:
>
> Configuration failed: Failed to update hiera configuration
>
> Configuration log and apply_manifest.log is attached below.
>
> Any idea or hint?
>
> Thanks
>
> Marcel
>
>
>
> localhost:~# config_controller
> System Configuration
> ====================
> Enter Q at any prompt to abort...
>
> System date and time:
> ---------------------
>
> The system date and time must be set now. Note that UTC time must be used and
> that the date and time must be set as accurately as possible, even if NTP/PTP is
> to be configured later.
>
> Current system date and time (UTC): 2019-02-01 15:15:25
>
> Is the current date and time correct? [y/n]: y
> Current system date and time will be used.
>
> System timezone:
> ----------------
>
> The system timezone must be set now. The timezone must be a valid timezone from
> /usr/share/zoneinfo (e.g. UTC, Asia/Hong_Kong, etc...)
>
> Please input the timezone[UTC]:Europe/Berlin
>
> System Configuration:
> ---------------------
>
> System mode. Available options are:
>
> 1) duplex-direct - two node redundant configuration. Management and
> infrastructure networks are directly connected to peer ports
> 2) duplex - two node redundant configuration.
> 3) simplex - single node non-redundant configuration.
> System mode [duplex-direct]:
>
> PXEBoot Network:
> ----------------
>
> The PXEBoot network is used for initial booting and installation of each node.
> IP addresses on this network are reachable only within the data center.
>
> The default configuration combines the PXEBoot network and the management
> network. If a separate PXEBoot network is used, it will share the management
> interface, which requires the management network to be placed on a VLAN.
>
> Configure a separate PXEBoot network [y/N]:
> Aborting configuration
> localhost:~# config_controller
> System Configuration
> ====================
> Enter Q at any prompt to abort...
>
> System date and time:
> ---------------------
>
> The system date and time must be set now. Note that UTC time must be used and
> that the date and time must be set as accurately as possible, even if NTP/PTP is
> to be configured later.
>
> Current system date and time (UTC): 2019-02-01 15:15:40
>
> Is the current date and time correct? [y/n]: y
> Current system date and time will be used.
>
> System timezone:
> ----------------
>
> The system timezone must be set now. The timezone must be a valid timezone from
> /usr/share/zoneinfo (e.g. UTC, Asia/Hong_Kong, etc...)
>
> Please input the timezone[UTC]:Europe/Berlin
>
> System Configuration:
> ---------------------
>
> System mode. Available options are:
>
> 1) duplex-direct - two node redundant configuration. Management and
> infrastructure networks are directly connected to peer ports
> 2) duplex - two node redundant configuration.
> 3) simplex - single node non-redundant configuration.
> System mode [duplex-direct]: 1
>
> PXEBoot Network:
> ----------------
>
> The PXEBoot network is used for initial booting and installation of each node.
> IP addresses on this network are reachable only within the data center.
>
> The default configuration combines the PXEBoot network and the management
> network. If a separate PXEBoot network is used, it will share the management
> interface, which requires the management network to be placed on a VLAN.
>
> Configure a separate PXEBoot network [y/N]: N
>
> Management Network:
> -------------------
>
> The management network is used for internal communication between platform
> components. IP addresses on this network are reachable only within the data
> center.
>
> A management bond interface provides redundant connections for the management
> network.
> It is strongly recommended to configure Management interface link
> aggregation, for All-in-one duplex-direct.
>
> Management interface link aggregation [y/N]: y
> Management interface [bond0]:
> Management interface MTU [1500]:
>
> Specify one of the bonding policies. Possible values are:
> 1) 802.3ad (LACP) policy
> 2) Active-backup policy
>
> Management interface bonding policy [802.3ad]:
> A maximum of 2 physical interfaces can be attached to the management interface.
>
> First management interface member [enp3s0f1]: enp8s20f4
> Second management interface member []: enp8s20f5
> Management subnet [192.168.204.0/24]: 172.27.1.0/24
> Use entire management subnet [Y/n]: Y
>
> IP addresses can be assigned to hosts dynamically or a static IP address can be
> specified for each host. This choice applies to both the management network and
> infrastructure network (if configured).
> Warning: Selecting 'N', or static IP address allocation, disables automatic
> provisioning of new hosts in System Inventory, requiring the user to manually
> provision using the 'system host-add' command.
> Dynamic IP address allocation [Y/n]: Y
> Management Network Multicast subnet [239.1.1.0/28]:
>
> Infrastructure Network:
> -----------------------
>
> The infrastructure network is used for internal communication between platform
> components to offload the management network of high bandwidth services. IP
> addresses on this network are reachable only within the data center.
>
> If a separate infrastructure interface is not configured the management network
> will be used.
>
> It is NOT recommended to configure infrastructure network for All-in-
> one duplex-direct.
> Configure an infrastructure interface [y/N]: N
>
> External OAM Network:
> ---------------------
>
> The external OAM network is used for management of the cloud. It also provides
> access to the platform APIs. IP addresses on this network are reachable outside
> the data center.
>
> An external OAM bond interface provides redundant connections for the OAM
> network.
>
> External OAM interface link aggregation [y/N]: y
> External OAM interface [bond1]:
> Configure an external OAM VLAN [y/N]:
> External OAM interface MTU [1500]:
>
> Specify one of the bonding policies. Possible values are:
> 1) Active-backup policy
> 2) Balanced XOR policy
> 3) 802.3ad (LACP) policy
>
> External OAM interface bonding policy [active-backup]:
> A maximum of 2 physical interfaces can be attached to the external OAM
> interface.
>
> First external OAM interface member [enp3s0f0]: enp7s16f1
> Second external oam interface member []: enp7s16f7
> External OAM subnet [10.10.10.0/24]: 10.62.150.0/24
> External OAM gateway address [10.62.150.1]:
> External OAM floating address [10.62.150.2]:
> External OAM address for first controller node [10.62.150.3]: 10.62.150.210
> External OAM address for second controller node [10.62.150.211]:
>
> Cloud Authentication:
> -------------------------------
>
> Configure a password for the Cloud admin user The Password must have a minimum
> length of 7 character, and conform to password complexity rules
> Create admin user password:
> Repeat admin user password:
>
>
>
> The following configuration will be applied:
>
> System Configuration
> --------------------
> Time Zone: Europe/Berlin
> System mode: duplex-direct
>
> PXEBoot Network Configuration
> -----------------------------
> Separate PXEBoot network not configured
> PXEBoot Controller floating hostname: pxecontroller
>
> Management Network Configuration
> --------------------------------
> Management interface name: bond0
> Management interface: bond0
> Management interface MTU: 1500
> Management ae member 0: enp8s20f4
> Management ae member 1: enp8s20f5
> Management ae policy : 802.3ad
> Management subnet: 172.27.1.0/24
> Controller floating address: 172.27.1.2
> Controller 0 address: 172.27.1.3
> Controller 1 address: 172.27.1.4
> NFS Management Address 1: 172.27.1.5
> NFS Management Address 2: 172.27.1.6
> Controller floating hostname: controller
> Controller hostname prefix: controller-
> OAM Controller floating hostname: oamcontroller
> Dynamic IP address allocation is selected
> Management multicast subnet: 239.1.1.0/28
>
> Infrastructure Network Configuration
> ------------------------------------
> Infrastructure interface not configured
>
> External OAM Network Configuration
> ----------------------------------
> External OAM interface name: bond1
> External OAM interface: bond1
> External OAM interface MTU: 1500
> External OAM ae member 0: enp7s16f1
> External OAM ae member 1: enp7s16f7
> External OAM ae policy : active-backup
> External OAM subnet: 10.62.150.0/24
> External OAM gateway address: 10.62.150.1
> External OAM floating address: 10.62.150.2
> External OAM 0 address: 10.62.150.210
> External OAM 1 address: 10.62.150.211
>
> Apply the above configuration? [y/n]: y
>
> Applying configuration (this will take several minutes):
>
> 01/08: Creating bootstrap configuration ... DONE
> 02/08: Applying bootstrap manifest ... DONE
> 03/08: Persisting local configuration ... DONE
> 04/08: Populating initial system inventory ... DONE
> 05/08: Creating system configuration ...
sysinv 2019-02-01 15:20:44.053 25508 CRITICAL sysinv [-] 24
> 2019-02-01 15:20:44.053 25508 TRACE sysinv Traceback (most recent call last):
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/bin/sysinv-puppet", line 10, in <module>
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     sys.exit(main())
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/cmd/puppet.py", line 75, in main
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     CONF.action.func(CONF.action.path, CONF.action.hostname)
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/cmd/puppet.py", line 47, in create_host_config_action
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     operator.update_host_config(host)
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 30, in _wrapper
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     func(self, *args, **kwargs)
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 145, in update_host_config
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     config.update(puppet_plugin.obj.get_host_config(host))
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 99, in get_host_config
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     generate_interface_configs(context, config)
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1029, in generate_interface_configs
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     generate_network_config(context, config, iface)
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 924, in generate_network_config
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     network_config = get_interface_network_config(context, iface)
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 896, in get_interface_network_config
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     os_ifname = get_interface_os_ifname(context, iface)
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 507, in get_interface_os_ifname
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     os_ifname = get_interface_port_name(context, iface)
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 472, in get_interface_port_name
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     port = get_interface_port(context, iface)
> 2019-02-01 15:20:44.053 25508 TRACE sysinv   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 464, in get_interface_port
> 2019-02-01 15:20:44.053 25508 TRACE sysinv     return context['ports'][iface['id']]
> 2019-02-01 15:20:44.053 25508 TRACE sysinv KeyError: 24
> 2019-02-01 15:20:44.053 25508 TRACE sysinv
> Failed to update puppet hiera host config
>
> Configuration failed: Failed to update hiera configuration
> localhost:~#
>
> /tmp/apply_manifest.log:
> ========================
>
> cp: cannot stat ‘/tmp/hieradata/172.27.1.3.yaml’: No such file or directory
> cp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory
> cp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory
> Applying puppet bootstrap manifest...
> [DONE]
>
>
> ls /tmp/puppet/hieradata/
> =========================
> global.yaml  personality.yaml  secure_static.yaml  static.yaml
> localhost:~#
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
>
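"RegionOne" in the error above is just Keystone's default region name: the system client asked the identity catalog for the internal endpoint of the smapi (service management API) service in that region and found nothing. A minimal keystoneauth1 sketch of the same catalog lookup (illustrative only; the actual client code differs, the password is a placeholder, and 'smapi' as the service type is an assumption):

    from keystoneauth1 import session
    from keystoneauth1.identity import v3
    from keystoneauth1.exceptions.catalog import EndpointNotFound

    auth = v3.Password(auth_url='http://127.0.0.1:5000/v3',
                       username='admin', password='<admin password>',
                       project_name='admin',
                       user_domain_name='Default',
                       project_domain_name='Default')
    sess = session.Session(auth=auth)
    try:
        # Fails on this system: no 'smapi' endpoint is registered yet.
        sess.get_endpoint(service_type='smapi', interface='internal',
                          region_name='RegionOne')
    except EndpointNotFound as exc:
        print(exc)  # "... endpoint for ... service in RegionOne region not found"

The fix Matt gives further down in the thread is to register exactly that missing endpoint.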
From erich.cordoba.malibran at intel.com  Fri Feb  1 16:22:56 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Fri, 1 Feb 2019 16:22:56 +0000
Subject: [Starlingx-discuss] [Containers] Minimum memory requirements for containerized environment.
Message-ID:

Hi,

After two days of having a containerized system up (without running any edge application, just the steps in the wiki) I started to see some Out of Memory errors like this:

Out of memory: Kill process 17906 (kube-apiserver) score 1017 or sacrifice child
Killed process 17906 (kube-apiserver) total-vm:460684kB, anon-rss:335088kB, file-rss:0kB, shmem-rss:0kB

I have 16 GB in my virtual environment as the wiki says, but I'm wondering whether that should be enough: is this behavior some kind of memory leak, or is the actual minimum requirement higher?

Thanks!

-Erich

Here is some additional information:

controller-0:~$ uptime
 16:01:40 up 2 days, 22:30,  2 users,  load average: 93.28, 45.36, 36.84
controller-0:~$ free -h
              total        used        free      shared  buff/cache   available
Mem:            17G         12G        206M         62M        4.8G        1.8G
Swap:            0B          0B          0B
controller-0:~$ top

top - 16:00:10 up 2 days, 22:28,  2 users,  load average: 12.57, 22.80, 29.58
Tasks: 705 total,   2 running, 703 sleeping,   0 stopped,   0 zombie
%Cpu(s): 30.3 us, 18.3 sy,  0.1 ni, 27.6 id, 16.5 wa,  0.0 hi,  4.8 si,  2.4 st
KiB Mem : 18330568 total,   247252 free, 13073324 used,  5009992 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  1925172 avail Mem
  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
18306 root      10 -10 2333224 185064  13216 S 100.3  1.0   4222:53 ovs-vswitchd
   71 root      20   0       0      0      0 D  26.8  0.0  18:57.26 kswapd0
 7202 root      20   0 1657284 311532   3400 S  19.9  1.7 335:52.58 kubelet
16311 root      20   0  460276 330576   2884 S  12.6  1.8   0:40.81 kube-apiserver
 4291 root      20   0   66832  12224   2024 R  12.3  0.1   0:00.37 heat-manage
 1116 root      20   0 1362488 151664   1728 S  11.6  0.8 265:32.20 dockerd
    3 root      20   0       0      0      0 S   9.9  0.0  12:20.95 ksoftirqd/0
    1 root      20   0  542540 354140   1436 S   7.9  1.9  12:23.91 systemd
21081 42424     20   0  626124 122172   1476 S   7.3  0.7   2:28.49 apache2
 3795 999       20   0 5168964 258240      0 S   5.3  1.4  17:06.77 mysqld
26085 999       20   0 4965476 153132      0 S   4.6  0.8  54:16.75 beam.smp
16717 root      20   0   47140  13592      0 S   4.0  0.1   0:16.91 calico-felix
  673 root      20   0   50868  12528    512 S   3.6  0.1   0:01.02 kube-scheduler
20893 root      20   0 1049800 201616      0 S   3.3  1.1  40:30.02 ceph-osd
10666 root      20   0   10.7g 112996    868 S   3.0  0.6  46:28.91 etcd
14146 root      20   0   45972  11968      0 S   3.0  0.1  10:09.31 kube-proxy
 3435 root      20   0  142216   9708      0 S   2.3  0.1   8:28.87 coredns
19481 root      20   0  324232 112556   1404 S   2.3  0.6  13:20.23 nova-conductor
 1162 influxdb  20   0  674068   9604   1020 S   2.0  0.1   8:44.77 influxd
 1358 root      20   0 3469568  25344      0 S   2.0  0.1  13:21.31 docker-containe
13365 42424     20   0 2642820 104684      0 S   2.0  0.6  91:03.49 cinder-volume
16108 root      20   0   41044  12812      0 S   2.0  0.1   6:22.57 confd
18862 root      20   0  321820 109656    832 S   2.0  0.6   4:51.59 nova-scheduler
22478 root      20   0  248220  84272    608 S   2.0  0.5  82:31.64 neutron-dhcp-ag
  402 root      20   0   60372  29536     32 S   1.7  0.2   0:45.64 nginx-ingress-c
12942 42424     20   0  892092  94848    572 S   1.7  0.5  86:12.08 cinder-backup
 7367 root      18  -2  480460   9792   2436 S   1.3  0.1  18:51.83 sm
10762 root      20   0 2297792 116944      0 S   1.3  0.6  11:23.95 nova-compute
12122 root      20   0  142216  10148    296 S   1.3  0.1   8:23.28 coredns
15942 root      20   0  388912  61312   2716 S   1.3  0.3  28:51.34 fm-api
19825 root      19  -1 3313564  52676   1432 S   1.3  0.3   0:05.07 beam.smp
20357 sysinv    19  -1  382232  79644   2948 S   1.3  0.4   0:08.44 sysinv-api
  313 root       0 -20       0      0      0 S   1.0  0.0   1:03.17 kworker/0:1H
  720 dbus      20   0   60452   1696    896 S   1.0  0.0   3:50.63 dbus-daemon
 4073 root      20   0  113788   1684   1092 S   1.0  0.0   0:00.03 ceph
 4245 root      20   0   11828    776    440 S   1.0  0.0   0:00.03 heat-engine-cle
 4991 root      20   0  408012  39660    304 S   1.0  0.2   5:57.15 ceph-mon
 9171 root      20   0       0      0      0 S   1.0  0.0   0:01.48 kworker/0:47
12036 42424     20   0  666056  97616    448 S   1.0  0.5  26:53.04 cinder-volume
14554 root      20   0   59324  31316      0 S   1.0  0.2   7:19.50 nginx-ingress-c
18727 wrsroot   20   0  160588   2428   1100 S   1.0  0.0   0:02.46 top
20515 42424     20   0  678900 110492    428 S   1.0  0.6  26:26.14 cinder-api
20571 42424     20   0  627436  94572    312 S   1.0  0.5  26:41.77 glance-api
    8 root      20   0       0      0      0 S   0.7  0.0   9:28.58 rcu_preempt
 4429 snmpd     19  -1  245388   4620    480 S   0.7  0.0   0:08.51 snmpd
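The numbers above show the node pinned against its memory: roughly 1.8G available out of 17G, no swap configured, and kswapd0 burning CPU, which is consistent with the kernel OOM killer sacrificing kube-apiserver. A small generic helper for watching that headroom from the controller shell (just a sketch that reads /proc/meminfo; not StarlingX tooling):

    # Print total vs. available memory; /proc/meminfo values are in kB.
    def meminfo():
        info = {}
        with open('/proc/meminfo') as f:
            for line in f:
                key, rest = line.split(':', 1)
                info[key] = int(rest.split()[0])
        return info

    m = meminfo()
    total, avail = m['MemTotal'], m['MemAvailable']
    print('MemTotal:     %6d MiB' % (total // 1024))
    print('MemAvailable: %6d MiB (%.1f%%)' % (avail // 1024,
                                              100.0 * avail / total))

Sustained low MemAvailable with no swap, as in the free output above, is exactly the regime where per-process OOM kills begin.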
From Matt.Peters at windriver.com  Fri Feb  1 16:38:15 2019
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Fri, 1 Feb 2019 16:38:15 +0000
Subject: [Starlingx-discuss] Installation problem: Configuration failed: Failed to update hiera configuration
In-Reply-To: <1883615762.190291.1549037961751@communicator.strato.com>
References: <1484346064.187533.1549035095571@communicator.strato.com> <4430FF2D-2A7C-4C5F-85C9-D3EEBC250697@windriver.com> <1883615762.190291.1549037961751@communicator.strato.com>
Message-ID: <87F75100-3132-4085-B3CA-E851E0725B6C@windriver.com>

Hello,
I'm not sure why it is giving that error response.

Are you able to run the following?
openstack endpoint list


On 2019-02-01, 11:19 AM, "Marcel Schaible" wrote:

Hi Matt,

thanks for your response.

controller-0:~# source /etc/nova/openrc
-0oot at controller-0 ~(keystone_admin)]# system host-ethernet-port-list controller
internalURL endpoint for smapi service in RegionOne region not found
[root at controller-0 ~(keystone_admin)]# system host-if-list -a controller-0
internalURL endpoint for smapi service in RegionOne region not found
[root at controller-0 ~(keystone_admin)]#

Any idea what "RegionOne" means?
Thanks
Marcel

> "Peters, Matt" wrote on 1 February 2019 at 17:07:
>
>
> Hello Marcel,
> If your system is still in that state, can you run the following commands?
>
> source /etc/nova/openrc
> system host-ethernet-port-list controller-0
> system host-if-list -a controller-0
>
>
> On 2019-02-01, 10:32 AM, "Marcel Schaible" wrote:
>
> Hi,
>
> during installation when applying the configuration I get an error message from hiera:
>
> Configuration failed: Failed to update hiera configuration
>
> Configuration log and apply_manifest.log is attached below.
>
> Any idea or hint?
>
> Thanks
>
> Marcel
>
> [... full config_controller session and sysinv traceback snipped; identical to the copy quoted earlier in this thread ...]
From marcel at schaible-consulting.de  Fri Feb  1 16:47:54 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Fri, 1 Feb 2019 17:47:54 +0100 (CET)
Subject: [Starlingx-discuss] Installation problem: Configuration failed: Failed to update hiera configuration
In-Reply-To: <87F75100-3132-4085-B3CA-E851E0725B6C@windriver.com>
References: <1484346064.187533.1549035095571@communicator.strato.com> <4430FF2D-2A7C-4C5F-85C9-D3EEBC250697@windriver.com> <1883615762.190291.1549037961751@communicator.strato.com> <87F75100-3132-4085-B3CA-E851E0725B6C@windriver.com>
Message-ID: <420746737.191750.1549039674257@communicator.strato.com>

[root at controller-0 ~(keystone_admin)]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                      |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+
| 8bca85c075e74635a4c81c978da9434c | RegionOne | keystone     | identity     | True    | admin     | http://127.0.0.1:5000/v3 |
| edc1d5eb056744adba859c0d4630ba5b | RegionOne | keystone     | identity     | True    | internal  | http://127.0.0.1:5000/v3 |
| 73e43933bd6744d7ae37bb938194be88 | RegionOne | keystone     | identity     | True    | public    | http://127.0.0.1:5000/v3 |
| f5b6a70a9a3a42b99230462fd72188e1 | RegionOne | sysinv       | platform     | True    | admin     | http://127.0.0.1:6385/v1 |
| 99c8461427b54839a0ca647815e0a469 | RegionOne | sysinv       | platform     | True    | internal  | http://127.0.0.1:6385/v1 |
| a8a78b29d3054189bd734261c442ad4e | RegionOne | sysinv       | platform     | True    | public    | http://127.0.0.1:6385/v1 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+

> "Peters, Matt" wrote on 1 February 2019 at 17:38:
>
>
> Hello,
> I'm not sure why it is giving that error response.
>
> Are you able to run the following?
> openstack endpoint list
>
>
> On 2019-02-01, 11:19 AM, "Marcel Schaible" wrote:
>
> Hi Matt,
>
> thanks for your response.
>
> controller-0:~# source /etc/nova/openrc
> -0oot at controller-0 ~(keystone_admin)]# system host-ethernet-port-list controller
> internalURL endpoint for smapi service in RegionOne region not found
> [root at controller-0 ~(keystone_admin)]# system host-if-list -a controller-0
> internalURL endpoint for smapi service in RegionOne region not found
> [root at controller-0 ~(keystone_admin)]#
>
> Any idea what "RegionOne" means?
> Thanks
> Marcel
>
> > "Peters, Matt" wrote on 1 February 2019 at 17:07:
> >
> >
> > Hello Marcel,
> > If your system is still in that state, can you run the following commands?
> >
> > source /etc/nova/openrc
> > system host-ethernet-port-list controller-0
> > system host-if-list -a controller-0
> >
> >
> > On 2019-02-01, 10:32 AM, "Marcel Schaible" wrote:
> >
> > Hi,
> >
> > during installation when applying the configuration I get an error message from hiera:
> >
> > Configuration failed: Failed to update hiera configuration
> >
> > Configuration log and apply_manifest.log is attached below.
> > Any idea or hint?
> >
> > Thanks
> >
> > Marcel
> >
> > [... remainder of the quoted configuration log and sysinv traceback snipped; identical to the copy earlier in this thread ...]
From Matt.Peters at windriver.com  Fri Feb  1 16:57:12 2019
From: Matt.Peters at windriver.com (Peters, Matt)
Date: Fri, 1 Feb 2019 16:57:12 +0000
Subject: [Starlingx-discuss] Installation problem: Configuration failed: Failed to update hiera configuration
In-Reply-To: <420746737.191750.1549039674257@communicator.strato.com>
References: <1484346064.187533.1549035095571@communicator.strato.com> <4430FF2D-2A7C-4C5F-85C9-D3EEBC250697@windriver.com> <1883615762.190291.1549037961751@communicator.strato.com> <87F75100-3132-4085-B3CA-E851E0725B6C@windriver.com> <420746737.191750.1549039674257@communicator.strato.com>
Message-ID:

Hello Marcel,
Ok, so apparently the client was updated to try and retrieve the smapi endpoint as part of all system requests (not sure why), so you will need to add the following so that the system commands against inventory will work.

Can you try running the following and then re-executing the system commands below?

openstack endpoint create --region RegionOne smapi internal http://127.0.0.1:7777


On 2019-02-01, 11:48 AM, "Marcel Schaible" wrote:

[root at controller-0 ~(keystone_admin)]# openstack endpoint list
[... endpoint listing and re-quoted thread snipped; identical to Marcel's message above ...]
> > [DONE] > > > > > > ls /tmp/puppet/hieradata/ > > ========================= > > global.yaml personality.yaml secure_static.yaml static.yaml > > localhost:~# > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > >
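For anyone hitting the same hiera failure, the diagnostics requested earlier in this thread can be gathered in one pass before replying. A minimal sketch that uses only paths and commands already named above; adjust the host name if the node is not controller-0:

# log referenced by the config_controller failure message
tail -n 50 /var/log/puppet/latest/puppet.log
# hieradata that the bootstrap managed to generate
ls /tmp/puppet/hieradata/
# interface inventory that sysinv-puppet was walking when it raised KeyError: 24
source /etc/nova/openrc
system host-ethernet-port-list controller-0
system host-if-list -a controller-0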
From xavinux at gmail.com Fri Feb 1 17:02:58 2019 From: xavinux at gmail.com (Javier Romero) Date: Fri, 1 Feb 2019 14:02:58 -0300 Subject: [Starlingx-discuss] Contribution to the project. In-Reply-To: References: <9A85D2917C58154C960D95352B22818BBFD18EB2@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BBFD18F0D@fmsmsx123.amr.corp.intel.com> Message-ID: Hi Curtis, Thank you very much for your answer, will take a look at that list too. Best Regards, *Javier Romero* On Fri, Feb 1, 2019 at 12:32, Curtis () wrote: > Hi Javier, > > This doesn't help you much right now, but just to let you know we are in > the process of trying to set up some tagging for our issues and work items > that will show if they are good for new contributors and such, like what > many other open source projects do. Watch the list for updates related to > that project. > > Thanks, > Curtis > > On Thu, Jan 31, 2019 at 6:04 PM Javier Romero wrote: > >> Hi Bruce, >> >> Thanks for your information. >> >> Will join the weekly call next Wednesday, now I'm checking the wiki and >> docs of the project. >> >> I can use a VM with KVM hypervisor to run StarlingX, it has 8 cores and >> 16 GB of RAM, but it seems that 32 GB are needed at least. Will have to find >> something else... >> >> >> >> >> On Thursday, January 31, 2019, Jones, Bruce E >> wrote: >> >>> Javier, thank you. I’m going to suggest a couple of things for you to >>> try out, to get started. >>> >>> You can join our weekly community call. It’s on Wednesdays at 1400 >>> UTC. Details are here: >>> https://wiki.openstack.org/wiki/Starlingx/Meetings. StarlingX is a big >>> project so we’ve divided it up into sub-projects – each has a call of their >>> own, listed on that page, that you could join as well. >>> >>> You should check out our wiki: https://wiki.openstack.org/wiki/StarlingX >>> and our documentation https://docs.starlingx.io/ >>> >>> You can download and run a pre-built StarlingX image from here: >>> http://mirror.starlingx.cengn.ca/mirror/starlingx/centos/2018.10/20181110/outputs/iso/ >>> >>> You can find some instructions on how to install the image here: >>> https://docs.starlingx.io/installation_guide/index.html. The document >>> needs some changes, which we are working on as part of the Documentation >>> sub-project. You will need a VM on a machine with lots of memory and/or a >>> dedicated system to boot the image on – the image installs an OS and all of >>> StarlingX. >>> >>> Myself and the community will be happy to answer any questions you have. >>> >>> brucej >>> >>> *From:* Javier Romero [mailto:xavinux at gmail.com] >>> *Sent:* Thursday, January 31, 2019 1:41 PM >>> *To:* Jones, Bruce E >>> *Cc:* starlingx-discuss at lists.starlingx.io >>> *Subject:* Re: Contribution to the project. >>> >>> Thanks for your answer. >>> >>> Well maybe testing but please, let me know those parts where you need >>> more help to see if I can be useful in any of them. >>> >>> >>> Best Regards, >>> >>> >>> >>> >>> >>> On Thursday, January 31, 2019, Jones, Bruce E >>> wrote: >>> >>> We’d love to have your help! What part of the project are you >>> interested in? >>> >>> brucej >>> >>> *From:* Javier Romero [mailto:xavinux at gmail.com] >>> *Sent:* Thursday, January 31, 2019 12:42 PM >>> *To:* starlingx-discuss at lists.starlingx.io >>> *Subject:* [Starlingx-discuss] Contribution to the project. >>> Hi Team, >>> >>> My name is Javier and I live in Buenos Aires, Argentina. >>> >>> I work in the Network Operations Center of an Internet Service Provider >>> where I deploy and manage mission critical Linux Servers. >>> >>> I would like to know if there is something I can help with on the >>> StarlingX project. >>> >>> Thanks for your attention. >>> >>> Best Regards, >>> >>> >>> >>> -- >>> >>> *Javier Romero* >>> >>> >>> >>> >>> >>> >>> >>> >>> >>> -- >>> >>> *Javier Romero* >>> >>> >>> >>> >>> >>> >> >> >> -- >> *Javier Romero* >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> > > > -- > Blog: serverascode.com > -------------- next part -------------- An HTML attachment was scrubbed... URL: From tyler.smith at windriver.com Fri Feb 1 17:16:21 2019 From: tyler.smith at windriver.com (Smith, Tyler) Date: Fri, 1 Feb 2019 17:16:21 +0000 Subject: [Starlingx-discuss] [Containers] Change to container install process Message-ID: Hello, A change I've made which got merged this morning affects the install process for containerized systems: you must now add host labels to controllers and computes before unlocking them. I have updated the wikis accordingly. Thanks, Tyler -------------- next part -------------- An HTML attachment was scrubbed... URL:
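To make the new step above concrete: the labels are assigned with the sysinv CLI before the unlock. A minimal sketch, assuming the label keys documented on the containers installation wiki of this period; the host names and label keys here are illustrative, not taken from Tyler's change itself:

source /etc/nova/openrc
# controllers need the control-plane label before being unlocked
system host-label-assign controller-0 openstack-control-plane=enabled
# computes need the compute/vswitch labels (keys assumed from the wiki)
system host-label-assign compute-0 openstack-compute-node=enabled
system host-label-assign compute-0 openvswitch=enabled
# verify the labels, then unlock as usual
system host-label-list compute-0
system host-unlock compute-0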
From cesar.lara at intel.com Fri Feb 1 20:53:52 2019 From: cesar.lara at intel.com (Lara, Cesar) Date: Fri, 1 Feb 2019 20:53:52 +0000 Subject: [Starlingx-discuss] [build][meetings] Build team meeting minutes 1/31/2019 Message-ID: <0B566C62EC792145B40E29EFEBF1AB47105CF462@fmsmsx104.amr.corp.intel.com> Build team meeting Attendees Marce, Erich, Pipo, Memo, Luis, Abraham, Saul, Victor, Bruce, Ken, Jason Agenda for 1/31/2019 - Review stories for next release - Explore build mechanism for feature branches - Cengn update - Opens Notes Review stories for next release: We reviewed the stories committed for the next release out of the build team. After a brief overview of these (2004011-13), we realized that the activities are linked to tasks under 2004043, so we decided to do some housekeeping and get rid of the duplicated ones. AR Cesar to send Ghada an email about this. Explore build mechanism for feature branches: we have the capacity to deliver these on demand; the overall mechanism works the same as for our release builds, so if there is a requirement to build a custom ISO file based on any given branch, we have the resources to do it. Cengn update: The build system is running stably; a lot of effort went into builds based on the new architecture, and we do have email notifications ready for build failures. The migration of the hosted mirror to K8s clusters inside Cengn is still pending. Opens: There was a proposal for two new user experience features: - Provide a way to check requirements before starting a build. - Improve build system logs. The ask was whether these new features would need to follow the spec process to get accepted; the answer was yes, so Erich will start putting something together to follow up on this. Regards Cesar Lara Software Engineering Manager OpenSource Technology Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From jose.perez.carranza at intel.com Fri Feb 1 21:27:17 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Fri, 1 Feb 2019 21:27:17 +0000 Subject: Re: [Starlingx-discuss] [Containers] Deployment status In-Reply-To: References: <0A5D9A624DF90343892F8F3FE7DE525A2A936603@fmsmsx101.amr.corp.intel.com> <0A5D9A624DF90343892F8F3FE7DE525A2A936853@fmsmsx101.amr.corp.intel.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A936C1F@fmsmsx101.amr.corp.intel.com> Mingyuan, After applying the workaround you mentioned, I only saw the freeze one time, on the glance pod at 30%. After the timeout expired I ran the apply again and everything worked correctly, without any freeze, until it completed successfully. Regards, José > -----Original Message----- > From: Qi, Mingyuan > Sent: Friday, February 1, 2019 1:11 AM > To: Perez Carranza, Jose ; starlingx- > discuss at lists.starlingx.io; Al.Bailey at windriver.com; Wensley, Barton > > Subject: RE: [Starlingx-discuss] [Containers] Deployment status > > Jose, > > Looking into your log, it seems that coredns is not ready. > Here is the workaround for it: > 1. kubectl -n kube-system edit configmap coredns 2. remove "loop" line and > save 3. kubectl -n kube-system delete rs {your-coredns-replicaset-name} > > Bart & Al, > > It's seems to be a real issue for users behind a proxy which I guess coredns is > not able to access external nameserver and falls in loop. > Could user specified nameserver to be updated to coredns? > > Mingyuan > > > -----Original Message----- > From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] > Sent: Friday, February 1, 2019 1:31 > To: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Containers] Deployment status > > Hi, > > I did some tests with the most recent ISO (01/31) that has already the > changes to add a proxy on config_controller, still facing the below > mentioned issues, I created a Launchpad [1] to track this. > > https://bugs.launchpad.net/starlingx/+bug/1814142 > > Regards, > José > > > -----Original Message----- > > From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] > > Sent: Wednesday, January 30, 2019 4:31 PM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] [Containers] Deployment status > > > > Hi All > > > > ** Virtual Environment ** > > > > I'm working on setting up a StarlingX deployment with containers as > > described on [1] using proxies but I have faces some issues described > below: > > > > - I have successfully completed all the steps until "system > > application-apply stx-openstack" when the system becomes unstable, the > > application-apply gets freezed in random pods until timeout of 30 min > > is reached, I discovered that killing armada process and start again the > "system application-apply" > > the application advance until again is freezed in a random pod. > > After some cycles the application is successfully completed. > > > > Here is a log [2] of one point when the process is freeze, the error > > is always the same, the only element that changes is the pod and its > > random, not always is the same pod. > > > > ===== > > 019-01-30 17:24:13.364 11824 ERROR sysinv.conductor.kube_app [-] > > Received a false positive response from Docker/Armada. Failed to apply > > application manifest /manifests/stx-openstack-manifest-no-tests.yaml: > > 2019- > > 01-30 16:54:07.367 20554 DEBUG armada.handlers.document [-] Resolving > > reference /manifests/stx-openstack-manifest-no-tests.yaml. > > resolve_reference /usr/local/lib/python3.5/site- > > packages/armada/handlers/document.py:49 > > ===== > > > > -After is completed correctly I'm receiving failures on "Verify the > > cluster endpoints" when running `openstack endpoint list`, I updated > > the .yml file with the correct password but connection is not > > established correctly. Here is the output [3]. > > > > ====== > > controller-0:~$ openstack endpoint list Failed to discover available > > identity versions when contacting > http://keystone.openstack.svc.cluster.local/v3. > > Attempting to parse version from URL. > > Unable to establish connection to > > http://keystone.openstack.svc.cluster.local/v3/auth/tokens: > > HTTPConnectionPool(host='keystone.openstack.svc.cluster.local', port=80): > > Max retries exceeded with url: /v3/auth/tokens (Caused by > > NewConnectionError(' > o n object at 0x7fe65ffb90d0>: Failed to establish a new connection: > > [Errno -3] Temporary failure in name resolution',)) ====== > > > > 1. https://wiki.openstack.org/wiki/StarlingX/Containers/Installation > > 2. http://paste.openstack.org/show/744277 > > 3. http://paste.openstack.org/show/744281 > > > > Regards, > > José > > > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
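As a footnote to the workaround in this thread: once the coredns replicaset has been recreated, cluster DNS can be sanity-checked before re-running the application apply. A minimal sketch, assuming kubectl access on the active controller; the test pod name and busybox image tag are illustrative, and the k8s-app label is the one kubeadm-based deployments normally use:

# confirm the recreated coredns pods are Running
kubectl -n kube-system get pods -l k8s-app=kube-dns
# resolve an in-cluster service name from a throwaway pod
kubectl run dns-test --rm -it --restart=Never --image=busybox:1.28 -- nslookup keystone.openstack.svc.cluster.local
# the endpoint check from earlier in the thread should then stop failing with name-resolution errors
openstack endpoint list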
From xavinux at gmail.com Sat Feb 2 02:54:12 2019 From: xavinux at gmail.com (Javier Romero) Date: Fri, 1 Feb 2019 23:54:12 -0300 Subject: [Starlingx-discuss] Contribution to the project. In-Reply-To: References: Message-ID: Hi Abraham, Thanks for your attention, and I can help on the documentation team if you need help there. Have a good day! Best Regards, On Thursday, January 31, 2019, Arce Moreno, Abraham < abraham.arce.moreno at intel.com> wrote: > Javier, > > > > Well maybe testing but please, let me know those parts where you need > > more help to see if I can be useful in any of them. > > Please consider the invitation to join us at the Documentation Team [0] > > Right now we have activities around your area of expertise, creating the documentation > to deploy and manage the different StarlingX configurations and its > services. > > Please let me know so we can have a call with you. > > [0] https://etherpad.openstack.org/p/stx-documentation > -- *Javier Romero* -------------- next part -------------- An HTML attachment was scrubbed... URL:
From mario.alfredo.c.arevalo at intel.com Sat Feb 2 03:54:05 2019 From: mario.alfredo.c.arevalo at intel.com (Arevalo, Mario Alfredo C) Date: Sat, 2 Feb 2019 03:54:05 +0000 Subject: [Starlingx-discuss] [Containers] Background info on helm charts In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA42F3BA@ALA-MBD.corp.ad.wrs.com> References: <6594B51DBE477C48AAE23675314E6C466456672F@fmsmsx107.amr.corp.intel.com> ,<6703202FD9FDFF4A8DA9ACF104AE129FBA42F3BA@ALA-MBD.corp.ad.wrs.com> Message-ID: <6594B51DBE477C48AAE23675314E6C4664568846@fmsmsx107.amr.corp.intel.com> Hi Folks, This is a short update about this task (I will be on holiday for our next containerization meeting). During this week I have been exploring the scripts and tools involved in the containerization building process. This activity and the wiki page shared by Don Penney have allowed me to get a better understanding of the work-flow. I have sent a PR[1] with the required files to create an image for the fm-rest-api service. At this moment it is WIP because I still need to do testing. Possibly it will require more dependencies which are not available in the wheels.cfg/tarball; I will continue working on this. During the build tools exploration I had some network-related issues because I am working behind a proxy, so I sent a patch[2] to set the proxy in the docker build/run commands and avoid manual modification efforts. Any comments, feel free to contact me. [1] https://review.openstack.org/#/c/634540/ [2] https://review.openstack.org/#/c/634542/ Best regards. Mario. ________________________________________ From: Penney, Don [Don.Penney at windriver.com] Sent: Wednesday, January 30, 2019 7:24 AM To: Arevalo, Mario Alfredo C; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages -----Original Message----- From: Penney, Don Sent: Tuesday, January 29, 2019 10:34 AM To: 'Arevalo, Mario Alfredo C'; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Mario, I'm making "writing a wiki" one of my top priorities. I'm hoping to get started on it today, and try to get something published in the next couple of days, barring other issues coming along. Cheers, Don. -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Monday, January 28, 2019 4:15 PM To: Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don Subject: RE: [Containers] Background info on helm charts Hi Frank, Thank you for the information, however my doubts are oriented more to the work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] for FM service in the last week, nevertheless, the starlingx work-flow includes some parts which I have not digested completely yet.
Best regards. Mario. [1] https://github.com/MarioCarrilloA/chart-playground [2] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L4 [3] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L9 From: Miller, Frank [Frank.Miller at windriver.com] Sent: Monday, January 28, 2019 11:19 AM To: Arevalo, Mario Alfredo C Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: [Containers] Background info on helm charts Mario: On the containers community call this morning we took an action to identify information about helm charts. Irina Mihai identified four references that she used when working on the cinder helm chart overrides. See [1] to [4] below. Also as you work on creating a helm chart for the FM service, you should look at examples of existing helm charts for a reference. 2 good examples are: * Cinder helm chart which is available in the upstream openstack-helm project [5] and uses certain defaults. We have added StarlingX specific overrides which are generated from code we added [6]. * Nova-api-proxy which is a StarlingX specific service and hence was created from scratch [7,8] Frank [1] https://docs.helm.sh/developing_charts/ [2] https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221 [3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/ [4] Helm chart install order https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29 [5] Cinder helm chart in upstream openstack-helm: https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder [6] StarlingX cinder overrides: See cinder.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm [7] Helm chart for nova_api_proxy: https://git.openstack.org/cgit/openstack/stx-config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/nova-api-proxy [8] nova_api_proxy override code: See nova_api_proxy.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm From xiongzhiwei at baicells.com Sat Feb 2 04:38:10 2019 From: xiongzhiwei at baicells.com (xiongzhiwei at baicells.com) Date: Sat, 2 Feb 2019 12:38:10 +0800 Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Message-ID: <20190202123809273461136@baicells.com> Hi, I am trying to deploy starlingx on a bearmetal server, but failed, After execute "sudo config_controller" and some default configuration confirmed(all-in-one, simplex), exception printed as below: 01/08: Creating bootstrap configuration ... DONE 02/08: Applying bootstrap manifest ... [ 452.567312] EXT4-fs error (device drbd1): ext4_journal_check_start:56: Detected aborted journal [ 452.576753] EXT4-fs (drbd1): Remounting filesystem read-only [ 466.880792] EXT4-fs error (device drbd3): ext4_journal_check_start:56: Detected aborted journal [ 466.886032] EXT4-fs (drbd3): Remounting filesystem read-only [ 479.755269] EXT4-fs error (device drbd0): ext4_journal_check_start:56: Detected aborted journal [ 479.760818] EXT4-fs (drbd0): Remounting filesystem read-only [ 479.766425] EXT4-fs (drbd0): ext4_writepages: jbd2_start: 13312 pages, ino 28; err -30 Failed to execute bootstrap manifest Configuration failed: failed to apply bootstrap manifest. See /var/log/puppet/latest/puppet.log for details. I had deployed successfully on a qemu VM with same image, also all-in-one and simplex. Is there any configurations missed for the bear metal server? 
I had recovered all BIOS configurations to default for it. Could anyone help me to fix it? Thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Sat Feb 2 05:00:04 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Sat, 2 Feb 2019 05:00:04 +0000 Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server In-Reply-To: <20190202123809273461136@baicells.com> References: <20190202123809273461136@baicells.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E75C28@SHSMSX104.ccr.corp.intel.com> HI, Tim, Can you please provide the following info: - Exact version of the StarlingX: if you downloaded it from Cengen, please provide the link; if you built it by yourself, please provide the date on master. - Your HW config for your bare metal server. Our recommended HW config can be found here: https://docs.starlingx.io/installation_guide/index.html Thanks. - cindy From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Saturday, February 2, 2019 12:38 PM To: starlingx-discuss Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi, I am trying to deploy starlingx on a bearmetal server, but failed, After execute "sudo config_controller" and some default configuration confirmed(all-in-one, simplex), exception printed as below: 01/08: Creating bootstrap configuration ... DONE 02/08: Applying bootstrap manifest ... [ 452.567312] EXT4-fs error (device drbd1): ext4_journal_check_start:56: Detected aborted journal [ 452.576753] EXT4-fs (drbd1): Remounting filesystem read-only [ 466.880792] EXT4-fs error (device drbd3): ext4_journal_check_start:56: Detected aborted journal [ 466.886032] EXT4-fs (drbd3): Remounting filesystem read-only [ 479.755269] EXT4-fs error (device drbd0): ext4_journal_check_start:56: Detected aborted journal [ 479.760818] EXT4-fs (drbd0): Remounting filesystem read-only [ 479.766425] EXT4-fs (drbd0): ext4_writepages: jbd2_start: 13312 pages, ino 28; err -30 Failed to execute bootstrap manifest Configuration failed: failed to apply bootstrap manifest. See /var/log/puppet/latest/puppet.log for details. I had deployed successfully on a qemu VM with same image, also all-in-one and simplex. Is there any configurations missed for the bear metal server? I had recovered all BIOS configurations to default for it. Could anyone help me to fix it? Thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiongzhiwei at baicells.com Sat Feb 2 06:16:21 2019 From: xiongzhiwei at baicells.com (xiongzhiwei at baicells.com) Date: Sat, 2 Feb 2019 14:16:21 +0800 Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server References: <20190202123809273461136@baicells.com>, <2FD5DDB5A04D264C80D42CA35194914F35E75C28@SHSMSX104.ccr.corp.intel.com> Message-ID: <20190202141620901159145@baicells.com> Hi Cindy, This image was build by myself, fetched on 24th Jan. It is working normally in my VM enviroment but failed on the bear metal server. My server is Huawei RH2288v3: E5-2630 v3 at 2.4GHz, 2*8cores, 16*8G DDR4 RAM, 2*900G SAS+2*240G SATA HD. 
Thanks Tim Xiong From: Xie, Cindy Date: 2019-02-02 13:00 To: xiongzhiwei at baicells.com; starlingx-discuss Subject: RE: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server HI, Tim, Can you please provide the following info: - Exact version of the StarlingX: if you downloaded it from Cengen, please provide the link; if you built it by yourself, please provide the date on master. - Your HW config for your bare metal server. Our recommended HW config can be found here: https://docs.starlingx.io/installation_guide/index.html Thanks. - cindy From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Saturday, February 2, 2019 12:38 PM To: starlingx-discuss Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi, I am trying to deploy starlingx on a bearmetal server, but failed, After execute "sudo config_controller" and some default configuration confirmed(all-in-one, simplex), exception printed as below: 01/08: Creating bootstrap configuration ... DONE 02/08: Applying bootstrap manifest ... [ 452.567312] EXT4-fs error (device drbd1): ext4_journal_check_start:56: Detected aborted journal [ 452.576753] EXT4-fs (drbd1): Remounting filesystem read-only [ 466.880792] EXT4-fs error (device drbd3): ext4_journal_check_start:56: Detected aborted journal [ 466.886032] EXT4-fs (drbd3): Remounting filesystem read-only [ 479.755269] EXT4-fs error (device drbd0): ext4_journal_check_start:56: Detected aborted journal [ 479.760818] EXT4-fs (drbd0): Remounting filesystem read-only [ 479.766425] EXT4-fs (drbd0): ext4_writepages: jbd2_start: 13312 pages, ino 28; err -30 Failed to execute bootstrap manifest Configuration failed: failed to apply bootstrap manifest. See /var/log/puppet/latest/puppet.log for details. I had deployed successfully on a qemu VM with same image, also all-in-one and simplex. Is there any configurations missed for the bear metal server? I had recovered all BIOS configurations to default for it. Could anyone help me to fix it? Thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Sat Feb 2 06:27:30 2019 From: yong.hu at intel.com (Hu, Yong) Date: Sat, 2 Feb 2019 06:27:30 +0000 Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server In-Reply-To: <20190202141620901159145@baicells.com> References: <20190202123809273461136@baicells.com> <2FD5DDB5A04D264C80D42CA35194914F35E75C28@SHSMSX104.ccr.corp.intel.com> <20190202141620901159145@baicells.com> Message-ID: <6392F92B-138A-42D9-ABB2-6D5EAD30B2E8@intel.com> If using 240G SATA HD as the boot disk, the storage might not be enough. At least, in our virtual environment, the boot disk has to be larger than 250 GB. From: "xiongzhiwei at baicells.com" Date: Saturday, 2 February 2019 at 2:17 PM To: "Xie, Cindy" , starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi Cindy, This image was build by myself, fetched on 24th Jan. It is working normally in my VM enviroment but failed on the bear metal server. My server is Huawei RH2288v3: E5-2630 v3 at 2.4GHz, 2*8cores, 16*8G DDR4 RAM, 2*900G SAS+2*240G SATA HD. 
Thanks Tim Xiong From: Xie, Cindy Date: 2019-02-02 13:00 To: xiongzhiwei at baicells.com; starlingx-discuss Subject: RE: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server HI, Tim, Can you please provide the following info: - Exact version of the StarlingX: if you downloaded it from Cengen, please provide the link; if you built it by yourself, please provide the date on master. - Your HW config for your bare metal server. Our recommended HW config can be found here: https://docs.starlingx.io/installation_guide/index.html Thanks. - cindy From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Saturday, February 2, 2019 12:38 PM To: starlingx-discuss Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi, I am trying to deploy starlingx on a bearmetal server, but failed, After execute "sudo config_controller" and some default configuration confirmed(all-in-one, simplex), exception printed as below: 01/08: Creating bootstrap configuration ... DONE 02/08: Applying bootstrap manifest ... [ 452.567312] EXT4-fs error (device drbd1): ext4_journal_check_start:56: Detected aborted journal [ 452.576753] EXT4-fs (drbd1): Remounting filesystem read-only [ 466.880792] EXT4-fs error (device drbd3): ext4_journal_check_start:56: Detected aborted journal [ 466.886032] EXT4-fs (drbd3): Remounting filesystem read-only [ 479.755269] EXT4-fs error (device drbd0): ext4_journal_check_start:56: Detected aborted journal [ 479.760818] EXT4-fs (drbd0): Remounting filesystem read-only [ 479.766425] EXT4-fs (drbd0): ext4_writepages: jbd2_start: 13312 pages, ino 28; err -30 Failed to execute bootstrap manifest Configuration failed: failed to apply bootstrap manifest. See /var/log/puppet/latest/puppet.log for details. I had deployed successfully on a qemu VM with same image, also all-in-one and simplex. Is there any configurations missed for the bear metal server? I had recovered all BIOS configurations to default for it. Could anyone help me to fix it? Thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiongzhiwei at baicells.com Sat Feb 2 06:34:57 2019 From: xiongzhiwei at baicells.com (xiongzhiwei at baicells.com) Date: Sat, 2 Feb 2019 14:34:57 +0800 Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server References: <20190202123809273461136@baicells.com>, <2FD5DDB5A04D264C80D42CA35194914F35E75C28@SHSMSX104.ccr.corp.intel.com>, <20190202141620901159145@baicells.com>, <6392F92B-138A-42D9-ABB2-6D5EAD30B2E8@intel.com> Message-ID: <20190202143457151410149@baicells.com> Thanks Hu Yong and Cindy, I am trying to again after remove these two SATA HD. Will tell you once successed. Regards Tim From: Hu, Yong Date: 2019-02-02 14:27 To: xiongzhiwei at baicells.com; Xie, Cindy; starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server If using 240G SATA HD as the boot disk, the storage might not be enough. At least, in our virtual environment, the boot disk has to be larger than 250 GB. From: "xiongzhiwei at baicells.com" Date: Saturday, 2 February 2019 at 2:17 PM To: "Xie, Cindy" , starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi Cindy, This image was build by myself, fetched on 24th Jan. It is working normally in my VM enviroment but failed on the bear metal server. 
My server is Huawei RH2288v3: E5-2630 v3 at 2.4GHz, 2*8cores, 16*8G DDR4 RAM, 2*900G SAS+2*240G SATA HD. Thanks Tim Xiong From: Xie, Cindy Date: 2019-02-02 13:00 To: xiongzhiwei at baicells.com; starlingx-discuss Subject: RE: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server HI, Tim, Can you please provide the following info: - Exact version of the StarlingX: if you downloaded it from Cengen, please provide the link; if you built it by yourself, please provide the date on master. - Your HW config for your bare metal server. Our recommended HW config can be found here: https://docs.starlingx.io/installation_guide/index.html Thanks. - cindy From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Saturday, February 2, 2019 12:38 PM To: starlingx-discuss Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi, I am trying to deploy starlingx on a bearmetal server, but failed, After execute "sudo config_controller" and some default configuration confirmed(all-in-one, simplex), exception printed as below: 01/08: Creating bootstrap configuration ... DONE 02/08: Applying bootstrap manifest ... [ 452.567312] EXT4-fs error (device drbd1): ext4_journal_check_start:56: Detected aborted journal [ 452.576753] EXT4-fs (drbd1): Remounting filesystem read-only [ 466.880792] EXT4-fs error (device drbd3): ext4_journal_check_start:56: Detected aborted journal [ 466.886032] EXT4-fs (drbd3): Remounting filesystem read-only [ 479.755269] EXT4-fs error (device drbd0): ext4_journal_check_start:56: Detected aborted journal [ 479.760818] EXT4-fs (drbd0): Remounting filesystem read-only [ 479.766425] EXT4-fs (drbd0): ext4_writepages: jbd2_start: 13312 pages, ino 28; err -30 Failed to execute bootstrap manifest Configuration failed: failed to apply bootstrap manifest. See /var/log/puppet/latest/puppet.log for details. I had deployed successfully on a qemu VM with same image, also all-in-one and simplex. Is there any configurations missed for the bear metal server? I had recovered all BIOS configurations to default for it. Could anyone help me to fix it? Thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... URL: From xiongzhiwei at baicells.com Sat Feb 2 07:07:15 2019 From: xiongzhiwei at baicells.com (xiongzhiwei at baicells.com) Date: Sat, 2 Feb 2019 15:07:15 +0800 Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server References: <20190202123809273461136@baicells.com>, <2FD5DDB5A04D264C80D42CA35194914F35E75C28@SHSMSX104.ccr.corp.intel.com>, <20190202141620901159145@baicells.com>, <6392F92B-138A-42D9-ABB2-6D5EAD30B2E8@intel.com>, <20190202143457151410149@baicells.com> Message-ID: <20190202150714722222153@baicells.com> Hi Yong and Cindy, The same error appeared after removed these two sata disks. Thanks Tim From: xiongzhiwei at baicells.com Date: 2019-02-02 14:34 To: Hu, Yong; Xie, Cindy; starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Thanks Hu Yong and Cindy, I am trying to again after remove these two SATA HD. Will tell you once successed. Regards Tim From: Hu, Yong Date: 2019-02-02 14:27 To: xiongzhiwei at baicells.com; Xie, Cindy; starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server If using 240G SATA HD as the boot disk, the storage might not be enough. At least, in our virtual environment, the boot disk has to be larger than 250 GB. 
From: "xiongzhiwei at baicells.com" Date: Saturday, 2 February 2019 at 2:17 PM To: "Xie, Cindy" , starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi Cindy, This image was build by myself, fetched on 24th Jan. It is working normally in my VM enviroment but failed on the bear metal server. My server is Huawei RH2288v3: E5-2630 v3 at 2.4GHz, 2*8cores, 16*8G DDR4 RAM, 2*900G SAS+2*240G SATA HD. Thanks Tim Xiong From: Xie, Cindy Date: 2019-02-02 13:00 To: xiongzhiwei at baicells.com; starlingx-discuss Subject: RE: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server HI, Tim, Can you please provide the following info: - Exact version of the StarlingX: if you downloaded it from Cengen, please provide the link; if you built it by yourself, please provide the date on master. - Your HW config for your bare metal server. Our recommended HW config can be found here: https://docs.starlingx.io/installation_guide/index.html Thanks. - cindy From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Saturday, February 2, 2019 12:38 PM To: starlingx-discuss Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi, I am trying to deploy starlingx on a bearmetal server, but failed, After execute "sudo config_controller" and some default configuration confirmed(all-in-one, simplex), exception printed as below: 01/08: Creating bootstrap configuration ... DONE 02/08: Applying bootstrap manifest ... [ 452.567312] EXT4-fs error (device drbd1): ext4_journal_check_start:56: Detected aborted journal [ 452.576753] EXT4-fs (drbd1): Remounting filesystem read-only [ 466.880792] EXT4-fs error (device drbd3): ext4_journal_check_start:56: Detected aborted journal [ 466.886032] EXT4-fs (drbd3): Remounting filesystem read-only [ 479.755269] EXT4-fs error (device drbd0): ext4_journal_check_start:56: Detected aborted journal [ 479.760818] EXT4-fs (drbd0): Remounting filesystem read-only [ 479.766425] EXT4-fs (drbd0): ext4_writepages: jbd2_start: 13312 pages, ino 28; err -30 Failed to execute bootstrap manifest Configuration failed: failed to apply bootstrap manifest. See /var/log/puppet/latest/puppet.log for details. I had deployed successfully on a qemu VM with same image, also all-in-one and simplex. Is there any configurations missed for the bear metal server? I had recovered all BIOS configurations to default for it. Could anyone help me to fix it? Thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... URL: From yong.hu at intel.com Sat Feb 2 07:10:11 2019 From: yong.hu at intel.com (Hu, Yong) Date: Sat, 2 Feb 2019 07:10:11 +0000 Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Message-ID: Pls share us your /var/log/puppet/latest/puppet.log. From: "xiongzhiwei at baicells.com" Date: Saturday, 2 February 2019 at 3:07 PM To: "xiongzhiwei at baicells.com" , "Hu, Yong" , "Xie, Cindy" , starlingx-discuss Subject: Re: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi Yong and Cindy, The same error appeared after removed these two sata disks. Thanks Tim From: xiongzhiwei at baicells.com Date: 2019-02-02 14:34 To: Hu, Yong; Xie, Cindy; starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Thanks Hu Yong and Cindy, I am trying to again after remove these two SATA HD. Will tell you once successed. 
Regards Tim From: Hu, Yong Date: 2019-02-02 14:27 To: xiongzhiwei at baicells.com; Xie, Cindy; starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server If using 240G SATA HD as the boot disk, the storage might not be enough. At least, in our virtual environment, the boot disk has to be larger than 250 GB. From: "xiongzhiwei at baicells.com" Date: Saturday, 2 February 2019 at 2:17 PM To: "Xie, Cindy" , starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi Cindy, This image was build by myself, fetched on 24th Jan. It is working normally in my VM enviroment but failed on the bear metal server. My server is Huawei RH2288v3: E5-2630 v3 at 2.4GHz, 2*8cores, 16*8G DDR4 RAM, 2*900G SAS+2*240G SATA HD. Thanks Tim Xiong From: Xie, Cindy Date: 2019-02-02 13:00 To: xiongzhiwei at baicells.com; starlingx-discuss Subject: RE: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server HI, Tim, Can you please provide the following info: - Exact version of the StarlingX: if you downloaded it from Cengen, please provide the link; if you built it by yourself, please provide the date on master. - Your HW config for your bare metal server. Our recommended HW config can be found here: https://docs.starlingx.io/installation_guide/index.html Thanks. - cindy From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Saturday, February 2, 2019 12:38 PM To: starlingx-discuss Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi, I am trying to deploy starlingx on a bearmetal server, but failed, After execute "sudo config_controller" and some default configuration confirmed(all-in-one, simplex), exception printed as below: 01/08: Creating bootstrap configuration ... DONE 02/08: Applying bootstrap manifest ... [ 452.567312] EXT4-fs error (device drbd1): ext4_journal_check_start:56: Detected aborted journal [ 452.576753] EXT4-fs (drbd1): Remounting filesystem read-only [ 466.880792] EXT4-fs error (device drbd3): ext4_journal_check_start:56: Detected aborted journal [ 466.886032] EXT4-fs (drbd3): Remounting filesystem read-only [ 479.755269] EXT4-fs error (device drbd0): ext4_journal_check_start:56: Detected aborted journal [ 479.760818] EXT4-fs (drbd0): Remounting filesystem read-only [ 479.766425] EXT4-fs (drbd0): ext4_writepages: jbd2_start: 13312 pages, ino 28; err -30 Failed to execute bootstrap manifest Configuration failed: failed to apply bootstrap manifest. See /var/log/puppet/latest/puppet.log for details. I had deployed successfully on a qemu VM with same image, also all-in-one and simplex. Is there any configurations missed for the bear metal server? I had recovered all BIOS configurations to default for it. Could anyone help me to fix it? Thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... URL: From xavinux at gmail.com Sat Feb 2 14:20:21 2019 From: xavinux at gmail.com (Javier Romero) Date: Sat, 2 Feb 2019 11:20:21 -0300 Subject: [Starlingx-discuss] Hypervisor requrements. Message-ID: Hi Team, I've at disposal a Proxmox VM. The Server where Proxmox is running which uses KVM as hypevisor, has 8 cores intel E5 and 32 GB of memory Have someone test StarlingX with Proxmox or it only works with QEMU/libvirt and VirtualBox? Best Regards, -- *Javier Romero* -------------- next part -------------- An HTML attachment was scrubbed... 
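Until the requested log is available, the EXT4/drbd symptoms in this thread can be narrowed down directly on the failing controller. A small sketch of generic checks, assuming console access (nothing here is StarlingX-specific, and the line counts are arbitrary):

# state of the drbd devices that were remounted read-only
cat /proc/drbd
# kernel context around the ext4 journal aborts
dmesg | grep -iE 'ext4|drbd' | tail -n 20
# the log that config_controller points at on failure
tail -n 50 /var/log/puppet/latest/puppet.log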
URL: From xavinux at gmail.com Sat Feb 2 14:20:21 2019 From: xavinux at gmail.com (Javier Romero) Date: Sat, 2 Feb 2019 11:20:21 -0300 Subject: [Starlingx-discuss] Hypervisor requirements. Message-ID: Hi Team, I have a Proxmox VM at my disposal. The server where Proxmox is running, which uses KVM as the hypervisor, has an 8-core Intel E5 and 32 GB of memory. Has anyone tested StarlingX with Proxmox, or does it only work with QEMU/libvirt and VirtualBox? Best Regards, -- *Javier Romero* -------------- next part -------------- An HTML attachment was scrubbed... URL:
URL: From cindy.xie at intel.com Sun Feb 3 08:15:50 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Sun, 3 Feb 2019 08:15:50 +0000 Subject: [Starlingx-discuss] [Containers] Background info on helm charts In-Reply-To: <6594B51DBE477C48AAE23675314E6C4664568846@fmsmsx107.amr.corp.intel.com> References: <6594B51DBE477C48AAE23675314E6C466456672F@fmsmsx107.amr.corp.intel.com> ,<6703202FD9FDFF4A8DA9ACF104AE129FBA42F3BA@ALA-MBD.corp.ad.wrs.com> <6594B51DBE477C48AAE23675314E6C4664568846@fmsmsx107.amr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E770BD@SHSMSX104.ccr.corp.intel.com> Thanks Mario. If you below 2 patches can close the tasks you've taken from storyboard 2004008, please go ahead to assign other tasks from that story. Thanks. - cindy -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Saturday, February 2, 2019 11:54 AM To: Penney, Don ; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Hi Folks, This a short update about this task (I will be in holiday for our next containerization meeting). During this week I have been exploring the scripts and tools involved in the containerization building process. This activity and the wiki page shared by Don Penny have allowed me to get a better understating of the work-flow. I have sent a PR[1] with the required files to create an image for fm-rest-api service. At this moment it is WIP due to I need to do testing. Possibly it will require more dependencies which are not available in the wheels.cfg/tarball, I will continue working on this. During the building tools exploration I had some issues related to network due to I am working behind a proxy, I sent a patch[2] to set it in the docker build/run commands and avoid manually modification efforts. Any comments, feel free to contact me. [1] https://review.openstack.org/#/c/634540/ [2] https://review.openstack.org/#/c/634542/ Best regards. Mario. ________________________________________ From: Penney, Don [Don.Penney at windriver.com] Sent: Wednesday, January 30, 2019 7:24 AM To: Arevalo, Mario Alfredo C; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages -----Original Message----- From: Penney, Don Sent: Tuesday, January 29, 2019 10:34 AM To: 'Arevalo, Mario Alfredo C'; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Mario, I'm making "writing a wiki" one of my top priorities. I'm hoping to get started on it today, and try to get something published in the next couple of days, barring other issues coming along. Cheers, Don. -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Monday, January 28, 2019 4:15 PM To: Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don Subject: RE: [Containers] Background info on helm charts Hi Frank, Thank you for the information, however my doubts are oriented more to the work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] for FM service in the last week, nevertheless, the starlingx work-flow includes some parts which I have not digested completely yet. 
Don Penny have helped me sending useful information related to this (Thanks for that), the interaction between OpenStack/loci system and python/wheels. However I have some gaps, for example, as staring point I have to create the docker image which will be consumed by the chart. There are some build scripts and notes that talks about this, but precisely, there are a pair of lines in the build-tools/README which makes reference to a exposed service that is not specified [2,3]. For that reason I asked for a little more detailed information about the work-flow during our meeting today. Thanks for your attention. Best regards. Mario. [1] https://github.com/MarioCarrilloA/chart-playground [2] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L4 [3] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L9 From: Miller, Frank [Frank.Miller at windriver.com] Sent: Monday, January 28, 2019 11:19 AM To: Arevalo, Mario Alfredo C Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: [Containers] Background info on helm charts Mario: On the containers community call this morning we took an action to identify information about helm charts. Irina Mihai identified four references that she used when working on the cinder helm chart overrides. See [1] to [4] below. Also as you work on creating a helm chart for the FM service, you should look at examples of existing helm charts for a reference. 2 good examples are: * Cinder helm chart which is available in the upstream openstack-helm project [5] and uses certain defaults. We have added StarlingX specific overrides which are generated from code we added [6]. * Nova-api-proxy which is a StarlingX specific service and hence was created from scratch [7,8] Frank [1] https://docs.helm.sh/developing_charts/ [2] https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221 [3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/ [4] Helm chart install order https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29 [5] Cinder helm chart in upstream openstack-helm: https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder [6] StarlingX cinder overrides: See cinder.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm [7] Helm chart for nova_api_proxy: https://git.openstack.org/cgit/openstack/stx-config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/nova-api-proxy [8] nova_api_proxy override code: See nova_api_proxy.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From himanshugoyal500 at gmail.com Mon Feb 4 12:08:03 2019 From: himanshugoyal500 at gmail.com (Himanshu Goyal) Date: Mon, 4 Feb 2019 17:38:03 +0530 Subject: [Starlingx-discuss] VCPU scheduler priority Message-ID: Hi, Please suggest how to set VCPU Scheduler policy in flavor, Not able to found that in flavor metadata. Regards, Himanshu Goyal -------------- next part -------------- An HTML attachment was scrubbed... 
From himanshugoyal500 at gmail.com Mon Feb 4 12:08:03 2019
From: himanshugoyal500 at gmail.com (Himanshu Goyal)
Date: Mon, 4 Feb 2019 17:38:03 +0530
Subject: [Starlingx-discuss] VCPU scheduler priority
Message-ID:

Hi,

Please suggest how to set the VCPU scheduler policy in a flavor; I am not able to find that in the flavor metadata.

Regards,
Himanshu Goyal
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From Brent.Rowsell at windriver.com Mon Feb 4 13:25:32 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Mon, 4 Feb 2019 13:25:32 +0000
Subject: [Starlingx-discuss] VCPU scheduler priority
In-Reply-To: References:
Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB3B9685@ALA-MBD.corp.ad.wrs.com>

Hi,

There is no VCPU scheduler policy extra spec. The CPU policy/topology related extra specs can be found here:
https://docs.openstack.org/nova/pike/admin/flavors.html

Brent

From: Himanshu Goyal [mailto:himanshugoyal500 at gmail.com]
Sent: Monday, February 4, 2019 7:08 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] VCPU scheduler priority

Hi,

Please suggest how to set the VCPU scheduler policy in a flavor; I am not able to find that in the flavor metadata.

Regards,
Himanshu Goyal
-------------- next part --------------
An HTML attachment was scrubbed...
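To make Brent's pointer concrete: the extra specs documented on that page are applied per flavor with the standard OpenStack client. A minimal illustration (the flavor name is a placeholder):

    openstack flavor set my-flavor --property hw:cpu_policy=dedicated
    openstack flavor set my-flavor --property hw:cpu_thread_policy=isolate

In other words, CPU pinning and thread placement are expressed as flavor properties; there is no separate VCPU scheduler-priority knob.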
URL: From michel.thebeau at windriver.com Mon Feb 4 14:18:24 2019
From: michel.thebeau at windriver.com (Michel Thebeau)
Date: Mon, 4 Feb 2019 09:18:24 -0500
Subject: [Starlingx-discuss] Hypervisor requrements.
In-Reply-To: References:
Message-ID: <1549289904.21474.6.camel@windriver.com>

Hi Javier,

Proxmox is also QEMU based, correct? I am reading the Proxmox wiki:

https://pve.proxmox.com/wiki/Main_Page

If it is QEMU then you may be able to select similar cpu and hardware options as are presented in the stx-tools/deployment/libvirt/*.xml files.

M

On Sat, 2019-02-02 at 11:20 -0300, Javier Romero wrote:
> Hi Team,
>
> I have a Proxmox VM at my disposal. The server where Proxmox is running,
> which uses KVM as its hypervisor, has 8 Intel E5 cores and 32 GB of memory.
>
> Has someone tested StarlingX with Proxmox, or does it only work with
> QEMU/libvirt and VirtualBox?
>
> Best Regards,
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From xavinux at gmail.com Mon Feb 4 14:55:11 2019
From: xavinux at gmail.com (Javier Romero)
Date: Mon, 4 Feb 2019 11:55:11 -0300
Subject: [Starlingx-discuss] Hypervisor requrements.
In-Reply-To: <1549289904.21474.6.camel@windriver.com>
References: <1549289904.21474.6.camel@windriver.com>
Message-ID:

Hi Michel,

I'll create a VM with Proxmox and try to run StarlingX in it. Will come back to let you know if it is working.

Regards,

El lunes, 4 de febrero de 2019, Michel Thebeau <michel.thebeau at windriver.com> escribió:
> Hi Javier,
>
> Proxmox is also QEMU based, correct? I am reading the Proxmox wiki:
>
> https://pve.proxmox.com/wiki/Main_Page
>
> If it is QEMU then you may be able to select similar cpu and hardware
> options as are presented in the stx-tools/deployment/libvirt/*.xml
> files.
>
> M
>
> On Sat, 2019-02-02 at 11:20 -0300, Javier Romero wrote:
> > Hi Team,
> >
> > I have a Proxmox VM at my disposal. The server where Proxmox is running,
> > which uses KVM as its hypervisor, has 8 Intel E5 cores and 32 GB of memory.
> >
> > Has someone tested StarlingX with Proxmox, or does it only work with
> > QEMU/libvirt and VirtualBox?
> >
> > Best Regards,
> >
> > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

--
*Javier Romero*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From Ghada.Khalil at windriver.com Mon Feb 4 14:59:45 2019
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Mon, 4 Feb 2019 14:59:45 +0000
Subject: [Starlingx-discuss] CentOS 7.6 rebase feature testing
In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE8137E@SHSMSX101.ccr.corp.intel.com>
References: <151EE31B9FCCA54397A757BC674650F0BA4A9F22@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E4F339@SHSMSX103.ccr.corp.intel.com> <151EE31B9FCCA54397A757BC674650F0BA4A9F40@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FE80761@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FE80937@SHSMSX101.ccr.corp.intel.com> <3CAA827B7A79BA46B15B280EC82088FE48266B36@ALA-MBD.corp.ad.wrs.com> <4F6AACE4B0F173488D033B02A8BB5B7E7CD6394D@FMSMSX114.amr.corp.intel.com> <3CAA827B7A79BA46B15B280EC82088FE4826C03E@ALA-MBD.corp.ad.wrs.com> <151EE31B9FCCA54397A757BC674650F0BA4B7290@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FE81334@SHSMSX101.ccr.corp.intel.com> <9700A18779F35F49AF027300A49E7C765FE8137E@SHSMSX101.ccr.corp.intel.com>
Message-ID: <151EE31B9FCCA54397A757BC674650F0BA4B840B@ALA-MBD.corp.ad.wrs.com>

Thanks Shuicheng. Is there a need to actually upversion the qat drivers? Are there new features or bug fixes that are of interest to StarlingX? I suggest you review the release notes and make a decision with the distro.other team leads before proceeding with this item.
Ghada

From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com]
Sent: Wednesday, January 30, 2019 7:34 PM
To: Khalil, Ghada; Waheed, Numan; Perez, Ricardo O; Xie, Cindy; Cabrales, Ada
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: CentOS 7.6 rebase feature testing

Hi all,
A story has been created to track this task: https://storyboard.openstack.org/#!/story/2004901

Best Regards
Shuicheng

From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com]
Sent: Thursday, January 31, 2019 7:56 AM
To: Khalil, Ghada >; Waheed, Numan >; Perez, Ricardo O >; Xie, Cindy >; Cabrales, Ada >
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] CentOS 7.6 rebase feature testing

Hi Ghada,
The QAT driver is not upgraded yet, since there is no build failure with CentOS 7.6. I will check it after the Chinese New Year holiday, and plan to do the upgrade in master.

Best Regards
Shuicheng

From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com]
Sent: Wednesday, January 30, 2019 11:01 PM
To: Waheed, Numan >; Perez, Ricardo O >; Lin, Shuicheng >; Xie, Cindy >; Cabrales, Ada >
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: CentOS 7.6 rebase feature testing

Shuicheng, Can you confirm if the qat17 driver was updated? I didn't see it on the task list.

To the test team, I suggest you look at the summary of changes highlighted by Shuicheng below as well as the tasks in the stories (as they show the packages that have changed) and choose the test-cases and test configurations accordingly.
Based on the changes listed in https://storyboard.openstack.org/#!/story/2004521
- Sanity/Basic Regression on standard and low-latency profile
  o Install/lock/unlock all node types (controller/compute/storage)
  o VM operations
- Niantic NIC tested as a mgmt/infra and data interface
- Fortville NIC tested as a mgmt/infra and data interface
- Mellanox NIC tested as a mgmt/infra interface, but not a data interface (as mlx driver DPDK support is disabled)
  o Include cable pulls for all NIC types
- Hardware TPM
- IMA/Integrity
- DRBD sync - should be covered by duplex/multi-node test-cases (lock/unlock/reinstall controller-1)

Based on the changes listed in https://storyboard.openstack.org/#!/story/2004522
- https config (changes in haproxy)
- snmp traps (changes in net-snmp)
- ldap user password changes (changes in openldap)
- efi pxe install (changes to grub2)
- The remaining packages are covered by the normal install and basic system operations

I think the above would make a good regression suite for an OS upgrade in the future as well. I encourage others on the mailing list to review/comment as well.

Regards,
Ghada

PS: The TCs below are heavily geared towards neutron. They don't provide sufficient coverage given the areas of churn.

From: Waheed, Numan
Sent: Wednesday, January 30, 2019 9:08 AM
To: Perez, Ricardo O; Lin, Shuicheng; Khalil, Ghada; Xie, Cindy; Cabrales, Ada
Subject: RE: CentOS 7.6 rebase feature testing

Hi Ricardo,

Thanks for providing the list of test cases. These are mostly networking-related test cases. Can you please let me know which NICs you are using in your labs? I would also suggest adding some tests that are specific to the OS, e.g. password-related tests, Backup and Restore, etc. Are you also going to do any low-latency setup? I would also suggest adding some Nova-related tests.

Thanks,
Numan.

From: Perez, Ricardo O >
Sent: January-29-19 6:08 PM
To: Waheed, Numan >; Lin, Shuicheng >; Khalil, Ghada >; Xie, Cindy >; Cabrales, Ada >
Subject: RE: CentOS 7.6 rebase feature testing

Hi Numan,

These are the tests that we are planning to run for CentOS 7.6 testing:

Live-migration: Instance is scheduled on the socket where vswitch is running (on destination host)
Lock & unlock the compute
Off-line static configuration for "External OAM" interface
On-line static configuration validation: swact of controllers should be rejected until controller-0 is configured for "External OAM"
Inter provider network down alarm
Reject changing interface MTU size to values smaller than MTU of provider network
Verify administrator is able to set, alter and query system name via CLI and GUI
Verify alarm generation for neutron L3 agent scheduling states
Verify alarm generation for neutron provider network state
Verify appropriate values are required when modifying interfaces (CLI, GUI)
Verify ethernet MGMT interface is updated successfully on controller.
Verify ethernet OAM interface is updated successfully on controller.
Verify that all providernet types can be shown via Neutron REST API
Verify that associating a provider network with an interface is rejected if the MTU of the interface is smaller than the MTU of the provider network
Verify that changing the physical interface of the MGMT network is permitted
Verify that changing the physical interface of the OAM network is permitted
Verify that hosts can be shown via Neutron REST API
Verify that interface parameters can be modified via API request
Verify that interface port data can be modified via API.
Verify that a providernet can be created/updated/deleted via Neutron REST API
Verify that providernet ranges can be created/updated/deleted via Neutron REST API
Verify that a subnet can be created and subnet attributes can be modified via Neutron REST API
Verify that the system name can be modified via CLI and GUI
Verify that the MTU size is displayed correctly for the infra interface after it is changed.
Verify that an unlocked powered-off host cannot be deleted (CLI, GUI)

The configurations that we are planning to use are:
P1 - 2+2+2 (External Storage)
P2 - Duplex

If someone has any other test that we should consider or add to this list, just let us know.

Regards
-Ricardo

From: Waheed, Numan [mailto:Numan.Waheed at windriver.com]
Sent: Monday, January 28, 2019 7:49 AM
To: Lin, Shuicheng >; Khalil, Ghada >; Xie, Cindy >; Cabrales, Ada >
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: CentOS 7.6 rebase feature testing

Hi Ada,

Which tests are you planning to run for CentOS 7.6 testing? Which configs are you using?

Thanks,
Numan.

From: Lin, Shuicheng >
Sent: January-28-19 1:28 AM
To: Khalil, Ghada >; Xie, Cindy >; Cabrales, Ada >
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] CentOS 7.6 rebase feature testing

Hi Ghada/Numan,
One more patch has been added to the feature branch: https://review.openstack.org/633431

Best Regards
Shuicheng

From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com]
Sent: Monday, January 28, 2019 10:20 AM
To: Khalil, Ghada >; Xie, Cindy >; Cabrales, Ada >
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] CentOS 7.6 rebase feature testing

Hi Ghada/Numan,
All code for the CentOS 7.6 upgrade has been merged to the centos76 feature branch. Please give it a try, and notify me if there is any problem. The manifest file I used is attached; it is last week's master code plus the feature branch code for stx-integ/stx-upstream/stx-tools/stx-root.

Here is a short summary of the CentOS 7.6 upgrade:
30 srpms & 650+ rpms upgraded.
Kernel version upgraded to 3.10.0-957.1.3 for both the std and rt kernels.
Out-of-tree drivers upgraded/updated: i40e/i40evf/ixgbe/ixgbevf/tpm/integrity/mellanox/rdma-core/libibverbs.
Mellanox driver support in DPDK is disabled. (DPDK/OpenVswitch need to be upgraded later to support the Mellanox driver again.)

You can get more info from the stories below:
Kernel upgrade story: https://storyboard.openstack.org/#!/story/2004521
Srpm & rpm upgrade story: https://storyboard.openstack.org/#!/story/2004522
Mellanox driver support: https://storyboard.openstack.org/#!/story/2004743

Best Regards
Shuicheng

From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com]
Sent: Thursday, January 24, 2019 3:15 AM
To: Xie, Cindy >; Cabrales, Ada >; Lin, Shuicheng >
Cc: Waheed, Numan >; Jones, Bruce E >; starlingx-discuss at lists.starlingx.io
Subject: RE: CentOS 7.6 rebase feature testing

Sorry for the confusion. Yes, Numan will wait until all the pending patches are merged in the feature branch. Is this planned for the end of next week (before the Chinese New Year holidays)?

From: Xie, Cindy [mailto:cindy.xie at intel.com]
Sent: Wednesday, January 23, 2019 2:10 PM
To: Khalil, Ghada; Cabrales, Ada; Lin, Shuicheng
Cc: Waheed, Numan; Jones, Bruce E; starlingx-discuss at lists.starlingx.io
Subject: RE: CentOS 7.6 rebase feature testing

Ghada,
In today's meeting, Ken mentioned that he'd like to see all pending patches merged to f/CentOS7.6 for Numan to start. But if Numan wants to start earlier, that's fine.
@Shuicheng should be able to send out the build instructions. Thx. - cindy From: Khalil, Ghada [mailto:Ghada.Khalil at windriver.com] Sent: Wednesday, January 23, 2019 11:05 AM To: Xie, Cindy >; Cabrales, Ada > Cc: Waheed, Numan >; Jones, Bruce E >; starlingx-discuss at lists.starlingx.io Subject: CentOS 7.6 rebase feature testing Hi Cindy, My understanding from the F2F meeting last week that you already discussed with Ada testing the CentOS 7.6 rebase image (feature branch) starting Feb 1 while your team is off for Chinese New Year's. Can you please send the instructions for building the feature branch to the stx mailing list? Numan will try to have an image built to try it out in one of the WR labs (time-permitting). I expect the feature branch is rebased to the latest from master. Hi Ada, Given that there are driver upgrades for i40e & ixgbe, I assume you are planning to test on baremetal hardware, not just in the virtual env. Are your baremetal labs setup now? What NIC types do you have covered in your labs? Thanks, Ghada -------------- next part -------------- An HTML attachment was scrubbed... URL: From michel.thebeau at windriver.com Mon Feb 4 15:03:04 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Mon, 4 Feb 2019 10:03:04 -0500 Subject: [Starlingx-discuss] Hypervisor requrements. In-Reply-To: References: <1549289904.21474.6.camel@windriver.com> Message-ID: <1549292584.21474.12.camel@windriver.com> Cool, and if there's a way export the VM definition in a human readable form like XML please send it along to the list for insight. M On Mon, 2019-02-04 at 11:55 -0300, Javier Romero wrote: > Hi Michel, > > I'll create a VM with Proxmox and try to run StarlingX in it. Will > come back to let  you know if it is workkng. > > Regards, > > > El lunes, 4 de febrero de 2019, Michel Thebeau ver.com> escribió: > > Hi Javier, > > > > Proxmox is also Qemu based, correct?  I am reading this Proxmox > > wiki: > > > > https://pve.proxmox.com/wiki/Main_Page > > > > If it is qemu then you may be able to select similar cpu and > > hardware > > options as are presented in the stx-tools/deployment/libvirt/*.xml > > files. > > > > M > > > > > > > > On Sat, 2019-02-02 at 11:20 -0300, Javier Romero wrote: > > > Hi Team, > > >  > > > I've at disposal a Proxmox VM. The Server where Proxmox is > > running > > > which uses KVM as hypevisor, has 8 cores intel E5 and 32 GB of > > memory > > >  > > > Have someone test StarlingX with Proxmox or it only works with > > > QEMU/libvirt and VirtualBox? > > >  > > > Best Regards, > > >  > > >  > > >  > > >  > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-disc > > uss > > > From xavinux at gmail.com Mon Feb 4 15:24:55 2019 From: xavinux at gmail.com (Javier Romero) Date: Mon, 4 Feb 2019 12:24:55 -0300 Subject: [Starlingx-discuss] Hypervisor requrements. In-Reply-To: <1549292584.21474.12.camel@windriver.com> References: <1549289904.21474.6.camel@windriver.com> <1549292584.21474.12.camel@windriver.com> Message-ID: Of course I'll do it if everything works fine. Regards, El lunes, 4 de febrero de 2019, Michel Thebeau escribió: > Cool, and if there's a way export the VM definition in a human readable > form like XML please send it along to the list for insight. 
>
> M
>
> On Mon, 2019-02-04 at 11:55 -0300, Javier Romero wrote:
> > Hi Michel,
> >
> > I'll create a VM with Proxmox and try to run StarlingX in it. Will
> > come back to let you know if it is working.
> >
> > Regards,
> >
> > El lunes, 4 de febrero de 2019, Michel Thebeau <michel.thebeau at windriver.com> escribió:
> > > Hi Javier,
> > >
> > > Proxmox is also QEMU based, correct? I am reading the Proxmox wiki:
> > >
> > > https://pve.proxmox.com/wiki/Main_Page
> > >
> > > If it is QEMU then you may be able to select similar cpu and hardware
> > > options as are presented in the stx-tools/deployment/libvirt/*.xml
> > > files.
> > >
> > > M
> > >
> > > On Sat, 2019-02-02 at 11:20 -0300, Javier Romero wrote:
> > > > Hi Team,
> > > >
> > > > I have a Proxmox VM at my disposal. The server where Proxmox is
> > > > running, which uses KVM as its hypervisor, has 8 Intel E5 cores
> > > > and 32 GB of memory.
> > > >
> > > > Has someone tested StarlingX with Proxmox, or does it only work
> > > > with QEMU/libvirt and VirtualBox?
> > > >
> > > > Best Regards,
> > > >
> > > > _______________________________________________
> > > > Starlingx-discuss mailing list
> > > > Starlingx-discuss at lists.starlingx.io
> > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

--
*Javier Romero*
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From yang.liu at windriver.com Mon Feb 4 15:25:08 2019
From: yang.liu at windriver.com (Liu, Yang)
Date: Mon, 4 Feb 2019 15:25:08 +0000
Subject: [Starlingx-discuss] STX CentOS7.6 Nova Test Report - YELLOW
Message-ID: <19C65A6E92EA384D809B1772130CD7F8621810EA@ALA-MBD.corp.ad.wrs.com>

Hi All,

Here are the results and analysis for the nova regressions with the centos7.6 load. 7 test cases failed due to two new (reproducible) issues; the rest of the failures were caused by existing issues.

Issues:
1. Existing host mempage config issue. https://bugs.launchpad.net/starlingx/+bug/1813325 -> failed 39 mempage testcases which require reconfiguring the host mempages as a pre-condition.
2. Existing intermittent swact issue. https://bugs.launchpad.net/starlingx/+bug/1812108 -> failed 2 testcases, passed in rerun.
3. [new issue] #1814336 CentOS7.6: Unable to launch vm directly from virsh -> failed 1 test
4. [new issue] #1814335 CentOS7.6: Unable to launch vm with UEFI boot -> failed 6 tests

Note that these are only the results for the nova regressions. Testing in other domains is still in progress.
Thanks, yang ========================================= Node Config: 2+8+2 Software Version: 19.01 Overall Status: YELLOW Automated Test Results Summary: ------------------------------------------------------ Passed: 179 (78.85%) Failed: 48 (21.15%) Total Executed: 227 List of Test Cases: ------------------------------ PASS    20190131 14:04:04      test_vm_with_config_drive PASS    20190131 14:16:28      test_boot_vm_cpu_policy_image[3-None-shared-image-None] PASS    20190131 14:19:35     test_boot_vm_cpu_policy_image[4-dedicated-dedicated-volume-None] PASS    20190131 14:22:47      test_boot_vm_cpu_policy_image[1-dedicated-None-image-None] PASS    20190131 14:25:24      test_boot_vm_cpu_policy_image[1-shared-shared-volume-None] PASS    20190131 14:28:35      test_boot_vm_cpu_policy_image[2-shared-None-image-None] PASS    20190131 14:31:19      test_boot_vm_cpu_policy_image[3-dedicated-shared-volume-None] PASS    20190131 14:34:44      test_boot_vm_cpu_policy_image[1-shared-dedicated-image-CPUPolicyErr.CONFLICT_FLV_IMG] PASS    20190131 14:36:14      test_cpu_pol_vm_actions[4-None-flavor-image] PASS    20190131 14:44:44      test_cpu_pol_vm_actions[2-dedicated-flavor-volume] PASS    20190131 14:52:53      test_cpu_pol_vm_actions[3-shared-flavor-volume] PASS    20190131 15:01:09      test_cpu_pol_vm_actions[1-dedicated-flavor-image] PASS    20190131 15:09:34      test_cpu_pol_vm_actions[2-dedicated-image-volume] PASS    20190131 15:18:20      test_cpu_pol_vm_actions[3-shared-image-volume] PASS    20190131 15:26:57      test_cpu_pol_vm_actions[1-dedicated-image-image] PASS    20190131 15:36:29      test_flavor_cpu_realtime_negative[2-dedicated-yes-None-None-CpuRtErr.RT_AND_ORD_REQUIRED] PASS    20190131 15:36:51      test_flavor_cpu_realtime_negative[2-shared-yes-^0-None-CpuRtErr.DED_CPU_POL_REQUIRED] PASS    20190131 15:37:11      test_flavor_cpu_realtime_negative[3-None-yes-^1-None-CpuRtErr.DED_CPU_POL_REQUIRED] PASS    20190131 15:37:33      test_flavor_cpu_realtime_negative[4-dedicated-yes-^2-3-1-CpuRtErr.RT_MASK_SHARED_VCPU_CONFLICT] PASS    20190131 15:37:54      test_flavor_cpu_realtime_negative[1-dedicated-yes-^0-None-CpuRtErr.RT_AND_ORD_REQUIRED] PASS    20190131 15:38:16      test_flavor_cpu_realtime_negative[4-dedicated-yes-^0-3-2-CpuRtErr.RT_AND_ORD_REQUIRED] PASS    20190131 15:38:38      test_cpu_realtime_vm_actions[3-None-^0-flavor-None-None-prefer-None] SKIP    20190131 15:46:21      test_cpu_realtime_vm_actions[4-yes-^0-favor-None-None-require-None] PASS    20190131 15:46:31      test_cpu_realtime_vm_actions[6-yes-^2-3-flavor-None-None-isolate-None] PASS    20190131 15:53:03      test_cpu_realtime_vm_actions[2-yes-^1-flavor-1-None-None-None] PASS    20190131 15:59:33      test_cpu_realtime_vm_actions[3-yes-^0-1-image-None-None-None-None] PASS    20190131 16:06:38      test_cpu_realtime_vm_actions[3-yes-^1-2-image-None-None-isolate-None] PASS    20190131 16:13:28      test_cpu_realtime_vm_actions[4-no-^0-2-flavor-2-None-None-None] PASS    20190131 16:20:01      test_cpu_realtime_vm_actions[4-no-^0-2-image-None-None-None-None] PASS    20190131 16:27:03      test_cpu_thread_flavor_set_negative[None-isolate-None-None-CPUThreadErr.DEDICATED_CPU_REQUIRED_FLAVOR] PASS    20190131 16:27:25      test_cpu_thread_flavor_set_negative[None-require-None-None-CPUThreadErr.DEDICATED_CPU_REQUIRED_FLAVOR] PASS    20190131 16:27:46      test_cpu_thread_flavor_set_negative[None-prefer-None-None-CPUThreadErr.DEDICATED_CPU_REQUIRED_FLAVOR] PASS    20190131 16:28:07      
test_cpu_thread_flavor_set_negative[shared-isolate-None-None-CPUThreadErr.DEDICATED_CPU_REQUIRED_FLAVOR] PASS    20190131 16:28:29      test_cpu_thread_flavor_set_negative[shared-require-None-None-CPUThreadErr.DEDICATED_CPU_REQUIRED_FLAVOR] PASS    20190131 16:28:50      test_cpu_thread_flavor_set_negative[dedicated--None-None-CPUThreadErr.INVALID_POLICY] PASS    20190131 16:29:12      test_cpu_thread_flavor_set_negative[dedicated-requi-None-None-CPUThreadErr.INVALID_POLICY] PASS    20190131 16:29:34      test_cpu_thread_flavor_set_negative[dedicated-REQUIRE-None-None-CPUThreadErr.INVALID_POLICY] PASS    20190131 16:29:55      test_cpu_thread_flavor_set_negative[dedicated-AOID-None-None-CPUThreadErr.INVALID_POLICY] PASS    20190131 16:30:16      test_cpu_thread_flavor_set_negative[dedicated-ISOLATE-None-None-CPUThreadErr.INVALID_POLICY] PASS    20190131 16:30:38      test_cpu_thread_flavor_set_negative[dedicated-PREFR-None-None-CPUThreadErr.INVALID_POLICY] PASS    20190131 16:30:59      test_cpu_thread_flavor_set_negative[None-None-1-None-SharedCPUErr.DEDICATED_CPU_REQUIRED] PASS    20190131 16:31:21      test_cpu_thread_flavor_set_negative[shared-None-0-None-SharedCPUErr.DEDICATED_CPU_REQUIRED] PASS    20190131 16:31:42      test_cpu_thread_flavor_set_negative[dedicated-isolate-0-None-CPUThreadErr.UNSET_SHARED_VCPU] PASS    20190131 16:32:04      test_cpu_thread_flavor_set_negative[dedicated-require-1-None-CPUThreadErr.UNSET_SHARED_VCPU] PASS    20190131 16:32:26      test_cpu_thread_flavor_add_negative[specs_preset0-specs_to_set0-CPUThreadErr.UNSET_SHARED_VCPU] PASS    20190131 16:32:51      test_cpu_thread_flavor_add_negative[specs_preset1-specs_to_set1-CPUThreadErr.UNSET_SHARED_VCPU] PASS    20190131 16:33:17      test_cpu_thread_flavor_delete_negative[isolate] PASS    20190131 16:33:42      test_cpu_thread_flavor_delete_negative[require] PASS    20190131 16:34:09      test_cpu_thread_flavor_delete_negative[prefer] PASS    20190131 16:39:06      test_boot_vm_cpu_thread_ht_disabled[2-require-None-CPUThreadErr.HT_HOST_UNAVAIL] PASS    20190131 16:40:11      test_boot_vm_cpu_thread_ht_disabled[3-require-None-CPUThreadErr.HT_HOST_UNAVAIL] PASS    20190131 16:41:16      test_boot_vm_cpu_thread_ht_disabled[3-isolate-None-None] PASS    20190131 16:42:40      test_boot_vm_cpu_thread_ht_disabled[2-prefer-None-None] PASS    20190131 16:45:23      test_evacuate_vms_with_inst_backing[local_image] PASS    20190131 16:59:27      test_reboot_only_host PASS    20190131 17:13:10      test_flavor_default_specs PASS    20190131 17:13:36      test_set_flavor_extra_specs[hw:cpu_model-values0] PASS    20190131 17:14:10      test_set_flavor_extra_specs[hw:cpu_policy-values1] PASS    20190131 17:14:31      test_set_flavor_extra_specs[sw:wrs:auto_recovery-values2] PASS    20190131 17:15:04      test_create_flavor_with_excessive_vcpu_negative PASS    20190131 17:15:23      test_force_lock_with_mig_vms PASS    20190131 17:34:23      test_force_lock_with_non_mig_vms PASS    20190131 17:48:13      test_create_image_with_metadata[sw_wrs_auto_recovery-values0-qcow2-bare] PASS    20190131 17:51:01      test_create_image_with_metadata[sw_wrs_auto_recovery-values1-raw-bare] PASS    20190131 17:53:46      test_lock_with_vms FAIL    20190131 18:09:23      test_boot_vm_mem_page_size[None-None] FAIL    20190131 18:15:49      test_boot_vm_mem_page_size[None-any] FAIL    20190131 18:15:58      test_boot_vm_mem_page_size[None-large] FAIL    20190131 18:16:08      test_boot_vm_mem_page_size[None-small] FAIL    20190131 
18:16:17      test_boot_vm_mem_page_size[None-2048] FAIL    20190131 18:16:27      test_boot_vm_mem_page_size[None-1048576] FAIL    20190131 18:16:36      test_boot_vm_mem_page_size[any-None] FAIL    20190131 18:16:45      test_boot_vm_mem_page_size[any-any] FAIL    20190131 18:16:55      test_boot_vm_mem_page_size[any-large] FAIL    20190131 18:17:04      test_boot_vm_mem_page_size[any-small] FAIL    20190131 18:17:13      test_boot_vm_mem_page_size[any-2048] FAIL    20190131 18:17:23      test_boot_vm_mem_page_size[any-1048576] FAIL    20190131 18:17:32      test_boot_vm_mem_page_size[large-None] FAIL    20190131 18:17:41      test_boot_vm_mem_page_size[large-any] FAIL    20190131 18:17:51      test_boot_vm_mem_page_size[large-large] FAIL    20190131 18:18:00      test_boot_vm_mem_page_size[large-small] FAIL    20190131 18:18:10      test_boot_vm_mem_page_size[large-2048] FAIL    20190131 18:18:19      test_boot_vm_mem_page_size[large-1048576] FAIL    20190131 18:18:29      test_boot_vm_mem_page_size[small-None] FAIL    20190131 18:18:38      test_boot_vm_mem_page_size[small-any] FAIL    20190131 18:18:47      test_boot_vm_mem_page_size[small-large] FAIL    20190131 18:18:57      test_boot_vm_mem_page_size[small-small] FAIL    20190131 18:19:06      test_boot_vm_mem_page_size[small-2048] FAIL    20190131 18:19:15      test_boot_vm_mem_page_size[small-1048576] FAIL    20190131 18:19:24      test_boot_vm_mem_page_size[2048-None] FAIL    20190131 18:19:34      test_boot_vm_mem_page_size[2048-any] FAIL    20190131 18:19:43      test_boot_vm_mem_page_size[2048-large] FAIL    20190131 18:19:53      test_boot_vm_mem_page_size[2048-small] FAIL    20190131 18:20:02      test_boot_vm_mem_page_size[2048-2048] FAIL    20190131 18:20:11      test_boot_vm_mem_page_size[2048-1048576] FAIL    20190131 18:20:21      test_boot_vm_mem_page_size[1048576-None] FAIL    20190131 18:20:30      test_boot_vm_mem_page_size[1048576-any] FAIL    20190131 18:20:39      test_boot_vm_mem_page_size[1048576-large] FAIL    20190131 18:20:48      test_boot_vm_mem_page_size[1048576-small] FAIL    20190131 18:20:58      test_boot_vm_mem_page_size[1048576-2048] FAIL    20190131 18:21:07      test_boot_vm_mem_page_size[1048576-1048576] FAIL    20190131 18:21:16      test_schedule_vm_mempage_config[1048576] FAIL    20190131 18:21:26      test_schedule_vm_mempage_config[large] FAIL    20190131 18:21:35      test_schedule_vm_mempage_config[small] PASS    20190131 18:21:44      test_compute_mempage_vars PASS    20190131 18:45:40      test_set_mem_page_size_extra_specs[small] PASS    20190131 18:45:58      test_set_mem_page_size_extra_specs[large] PASS    20190131 18:46:12      test_set_mem_page_size_extra_specs[any] PASS    20190131 18:46:25      test_set_mem_page_size_extra_specs[2048] PASS    20190131 18:46:38      test_set_mem_page_size_extra_specs[1048576] PASS    20190131 18:46:52      test_vm_mem_pool[2048] PASS    20190131 18:48:23      test_vm_mem_pool[large] PASS    20190131 18:49:35      test_vm_mem_pool[small] PASS    20190131 18:50:41      test_vm_mem_pool[1048576] PASS    20190131 18:51:44      test_live_migrate_vm_positive[local_image-0-0-None-1-volume-False] PASS    20190131 18:54:35      test_live_migrate_vm_positive[local_image-0-0-dedicated-2-volume-False] PASS    20190131 18:57:12      test_live_migrate_vm_positive[local_image-1-0-dedicated-2-volume-False] PASS    20190131 19:00:07      test_live_migrate_vm_positive[local_image-0-512-shared-1-volume-False] PASS    20190131 19:02:54      
test_live_migrate_vm_positive[local_image-1-512-dedicated-2-volume-True] PASS    20190131 19:05:44      test_live_migrate_vm_positive[local_image-0-0-shared-2-image-True] PASS    20190131 19:08:26      test_live_migrate_vm_positive[local_image-1-512-dedicated-1-image-False] PASS    20190131 19:12:06      test_live_migrate_vm_positive[local_image-0-0-None-2-image_with_vol-False] PASS    20190131 19:15:04      test_live_migrate_vm_positive[local_image-0-0-dedicated-1-image_with_vol-True] PASS    20190131 19:17:54      test_live_migrate_vm_positive[local_image-1-512-dedicated-2-image_with_vol-True] PASS    20190131 19:21:20      test_live_migrate_vm_positive[local_image-1-512-dedicated-1-image_with_vol-False] PASS    20190131 19:25:22      test_live_migrate_vm_negative[local_image-0-0-volume-True-LiveMigErr.BLOCK_MIG_UNSUPPORTED] PASS    20190131 19:29:51      test_cold_migrate_vm[local_image-0-0-None-1-volume-confirm] PASS    20190131 19:32:29     test_cold_migrate_vm[local_image-0-0-dedicated-2-volume-confirm] PASS    20190131 19:35:09      test_cold_migrate_vm[local_image-1-0-shared-2-image-confirm] PASS    20190131 19:38:15      test_cold_migrate_vm[local_image-0-512-dedicated-1-image-confirm] PASS    20190131 19:41:11      test_cold_migrate_vm[local_image-0-0-None-1-image_with_vol-confirm] PASS    20190131 19:44:35      test_cold_migrate_vm[local_image-0-0-None-2-volume-revert] PASS    20190131 19:47:17      test_cold_migrate_vm[local_image-0-0-dedicated-1-volume-revert] PASS    20190131 19:50:13      test_cold_migrate_vm[local_image-1-0-shared-2-image-revert] PASS    20190131 19:53:00     test_cold_migrate_vm[local_image-0-512-dedicated-1-image-revert] PASS    20190131 19:56:11      test_cold_migrate_vm[local_image-0-0-dedicated-2-image_with_vol-revert] PASS    20190131 20:01:08      test_migrate_vm[ubuntu_14-live-dedicated] PASS    20190131 20:03:35      test_migrate_vm[ubuntu_14-cold-dedicated] PASS    20190131 20:06:06      test_migrate_vm[tis-centos-guest-live-None] PASS    20190131 20:08:11      test_migrate_vm[tis-centos-guest-cold-None] FAIL    20190131 20:10:32      test_migrate_vm_various_guest[ubuntu_14-1-1024-shared-volume] PASS    20190131 20:21:40      test_migrate_vm_various_guest[ubuntu_14-2-1024-dedicated-image] PASS    20190131 20:29:27     test_migrate_vm_various_guest[ubuntu_16-3-4096-dedicated-volume] PASS    20190131 20:38:05      test_migrate_vm_various_guest[centos_6-3-4096-dedicated-volume] PASS    20190131 20:47:20      test_migrate_vm_various_guest[centos_7-1-1024-dedicated-volume] PASS    20190131 20:56:18      test_migrate_vm_various_guest[centos_7-5-4096-None-image] PASS    20190131 21:05:11      test_migrate_vm_various_guest[opensuse_11-3-1024-dedicated-volume] FAIL    20190131 21:14:43      test_migrate_vm_various_guest[opensuse_12-4-4096-dedicated-volume] PASS    20190131 21:27:13      test_migrate_vm_various_guest[rhel_6-3-1024-dedicated-image] PASS    20190131 21:37:12      test_migrate_vm_various_guest[rhel_6-4-4096-None-volume] PASS    20190131 21:46:58      test_migrate_vm_various_guest[rhel_7-1-1024-dedicated-volume] PASS    20190131 21:59:01      test_migrate_vm_various_guest[win_2012-3-1024-dedicated-image] SKIP    20190131 22:14:58      test_migrate_vm_various_guest[win_2016-4-4096-dedicated-volume] FAIL    20190131 22:15:15      test_migrate_vm_various_guest[ge_edge-1-1024-shared-image] FAIL    20190131 22:17:08      test_migrate_vm_various_guest[ge_edge-4-4096-dedicated-volume] PASS    20190131 22:19:40      test_migration_auto_converge PASS    
20190131 22:25:43      test_nova_actions[tis-centos-guest-dedicated-pause-unpause] PASS    20190131 22:27:54      test_nova_actions[ubuntu_14-shared-stop-start] PASS    20190131 22:30:01      test_nova_actions[ubuntu_14-dedicated-auto_recover] PASS    20190131 22:36:16      test_nova_actions[tis-centos-guest-dedicated-suspend-resume] PASS    20190131 22:38:27      test_nova_actions_various_guest[cgcs-guest-dedicated-volume-pause-unpause-suspend-resume-stop-start-auto_recover] PASS    20190131 22:51:16      test_nova_actions_various_guest[ubuntu_14-dedicated-volume-pause-unpause-suspend-resume-stop-start-auto_recover] PASS    20190131 23:03:42      test_nova_actions_various_guest[ubuntu_16-dedicated-image-pause-unpause-suspend-resume-stop-start-auto_recover] PASS    20190131 23:16:28      test_nova_actions_various_guest[centos_6-dedicated-image-pause-unpause-suspend-resume-stop-start-auto_recover] PASS    20190131 23:29:17      test_nova_actions_various_guest[centos_7-dedicated-volume-pause-unpause-suspend-resume-stop-start-auto_recover] PASS    20190131 23:42:05      test_nova_actions_various_guest[opensuse_11-dedicated-volume-pause-unpause-suspend-resume-stop-start-auto_recover] PASS    20190131 23:54:55      test_nova_actions_various_guest[opensuse_12-dedicated-image-pause-unpause-suspend-resume-stop-start-auto_recover] PASS    20190201 00:07:32      test_nova_actions_various_guest[rhel_7-dedicated-volume-pause-unpause-suspend-resume-stop-start-auto_recover] PASS    20190201 00:20:22      test_nova_actions_various_guest[rhel_6-dedicated-image-pause-unpause-suspend-resume-stop-start-auto_recover] PASS    20190201 00:33:03      test_nova_actions_various_guest[win_2012-dedicated-volume-pause-unpause-suspend-resume-stop-start-auto_recover] PASS    20190201 00:51:13      test_nova_actions_various_guest[win_2016-dedicated-image-pause-unpause-suspend-resume-stop-start-auto_recover] FAIL    20190201 01:09:47      test_nova_actions_various_guest[ge_edge-dedicated-volume-pause-unpause-suspend-resume-stop-start-auto_recover] FAIL    20190201 01:11:57      test_orphan_audit PASS    20190201 01:12:30      test_prioritized_vm_evacuations[reboot-False-diff_priority-same_vcpus-same_mem-same_root_disk-same_swap_disk] PASS    20190201 01:24:20      test_prioritized_vm_evacuations[reboot-False-same_priority-diff_vcpus-diff_mem-same_root_disk-no_swap_disk] PASS    20190201 01:35:14      test_prioritized_vm_evacuations[reboot-True-same_priority-same_vcpus-diff_mem-diff_root_disk-same_swap_disk] PASS    20190201 01:45:53      test_prioritized_vm_evacuations[reboot-True-same_priority-same_vcpus-same_mem-diff_root_disk-diff_swap_disk] PASS    20190201 01:56:28      test_prioritized_vm_evacuations[reboot-True-same_priority-same_vcpus-same_mem-same_root_disk-diff_swap_disk] PASS    20190201 02:07:07      test_prioritized_vm_evacuations[reboot-True-diff_priority-diff_vcpus-same_mem-same_root_disk-no_swap_disk] PASS    20190201 02:17:52      test_prioritized_vm_evacuations[reboot-True-diff_priority-diff_vcpus-diff_mem-diff_root_disk-diff_swap_disk] PASS    20190201 02:28:30      test_prioritized_vm_evacuations[force_reboot-False-same_priority-same_vcpus-diff_mem-diff_root_disk-diff_swap_disk] PASS    20190201 02:39:20      test_prioritized_vm_evacuations[force_reboot-True-diff_priority-diff_vcpus-diff_mem-same_root_disk-same_swap_disk] PASS    20190201 02:49:58      test_setting_evacuate_priority[set--2-error] PASS    20190201 02:50:52      test_setting_evacuate_priority[set-10-None] PASS    20190201 02:51:08      
test_setting_evacuate_priority[set-11-error] PASS    20190201 02:51:23      test_setting_evacuate_priority[set--error] PASS    20190201 02:51:36      test_setting_evacuate_priority[set-random-error] PASS    20190201 02:51:50      test_setting_evacuate_priority[delete--error] PASS    20190201 02:54:29      test_resize_vm_positive[local_image-4_0_0-5_1_512-image] PASS    20190201 02:59:03      test_resize_vm_positive[local_image-4_1_512-5_2_1024-image] PASS    20190201 03:03:56      test_resize_vm_positive[local_image-5_1_512-5_1_0-image] PASS    20190201 03:09:45      test_resize_vm_positive[local_image-4_0_0-5_1_512-volume] PASS    20190201 03:14:06      test_resize_vm_positive[local_image-4_1_512-0_2_1024-volume] PASS    20190201 03:18:49      test_resize_vm_positive[local_image-4_1_512-1_1_0-volume] PASS    20190201 03:25:13      test_resize_vm_negative[local_image-5_0_0-0_0_0-image] PASS    20190201 03:27:37      test_resize_vm_negative[local_image-5_2_512-5_1_512-image] PASS    20190201 03:30:21      test_resize_vm_negative[local_image-5_1_512-4_1_512-image] PASS    20190201 03:32:50      test_resize_vm_negative[local_image-5_1_512-4_1_0-image] PASS    20190201 03:35:34      test_resize_vm_negative[local_image-1_1_512-1_0_512-volume] PASS    20190201 03:38:23      test_resize_different_comp_node[local_image] FAIL    20190201 03:44:42      test_vm_actions_secure_boot_vm FAIL    20190201 03:47:38      test_lock_unlock_secure_boot_vm FAIL    20190201 03:50:23      test_host_reboot_secure_boot_vm PASS    20190201 03:53:02      test_server_group_boot_vms[affinity-2] PASS    20190201 04:02:19      test_server_group_boot_vms[anti_affinity-2] PASS    20190201 04:07:38      test_server_group_launch_vms_in_parallel[affinity-3-4] PASS    20190201 04:09:32      test_server_group_launch_vms_in_parallel[anti_affinity-1-3] PASS    20190201 04:11:24      test_create_snapshot_using_boot_from_image_vm PASS    20190201 04:14:26      test_create_snapshot_using_boot_from_volume_vm PASS    20190201 04:19:30      test_attempt_to_delete_volume_associated_with_snapshot PASS    20190201 04:21:57      test_attach_cinder_volume_to_instance[virtio] PASS    20190201 04:24:43      test_vif_model_from_image[virtio] PASS    20190201 04:27:27      test_autorecovery_image_metadata_in_volume[False-raw-bare] PASS    20190201 04:28:29      test_vm_autorecovery_without_heartbeat[None-None-None-raw-bare-True] PASS    20190201 04:33:12      test_vm_autorecovery_without_heartbeat[None-false-true-qcow2-bare-False] PASS    20190201 04:45:42      test_vm_autorecovery_without_heartbeat[None-true-false-raw-bare-True] PASS    20190201 04:52:14      test_vm_autorecovery_without_heartbeat[dedicated-false-None-raw-bare-False] PASS    20190201 05:04:54      test_vm_autorecovery_without_heartbeat[dedicated-None-false-qcow2-bare-False] PASS    20190201 05:17:27      test_vm_autorecovery_without_heartbeat[shared-None-true-raw-bare-True] PASS    20190201 05:23:47      test_vm_autorecovery_without_heartbeat[shared-false-None-raw-bare-False] PASS    20190201 05:36:22      test_vm_autorecovery_with_heartbeat[None-true-True] PASS    20190201 05:39:29      test_vm_autorecovery_with_heartbeat[dedicated-None-True] PASS    20190201 05:42:32      test_vm_autorecovery_with_heartbeat[None-false-False] PASS    20190201 05:46:29      test_vm_autorecovery_with_heartbeat[shared-None-True] PASS    20190201 05:49:23      test_vm_autorecovery_with_heartbeat[shared-false-False] PASS    20190201 05:53:17      test_vm_heartbeat_without_autorecovery[None-False] 
PASS    20190201 06:01:45      test_vm_heartbeat_without_autorecovery[true-True] PASS    20190201 06:04:12      test_vm_heartbeat_without_autorecovery[false-False] PASS    20190201 06:12:45      test_vm_heartbeat_without_autorecovery[True-True] From Jason.McKenna at windriver.com Mon Feb 4 15:28:07 2019 From: Jason.McKenna at windriver.com (McKenna, Jason) Date: Mon, 4 Feb 2019 15:28:07 +0000 Subject: [Starlingx-discuss] StarlingX mirror scheduled outage Saturday Feb 9. Message-ID: Hi StarlingX, The facility where the StarlingX official mirror and build servers are hosted will have a scheduled power outage on February 9. The outage is scheduled from 1100 to 2200 UTC (6:00am to 5:00pm EST, 5:00am to 4:00pm CST, 3:00am to 2:00pm PST). During this time, the CENGN hosted StarlingX mirror will be down, and download_mirrors.sh will fetch any artifacts directly from the sources rather than the mirror. Furthermore, automated builds will not occur during this timeframe. If you plan on creating or updating a local mirror of input artifacts, I recommend you run download_mirrors.sh on February 8. This will allow your script to fetch artifacts from either the CENGN mirror or the original source. Furthermore, if you want to grab a pre-built StarlingX ISO image for testing, please download it before the outage. http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build/outputs/iso/ Jason -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Feb 4 17:09:09 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 4 Feb 2019 17:09:09 +0000 Subject: [Starlingx-discuss] Vagrant StarlingX VM. In-Reply-To: References: <970a6b61-79eb-46ec-9364-b297f18f5c84@intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BBFD1A5F4@fmsmsx123.amr.corp.intel.com> Maybe we should rename this to “StarlingX in a Cage” since birds go into cages, not ships into bottles like Airship. brucej From: Javier Romero [mailto:xavinux at gmail.com] Sent: Saturday, February 2, 2019 12:51 PM To: Lara, Cesar Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM. Cesar, Thank you for your time to answer. Will let you know if I have some ideas. Regards, El sábado, 2 de febrero de 2019, Lara, Cesar > escribió: Yes we are exploring a few possibilities for StarlingX in a bottle, vagrant in one of them. Feel free to throw some ideas at it Regards Cesar Lara Sent from my mobile phone ________________________________ From: Javier Romero > Sent: Saturday, February 2, 2019 11:07 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Vagrant StarlingX VM. Hi Team, Think that perhaps may be useful for new users to have a Vagrant preconfigured AIO VM to use StarlingX for the fiirst time. https://www.vagrantup.com If this can be useful I can help with that. Vagrant use VirtualBox by default to star the preconfigured VM and can also be set to be used with QEMU. Best Regards, -- Javier Romero -- Javier Romero -------------- next part -------------- An HTML attachment was scrubbed... URL: From glenn.seiler at windriver.com Mon Feb 4 17:20:56 2019 From: glenn.seiler at windriver.com (Seiler, Glenn) Date: Mon, 4 Feb 2019 17:20:56 +0000 Subject: [Starlingx-discuss] Vagrant StarlingX VM. 
In-Reply-To: <9A85D2917C58154C960D95352B22818BBFD1A5F4@fmsmsx123.amr.corp.intel.com> References: <970a6b61-79eb-46ec-9364-b297f18f5c84@intel.com> , <9A85D2917C58154C960D95352B22818BBFD1A5F4@fmsmsx123.amr.corp.intel.com> Message-ID: Hmmm. Why not stick with “in a box” which is a well understood term. Not sure the image of a bird in a cage is a positive one. 🤔 Sent from my iPhone On Feb 4, 2019, at 10:11 AM, Jones, Bruce E > wrote: Maybe we should rename this to “StarlingX in a Cage” since birds go into cages, not ships into bottles like Airship. brucej From: Javier Romero [mailto:xavinux at gmail.com] Sent: Saturday, February 2, 2019 12:51 PM To: Lara, Cesar > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM. Cesar, Thank you for your time to answer. Will let you know if I have some ideas. Regards, El sábado, 2 de febrero de 2019, Lara, Cesar > escribió: Yes we are exploring a few possibilities for StarlingX in a bottle, vagrant in one of them. Feel free to throw some ideas at it Regards Cesar Lara Sent from my mobile phone ________________________________ From: Javier Romero > Sent: Saturday, February 2, 2019 11:07 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Vagrant StarlingX VM. Hi Team, Think that perhaps may be useful for new users to have a Vagrant preconfigured AIO VM to use StarlingX for the fiirst time. https://www.vagrantup.com If this can be useful I can help with that. Vagrant use VirtualBox by default to star the preconfigured VM and can also be set to be used with QEMU. Best Regards, -- Javier Romero -- Javier Romero _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Mon Feb 4 17:24:42 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Mon, 4 Feb 2019 17:24:42 +0000 Subject: [Starlingx-discuss] Vagrant StarlingX VM. In-Reply-To: References: <970a6b61-79eb-46ec-9364-b297f18f5c84@intel.com> , <9A85D2917C58154C960D95352B22818BBFD1A5F4@fmsmsx123.amr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BBFD1A6F0@fmsmsx123.amr.corp.intel.com> I like it. StarlingX in a (virtual) box ☺ brucej From: Seiler, Glenn [mailto:glenn.seiler at windriver.com] Sent: Monday, February 4, 2019 9:21 AM To: Jones, Bruce E Cc: Javier Romero ; Lara, Cesar ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM. Hmmm. Why not stick with “in a box” which is a well understood term. Not sure the image of a bird in a cage is a positive one. 🤔 Sent from my iPhone On Feb 4, 2019, at 10:11 AM, Jones, Bruce E > wrote: Maybe we should rename this to “StarlingX in a Cage” since birds go into cages, not ships into bottles like Airship. brucej From: Javier Romero [mailto:xavinux at gmail.com] Sent: Saturday, February 2, 2019 12:51 PM To: Lara, Cesar > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM. Cesar, Thank you for your time to answer. Will let you know if I have some ideas. Regards, El sábado, 2 de febrero de 2019, Lara, Cesar > escribió: Yes we are exploring a few possibilities for StarlingX in a bottle, vagrant in one of them. 
Feel free to throw some ideas at it

Regards
Cesar Lara
Sent from my mobile phone
________________________________
From: Javier Romero >
Sent: Saturday, February 2, 2019 11:07 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Vagrant StarlingX VM.

Hi Team,

I think it may be useful for new users to have a preconfigured Vagrant AIO VM to try StarlingX for the first time.

https://www.vagrantup.com

If this can be useful I can help with that.

Vagrant uses VirtualBox by default to start the preconfigured VM and can also be set up to be used with QEMU.

Best Regards,

--
Javier Romero

--
Javier Romero

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From jose.perez.carranza at intel.com Mon Feb 4 17:27:54 2019
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Mon, 4 Feb 2019 17:27:54 +0000
Subject: [Starlingx-discuss] [Test] Proposal for Test Repository
In-Reply-To: <19C65A6E92EA384D809B1772130CD7F8621758E9@ALA-MBD.corp.ad.wrs.com>
References: <3CAA827B7A79BA46B15B280EC82088FE4825AE74@ALA-MBD.corp.ad.wrs.com> <0A5D9A624DF90343892F8F3FE7DE525A2A935DD0@fmsmsx101.amr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F8621758E9@ALA-MBD.corp.ad.wrs.com>
Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A93702F@fmsmsx101.amr.corp.intel.com>

Hi All

As discussed in the weekly test meeting, we decided to consolidate all the tests for StarlingX in a single repo. The main idea is to have a common place to contribute different test cases. Here is a draft document where you can add your comments, and we can discuss it in the 02/05 Test Meeting to settle an official repo structure.

https://docs.google.com/document/d/1TuBZHSv0zerLp17PqIxk8HKXh0XsQJ_VAolO5rMGiac/edit?usp=sharing

Regards,
José

From: Liu, Yang [mailto:yang.liu at windriver.com]
Sent: Monday, January 28, 2019 8:11 AM
To: Perez Carranza, Jose ; Waheed, Numan ; starlingx-discuss at lists.starlingx.io; Cabrales, Ada
Subject: RE: [Test] Proposal for Test Repository

Hi Jose,

I think 3 separate repos will be easier from a maintenance and usage perspective.
- Manual/robot/pytest have very different installation/usage/contribution requirements; it would be much easier to maintain them with a standard repo structure, i.e., package requirements and README in the root directory of a repo.
- Users won't be forced to download everything if they are only interested in contributing to one of them.
- Easier to separate code reviews.

BR,
Yang

From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
Sent: January-28-19 8:50 AM
To: Waheed, Numan; starlingx-discuss at lists.starlingx.io; Cabrales, Ada
Subject: Re: [Starlingx-discuss] [Test] Proposal for Test Repository

Hi Numan

Some comments inline

Regards,
José

From: Waheed, Numan [mailto:Numan.Waheed at windriver.com]
Sent: Friday, January 25, 2019 3:18 PM
To: starlingx-discuss at lists.starlingx.io; Cabrales, Ada >
Subject: [Starlingx-discuss] [Test] Proposal for Test Repository

Hi Ada and Christopher,

After investigating existing openstack test projects, I think we should have:
* a separate repo for test cases that are independent from each other, since they require different instructions for almost everything (install, usage, rules, etc.)
* one repo per automation project.
Subrepos are not recommended due to the complexity involved in updating common libraries (keywords, fixtures, etc.) and their usages (test cases).

I think having 3 different repositories can become hard to maintain and could also be confusing for people who are contributing to the project. Maybe we can consolidate everything in one repo divided by directories; below is an example:

-> stx-test
   - manual-tests
   - automated-tests
     - robot-suite
     - pytest-suite

Thus I suggest 3 different repositories:
- repo for manual test cases
  - Is this section going to have actual scripts or just Test Specifications (preconditions, steps and expected results) in plain text?
- repo for robot test cases
- repo for pytest test cases

Inside the automated test repository, I would suggest the following structure:

README.rst
LICENSE
setup.py
tox.ini # pep8, py27, etc
requirements.txt # project package requirements
consts/... # directory for various constants modules
keywords/... # directory for helper modules
testfixtures/ # directory for commonly used test fixture modules
testcases/cli/mtc/... # directory for mtc test cases that mainly use the cli
testcases/cli/heat/...
testcases/cli/nova/...
testcases/cli/networking/...
testcases/cli/security/...
testcases/cli/storage/...
testcases/cli/sysinv/...
testcases/rest/... # directory for restAPI test cases
testcases/horizon/... # directory for horizon test cases
testcases/system_test/... # directory for complex system test scenarios

Agree with this structure inside of the automated test suites.

Thanks,
Numan
-------------- next part --------------
An HTML attachment was scrubbed...
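A layout like this maps directly onto standard tox/pytest invocations. A minimal sketch of how contributors might drive it (the environment names follow the tox.ini comment above; the paths are from the proposed tree, and the keyword filter is just an example, not a settled project convention):

    tox -e pep8                               # style checks
    tox -e py27                               # unit test environment

    pytest testcases/cli/nova                 # run one domain's test cases
    pytest testcases/cli/nova -k cpu_policy   # or a keyword-selected subset

With this layout, the shared fixtures under testfixtures/ would typically be exposed through a top-level conftest.py so test cases can use them without per-directory wiring.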
URL: From Dariush.Eslimi at windriver.com Mon Feb 4 18:08:27 2019
From: Dariush.Eslimi at windriver.com (Eslimi, Dariush)
Date: Mon, 4 Feb 2019 18:08:27 +0000
Subject: [Starlingx-discuss] Vagrant StarlingX VM.
In-Reply-To: <9A85D2917C58154C960D95352B22818BBFD1A6F0@fmsmsx123.amr.corp.intel.com>
References: <970a6b61-79eb-46ec-9364-b297f18f5c84@intel.com> , <9A85D2917C58154C960D95352B22818BBFD1A5F4@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BBFD1A6F0@fmsmsx123.amr.corp.intel.com>
Message-ID:

Would not this be a bird in a box, or in short "bird box"?

Dariush

From: Jones, Bruce E [mailto:bruce.e.jones at intel.com]
Sent: February-04-19 12:25 PM
To: Seiler, Glenn
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM.

I like it. StarlingX in a (virtual) box ☺

brucej

From: Seiler, Glenn [mailto:glenn.seiler at windriver.com]
Sent: Monday, February 4, 2019 9:21 AM
To: Jones, Bruce E >
Cc: Javier Romero >; Lara, Cesar >; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM.

Hmmm. Why not stick with "in a box", which is a well understood term? Not sure the image of a bird in a cage is a positive one. 🤔

Sent from my iPhone

On Feb 4, 2019, at 10:11 AM, Jones, Bruce E > wrote:

Maybe we should rename this to "StarlingX in a Cage" since birds go into cages, not ships into bottles like Airship.

brucej

From: Javier Romero [mailto:xavinux at gmail.com]
Sent: Saturday, February 2, 2019 12:51 PM
To: Lara, Cesar >
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM.

Cesar,

Thank you for your time to answer. Will let you know if I have some ideas.

Regards,

El sábado, 2 de febrero de 2019, Lara, Cesar > escribió:
Yes, we are exploring a few possibilities for StarlingX in a bottle; vagrant is one of them. Feel free to throw some ideas at it

Regards
Cesar Lara
Sent from my mobile phone
________________________________
From: Javier Romero >
Sent: Saturday, February 2, 2019 11:07 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Vagrant StarlingX VM.

Hi Team,

I think it may be useful for new users to have a preconfigured Vagrant AIO VM to try StarlingX for the first time.

https://www.vagrantup.com

If this can be useful I can help with that.

Vagrant uses VirtualBox by default to start the preconfigured VM and can also be set up to be used with QEMU.

Best Regards,

--
Javier Romero

--
Javier Romero

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From michel.thebeau at windriver.com Mon Feb 4 18:34:16 2019
From: michel.thebeau at windriver.com (Michel Thebeau)
Date: Mon, 4 Feb 2019 13:34:16 -0500
Subject: [Starlingx-discuss] Vagrant StarlingX VM.
In-Reply-To: References: <970a6b61-79eb-46ec-9364-b297f18f5c84@intel.com> , <9A85D2917C58154C960D95352B22818BBFD1A5F4@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BBFD1A6F0@fmsmsx123.amr.corp.intel.com>
Message-ID: <1549305256.13783.7.camel@windriver.com>

a box of birds?
https://idioms.thefreedictionary.com/a+box+of+birds

On Mon, 2019-02-04 at 18:08 +0000, Eslimi, Dariush wrote:
> Would not this be a bird in a box, or in short "bird box"?
>
> Dariush
>
> From: Jones, Bruce E [mailto:bruce.e.jones at intel.com]
> Sent: February-04-19 12:25 PM
> To: Seiler, Glenn
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM.
>
> I like it. StarlingX in a (virtual) box ☺
>
> brucej
>
> From: Seiler, Glenn [mailto:glenn.seiler at windriver.com]
> Sent: Monday, February 4, 2019 9:21 AM
> To: Jones, Bruce E
> Cc: Javier Romero ; Lara, Cesar ; starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM.
>
> Hmmm. Why not stick with "in a box", which is a well understood term?
> Not sure the image of a bird in a cage is a positive one. 🤔
>
> Sent from my iPhone
>
> On Feb 4, 2019, at 10:11 AM, Jones, Bruce E wrote:
>
> Maybe we should rename this to "StarlingX in a Cage" since birds go
> into cages, not ships into bottles like Airship.
>
> brucej
>
> From: Javier Romero [mailto:xavinux at gmail.com]
> Sent: Saturday, February 2, 2019 12:51 PM
> To: Lara, Cesar
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM.
>
> Cesar,
>
> Thank you for your time to answer. Will let you know if I have some
> ideas.
>
> Regards,
>
> El sábado, 2 de febrero de 2019, Lara, Cesar escribió:
> Yes, we are exploring a few possibilities for StarlingX in a bottle;
> vagrant is one of them. Feel free to throw some ideas at it
>
> Regards
> Cesar Lara
> Sent from my mobile phone
> From: Javier Romero
> Sent: Saturday, February 2, 2019 11:07 AM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] Vagrant StarlingX VM.
>
> Hi Team,
>
> I think it may be useful for new users to have a preconfigured Vagrant
> AIO VM to try StarlingX for the first time.
>
> https://www.vagrantup.com
>
> If this can be useful I can help with that.
>   > Vagrant use VirtualBox by default to star the preconfigured VM and > can also be set to be  used with QEMU. >   > Best Regards, >   >   >   > > > -- > Javier Romero >   >   >   > > > -- > Javier Romero >   >   >   > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Mon Feb 4 18:40:28 2019 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 4 Feb 2019 10:40:28 -0800 Subject: [Starlingx-discuss] Vagrant StarlingX VM. In-Reply-To: <1549305256.13783.7.camel@windriver.com> References: <970a6b61-79eb-46ec-9364-b297f18f5c84@intel.com> <9A85D2917C58154C960D95352B22818BBFD1A5F4@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BBFD1A6F0@fmsmsx123.amr.corp.intel.com> <1549305256.13783.7.camel@windriver.com> Message-ID: <2367284f-9e80-b9da-a4c1-b502b8d268ce@linux.intel.com> On 2/4/19 10:34 AM, Michel Thebeau wrote: > a box of birds? > https://idioms.thefreedictionary.com/a+box+of+birds > I like this one, as I think we might get in trouble with Netflix's "bird box" Interesting movie BTW! Sau! > > > On Mon, 2019-02-04 at 18:08 +0000, Eslimi, Dariush wrote: >> Would not this be a bird in a box, or in short “bird box”? >> >> Dariush >> >> From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] >> Sent: February-04-19 12:25 PM >> To: Seiler, Glenn >> Cc: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM. >> >> I like it.  StarlingX in a (virtual) box J >> >>         brucej >> >> From: Seiler, Glenn [mailto:glenn.seiler at windriver.com] >> Sent: Monday, February 4, 2019 9:21 AM >> To: Jones, Bruce E >> Cc: Javier Romero ; Lara, Cesar > com>; starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM. >> >> Hmmm. Why not stick with “in a box” which is a well understood term. >> Not sure the image of a bird in a cage is a positive one. 🤔 >> >> Sent from my iPhone >> >> On Feb 4, 2019, at 10:11 AM, Jones, Bruce E >> wrote: >> >> Maybe we should rename this to “StarlingX in a Cage” since birds go >> into cages, not ships into bottles like Airship. >> >>        brucej >> >> From: Javier Romero [mailto:xavinux at gmail.com] >> Sent: Saturday, February 2, 2019 12:51 PM >> To: Lara, Cesar >> Cc: starlingx-discuss at lists.starlingx.io >> Subject: Re: [Starlingx-discuss] Vagrant StarlingX VM. >> >> Cesar, >> >> Thank you for your time to answer. Will let you know if I have some >> ideas. >> >> Regards, >> >> >> El sábado, 2 de febrero de 2019, Lara, Cesar >> escribió: >> Yes we are exploring a few possibilities for StarlingX in a bottle, >> vagrant in one of them. Feel free to throw some ideas at it >> >> Regards >> Cesar Lara >> Sent from my mobile phone >> From: Javier Romero >> Sent: Saturday, February 2, 2019 11:07 AM >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] Vagrant StarlingX VM. >> >> Hi Team, >> >> Think that perhaps may be useful for new users to have a Vagrant >> preconfigured AIO VM to use StarlingX for the fiirst time. >> >> https://www.vagrantup.com >> >> If this can be useful I can help with that. 
>> >> Vagrant use VirtualBox by default to star the preconfigured VM and >> can also be set to be  used with QEMU. >> >> Best Regards, >> >> >> >> >> >> -- >> Javier Romero >> >> >> >> >> >> -- >> Javier Romero >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Ken.Young at windriver.com Mon Feb 4 18:48:55 2019 From: Ken.Young at windriver.com (Young, Ken) Date: Mon, 4 Feb 2019 18:48:55 +0000 Subject: [Starlingx-discuss] CVE Support and Scanning Message-ID: <3BD8A4F0-7C15-407E-9671-7E3C666F03E1@windriver.com> Team, A “Lights On” feature for the 2019.05 release is “CVE Upgrades”. This feature will enable ongoing security updates for the master branch and selectively provide CVE corrective content to supported releases. The first step for this feature is to define a policy. With the help of the Starling X security team, a draft of this policy has been provided below: https://wiki.openstack.org/wiki/StarlingX/Security/CVE_Support_Policy Please review and provide comments. I plan to reserve a spot on the Community call for any discussion on Wednesday and start the discussion with the build team to identify tools to support this policy on Thursday. Regards, Ken Y -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Mon Feb 4 18:54:04 2019 From: sgw at linux.intel.com (Saul Wold) Date: Mon, 4 Feb 2019 10:54:04 -0800 Subject: [Starlingx-discuss] CVE Support and Scanning In-Reply-To: <3BD8A4F0-7C15-407E-9671-7E3C666F03E1@windriver.com> References: <3BD8A4F0-7C15-407E-9671-7E3C666F03E1@windriver.com> Message-ID: On 2/4/19 10:48 AM, Young, Ken wrote: > Team, > > A “Lights On” feature for the 2019.05 release is “CVE Upgrades”.  This > feature will enable ongoing security updates for the master branch and > selectively provide CVE corrective content to supported releases. The > first step for this feature is to define a policy.  With the help of the > Starling X security team, a draft of this policy has been provided below: > > https://wiki.openstack.org/wiki/StarlingX/Security/CVE_Support_Policy > > Please review and provide comments.  I plan to reserve a spot on the > Community call for any discussion on Wednesday and start the discussion > with the build team to identify tools to support this policy on Thursday. > Can we try to avoid slideware in the Wiki please, ultimately wiki should be editable. Sau! 
_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss 

From vm.rod25 at gmail.com Mon Feb 4 19:26:33 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Mon, 4 Feb 2019 13:26:33 -0600
Subject: Re: [Starlingx-discuss] CVE Support and Scanning

On Mon, Feb 4, 2019 at 12:49 PM Young, Ken wrote: > > Team, > > > > A “Lights On” feature for the 2019.05 release is “CVE Upgrades”. This feature will enable ongoing security updates for the master branch and selectively provide CVE corrective content to supported releases. The first step for this feature is to define a policy. With the help of the Starling X security team, a draft of this policy has been provided below: > > > > https://wiki.openstack.org/wiki/StarlingX/Security/CVE_Support_Policy > > > > Please review and provide comments. I plan to reserve a spot on the Community call for any discussion on Wednesday and start the discussion with the build team to identify tools to support this policy on Thursday.

Question on the last slide: you propose a formula as Criticality >= 7. What standard do you plan to use, CVSS v3.0 or v2.0? For example, taking this MariaDB CVE: https://nvd.nist.gov/vuln/detail/CVE-2017-15365 Base Score: 8.8 HIGH in V3 and 6.5 MEDIUM in V2. My recommendation would be to use the highest one, regardless of whether the score came from V2 or V3. However, I think we should specify that somewhere.

One more question: when you say critical issues are fixed if corrections are available upstream, does it mean that (taking the previous example) if MariaDB provides a patch that is merged in master and released in the latest release, like in the previous example: https://github.com/MariaDB/server/commit/0b5a5258abbeaf8a0c3a18c7e753699787fdf46e but CentOS has not taken it yet, are we OK to apply this patch in STX until CentOS applies it in the coming future?

Regards

Victor R
_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss 

From bruce.e.jones at intel.com Mon Feb 4 19:31:07 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Mon, 4 Feb 2019 19:31:07 +0000
Subject: Re: [Starlingx-discuss] CVE Support and Scanning
Message-ID: <9A85D2917C58154C960D95352B22818BBFD1A918@fmsmsx123.amr.corp.intel.com>

Victor wrote: > One more question, when you said critical issues are fixed if corrections are available upstream it means that ( taking the previous example ) if MariaDB provides a Patch, is merged in master and released in the latest release, like in the previous example : > https://github.com/MariaDB/server/commit/0b5a5258abbeaf8a0c3a18c7e753699787fdf46e > But CentOS has not taken it yet, are we OK to apply this patch in STX until CentOS apply in incoming future? If we are picking up a component from a distro (like CentOS), I think the policy should be that we pick up the change from the distro. If we're picking up a component directly from its upstream, we pick up the fix from the upstream. So I think the answer is "it depends". 
brucej -----Original Message----- From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] Sent: Monday, February 4, 2019 11:27 AM To: Young, Ken Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] CVE Support and Scanning On Mon, Feb 4, 2019 at 12:49 PM Young, Ken wrote: > > Team, > > > > A “Lights On” feature for the 2019.05 release is “CVE Upgrades”. This feature will enable ongoing security updates for the master branch and selectively provide CVE corrective content to supported releases. The first step for this feature is to define a policy. With the help of the Starling X security team, a draft of this policy has been provided below: > > > > https://wiki.openstack.org/wiki/StarlingX/Security/CVE_Support_Policy > > > > Please review and provide comments. I plan to reserve a spot on the Community call for any discussion on Wednesday and start the discussion with the build team to identify tools to support this policy on Thursday. Question on the last slide you propose a formula as Criticality >= 7 What standard are you plan to use? CVSS v3.0 or v2.0 For example, taking this MariaDB https://nvd.nist.gov/vuln/detail/CVE-2017-15365 Base Score: 8.8 HIGH in V3 and 6.5 MEDIUM in V2 My recommendation will be to use the highest one despite if the score came from V2 or V3 However, I think we should specify that somewhere One more question, when you said critical issues are fixed if corrections are available upstream it means that ( taking the previous example ) if MariaDB provides a Patch, is merged in master and released in the latest release, like in the previous example : https://github.com/MariaDB/server/commit/0b5a5258abbeaf8a0c3a18c7e753699787fdf46e But CentOS has not taken it yet, are we OK to apply this patch in STX until CentOS apply in incoming future? Regards Victor R > > > Regards, > > Ken Y > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From michel.thebeau at windriver.com Mon Feb 4 20:25:44 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Mon, 4 Feb 2019 15:25:44 -0500 Subject: [Starlingx-discuss] CVE Support and Scanning In-Reply-To: References: <3BD8A4F0-7C15-407E-9671-7E3C666F03E1@windriver.com> Message-ID: <1549311944.13783.17.camel@windriver.com> Hi Victor, > My recommendation will be to use the highest one despite if > the score came from V2 or V3 The two sets of metrics are not comparable.  StarlingX policy should refer to either CVSS v2 or v3. > One more question, when you said critical issues are fixed > if corrections are available upstream it means that... The text of one of the slides has written "StarlingX depends on the upstream OS community to fix CVEs".  Is this reference to upstream you intend?  In that text 'OS community' would refer to CentOS community rather than the example MariaDB community. M On Mon, 2019-02-04 at 13:26 -0600, Victor Rodriguez wrote: > On Mon, Feb 4, 2019 at 12:49 PM Young, Ken > wrote: > > > > > > Team, > > > > > > > > A “Lights On” feature for the 2019.05 release is “CVE > > Upgrades”.  This feature will enable ongoing security updates for > > the master branch and selectively provide CVE corrective content to > > supported releases.  
The first step for this feature is to define a > > policy.  With the help of the Starling X security team, a draft of > > this policy has been provided below: > > > > https://wiki.openstack.org/wiki/StarlingX/Security/CVE_Support_Poli > > cy > > > > Please review and provide comments.  I plan to reserve a spot on > > the Community call for any discussion on Wednesday and start the > > discussion with the build team to identify tools to support this > > policy on Thursday. > Question on the last slide you propose a formula as > > Criticality >= 7 > > What standard are you plan to use? CVSS v3.0 or v2.0 > > For example, taking this MariaDB > > https://nvd.nist.gov/vuln/detail/CVE-2017-15365 > > Base Score: 8.8 HIGH in V3 and 6.5 MEDIUM in V2 > > My recommendation will be to use the highest one despite if the score > came from V2 or V3 > However, I think we should specify that somewhere > > One more question, when you said critical issues are fixed if > corrections are available upstream it means that ( taking the previous > example ) if MariaDB provides a Patch, is merged in master and > released in the latest release,  like in the previous example : > https://github.com/MariaDB/server/commit/0b5a5258abbeaf8a0c3a18c7e753 > 699787fdf46e > > But CentOS has not taken it yet, are we OK to apply this patch in STX > until CentOS apply in incoming future? > > Regards > > Victor R 
_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss 

From vm.rod25 at gmail.com Mon Feb 4 20:41:09 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Mon, 4 Feb 2019 14:41:09 -0600
Subject: Re: [Starlingx-discuss] CVE Support and Scanning
Message-ID: 

On Mon, Feb 4, 2019 at 2:25 PM Michel Thebeau wrote: > > Hi Victor, > > > My recommendation will be to use the highest one despite if > > the score came from V2 or V3 > > The two sets of metrics are not comparable.  StarlingX policy should > refer to either CVSS v2 or v3. > 

But the slide in the wiki says: Criticality >= 7. If V2 = 5 and V3 = 8, should that CVE be considered as Criticality >= 7? My observation is that we should specify Criticality >= 7 (for either V2 or V3).

> > One more question, when you said critical issues are fixed > > if corrections are available upstream it means that... > > The text of one of the slides has written "StarlingX depends on the > > upstream OS community to fix CVEs".  Is this reference to upstream you > > intend?  In that text 'OS community' would refer to CentOS community > > rather than the example MariaDB community. > 

Ok, thanks for the clarification. I think that there might be some special cases where we might be prompted to apply a CVE patch before CentOS, but that might have to be decided in the security meeting when they analyze the CVEs.

> > M > > > On Mon, 2019-02-04 at 13:26 -0600, Victor Rodriguez wrote: > > > > On Mon, Feb 4, 2019 at 12:49 PM Young, Ken > > > wrote: > > > > > > > > > > > > Team, > > > > > > > > > > > > > > > > A “Lights On” feature for the 2019.05 release is “CVE > > > > Upgrades”.  
This feature will enable ongoing security updates for > > > the master branch and selectively provide CVE corrective content to > > > supported releases. The first step for this feature is to define a > > > policy. With the help of the Starling X security team, a draft of > > > this policy has been provided below: > > > > > > > > > > > > https://wiki.openstack.org/wiki/StarlingX/Security/CVE_Support_Poli > > > cy > > > > > > > > > > > > Please review and provide comments. I plan to reserve a spot on > > > the Community call for any discussion on Wednesday and start the > > > discussion with the build team to identify tools to support this > > > policy on Thursday. > > > > Question on the last slide you propose a formula as > > > > Criticality >= 7 > > > > What standard are you plan to use? CVSS v3.0 or v2.0 > > > > For example, taking this MariaDB > > > > https://nvd.nist.gov/vuln/detail/CVE-2017-15365 > > > > Base Score: 8.8 HIGH in V3 and 6.5 MEDIUM in V2 > > > > My recommendation will be to use the highest one despite if the score > > came from V2 or V3 > > However, I think we should specify that somewhere > > > > One more question, when you said critical issues are fixed if > > corrections are available upstream it means that ( taking the > > previous > > example ) if MariaDB provides a Patch, is merged in master and > > released in the latest release, like in the previous example : > > https://github.com/MariaDB/server/commit/0b5a5258abbeaf8a0c3a18c7e753 > > 699787fdf46e > > > > But CentOS has not taken it yet, are we OK to apply this patch in STX > > until CentOS apply in incoming future? > > > > Regards > > > > Victor R > > > > > > > > > > > > Regards, > > > > > > Ken Y > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discus > > > s > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From chris.friesen at windriver.com Mon Feb 4 20:51:03 2019 From: chris.friesen at windriver.com (Chris Friesen) Date: Mon, 4 Feb 2019 14:51:03 -0600 Subject: [Starlingx-discuss] VCPU scheduler priority In-Reply-To: References: Message-ID: <902ba073-4ab3-7f39-5bdc-3794f09d3b89@windriver.com> On 2/4/2019 6:08 AM, Himanshu Goyal wrote: > Hi, > > Please suggest how to set VCPU Scheduler policy in flavor, Not able to > found that in flavor metadata. What problem are you trying to solve? There may be alternate solutions. Chris From michel.thebeau at windriver.com Mon Feb 4 21:00:15 2019 From: michel.thebeau at windriver.com (Michel Thebeau) Date: Mon, 4 Feb 2019 16:00:15 -0500 Subject: [Starlingx-discuss] CVE Support and Scanning In-Reply-To: References: <3BD8A4F0-7C15-407E-9671-7E3C666F03E1@windriver.com> <1549311944.13783.17.camel@windriver.com> Message-ID: <1549314015.13783.28.camel@windriver.com> On Mon, 2019-02-04 at 14:41 -0600, Victor Rodriguez wrote: > On Mon, Feb 4, 2019 at 2:25 PM Michel Thebeau > wrote: > > > > > > Hi Victor, > > > > > > > > My recommendation will be to use the highest one despite if > > > the score came from V2 or V3 > > The two sets of metrics are not comparable.  StarlingX policy > > should > > refer to either CVSS v2 or v3. > > > But slide in the wiki say : Criticality >= 7 , if V2 = 5 adn V3 =8 > shoudl that CVE be consider as Criticality >= 7 ? 
> My observation is that we shoudl specify Criticality >= 7 ( for > either > V2 or V3 )

I believe that Ken has a particular metric in mind, which was not documented.  The value of '7' is specific to the metric not documented.  There is a hint in the document that the value '7' is for 'critical' CVE issues.

... refer to this document: https://nvd.nist.gov/vuln-metrics/cvss

'7' is the boundary for the highest rating of CVSS v2, whereas CVSS v3 lists critical as >= 9.0.

> > One more question, when you said critical issues are fixed > > if corrections are available upstream it means that... > The text of one of the slides has written "StarlingX depends on the > upstream OS community to fix CVEs".  Is this reference to upstream > you intend?  In that text 'OS community' would refer to CentOS > community rather than the example MariaDB community. > > Ok, thanks for the clarification. > I think that there might be some special cases where we might prompt > to apply a CVE patch before CentOS, but that might have to be desided > in the security meeeting when they analyze the CVEs

I expect that it is unlikely a team in StarlingX will match the proficiency of the teams behind CentOS.  But I agree that the StarlingX community will discuss these things.

M
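Before the thread continues, here is a minimal sketch of the triage rule being debated above, assuming the policy adopts Victor's suggestion of taking the higher of the two base scores when both CVSS v2 and v3 are available. The data values come from this thread; the function names are illustrative, not the NVD API:

    # Hedged sketch of the proposed rule: act on a CVE when the higher of
    # its CVSS v2/v3 base scores meets the policy threshold.
    POLICY_THRESHOLD = 7.0  # the "Criticality >= 7" value from the draft policy

    def effective_score(cvss_v2=None, cvss_v3=None):
        """Return the higher available base score, or None if unscored."""
        scores = [s for s in (cvss_v2, cvss_v3) if s is not None]
        return max(scores) if scores else None

    def needs_fix(cvss_v2=None, cvss_v3=None, threshold=POLICY_THRESHOLD):
        score = effective_score(cvss_v2, cvss_v3)
        return score is not None and score >= threshold

    # The MariaDB example from this thread: CVE-2017-15365 scores
    # 6.5 (MEDIUM) on v2 but 8.8 (HIGH) on v3, so it would qualify.
    assert needs_fix(cvss_v2=6.5, cvss_v3=8.8)
    assert not needs_fix(cvss_v2=5.0)

Under that rule the v2-versus-v3 question mostly dissolves: a CVE is triaged on whichever scoring system rates it worst, which is the conservative reading of "Criticality >= 7".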
From vm.rod25 at gmail.com Mon Feb 4 21:23:44 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Mon, 4 Feb 2019 15:23:44 -0600
Subject: Re: [Starlingx-discuss] CVE Support and Scanning
Message-ID: 

On Mon, Feb 4, 2019 at 3:01 PM Michel Thebeau wrote: > > On Mon, 2019-02-04 at 14:41 -0600, Victor Rodriguez wrote: > > On Mon, Feb 4, 2019 at 2:25 PM Michel Thebeau > > wrote: > > > Hi Victor, > > > > > > > My recommendation will be to use the highest one despite if > > > > the score came from V2 or V3 > > > The two sets of metrics are not comparable.  StarlingX policy > > > should refer to either CVSS v2 or v3. > > > > > But slide in the wiki say : Criticality >= 7 , if V2 = 5 adn V3 =8 > > shoudl that CVE be consider as Criticality >= 7 ? > > My observation is that we shoudl specify Criticality >= 7 ( for > > either V2 or V3 ) > > I believe that Ken has a particular metric in mind, which was not > documented.  The value of '7' is specific to the metric not > documentated.  There is a hint in the document that the value '7' is > for 'critical' CVEs issues. > > ... refer to this document: > https://nvd.nist.gov/vuln-metrics/cvss > > '7' is the boundary for the highest rating of CVSS v2.  Where as CVSS > v3 lists critical as >9.0 > 

Ok, then based on the NVD Vulnerability Severity Ratings, 7 is OK for both V2 and V3. We could then say in the wiki that if CVE criticality is >= High ...

> > > One more question, when you said critical issues are fixed > > > if corrections are available upstream it means that... > > The text of one of the slides has written "StarlingX depends on the > > upstream OS community to fix CVEs". > > Ok, thanks for the clarification. > > I think that there might be some special cases where we might prompt > > to apply a CVE patch before CentOS, but that might have to be desided > > in the security meeeting when they analyze the CVEs > > I expect that it is unlikely a team in StarlingX will match the > proficiency of the teams behind CentOS.  But I agree that the StarlingX > community will discuss these things. > 

I have seen cases before: as CVE maintainer of an OS for some time, there were times when the severity of the CVE made it really urgent to update the package and release a new version of the OS; our mindset was security first. 
As an open community I would like to hear more feedback from multiple users and developers.

Regards

Victor
_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss 

From Ken.Young at windriver.com Mon Feb 4 22:23:59 2019
From: Ken.Young at windriver.com (Young, Ken)
Date: Mon, 4 Feb 2019 22:23:59 +0000
Subject: Re: [Starlingx-discuss] CVE Support and Scanning
Message-ID: <6E7E2668-C4A0-4290-A79F-499F8E4F94C8@windriver.com>

See inline.

On 2019-02-04, 2:26 PM, "Victor Rodriguez" wrote:

    On Mon, Feb 4, 2019 at 12:49 PM Young, Ken wrote: > > Team, > > > > A “Lights On” feature for the 2019.05 release is “CVE Upgrades”.  
This feature will enable ongoing security updates for the master branch and selectively provide CVE corrective content to supported releases. The first step for this feature is to define a policy. With the help of the Starling X security team, a draft of this policy has been provided below: > > > > https://wiki.openstack.org/wiki/StarlingX/Security/CVE_Support_Policy > > > > Please review and provide comments. I plan to reserve a spot on the Community call for any discussion on Wednesday and start the discussion with the build team to identify tools to support this policy on Thursday.

    Question on the last slide you propose a formula as Criticality >= 7 What standard are you plan to use? CVSS v3.0 or v2.0

I am suggesting we start with v2. I need to look into v3 a little more.

    For example, taking this MariaDB https://nvd.nist.gov/vuln/detail/CVE-2017-15365 Base Score: 8.8 HIGH in V3 and 6.5 MEDIUM in V2 My recommendation will be to use the highest one despite if the score came from V2 or V3 However, I think we should specify that somewhere

Agreed.

    One more question, when you said critical issues are fixed if corrections are available upstream it means that ( taking the previous example ) if MariaDB provides a Patch, is merged in master and released in the latest release, like in the previous example : https://github.com/MariaDB/server/commit/0b5a5258abbeaf8a0c3a18c7e753699787fdf46e But CentOS has not taken it yet, are we OK to apply this patch in STX until CentOS apply in incoming future?

We will wait for CentOS. We are in the process of patch elimination. I do not want to carry more.

    Regards

    Victor R
_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss 

From marcel at schaible-consulting.de Tue Feb 5 14:05:57 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Tue, 5 Feb 2019 15:05:57 +0100 (CET)
Subject: Re: [Starlingx-discuss] Installation problem: Configuration failed: Failed to update hiera configuration
In-Reply-To: <87F75100-3132-4085-B3CA-E851E0725B6C@windriver.com>
Message-ID: <1355835714.102284.1549375557966@communicator.strato.com>

[root at localhost ~(keystone_admin)]# openstack endpoint list
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+
| ID                               | Region    | Service Name | Service Type | Enabled | Interface | URL                      |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+
| 0c6612f4307548ac93a7f193b8e2434d | RegionOne | barbican     | key-manager  | True    | admin     | http://127.0.0.1:9311    |
| 9b37261408974b36b2ea57cfe159eada | RegionOne | barbican     | key-manager  | True    | internal  | http://127.0.0.1:9311    |
| d8b3e9f76c8a478796cc384961c9d202 | RegionOne | barbican     | key-manager  | True    | public    | http://127.0.0.1:9311    |
| 08126550bd5c4027a2bb0b6ea147ba71 | RegionOne | keystone     | identity     | True    | admin     | http://127.0.0.1:5000/v3 |
| e3403a40c8be45a8a6b3535832d8b06e | RegionOne | keystone     | identity     | True    | internal  | http://127.0.0.1:5000/v3 |
| 2b2c606607464e61b2d4c1c9298d744d | RegionOne | keystone     | identity     | True    | public    | http://127.0.0.1:5000/v3 |
| fe2ffb3fd52e4f0e91e41c7688ec6e81 | RegionOne | sysinv       | platform     | True    | admin     | http://127.0.0.1:6385/v1 |
| 5099dee5af8a46eb9ea018ab3c175521 | RegionOne | sysinv       | platform     | True    | internal  | http://127.0.0.1:6385/v1 |
| 11b28a902ebe425999b825fe63e372b5 | RegionOne | sysinv       | platform     | True    | public    | http://127.0.0.1:6385/v1 |
+----------------------------------+-----------+--------------+--------------+---------+-----------+--------------------------+
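The listing above is what the replies below key on: there is no smapi row in the catalog. A minimal sketch of that lookup failure, treating the rows above as plain data (the dict layout is illustrative, not the keystone client API):

    # Endpoints actually present in Marcel's listing above (abbreviated).
    endpoints = [
        {"service": "barbican", "interface": "internal", "url": "http://127.0.0.1:9311"},
        {"service": "keystone", "interface": "internal", "url": "http://127.0.0.1:5000/v3"},
        {"service": "sysinv",   "interface": "internal", "url": "http://127.0.0.1:6385/v1"},
    ]

    def internal_url(service, catalog, region="RegionOne"):
        """Mimic what a CLI does: find the internal endpoint for a service."""
        for ep in catalog:
            if ep["service"] == service and ep["interface"] == "internal":
                return ep["url"]
        raise LookupError("internalURL endpoint for %s service in %s region "
                          "not found" % (service, region))

    print(internal_url("sysinv", endpoints))   # resolves fine
    try:
        internal_url("smapi", endpoints)       # no smapi rows in the catalog...
    except LookupError as exc:
        print(exc)                             # ...so this prints the same
                                               # message the CLI shows below

In other words, the error quoted below is not really about RegionOne being wrong; it suggests the service-manager API (smapi) endpoint was never registered in the catalog.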
> "Peters, Matt" wrote on 1 February 2019 at 17:38: > > > Hello, > I'm not sure why it is giving that error response. > > Are you able to run the following? > openstack endpoint list > > > On 2019-02-01, 11:19 AM, "Marcel Schaible" wrote: > > Hi Matt, > > thanks for your response. > > controller-0:~# source /etc/nova/openrc > [root at controller-0 ~(keystone_admin)]# system host-ethernet-port-list controller > internalURL endpoint for smapi service in RegionOne region not found > [root at controller-0 ~(keystone_admin)]# system host-if-list -a controller-0 > internalURL endpoint for smapi service in RegionOne region not found > [root at controller-0 ~(keystone_admin)]# > > Any idea what "RegionOne" means? > Thanks > Marcel > > > > "Peters, Matt" wrote on 1 February 2019 at 17:07: > > > > > > Hello Marcel, > > If your system is still in that state, can you run the following commands? > > > > source /etc/nova/openrc > > system host-ethernet-port-list controller-0 > > system host-if-list -a controller-0 > > > > > > On 2019-02-01, 10:32 AM, "Marcel Schaible" wrote: > > > > Hi, > > > > during installation when applying the configuration I'll get an error message from hiera: > > > > Configuration failed: Failed to update hiera configuration > > > > Configuration log and apply_manifest.log is attached below. > > > > Any idea or hint? > > > > Thanks > > > > Marcel > > > > > > > > localhost:~# config_controller > > System Configuration > > ==================== > > Enter Q at any prompt to abort... > > > > System date and time: > > --------------------- > > > > The system date and time must be set now. Note that UTC time must be used and > > that the date and time must be set as accurately as possible, even if NTP/PTP is > > to be configured later. > > > > Current system date and time (UTC): 2019-02-01 15:15:25 > > > > Is the current date and time correct? [y/n]: y > > Current system date and time will be used. > > > > System timezone: > > ---------------- > > > > The system timezone must be set now. The timezone must be a valid timezone from > > /usr/share/zoneinfo (e.g. UTC, Asia/Hong_Kong, etc...) > > > > Please input the timezone[UTC]:Europe/Berlin > > > > System Configuration: > > --------------------- > > > > System mode. Available options are: > > > > 1) duplex-direct - two node redundant configuration. Management and > > infrastructure networks are directly connected to peer ports > > 2) duplex - two node redundant configuration. > > 3) simplex - single node non-redundant configuration. > > System mode [duplex-direct]: > > > > PXEBoot Network: > > ---------------- > > > > The PXEBoot network is used for initial booting and installation of each node. > > IP addresses on this network are reachable only within the data center. > > > > The default configuration combines the PXEBoot network and the management > > network. If a separate PXEBoot network is used, it will share the management > > interface, which requires the management network to be placed on a VLAN. 
> > > > Configure a separate PXEBoot network [y/N]: > > Aborting configuration > > localhost:~# config_controller > > System Configuration > > ==================== > > Enter Q at any prompt to abort... > > > > System date and time: > > --------------------- > > > > The system date and time must be set now. Note that UTC time must be used and > > that the date and time must be set as accurately as possible, even if NTP/PTP is > > to be configured later. > > > > Current system date and time (UTC): 2019-02-01 15:15:40 > > > > Is the current date and time correct? [y/n]: y > > Current system date and time will be used. > > > > System timezone: > > ---------------- > > > > The system timezone must be set now. The timezone must be a valid timezone from > > /usr/share/zoneinfo (e.g. UTC, Asia/Hong_Kong, etc...) > > > > Please input the timezone[UTC]:Europe/Berlin > > > > System Configuration: > > --------------------- > > > > System mode. Available options are: > > > > 1) duplex-direct - two node redundant configuration. Management and > > infrastructure networks are directly connected to peer ports > > 2) duplex - two node redundant configuration. > > 3) simplex - single node non-redundant configuration. > > System mode [duplex-direct]: 1 > > > > PXEBoot Network: > > ---------------- > > > > The PXEBoot network is used for initial booting and installation of each node. > > IP addresses on this network are reachable only within the data center. > > > > The default configuration combines the PXEBoot network and the management > > network. If a separate PXEBoot network is used, it will share the management > > interface, which requires the management network to be placed on a VLAN. > > > > Configure a separate PXEBoot network [y/N]: N > > > > Management Network: > > ------------------- > > > > The management network is used for internal communication between platform > > components. IP addresses on this network are reachable only within the data > > center. > > > > A management bond interface provides redundant connections for the management > > network. > > It is strongly recommended to configure Management interface link > > aggregation, for All-in-one duplex-direct. > > > > Management interface link aggregation [y/N]: y > > Management interface [bond0]: > > Management interface MTU [1500]: > > > > Specify one of the bonding policies. Possible values are: > > 1) 802.3ad (LACP) policy > > 2) Active-backup policy > > > > Management interface bonding policy [802.3ad]: > > A maximum of 2 physical interfaces can be attached to the management interface. > > > > First management interface member [enp3s0f1]: enp8s20f4 > > Second management interface member []: enp8s20f5 > > Management subnet [192.168.204.0/24]: 172.27.1.0/24 > > Use entire management subnet [Y/n]: Y > > > > IP addresses can be assigned to hosts dynamically or a static IP address can be > > specified for each host. This choice applies to both the management network and > > infrastructure network (if configured). > > Warning: Selecting 'N', or static IP address allocation, disables automatic > > provisioning of new hosts in System Inventory, requiring the user to manually > > provision using the 'system host-add' command. > > Dynamic IP address allocation [Y/n]: Y > > Management Network Multicast subnet [239.1.1.0/28]: > > > > Infrastructure Network: > > ----------------------- > > > > The infrastructure network is used for internal communication between platform > > components to offload the management network of high bandwidth services. 
IP > > addresses on this network are reachable only within the data center. > > > > If a separate infrastructure interface is not configured the management network > > will be used. > > > > It is NOT recommended to configure infrastructure network for All-in- > > one duplex-direct. > > Configure an infrastructure interface [y/N]: N > > > > External OAM Network: > > --------------------- > > > > The external OAM network is used for management of the cloud. It also provides > > access to the platform APIs. IP addresses on this network are reachable outside > > the data center. > > > > An external OAM bond interface provides redundant connections for the OAM > > network. > > > > External OAM interface link aggregation [y/N]: y > > External OAM interface [bond1]: > > Configure an external OAM VLAN [y/N]: > > External OAM interface MTU [1500]: > > > > Specify one of the bonding policies. Possible values are: > > 1) Active-backup policy > > 2) Balanced XOR policy > > 3) 802.3ad (LACP) policy > > > > External OAM interface bonding policy [active-backup]: > > A maximum of 2 physical interfaces can be attached to the external OAM > > interface. > > > > First external OAM interface member [enp3s0f0]: enp7s16f1 > > Second external oam interface member []: enp7s16f7 > > External OAM subnet [10.10.10.0/24]: 10.62.150.0/24 > > External OAM gateway address [10.62.150.1]: > > External OAM floating address [10.62.150.2]: > > External OAM address for first controller node [10.62.150.3]: 10.62.150.210 > > External OAM address for second controller node [10.62.150.211]: > > > > Cloud Authentication: > > ------------------------------- > > > > Configure a password for the Cloud admin user The Password must have a minimum > > length of 7 character, and conform to password complexity rules > > Create admin user password: > > Repeat admin user password: > > > > > > > > The following configuration will be applied: > > > > System Configuration > > -------------------- > > Time Zone: Europe/Berlin > > System mode: duplex-direct > > > > PXEBoot Network Configuration > > ----------------------------- > > Separate PXEBoot network not configured > > PXEBoot Controller floating hostname: pxecontroller > > > > Management Network Configuration > > -------------------------------- > > Management interface name: bond0 > > Management interface: bond0 > > Management interface MTU: 1500 > > Management ae member 0: enp8s20f4 > > Management ae member 1: enp8s20f5 > > Management ae policy : 802.3ad > > Management subnet: 172.27.1.0/24 > > Controller floating address: 172.27.1.2 > > Controller 0 address: 172.27.1.3 > > Controller 1 address: 172.27.1.4 > > NFS Management Address 1: 172.27.1.5 > > NFS Management Address 2: 172.27.1.6 > > Controller floating hostname: controller > > Controller hostname prefix: controller- > > OAM Controller floating hostname: oamcontroller > > Dynamic IP address allocation is selected > > Management multicast subnet: 239.1.1.0/28 > > > > Infrastructure Network Configuration > > ------------------------------------ > > Infrastructure interface not configured > > > > External OAM Network Configuration > > ---------------------------------- > > External OAM interface name: bond1 > > External OAM interface: bond1 > > External OAM interface MTU: 1500 > > External OAM ae member 0: enp7s16f1 > > External OAM ae member 1: enp7s16f7 > > External OAM ae policy : active-backup > > External OAM subnet: 10.62.150.0/24 > > External OAM gateway address: 10.62.150.1 > > External OAM floating address: 10.62.150.2 > > 
External OAM 0 address: 10.62.150.210 > > External OAM 1 address: 10.62.150.211 > > > > Apply the above configuration? [y/n]: y > > > > Applying configuration (this will take several minutes): > > > > 01/08: Creating bootstrap configuration ... DONE > > 02/08: Applying bootstrap manifest ... DONE > > 03/08: Persisting local configuration ... DONE > > 04/08: Populating initial system inventory ... DONE > > 05/08: Creating system configuration ... sysinv 2019-02-01 15:20:44.053 25508 CRITICAL sysinv [-] 24 > > 2019-02-01 15:20:44.053 25508 TRACE sysinv Traceback (most recent call last): > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/bin/sysinv-puppet", line 10, in > > 2019-02-01 15:20:44.053 25508 TRACE sysinv sys.exit(main()) > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/cmd/puppet.py", line 75, in main > > 2019-02-01 15:20:44.053 25508 TRACE sysinv CONF.action.func(CONF.action.path, CONF.action.hostname) > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/cmd/puppet.py", line 47, in create_host_config_action > > 2019-02-01 15:20:44.053 25508 TRACE sysinv operator.update_host_config(host) > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 30, in _wrapper > > 2019-02-01 15:20:44.053 25508 TRACE sysinv func(self, *args, **kwargs) > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 145, in update_host_config > > 2019-02-01 15:20:44.053 25508 TRACE sysinv config.update(puppet_plugin.obj.get_host_config(host)) > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 99, in get_host_config > > 2019-02-01 15:20:44.053 25508 TRACE sysinv generate_interface_configs(context, config) > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1029, in generate_interface_configs > > 2019-02-01 15:20:44.053 25508 TRACE sysinv generate_network_config(context, config, iface) > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 924, in generate_network_config > > 2019-02-01 15:20:44.053 25508 TRACE sysinv network_config = get_interface_network_config(context, iface) > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 896, in get_interface_network_config > > 2019-02-01 15:20:44.053 25508 TRACE sysinv os_ifname = get_interface_os_ifname(context, iface) > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 507, in get_interface_os_ifname > > 2019-02-01 15:20:44.053 25508 TRACE sysinv os_ifname = get_interface_port_name(context, iface) > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 472, in get_interface_port_name > > 2019-02-01 15:20:44.053 25508 TRACE sysinv port = get_interface_port(context, iface) > > 2019-02-01 15:20:44.053 25508 TRACE sysinv File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 464, in get_interface_port > > 2019-02-01 15:20:44.053 25508 TRACE sysinv return context['ports'][iface['id']] > > 2019-02-01 15:20:44.053 25508 TRACE sysinv KeyError: 24 > > 2019-02-01 15:20:44.053 25508 TRACE sysinv > > Failed to update puppet hiera host 
config > > > > Configuration failed: Failed to update hiera configuration > > localhost:~# > > > > /tmp/apply_manifest.log: > > ======================== > > > > cp: cannot stat ‘/tmp/hieradata/172.27.1.3.yaml’: No such file or directory > > cp: cannot stat ‘/tmp/hieradata/system.yaml’: No such file or directory > > cp: cannot stat ‘/tmp/hieradata/secure_system.yaml’: No such file or directory > > Applying puppet bootstrap manifest... > > [DONE] > > > > > > ls /tmp/puppet/hieradata/ > > ========================= > > global.yaml personality.yaml secure_static.yaml static.yaml > > localhost:~# > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > > From ada.cabrales at intel.com Tue Feb 5 16:33:28 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 5 Feb 2019 16:33:28 +0000 Subject: [Starlingx-discuss] [ Test ] meeting - 02/05/ -> 02/06 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD67462@FMSMSX114.amr.corp.intel.com> Hello Due to some schedule conflicts, I'm changing the testing meeting for tomorrow, 02/06 at 6am PST. This is a one-time change. Sorry for the short notice. Thanks Ada From bruce.e.jones at intel.com Tue Feb 5 17:00:43 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 5 Feb 2019 17:00:43 +0000 Subject: [Starlingx-discuss] Distro.openstack meeting today and time change for next week Message-ID: <9A85D2917C58154C960D95352B22818BBFD1BCB6@fmsmsx123.amr.corp.intel.com> At today's meeting we reviewed and updated the tracking spreadsheet which can be found here: [0]. We have 18 items of 58 that we are tracking that are at risk or unlikely to make the Stein release. Next week's meeting will start at 6:30 PST (9:30) EST to allow the Test team to have the first 30 minutes of our usual time slot. Brucej [0] https://docs.google.com/spreadsheets/d/1udAtEpQljV2JZVs-525UhWyx-5ePOaSSkKD1CS27ohU/edit?usp=sharing -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Tue Feb 5 18:15:26 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 5 Feb 2019 18:15:26 +0000 Subject: [Starlingx-discuss] Distro.openstack meeting today and time change for next week In-Reply-To: <9A85D2917C58154C960D95352B22818BBFD1BCB6@fmsmsx123.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BBFD1BCB6@fmsmsx123.amr.corp.intel.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD67683@FMSMSX114.amr.corp.intel.com> Hello :) We have a confusion here: Testing meeting for this week will take place tomorrow, 02/06 at 6am PDT. We will use the distro.non-openstack time slot, as they cancelled their meeting occurrence due to the Chinese New Year. Thanks! Ada From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Tuesday, February 5, 2019 11:01 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Distro.openstack meeting today and time change for next week At today's meeting we reviewed and updated the tracking spreadsheet which can be found here: [0]. We have 18 items of 58 that we are tracking that are at risk or unlikely to make the Stein release. Next week's meeting will start at 6:30 PST (9:30) EST to allow the Test team to have the first 30 minutes of our usual time slot. 
Brucej [0] https://docs.google.com/spreadsheets/d/1udAtEpQljV2JZVs-525UhWyx-5ePOaSSkKD1CS27ohU/edit?usp=sharing -------------- next part -------------- An HTML attachment was scrubbed... URL: From cristopher.j.lemus.contreras at intel.com Tue Feb 5 18:45:56 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Tue, 5 Feb 2019 18:45:56 +0000 Subject: [Starlingx-discuss] [Containers] Deployment status Message-ID: <27B234A6-A098-4034-9108-2BD1D25AC819@intel.com> Hi All, ** Baremetal Environment ** Simplex configuration on BareMetal servers without internet access. SUMMARY: * Worked with Erich and Mingyuan to apply the patch for config_controller (storyboard: https://storyboard.openstack.org/#!/story/2004711) * With these patches, we can define a mirror registry during the `config_controller --kubernetes` . Using the new option, config_controller completed without any issues, pulling the required images from our internal registry. * During system-application apply, there are errors related to images still trying to be downloaded from public registry (overriding private registries definitions): * 2019-02-01T17:51:57.394 controller-0 dockerd[2325]: info time="2019-02-01T17:51:57.394492462Z" level=info msg="Attempting next endpoint for pull after error: Get https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" * Mingyuan mentioned that another patch is in progress for application-apply. Is somebody going to work on it during CNY? Regards, Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Tue Feb 5 19:09:28 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Tue, 5 Feb 2019 19:09:28 +0000 Subject: [Starlingx-discuss] [Containers] Deployment status In-Reply-To: <27B234A6-A098-4034-9108-2BD1D25AC819@intel.com> References: <27B234A6-A098-4034-9108-2BD1D25AC819@intel.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB3BCF12@ALA-MBD.corp.ad.wrs.com> Hi Cristopher, I don’t believe the commit you reference is ready to merge, it is currently at -1 It also will not address the application images as you noted below. This is Mingyuan’s story, not sure if someone else was lined up to work on this during CNY. Brent From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Tuesday, February 5, 2019 1:46 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Deployment status Hi All, ** Baremetal Environment ** Simplex configuration on BareMetal servers without internet access. SUMMARY: - Worked with Erich and Mingyuan to apply the patch for config_controller (storyboard: https://storyboard.openstack.org/#!/story/2004711) - With these patches, we can define a mirror registry during the `config_controller --kubernetes` . Using the new option, config_controller completed without any issues, pulling the required images from our internal registry. 
- During system-application apply, there are errors related to images still trying to be downloaded from public registry (overriding private registries definitions): o 2019-02-01T17:51:57.394 controller-0 dockerd[2325]: info time="2019-02-01T17:51:57.394492462Z" level=info msg="Attempting next endpoint for pull after error: Get https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" - Mingyuan mentioned that another patch is in progress for application-apply. Is somebody going to work on it during CNY? Regards, Cristopher Lemus -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Feb 5 19:42:58 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 5 Feb 2019 20:42:58 +0100 Subject: [Starlingx-discuss] Community Planning call reminder Message-ID: Hi, This is a friendly reminder that we’re having the next community planning call __tomorrow (February 6) at 8am PT / 1600 UTC__. We will discuss topics such as outreach and onboarding, planning event presence and further items the participants are interested in discussing. You can find the agenda, dial-in info and notes from the previous call on this etherpad: https://etherpad.openstack.org/p/2019_StarlingX_Marketing_Plans Please feel free to add discussion topics to the agenda with your name/IRC nick to know who to ping during the call. Thanks and Best Regards, Ildikó From cristopher.j.lemus.contreras at intel.com Tue Feb 5 20:03:00 2019 From: cristopher.j.lemus.contreras at intel.com (Lemus Contreras, Cristopher J) Date: Tue, 5 Feb 2019 20:03:00 +0000 Subject: [Starlingx-discuss] [Containers] Deployment status In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB3BCF12@ALA-MBD.corp.ad.wrs.com> References: <27B234A6-A098-4034-9108-2BD1D25AC819@intel.com> <2588653EBDFFA34B982FAF00F1B4844EBB3BCF12@ALA-MBD.corp.ad.wrs.com> Message-ID: <57658498-F20C-445A-A343-9391718D8C18@intel.com> Hi Brent, We used the commit to verify it, being that the only option that we have for our lab, is to use a local registry. It helped us to check that config_controller completed without errors. So, basically, we are waiting for the story to be completed to unblock us and fully provision our baremetal environments. Thanks & Regards, Cristopher Lemus From: "Rowsell, Brent" Date: Tuesday, February 5, 2019 at 1:10 PM To: "Lemus Contreras, Cristopher J" , "starlingx-discuss at lists.starlingx.io" Subject: RE: [Containers] Deployment status Hi Cristopher, I don’t believe the commit you reference is ready to merge, it is currently at -1 It also will not address the application images as you noted below. This is Mingyuan’s story, not sure if someone else was lined up to work on this during CNY. Brent From: Lemus Contreras, Cristopher J [mailto:cristopher.j.lemus.contreras at intel.com] Sent: Tuesday, February 5, 2019 1:46 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [Containers] Deployment status Hi All, ** Baremetal Environment ** Simplex configuration on BareMetal servers without internet access. SUMMARY: * Worked with Erich and Mingyuan to apply the patch for config_controller (storyboard: https://storyboard.openstack.org/#!/story/2004711) * With these patches, we can define a mirror registry during the `config_controller --kubernetes` . Using the new option, config_controller completed without any issues, pulling the required images from our internal registry. 
* During system-application apply, there are errors related to images still
  trying to be downloaded from the public registry (overriding the private
  registry definitions):
  * 2019-02-01T17:51:57.394 controller-0 dockerd[2325]: info time="2019-02-01T17:51:57.394492462Z" level=info msg="Attempting next endpoint for pull after error: Get https://quay.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)"
* Mingyuan mentioned that another patch is in progress for application-apply.
  Is somebody going to work on it during CNY?

Regards,

Cristopher Lemus

From build.starlingx at gmail.com  Tue Feb 5 20:14:41 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 5 Feb 2019 15:14:41 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 44 - Failure!
Message-ID: <893714213.371.1549397683021.JavaMail.javamailuser@localhost>

Project: STX_build_docker_flock_images
Build #: 44
Status: Failure
Timestamp: 20190205T190936Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
--------------------------------------------------------------------------------
Parameters

HOST_PORT: 80
MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190205T164241Z
OS: centos
MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root
BASE_VERSION: f-stein-20190205T164241Z
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
REGISTRY_USERID: slittlewrs
HOST: build.starlingx.cengn.ca
LATEST_PREFIX: f-stein
PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
PUBLISH_TIMESTAMP: 20190205T164241Z
FLOCK_VERSION: f-stein-centos-master-20190205T164241Z
PREFIX: f-stein
OPENSTACK_RELEASE: master
TIMESTAMP: 20190205T164241Z
REGISTRY_ORG: starlingx
PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190205T164241Z/outputs
REGISTRY: docker.io

From build.starlingx at gmail.com  Tue Feb 5 20:14:49 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 5 Feb 2019 15:14:49 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 41 - Failure!
Message-ID: <772143831.374.1549397690491.JavaMail.javamailuser@localhost>

Project: STX_build_docker_images
Build #: 41
Status: Failure
Timestamp: 20190205T190637Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
--------------------------------------------------------------------------------
Parameters

BRANCH: f/stein
MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190205T164241Z
OS: centos
MUNGED_BRANCH: f-stein
MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
MY_REPO_ROOT: /localdisk/designer/jenkins/f-stein
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/feature/stein/centos
PUBLISH_TIMESTAMP: 20190205T164241Z
DOCKER_BUILD_ID: jenkins-f-stein-20190205T164241Z-builder
OPENSTACK_RELEASE: master
TIMESTAMP: 20190205T164241Z
OS_VERSION: 7.5.1804
PUBLISH_INPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190205T164241Z/inputs
PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190205T164241Z/outputs

From build.starlingx at gmail.com  Tue Feb 5 20:14:53 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Tue, 5 Feb 2019 15:14:53 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_stein_master - Build # 42 - Failure!
Message-ID: <590783122.377.1549397694960.JavaMail.javamailuser@localhost>

Project: STX_build_stein_master
Build #: 42
Status: Failure
Timestamp: 20190205T164241Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS: false

From jose.perez.carranza at intel.com  Tue Feb 5 20:48:23 2019
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Tue, 5 Feb 2019 20:48:23 +0000
Subject: [Starlingx-discuss] [Containers]Problems setting proxy via config_file
Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A937350@fmsmsx101.amr.corp.intel.com>

Hi,

I'm configuring a controller using a configuration file (`sudo
config_controller --kubernetes --config-file my-file.ini`). I modified my
configuration file to add my proxies, but this causes errors [1]; it seems
that when the configuration file is parsed, the NO_PROXY variable is set in
Python list format and hence not recognized by the HTTP library.

1. http://paste.openstack.org/show/744593/

Do you know if there is a special way to assign multiple values to a variable
in the configuration file?
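For illustration, this is my reading of the failure from the paste -- a sketch
of the symptom only, not the actual config_controller code:

=========================================================
# What the HTTP library expects NO_PROXY to look like:
NO_PROXY="localhost,127.0.0.1,192.168.204.2"

# What apparently gets used instead (a Python list repr, per [1]):
NO_PROXY="['localhost', '127.0.0.1', '192.168.204.2']"
=========================================================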
Below is the example of my configuration file settings:

=======================================================
[DOCKER_PROXY]
DOCKER_HTTP_PROXY = http://:PORT
DOCKER_HTTPS_PROXY = http://:PORT
DOCKER_NO_PROXY = localhost,127.0.0.1,192.168.204.2,192.168.204.3,192.168.204.4,10.10.10.3
========================================================

Regards,
José

From Barton.Wensley at windriver.com  Tue Feb 5 21:38:28 2019
From: Barton.Wensley at windriver.com (Wensley, Barton)
Date: Tue, 5 Feb 2019 21:38:28 +0000
Subject: [Starlingx-discuss] [Containers]Problems setting proxy via config_file
In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A937350@fmsmsx101.amr.corp.intel.com>
References: <0A5D9A624DF90343892F8F3FE7DE525A2A937350@fmsmsx101.amr.corp.intel.com>
Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA61F64@ALA-MBD.corp.ad.wrs.com>

José,

You have uncovered a bug in the configuration file processing for the
DOCKER_PROXY section. This was introduced in the original commit for the
docker proxy:
https://git.openstack.org/cgit/openstack/stx-config/commit/?id=3a30da9a88920fef4d6b185708bd5013bb8dd95c

Please raise a bug for this. For now, your only option is to run
config_controller interactively (without the config file).

Bart

-----Original Message-----
From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
Sent: February 5, 2019 3:48 PM
To: starlingx-discuss at lists.starlingx.io
Cc: Miller, Frank; Wensley, Barton
Subject: [Containers]Problems setting proxy via config_file

Hi,

I'm configuring a controller using a configuration file (`sudo
config_controller --kubernetes --config-file my-file.ini`). I modified my
configuration file to add my proxies, but this causes errors [1]; it seems
that when the configuration file is parsed, the NO_PROXY variable is set in
Python list format and hence not recognized by the HTTP library.

1. http://paste.openstack.org/show/744593/

Do you know if there is a special way to assign multiple values to a variable
in the configuration file?

Below is the example of my configuration file settings:

=======================================================
[DOCKER_PROXY]
DOCKER_HTTP_PROXY = http://:PORT
DOCKER_HTTPS_PROXY = http://:PORT
DOCKER_NO_PROXY = localhost,127.0.0.1,192.168.204.2,192.168.204.3,192.168.204.4,10.10.10.3
========================================================

Regards,
José

From scott.little at windriver.com  Tue Feb 5 21:36:43 2019
From: scott.little at windriver.com (Scott Little)
Date: Tue, 5 Feb 2019 16:36:43 -0500
Subject: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 44 - Failure!
In-Reply-To: <893714213.371.1549397683021.JavaMail.javamailuser@localhost>
References: <893714213.371.1549397683021.JavaMail.javamailuser@localhost>
Message-ID: <22364911-94cb-9c8c-b876-f63b6768cbd3@windriver.com>

Need designer input.

The build failed on the flock image "stx-cinder":

No matching distribution found for google-api-python-client===1.7.8 (from -c /tmp/wheels/upper-constraints.txt (line 238))
The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1
Failed to build stx-cinder... Aborting

Full log is here ...
http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/latest_build/logs/jenkins-STX_build_docker_flock_images-44.log

Scott

On 2019-02-05 3:14 p.m., build.starlingx at gmail.com wrote:
> Project: STX_build_docker_flock_images
> Build #: 44
> Status: Failure
> Timestamp: 20190205T190936Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> HOST_PORT: 80
> MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190205T164241Z
> OS: centos
> MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root
> BASE_VERSION: f-stein-20190205T164241Z
> PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
> REGISTRY_USERID: slittlewrs
> HOST: build.starlingx.cengn.ca
> LATEST_PREFIX: f-stein
> PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
> PUBLISH_TIMESTAMP: 20190205T164241Z
> FLOCK_VERSION: f-stein-centos-master-20190205T164241Z
> PREFIX: f-stein
> OPENSTACK_RELEASE: master
> TIMESTAMP: 20190205T164241Z
> REGISTRY_ORG: starlingx
> PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190205T164241Z/outputs
> REGISTRY: docker.io

From Al.Bailey at windriver.com  Tue Feb 5 21:47:46 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Tue, 5 Feb 2019 21:47:46 +0000
Subject: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 44 - Failure!
In-Reply-To: <22364911-94cb-9c8c-b876-f63b6768cbd3@windriver.com>
References: <893714213.371.1549397683021.JavaMail.javamailuser@localhost> <22364911-94cb-9c8c-b876-f63b6768cbd3@windriver.com>
Message-ID:

The 1.7.8 wheel on pypi is python3 only, even though the project is declared
python2 and python3. The previous version was built as py2-py3.

I don't know what the best way to fix this is.

Al

From: Scott Little [mailto:scott.little at windriver.com]
Sent: Tuesday, February 05, 2019 4:37 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 44 - Failure!

Need designer input.

The build failed on the flock image "stx-cinder":

No matching distribution found for google-api-python-client===1.7.8 (from -c /tmp/wheels/upper-constraints.txt (line 238))
The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1
Failed to build stx-cinder... Aborting

Full log is here ...
http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/latest_build/logs/jenkins-STX_build_docker_flock_images-44.log

Scott

On 2019-02-05 3:14 p.m., build.starlingx at gmail.com wrote:

Project: STX_build_docker_flock_images
Build #: 44
Status: Failure
Timestamp: 20190205T190936Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
--------------------------------------------------------------------------------
Parameters

HOST_PORT: 80
MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190205T164241Z
OS: centos
MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root
BASE_VERSION: f-stein-20190205T164241Z
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
REGISTRY_USERID: slittlewrs
HOST: build.starlingx.cengn.ca
LATEST_PREFIX: f-stein
PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
PUBLISH_TIMESTAMP: 20190205T164241Z
FLOCK_VERSION: f-stein-centos-master-20190205T164241Z
PREFIX: f-stein
OPENSTACK_RELEASE: master
TIMESTAMP: 20190205T164241Z
REGISTRY_ORG: starlingx
PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190205T164241Z/outputs
REGISTRY: docker.io

From jose.perez.carranza at intel.com  Tue Feb 5 22:24:54 2019
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Tue, 5 Feb 2019 22:24:54 +0000
Subject: [Starlingx-discuss] [Containers]Problems setting proxy via config_file
In-Reply-To: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA61F64@ALA-MBD.corp.ad.wrs.com>
References: <0A5D9A624DF90343892F8F3FE7DE525A2A937350@fmsmsx101.amr.corp.intel.com> <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA61F64@ALA-MBD.corp.ad.wrs.com>
Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A9373DA@fmsmsx101.amr.corp.intel.com>

Done: https://bugs.launchpad.net/starlingx/+bug/1814833

Regards,
José

> -----Original Message-----
> From: Wensley, Barton [mailto:Barton.Wensley at windriver.com]
> Sent: Tuesday, February 5, 2019 3:38 PM
> To: Perez Carranza, Jose ; starlingx-discuss at lists.starlingx.io
> Cc: Miller, Frank
> Subject: RE: [Containers]Problems setting proxy via config_file
>
> José,
>
> You have uncovered a bug in the configuration file processing for the
> DOCKER_PROXY section. This was introduced in the original commit for the
> docker proxy:
> https://git.openstack.org/cgit/openstack/stx-config/commit/?id=3a30da9a88920fef4d6b185708bd5013bb8dd95c
>
> Please raise a bug for this. For now, your only option is to run
> config_controller interactively (without the config file).
>
> Bart
>
> -----Original Message-----
> From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
> Sent: February 5, 2019 3:48 PM
> To: starlingx-discuss at lists.starlingx.io
> Cc: Miller, Frank; Wensley, Barton
> Subject: [Containers]Problems setting proxy via config_file
>
> Hi,
>
> I'm configuring a controller using a configuration file (`sudo
> config_controller --kubernetes --config-file my-file.ini`). I modified my
> configuration file to add my proxies, but this causes errors [1]; it seems
> that when the configuration file is parsed, the NO_PROXY variable is set in
> Python list format and hence not recognized by the HTTP library.
>
> 1. http://paste.openstack.org/show/744593/
> Do you know if there is a special way to assign multiple values to a
> variable in the configuration file?
>
> Below is the example of my configuration file settings:
>
> =======================================================
> [DOCKER_PROXY]
> DOCKER_HTTP_PROXY = http://:PORT
> DOCKER_HTTPS_PROXY = http://:PORT
> DOCKER_NO_PROXY = localhost,127.0.0.1,192.168.204.2,192.168.204.3,192.168.204.4,10.10.10.3
> ========================================================
>
> Regards,
> José

From Al.Bailey at windriver.com  Tue Feb 5 22:39:33 2019
From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al))
Date: Tue, 5 Feb 2019 22:39:33 +0000
Subject: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 44 - Failure!
Message-ID:

Note: This only affects containers and only affects the stein branch.

Bug report: https://bugs.launchpad.net/starlingx/+bug/1814835
Code Review: https://review.openstack.org/#/c/635065/

Al

From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com]
Sent: Tuesday, February 05, 2019 4:48 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 44 - Failure!

The 1.7.8 wheel on pypi is python3 only, even though the project is declared
python2 and python3. The previous version was built as py2-py3.

I don't know what the best way to fix this is.

Al

From: Scott Little [mailto:scott.little at windriver.com]
Sent: Tuesday, February 05, 2019 4:37 PM
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [build-report] STX_build_docker_flock_images - Build # 44 - Failure!

Need designer input.

The build failed on the flock image "stx-cinder":

No matching distribution found for google-api-python-client===1.7.8 (from -c /tmp/wheels/upper-constraints.txt (line 238))
The command '/bin/sh -c /opt/loci/scripts/install.sh' returned a non-zero code: 1
Failed to build stx-cinder... Aborting

Full log is here ...
http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/latest_build/logs/jenkins-STX_build_docker_flock_images-44.log

Scott

On 2019-02-05 3:14 p.m., build.starlingx at gmail.com wrote:

Project: STX_build_docker_flock_images
Build #: 44
Status: Failure
Timestamp: 20190205T190936Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
--------------------------------------------------------------------------------
Parameters

HOST_PORT: 80
MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-stein/20190205T164241Z
OS: centos
MY_REPO: /localdisk/designer/jenkins/f-stein/cgcs-root
BASE_VERSION: f-stein-20190205T164241Z
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
REGISTRY_USERID: slittlewrs
HOST: build.starlingx.cengn.ca
LATEST_PREFIX: f-stein
PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/stein/centos/20190205T164241Z/logs
PUBLISH_TIMESTAMP: 20190205T164241Z
FLOCK_VERSION: f-stein-centos-master-20190205T164241Z
PREFIX: f-stein
OPENSTACK_RELEASE: master
TIMESTAMP: 20190205T164241Z
REGISTRY_ORG: starlingx
PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/feature/stein/centos/20190205T164241Z/outputs
REGISTRY: docker.io

From ada.cabrales at intel.com  Tue Feb 5 23:05:22 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Tue, 5 Feb 2019 23:05:22 +0000
Subject: [Starlingx-discuss] [ Test ] meeting agenda - 02/06/2019 - 6am PDT
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD67914@FMSMSX114.amr.corp.intel.com>

Agenda for 02/06 meeting - please remember the meeting will take place at 6am PDT.

1. Test plan for May release - Ghada/Ada - 15 min
2. Stories for upstream OpenStack testing - Bruce - 15 min
3. Test repo structure and reviewers - Jose/Cristopher - 15 min
4. Opens - all - 15 min

Regards
Ada

From abraham.arce.moreno at intel.com  Wed Feb 6 00:01:46 2019
From: abraham.arce.moreno at intel.com (Arce Moreno, Abraham)
Date: Wed, 6 Feb 2019 00:01:46 +0000
Subject: [Starlingx-discuss] Contribution to the project.
In-Reply-To:
References: <9A85D2917C58154C960D95352B22818BBFD18EB2@fmsmsx123.amr.corp.intel.com> <9A85D2917C58154C960D95352B22818BBFD18F0D@fmsmsx123.amr.corp.intel.com>
Message-ID:

> Hi Abraham,

Hi again Javier ☺ welcome to the Documentation team!

> Thanks for your attention. I can help on the documentation team if you
> need help there.

Here you have our Documentation Tree roadmap [0], please feel free to take a look.

Based on the resources you have available and the priority of tasks, my
suggestion for your short term and medium term tasks would be:

1. Go through the "Developer's Guide" to get your workstation set up [1]
2. Feedback the new "Installation Guides" [Short Term]
   2.1 See section "Installing and Configuring StarlingX with Containers" under [2]
3. SWIFT Configuration & Management [Medium Term]
   3.1 Once you have exercised the installation and configuration, it is time
   to generate your first introductory guide for SWIFT, taking advantage of
   your experience.
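For "one", the short version of the Gerrit loop you will exercise looks
roughly like this -- a sketch only; [1] above has the authoritative steps, and
the branch name here is just a hypothetical example:

=========================================================
# One-time setup (assumes git-review is installed and your Ubuntu One
# account from [1] is configured in Gerrit):
git clone https://git.openstack.org/openstack/stx-docs
cd stx-docs
git review -s                           # set up the Gerrit remote

# For each change:
git checkout -b my-first-doc-change     # hypothetical branch name
# ... edit the RST files ...
git commit -a
git review                              # push the patchset for review
=========================================================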
"Two" is being actively review now by our community and it will be translated from the Wiki format to the RST format to land under stx-docs [3] Please let this list know about improvements, findings, whatever you consider important, you will learn how to deploy StarlingX. When we have that new RST format, we will add you as reviewer (with Ubuntu Account obtained at [1]). "Three" We can have more details once you feel comfortable with tasks "one" and "two" on what the community is looking for, anyway, please take into consideration this is a great opportunity for you exercise the complete development cycle, from creating a first draft of a guide, through submitting your first patchset and finally, seeing it approved :) > Have a good day! You too! [0] https://docs.google.com/spreadsheets/d/1kvXEZl4XwNgHTYXPIcoZfKE4LBNcif4sBJdZCNSvBGw/edit?usp=sharing [1] https://docs.openstack.org/infra/manual/developers.html [2] https://wiki.openstack.org/wiki/StarlingX/Containers [3] https://git.openstack.org/cgit/openstack/stx-docs From cindy.xie at intel.com Wed Feb 6 01:11:12 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 6 Feb 2019 01:11:12 +0000 Subject: [Starlingx-discuss] [Containers]Problems setting proxy via config_file In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A9373DA@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A937350@fmsmsx101.amr.corp.intel.com> <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA61F64@ALA-MBD.corp.ad.wrs.com> <0A5D9A624DF90343892F8F3FE7DE525A2A9373DA@fmsmsx101.amr.corp.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E79845@SHSMSX104.ccr.corp.intel.com> Thanks - I've assigned this bug to Mingyuan in the LP. -----Original Message----- From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] Sent: Wednesday, February 6, 2019 6:25 AM To: Wensley, Barton ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers]Problems setting proxy via config_file Done https://bugs.launchpad.net/starlingx/+bug/1814833 Regards, José > -----Original Message----- > From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] > Sent: Tuesday, February 5, 2019 3:38 PM > To: Perez Carranza, Jose ; starlingx- > discuss at lists.starlingx.io > Cc: Miller, Frank > Subject: RE: [Containers]Problems setting proxy via config_file > > José, > > You have uncovered a bug in the configuration file processing for the > DOCKER_PROXY section. This was introduced in the original commit for > the docker proxy: > https://git.openstack.org/cgit/openstack/stx- > config/commit/?id=3a30da9a88920fef4d6b185708bd5013bb8dd95c > > Please raise a bug for this. For now, your only option is to run > config_controller interactively (without the config file). > > Bart > > -----Original Message----- > From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] > Sent: February 5, 2019 3:48 PM > To: starlingx-discuss at lists.starlingx.io > Cc: Miller, Frank; Wensley, Barton > Subject: [Containers]Problems setting proxy via config_file > > Hi > > I'm doing a configuring a controller using a configuration file `sudo > config_controller --kubernetes --confi-file my-file.ini`. I modified > my configuration file to add my proxies but this is causing errors > [1], seems like when parsing the configuration file the NO_PROXY > variable is set as python list format and hence not recognized by http library. > > 1. 
> Do you know if there is a special way to assign multiple values to a
> variable in the configuration file?
>
> Below is the example of my configuration file settings:
>
> =======================================================
> [DOCKER_PROXY]
> DOCKER_HTTP_PROXY = http://:PORT
> DOCKER_HTTPS_PROXY = http://:PORT
> DOCKER_NO_PROXY = localhost,127.0.0.1,192.168.204.2,192.168.204.3,192.168.204.4,10.10.10.3
> ========================================================
>
> Regards,
> José

From serverascode at gmail.com  Wed Feb 6 12:59:20 2019
From: serverascode at gmail.com (Curtis)
Date: Wed, 6 Feb 2019 07:59:20 -0500
Subject: [Starlingx-discuss] Vagrant StarlingX VM.
In-Reply-To: <970a6b61-79eb-46ec-9364-b297f18f5c84@intel.com>
References: <970a6b61-79eb-46ec-9364-b297f18f5c84@intel.com>
Message-ID:

On Sat, Feb 2, 2019 at 12:44 PM Lara, Cesar wrote:

> Yes, we are exploring a few possibilities for StarlingX in a bottle;
> Vagrant is one of them. Feel free to throw some ideas at it.

I've not been doing a good job following the "in a bottle" work... in which
project/meeting are these possibilities being discussed?

I think the "in a bottle" environments, i.e. environments that can quickly be
created for evaluation and development, are quite important, and they also
relate to zero touch provisioning.

Thanks,
Curtis

> Regards
> Cesar Lara
> Sent from my mobile phone
> ------------------------------
> From: Javier Romero
> Sent: Saturday, February 2, 2019 11:07 AM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] Vagrant StarlingX VM.
>
> Hi Team,
>
> I think that perhaps it may be useful for new users to have a Vagrant
> preconfigured AIO VM to use StarlingX for the first time.
>
> https://www.vagrantup.com
>
> If this can be useful I can help with that.
>
> Vagrant uses VirtualBox by default to start the preconfigured VM and can
> also be set to be used with QEMU.
>
> Best Regards,
>
> --
> Javier Romero

From vm.rod25 at gmail.com  Wed Feb 6 15:28:24 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Wed, 6 Feb 2019 09:28:24 -0600
Subject: [Starlingx-discuss] CVE Support and Scanning
In-Reply-To: <6E7E2668-C4A0-4290-A79F-499F8E4F94C8@windriver.com>
References: <3BD8A4F0-7C15-407E-9671-7E3C666F03E1@windriver.com> <6E7E2668-C4A0-4290-A79F-499F8E4F94C8@windriver.com>
Message-ID:

On Mon, Feb 4, 2019 at 4:24 PM Young, Ken wrote:
>
> See inline.
>
> On 2019-02-04, 2:26 PM, "Victor Rodriguez" wrote:
>
> On Mon, Feb 4, 2019 at 12:49 PM Young, Ken wrote:
> >
> > Team,
> >
> > A "Lights On" feature for the 2019.05 release is "CVE Upgrades". This
> > feature will enable ongoing security updates for the master branch and
> > selectively provide CVE corrective content to supported releases. The
> > first step for this feature is to define a policy.
> > With the help of the StarlingX security team, a draft of this policy has
> > been provided below:
> >
> > https://wiki.openstack.org/wiki/StarlingX/Security/CVE_Support_Policy
> >
> > Please review and provide comments. I plan to reserve a spot on the
> > Community call for any discussion on Wednesday and start the discussion
> > with the build team to identify tools to support this policy on Thursday.
>
> Question on the last slide: you propose a formula as
>
> Criticality >= 7
>
> What standard are you planning to use? CVSS v3.0 or v2.0?
>
> I am suggesting we start with v2. I need to look into v3 a little more.
>
> For example, taking this MariaDB CVE:
>
> https://nvd.nist.gov/vuln/detail/CVE-2017-15365
>
> Base Score: 8.8 HIGH in V3 and 6.5 MEDIUM in V2
>
> My recommendation will be to use the highest one, regardless of whether the
> score came from V2 or V3.
>
> However, I think we should specify that somewhere.
>
> Agreed.
>
> One more question: when you said critical issues are fixed if corrections
> are available upstream, it means that (taking the previous example) if
> MariaDB provides a patch that is merged in master and released in the
> latest release, like in the previous example:
> https://github.com/MariaDB/server/commit/0b5a5258abbeaf8a0c3a18c7e753699787fdf46e
>
> But if CentOS has not taken it yet, are we OK to apply this patch in STX
> until CentOS applies it in the future?
>
> We will wait for CentOS. We are in the process of patch elimination. I do
> not want to carry more.

Hi Ken,

This is the link to the tool that we could use to catch the CVEs:

https://github.com/clearlinux/cve-check-tool

Hope it helps as a starting point.

Regards
Victor

> Regards
>
> Victor R
>
> > Regards,
> >
> > Ken Y

From ada.cabrales at intel.com  Wed Feb 6 17:18:56 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Wed, 6 Feb 2019 17:18:56 +0000
Subject: [Starlingx-discuss] [ Test ] meeting minutes - 02/06/2019 - 6am PDT
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD67E38@FMSMSX114.amr.corp.intel.com>

Agenda for 02/06/2019

Attendees: Elio, Cristopher, Jose, Victor, JP, Numan, Bruce, Javier, Fernando, Richo, Javier, Ada

1. Test plan for May release - Ghada/Ada - 15 min
   + https://docs.google.com/spreadsheets/d/1FwPTFKFFoNwWAsgbdw0kH_Bwv5JiutTWu_9UU4OAQbw/edit#gid=0
   + resource planning to be finished by 02/08

2. Stories for upstream OpenStack testing - Bruce - 15 min
   + Test plan and test cases for HPET timer feature - https://storyboard.openstack.org/#!/story/2004736
   + Implement test plan and test cases for SR-IOV scheduling policy change - https://storyboard.openstack.org/#!/story/2004888
   + Test cases for the updated vswitch affinity feature - https://storyboard.openstack.org/#!/story/2004889
   + Test cases for the changed Nova DB purge feature - https://storyboard.openstack.org/#!/story/2004890
   + Ada to assign the stories. Verify if there are test cases that can be used.

3. Test repo structure and reviewers - Jose/Cristopher - 15 min
   + To align with the OpenStack model, the repo should also contain "docs" and "releasenotes" directories.
   + Abraham to train the team on the way of submitting things to the releasenotes and docs. To be delivered next week.
   + "manual suite" directory should match the automated suite tree.
   + For documentation: consider the scenario of running only one test case, several, or the whole suite.
   + Cristopher to create the structure.
   + Also add Abraham for documentation reviews.
   + Reviewers: an UbuntuOne account is required.
   + The group of reviewers:
     - Numan Waheed
     - Chris Winnicki
     - Maria Yousaf
     - Nimalini Rasa
     - Wendy Mitchell
     - Yang Liu
     - Yosief Gebremariam
     - Jose Perez
     - Elio Martinez
     - JC Alonso
   + There might be other files related to OpenStack CI.
   + Structure to be set today. Reviewers to check and approve.
   + Have everything ready for testing submissions next week.
   + Link to the test case template: https://wiki.openstack.org/wiki/StarlingX/TestCaseDocument

4. Opens - all - 15 min
   + Test dashboard - are we going to make this a priority? Ada says it is
     really good to have it for the May release. Ada to check Cristopher's
     bandwidth. Cristopher to send requirements to Ken, Numan and Ada (02/06)
     to ask CENGN. Suggestion: download and try it locally.

From juan.carlos.alonso at intel.com  Wed Feb 6 17:20:07 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Wed, 6 Feb 2019 17:20:07 +0000
Subject: [Starlingx-discuss] New docker0 interface enabled?
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8F533@FMSMSX108.amr.corp.intel.com>

Hi,

For today's ISO, in Simplex and Duplex configurations, we noticed a new
interface called 'docker0'. Our framework takes the first interface and sets a
temporary IP address, but this 'docker0' interface could not be set, so the
STX setup fails.

We applied a workaround to take the next interface to continue the STX tests.

We also noticed that this 'docker0' interface is not present in Multinode
configurations.

localhost:~$ ls /sys/class/net
docker0  enp2s1  enp2s2  eth1000  eth1001  lo

Is this new interface enabled for configuration with kubernetes? Will this
interface be enabled in Multinode configurations?

Why can't the docker0 interface be set with an IP address?

Regards.
Juan Carlos Alonso

From xavinux at gmail.com  Wed Feb 6 17:44:36 2019
From: xavinux at gmail.com (Javier Romero)
Date: Wed, 6 Feb 2019 14:44:36 -0300
Subject: [Starlingx-discuss] New docker0 interface enabled?
In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C8F533@FMSMSX108.amr.corp.intel.com>
References: <8557B550001AFB46A43A0CCC314BF85153C8F533@FMSMSX108.amr.corp.intel.com>
Message-ID:

Hi Juan Carlos,

After a Docker installation is finished, the bridge interface docker0 is
created by default and its IP range is 172.17.0.0/16. You can create other
networks by running `docker network create new_network_name`.

If you want to change the docker0 settings, you can use the daemon.json file
in /etc/docker/. If the file is not there you can create it and specify some
parameters, like the following for the bridge network:

{
  "bip": "10.10.10.5/24",
  "fixed-cidr": "10.10.10.5/25",
  "fixed-cidr-v6": "2001:db8::/64",
  "mtu": 1500,
  "default-gateway": "10.20.1.1",
  "default-gateway-v6": "2001:db8:abcd::89",
  "dns": ["10.20.1.2","10.20.1.3"]
}

Then restart Docker.

Hope this can help you.

Best Regards,

Javier Romero

On Wed, Feb 6, 2019 at 14:21, Alonso, Juan Carlos (< juan.carlos.alonso at intel.com>) wrote:

> Hi,
>
> For today's ISO, in Simplex and Duplex configurations, we noticed a new
> interface called 'docker0'. Our framework takes the first interface and sets
> a temporary IP address, but this 'docker0' interface could not be set, so
> the STX setup fails.
> We applied a workaround to take the next interface to continue the STX
> tests.
>
> We also noticed that this 'docker0' interface is not present in Multinode
> configurations.
>
> localhost:~$ ls /sys/class/net
> docker0  enp2s1  enp2s2  eth1000  eth1001  lo
>
> Is this new interface enabled for configuration with kubernetes? Will this
> interface be enabled in Multinode configurations?
>
> Why can't the docker0 interface be set with an IP address?
>
> Regards.
> Juan Carlos Alonso

From David.Sullivan at windriver.com  Wed Feb 6 18:07:45 2019
From: David.Sullivan at windriver.com (Sullivan, David)
Date: Wed, 6 Feb 2019 18:07:45 +0000
Subject: [Starlingx-discuss] New docker0 interface enabled?
Message-ID:

There was a mistake in my submission https://review.openstack.org/#/c/633433/

The kubelet service is being started on all worker nodes, which creates the
docker interface. This would affect all AIO installations but not standard
installations. It will be addressed tonight. For now you should be able to
ignore the extra interface.

David

From: Javier Romero [mailto:xavinux at gmail.com]
Sent: Wednesday, February 6, 2019 12:45 PM
To: Alonso, Juan Carlos
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] New docker0 interface enabled?

Hi Juan Carlos,

After a Docker installation is finished, the bridge interface docker0 is
created by default and its IP range is 172.17.0.0/16. You can create other
networks by running `docker network create new_network_name`.

If you want to change the docker0 settings, you can use the daemon.json file
in /etc/docker/. If the file is not there you can create it and specify some
parameters, like the following for the bridge network:

{
  "bip": "10.10.10.5/24",
  "fixed-cidr": "10.10.10.5/25",
  "fixed-cidr-v6": "2001:db8::/64",
  "mtu": 1500,
  "default-gateway": "10.20.1.1",
  "default-gateway-v6": "2001:db8:abcd::89",
  "dns": ["10.20.1.2","10.20.1.3"]
}

Then restart Docker.

Hope this can help you.

Best Regards,

Javier Romero

On Wed, Feb 6, 2019 at 14:21, Alonso, Juan Carlos (>) wrote:

Hi,

For today's ISO, in Simplex and Duplex configurations, we noticed a new
interface called 'docker0'. Our framework takes the first interface and sets a
temporary IP address, but this 'docker0' interface could not be set, so the
STX setup fails.

We applied a workaround to take the next interface to continue the STX tests.

We also noticed that this 'docker0' interface is not present in Multinode
configurations.

localhost:~$ ls /sys/class/net
docker0  enp2s1  enp2s2  eth1000  eth1001  lo

Is this new interface enabled for configuration with kubernetes? Will this
interface be enabled in Multinode configurations?

Why can't the docker0 interface be set with an IP address?

Regards.
Juan Carlos Alonso
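Until that fix lands, one defensive pattern for test frameworks is to skip
bridges and other virtual interfaces when picking a NIC to configure -- a
rough sketch, with the interface names being just the ones from the listing
above:

=========================================================
# Pick the first usable interface; bridges such as docker0 expose a
# bridge/ subdirectory in sysfs, while physical NICs have a device link.
for nic in /sys/class/net/*; do
    name=$(basename "$nic")
    [ "$name" = "lo" ] && continue      # skip loopback
    [ -d "$nic/bridge" ] && continue    # skip bridges (docker0)
    [ -e "$nic/device" ] || continue    # keep only device-backed NICs
    echo "first usable interface: $name"
    break
done
=========================================================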
From michael.l.tullis at intel.com  Wed Feb 6 21:38:07 2019
From: michael.l.tullis at intel.com (Tullis, Michael L)
Date: Wed, 6 Feb 2019 21:38:07 +0000
Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 2/6/2019
Message-ID: <3808363B39586544A6839C76CF81445EA1AA4365@ORSMSX104.amr.corp.intel.com>

For notes and new action items from our docs team meeting today, see our
etherpad: https://etherpad.openstack.org/p/stx-documentation

Join us if you have interest in StarlingX docs! We meet Wednesdays, and call
logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings.

-- Mike

From juan.carlos.alonso at intel.com  Wed Feb 6 22:01:55 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Wed, 6 Feb 2019 22:01:55 +0000
Subject: [Starlingx-discuss] Sanity Test - ISO 20190206
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8F5E9@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Feb-06 (link)

Sanity Test is executed in a Bare Metal Environment

Status: YELLOW

Simplex
Setup Manual [PASS]
Provisioning 01 TCs [PASS]
Sanity 42 TCs [PASS]
TOTAL: [ 43 TCs PASS ]

===========================================

Sanity Test is executed in a Virtual Environment

Status: YELLOW

Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 42 TCs [PASS]
TOTAL: [ 47 TCs PASS ]

Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 45 TCs [PASS]
TOTAL: [ 50 TCs PASS ]

Multinode Controller Storage
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 22 TCs [PASS]
TOTAL: [ 50 TCs PASS ]

Multinode Dedicated Storage
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 45 TCs [PASS]
TOTAL: [ 50 TCs PASS ]

------------------------------------------------------------------

A new interface called 'docker0' was enabled and is present in Simplex and
Duplex configurations. The issue will be addressed.
Launchpad open: https://bugs.launchpad.net/starlingx/+bug/1814946

Regards.
Juan Carlos Alonso

From jose.perez.carranza at intel.com  Wed Feb 6 22:20:57 2019
From: jose.perez.carranza at intel.com (Perez Carranza, Jose)
Date: Wed, 6 Feb 2019 22:20:57 +0000
Subject: [Starlingx-discuss] [Containers] Not able to boot controller-1 on Duplex installation
Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A9386E5@fmsmsx101.amr.corp.intel.com>

Hi,

I'm installing a Duplex with containers and I'm getting the error below when
unlocking controller-1:

========
| task | Configuration Failed, re-enabling |
| task | Rebooting |
| task | Booting |
...
...
...
| 200.011 | controller-1 experienced a configuration failure. | host=controller-1 | critical | 2019-02-06T13:44:19 | controller
========

Then the controller tries to boot again until the failure below appears and
the controller stays in a failed status:

===================
| task | Configuration failure, threshold reached, Lock/Unlock to retry |
======================

If I do a lock/unlock cycle I hit the same issues. I was searching the
puppet.log of controller-1 and found the errors below; it seems the repository
is not reachable. My question: if I set a proxy during config_controller, is
that proxy also propagated to controller-1 when it is installed automatically?
====================
2019-02-06T13:52:15.219 Notice: 2019-02-06 13:52:15 +0000 /Stage[main]/Platform::Helm/Exec[initialize helm]/returns: Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp: lookup kubernetes-charts.storage.googleapis.com on 8.8.4.4:53: read udp 10.10.10.4:52260->8.8.4.4:53: i/o timeout
=======================

I created a Launchpad to track down this behavior:

https://bugs.launchpad.net/starlingx/+bug/1814968

Regards,
José

From xiongzhiwei at baicells.com  Sat Feb 2 07:33:38 2019
From: xiongzhiwei at baicells.com (xiongzhiwei at baicells.com)
Date: Sat, 2 Feb 2019 15:33:38 +0800
Subject: [Starlingx-discuss] EXT-fs error when deploying starlingx on bare metal server
References:
Message-ID: <20190202153337711807156@baicells.com>

Hi Yong,

puppet.log attached.

Thanks

From: Hu, Yong
Date: 2019-02-02 15:10
To: xiongzhiwei at baicells.com; Xie, Cindy; starlingx-discuss
Subject: Re: [Starlingx-discuss] EXT-fs error when deploying starlingx on bare metal server

Please share your /var/log/puppet/latest/puppet.log with us.

From: "xiongzhiwei at baicells.com"
Date: Saturday, 2 February 2019 at 3:07 PM
To: "xiongzhiwei at baicells.com" , "Hu, Yong" , "Xie, Cindy" , starlingx-discuss
Subject: Re: Re: [Starlingx-discuss] EXT-fs error when deploying starlingx on bare metal server

Hi Yong and Cindy,

The same error appeared after removing these two SATA disks.

Thanks
Tim

From: xiongzhiwei at baicells.com
Date: 2019-02-02 14:34
To: Hu, Yong; Xie, Cindy; starlingx-discuss
Subject: Re: [Starlingx-discuss] EXT-fs error when deploying starlingx on bare metal server

Thanks Hu Yong and Cindy,

I am trying again after removing these two SATA HDs. Will tell you once it
succeeds.

Regards
Tim

From: Hu, Yong
Date: 2019-02-02 14:27
To: xiongzhiwei at baicells.com; Xie, Cindy; starlingx-discuss
Subject: Re: [Starlingx-discuss] EXT-fs error when deploying starlingx on bare metal server

If using a 240G SATA HD as the boot disk, the storage might not be enough. At
least, in our virtual environment, the boot disk has to be larger than 250 GB.

From: "xiongzhiwei at baicells.com"
Date: Saturday, 2 February 2019 at 2:17 PM
To: "Xie, Cindy" , starlingx-discuss
Subject: Re: [Starlingx-discuss] EXT-fs error when deploying starlingx on bare metal server

Hi Cindy,

This image was built by myself, fetched on 24th Jan. It works normally in my
VM environment but failed on the bare metal server.

My server is a Huawei RH2288v3: E5-2630 v3 @ 2.4GHz, 2*8 cores, 16*8G DDR4
RAM, 2*900G SAS + 2*240G SATA HD.

Thanks
Tim Xiong

From: Xie, Cindy
Date: 2019-02-02 13:00
To: xiongzhiwei at baicells.com; starlingx-discuss
Subject: RE: [Starlingx-discuss] EXT-fs error when deploying starlingx on bare metal server

Hi Tim,

Can you please provide the following info:

- Exact version of StarlingX: if you downloaded it from CENGN, please provide
  the link; if you built it yourself, please provide the date on master.
- Your HW config for your bare metal server. Our recommended HW config can be
  found here: https://docs.starlingx.io/installation_guide/index.html

Thanks.
- cindy

From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com]
Sent: Saturday, February 2, 2019 12:38 PM
To: starlingx-discuss
Subject: [Starlingx-discuss] EXT-fs error when deploying starlingx on bare metal server

Hi,

I am trying to deploy starlingx on a bare metal server, but it failed. After
executing "sudo config_controller" and confirming some default configuration
(all-in-one, simplex), the exception below was printed:

01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ...
[  452.567312] EXT4-fs error (device drbd1): ext4_journal_check_start:56: Detected aborted journal
[  452.576753] EXT4-fs (drbd1): Remounting filesystem read-only
[  466.880792] EXT4-fs error (device drbd3): ext4_journal_check_start:56: Detected aborted journal
[  466.886032] EXT4-fs (drbd3): Remounting filesystem read-only
[  479.755269] EXT4-fs error (device drbd0): ext4_journal_check_start:56: Detected aborted journal
[  479.760818] EXT4-fs (drbd0): Remounting filesystem read-only
[  479.766425] EXT4-fs (drbd0): ext4_writepages: jbd2_start: 13312 pages, ino 28; err -30
Failed to execute bootstrap manifest
Configuration failed: failed to apply bootstrap manifest. See /var/log/puppet/latest/puppet.log for details.

I had deployed successfully on a qemu VM with the same image, also all-in-one
and simplex. Are there any configurations missing for the bare metal server?
I had restored all BIOS configurations to defaults. Could anyone help me fix
it?

Thanks
BR
Tim Xiong

From ada.cabrales at intel.com  Tue Feb 5 15:39:44 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Tue, 5 Feb 2019 15:39:44 +0000
Subject: [Starlingx-discuss] StarlingX Test meeting - 6:00 PDT
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD66FE4@FMSMSX114.amr.corp.intel.com>

Re-scheduling for this week only - Wed 02/06 6am PDT. Same zoom info.

-- Ada

Changing the frequency to weekly. Weekly meetings on Tuesdays at 9am PDT / 1600 UTC

* Zoom link: https://zoom.us/j/342730236
  o Dialing in from phone:
  o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
  o Meeting ID: 342 730 236
  o International numbers available: https://zoom.us/u/ed95sU7aQ

Agenda and meeting minutes: https://etherpad.openstack.org/p/stx-test
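For anyone hitting the EXT4-fs/drbd errors from the thread above, a quick
pre-install check of the boot disk, following Yong's note about the minimum
size -- a sketch only:

=========================================================
# List disks and sizes before running config_controller; the drbd-backed
# filesystems live on the boot disk, and per the thread a 240G disk is
# borderline (the virtual environment needs more than 250 GB):
lsblk -d -o NAME,SIZE,MODEL,TYPE
=========================================================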
From bruce.e.jones at intel.com  Wed Feb 6 17:24:12 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Wed, 6 Feb 2019 17:24:12 +0000
Subject: [Starlingx-discuss] Community call notes Feb 6 2019
Message-ID: <9A85D2917C58154C960D95352B22818BBFD1C8CF@fmsmsx123.amr.corp.intel.com>

Agenda and notes - Feb 6th call

* CVE Support Policy - Ken
  * https://wiki.openstack.org/wiki/StarlingX/Security/CVE_Support_Policy
* Upcoming container and openstack master cutover - Brent
  * Reminder to the community that the container/stein branch will merge to master next week - Valentine's Day is the target
* Release gating bug updates - Bruce / Ghada
  * stx.2019.05 Release Gating Bugs: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.2019.05
  * Action: Project leads to review their bugs and drive resolution.
    o Also to close any that are no longer valid due to the move to containers
  * Action: Ask if BillZ can share the Launchpad trend charts weekly in the community call
* Process
  * Story Board Template
    o Community agreed to go with a simple template
    o Proposed Story Template:
      - Brief Description:
      - Justification:
  * Abandoning stale gerrit reviews -- need to agree on a policy
    o Dean notes that there is no consistent best practice in the community. It depends on the community. Some do it per cycle due to time investment.
    o Agreed to go with once per cycle -- core reviewers to abandon based on 3-6 months of inactivity.
      - Note: Abandoned reviews can be reversed and restarted again. It's not irreversible.
* Release Plan Update
  * https://docs.google.com/spreadsheets/d/1HUwbsaSerzFRuvXVB_qvoGdI0Chx1YiiA2WYHwvIoYI/edit#gid=405844719
* Release Verification Plan:
  * https://docs.google.com/spreadsheets/d/1FwPTFKFFoNwWAsgbdw0kH_Bwv5JiutTWu_9UU4OAQbw/edit#gid=0
  * Ada/Numan expect to have the resourcing plan in place by Feb 8

From kennelson11 at gmail.com  Wed Feb 6 18:17:53 2019
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Wed, 6 Feb 2019 19:17:53 +0100
Subject: [Starlingx-discuss] [all] Denver Open Infrastructure Summit Community Contributor Awards!
Message-ID:

Hello Everyone!

As we approach the Summit (still a ways away thankfully), it's time to kick
off the Community Contributor Award nominations [1]!

For those of you that have never heard of the CCA, I'll briefly explain what
they are :) We all know people in our communities that do the dirty jobs, we
all know people that will bend over backwards trying to help someone new, we
all know someone that is a savant in some area of the code we could never
hope to understand. These people rarely get the thanks they deserve and the
Community Contributor Awards are a chance to make sure they know that they
are appreciated for the amazing work they do and skills they have.

As always, participation is voluntary :)

Nominations will close on April 14th at 7:00 UTC and recipients will be
announced at the Open Infrastructure Summit in Denver [2]. Recipients will be
selected by a panel of top-level OSF project representatives who wish to
participate.

Finally, congrats again to recipients in Berlin [3]!
-Kendall Nelson (diablo_rojo)

[1] https://openstackfoundation.formstack.com/forms/train_cca_nominations
[2] https://www.openstack.org/summit/denver-2019/
[3] http://superuser.openstack.org/articles/openstack-community-contributor-awards-berlin-summit-edition/

From himanshugoyal500 at gmail.com  Thu Feb 7 07:32:11 2019
From: himanshugoyal500 at gmail.com (Himanshu Goyal)
Date: Thu, 7 Feb 2019 13:02:11 +0530
Subject: [Starlingx-discuss] RT Patch on StarlingX Server
Message-ID:

Hi,

Please let us know whether there is an available version of the RT patch that
we can apply on a StarlingX compute host server.

Regards
Himanshu Goyal

From build.starlingx at gmail.com  Thu Feb 7 07:50:16 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 7 Feb 2019 02:50:16 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 137 - Failure!
Message-ID: <335340080.383.1549525818260.JavaMail.javamailuser@localhost>

Project: STX_build_pre_installer
Build #: 137
Status: Failure
Timestamp: 20190207T062103Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190207T060000Z/logs
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190207T060000Z
DOCKER_BUILD_ID: jenkins-master-20190207T060000Z-builder
MY_REPO: /localdisk/designer/jenkins/master/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190207T060000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190207T060000Z/logs
MY_REPO_ROOT: /localdisk/designer/jenkins/master

From build.starlingx at gmail.com  Thu Feb 7 07:50:21 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Thu, 7 Feb 2019 02:50:21 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 130 - Failure!
Message-ID: <302024602.386.1549525822575.JavaMail.javamailuser@localhost>

Project: STX_build_master_pike
Build #: 130
Status: Failure
Timestamp: 20190207T060000Z

Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190207T060000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS: false

From serverascode at gmail.com  Thu Feb 7 13:04:07 2019
From: serverascode at gmail.com (Curtis)
Date: Thu, 7 Feb 2019 08:04:07 -0500
Subject: [Starlingx-discuss] ONAP OOM similarities
Message-ID:

Hi All,

Like many of us, I'm on a few open source lists. One of those lists is for
the ONAP project. From a high level, I see some similarities with their OOM
(ONAP Operations Manager) project and some of the things StarlingX does.
Might be worthwhile having a look at what they are doing and thinking about
there. (Not to say what they have decided on is right for everyone, or
StarlingX, but considering their thought process and understanding issues
they have run into could be valuable.)
Perhaps some similar work to multi-os (their CIA project I believe):

* Meeting notes: https://wiki.onap.org/display/DW/CIA+Meeting+Notes
* Initiating email: https://lists.onap.org/g/onap-discuss/message/11800

Issues with managing container image versions and integration:

* https://lists.onap.org/g/onap-discuss/message/15388
* https://lists.onap.org/g/onap-discuss/attachment/15388/0/oom_version_tests.pdf

Those are just some examples; likely others as well, multi-CPU images, etc.
At any rate, something to think about! :)

Thanks,
Curtis

From Barton.Wensley at windriver.com  Thu Feb 7 13:30:20 2019
From: Barton.Wensley at windriver.com (Wensley, Barton)
Date: Thu, 7 Feb 2019 13:30:20 +0000
Subject: [Starlingx-discuss] Reverting commit due to load breakage
Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA62A97@ALA-MBD.corp.ad.wrs.com>

The following commit appears to have broken installations and is being
reverted:
https://git.openstack.org/cgit/openstack/stx-config/commit/?id=24fd045f6883dc060d6e09b5444080cae5196847

The failure happens during config_controller at step 02:

Failed at Step 02 . . . Failed to execute bootstrap manifest.

The following error is seen in /var/log/puppet/latest/puppet.log:

2019-02-07T00:03:07.915 Debug: 2019-02-07 00:03:06 +0000 Executing: '/usr/bin/openstack complete'
2019-02-07T00:03:08.827 Debug: 2019-02-07 00:03:08 +0000 importing '/usr/share/puppet/modules/platform/manifests/sysinv.pp' in environment production
2019-02-07T00:03:08.849 Debug: 2019-02-07 00:03:08 +0000 Automatically imported platform::sysinv::bootstrap from platform/sysinv into production
2019-02-07T00:03:08.851 Error: 2019-02-07 00:03:08 +0000 Evaluation Error: Error while evaluating a Function Call, Could not find class ::sysinv::db::postgresql for localhost at /usr/share/puppet/modules/platform/manifests/sysinv.pp:160:3 on node localhost

Due to the build failure last night, this commit does not yet appear in a
public build.

Bart Wensley, Member of Technical Staff, Wind River
direct 613.963.1385

From ildiko.vancsa at gmail.com  Thu Feb 7 13:37:45 2019
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Thu, 7 Feb 2019 14:37:45 +0100
Subject: [Starlingx-discuss] Community Planning Call update
Message-ID:

Hi,

We had a great call yesterday where we discussed goals for the year in the
area of outreach and contributor onboarding, and discussed plans onwards. We
touched on already ongoing activities such as documentation work, the
hands-on workshop for the Open Infrastructure Summit in Denver, the Packet
PoC, collaboration with the Edge WG on use cases, and content for the
StarlingX blog.

You can find pointers to the above and further notes from the meeting on this
etherpad: https://etherpad.openstack.org/p/2019_StarlingX_Marketing_Plans

We are looking for contributors to the StarlingX blog. If you have stories to
share, like introducing a feature or driving attention to ongoing design and
implementation work, please sign up on the above etherpad or reach out to me.

We agreed on a bi-weekly cadence, so I created an invite that runs the call
series until the Summit in Denver.

Please let me know if you have any questions.

Thanks,
Ildikó
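For those following the revert in Bart's note above, the mechanical flow for
reverting a merged change is roughly the following -- a sketch only; the
commit ID is the one from his message, and a configured git-review setup is
assumed (the actual revert may have been posted through Gerrit's UI):

=========================================================
git clone https://git.openstack.org/openstack/stx-config
cd stx-config
git revert 24fd045f6883dc060d6e09b5444080cae5196847
git review    # pushes the revert for review and approval
=========================================================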
From scott.little at windriver.com  Thu Feb 7 15:20:26 2019
From: scott.little at windriver.com (Scott Little)
Date: Thu, 7 Feb 2019 10:20:26 -0500
Subject: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 130 - Failure!
In-Reply-To: <302024602.386.1549525822575.JavaMail.javamailuser@localhost>
References: <302024602.386.1549525822575.JavaMail.javamailuser@localhost>
Message-ID: <635b43d9-19c0-eb26-ae8e-d6c3f43d30db@windriver.com>

Ran out of disk space.

The old build cleaner is running now to clean up older loads (retention
policy is 2 weeks unless a specific load has been marked as one to preserve).
The cleanup job will now run daily.

I'll also audit disk usage per build. We are clearly using more than was
originally estimated.

Scott

On 2019-02-07 2:50 a.m., build.starlingx at gmail.com wrote:
> Project: STX_build_master_pike
> Build #: 130
> Status: Failure
> Timestamp: 20190207T060000Z
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190207T060000Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> BUILD_CONTAINERS: false

From cesar.lara at intel.com  Thu Feb 7 16:03:49 2019
From: cesar.lara at intel.com (Lara, Cesar)
Date: Thu, 7 Feb 2019 16:03:49 +0000
Subject: [Starlingx-discuss] [build][meeting] Build team meeting Agenda for 2/7/2019
Message-ID: <0B566C62EC792145B40E29EFEBF1AB47105D5DE2@fmsmsx104.amr.corp.intel.com>

Build team meeting Agenda for 2/7/2019

- Build system for containers
- Improvements for current build system
- MultiOS build system - discussion
- Opens

Regards

Cesar Lara
Software Engineering Manager
OpenSource Technology Center

From Barton.Wensley at windriver.com  Thu Feb 7 17:10:49 2019
From: Barton.Wensley at windriver.com (Wensley, Barton)
Date: Thu, 7 Feb 2019 17:10:49 +0000
Subject: [Starlingx-discuss] [Containers] Not able to boot controller-1 on Duplex installation
In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A9386E5@fmsmsx101.amr.corp.intel.com>
References: <0A5D9A624DF90343892F8F3FE7DE525A2A9386E5@fmsmsx101.amr.corp.intel.com>
Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA62CF3@ALA-MBD.corp.ad.wrs.com>

José,

The proxy configuration you supply during config_controller should be
propagated to controller-1. If you are sure your controller-1 has
connectivity to your proxy, then you will need help from Mingyuan Qi who
implemented the proxy support.

Bart

-----Original Message-----
From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com]
Sent: February 6, 2019 5:21 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [Containers] Not able to boot controller-1 on Duplex installation

Hi,

I'm installing a Duplex with containers and I'm getting the error below when
unlocking controller-1:

========
| task | Configuration Failed, re-enabling |
| task | Rebooting |
| task | Booting |
...
...
...
| 200.011 | controller-1 experienced a configuration failure. | host=controller-1 | critical | 2019-02-06T13:44:19 | controller
| host=controller-1 | critical | 2019-02-06T13:44:19 | controller
========
Then the controller tries to boot again, but the failure below appears and the controller stays in a failed state.
===================
| task | Configuration failure, threshold reached, Lock/Unlock to retry |
======================
If I do a lock/unlock cycle I hit the same issues. I was searching the puppet.log of controller-1 and found the errors below; it seems the repository is not reachable. My question is: if I set a proxy during config_controller, is that proxy also propagated to controller-1 when it is installed automatically?
====================
2019-02-06T13:52:15.219 Notice: 2019-02-06 13:52:15 +0000 /Stage[main]/Platform::Helm/Exec[initialize helm]/returns: Error: Looks like "https://kubernetes-charts.storage.googleapis.com" is not a valid chart repository or cannot be reached: Get https://kubernetes-charts.storage.googleapis.com/index.yaml: dial tcp: lookup kubernetes-charts.storage.googleapis.com on 8.8.4.4:53: read udp 10.10.10.4:52260->8.8.4.4:53: i/o timeout
=======================
I created a Launchpad to track down this behavior: https://bugs.launchpad.net/starlingx/+bug/1814968 Regards, José _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From jose.perez.carranza at intel.com Thu Feb 7 17:30:31 2019 From: jose.perez.carranza at intel.com (Perez Carranza, Jose) Date: Thu, 7 Feb 2019 17:30:31 +0000 Subject: [Starlingx-discuss] [Containers] Not able to boot controller-1 on Duplex installation In-Reply-To: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA62CF3@ALA-MBD.corp.ad.wrs.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A9386E5@fmsmsx101.amr.corp.intel.com> <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA62CF3@ALA-MBD.corp.ad.wrs.com> Message-ID: <0A5D9A624DF90343892F8F3FE7DE525A2A9389F8@fmsmsx101.amr.corp.intel.com> Thanks Bart. Yes, I have connectivity; the proxies are set correctly in "/etc/systemd/system/docker.service.d/http-proxy.conf", but it seems they are not recognized by controller-1. If I run the commands below, the URL is reachable. I'll update the Launchpad with this info.
$ https_proxy= curl https://kubernetes-charts.storage.googleapis.com
$ https_proxy= helm init
Regards, José > -----Original Message----- > From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] > Sent: Thursday, February 7, 2019 11:11 AM > To: Perez Carranza, Jose ; starlingx- > discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] [Containers] Not able to boot controller-1 on > Duplex installation > > José, > > The proxy configuration you supply during config_controller should be > propagated to controller-1. If you are sure your controller-1 has connectivity to > your proxy, then you will need help from Mingyuan Qi who implemented the > proxy support. > > Bart > > -----Original Message----- > From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] > Sent: February 6, 2019 5:21 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] [Containers] Not able to boot controller-1 on > Duplex installation > > Hi, > > I'm installing a Duplex with containers and I'm getting below error when > unlocking controller-1 ======== > > | task | Configuration Failed, re-enabling | > | task | Rebooting | > | task | Booting | > ... > ... > ... > | 200.011 | controller-1 experienced a configuration failure.
| > | host=controller-1 | critical | 2019-02-06T13:44:19 | controller > ======== > > then controller tries to boot again until but below failure appears and the > controller satays on failure status. > =================== > | task | Configuration failure, threshold reached, Lock/Unlock to retry > | | > ====================== > > If I made a lock/unlock cycle I get with the same issues. I was searching on the > puppet.log of controller-1 and I found below errors, seems like the repository > is not reachable, my question is if I used a proxy on the config_controller those > proxies are also propagated to controller-1 when is automatically installed? > > ==================== > 2019-02-06T13:52:15.219 Notice: 2019-02-06 13:52:15 +0000 > /Stage[main]/Platform::Helm/Exec[initialize helm]/returns: Error: Looks like > "https://kubernetes-charts.storage.googleapis.com" is not a valid chart > repository or cannot be reached: Get https://kubernetes- > charts.storage.googleapis.com/index.yaml: dial tcp: lookup kubernetes- > charts.storage.googleapis.com on 8.8.4.4:53: read udp 10.10.10.4:52260- > >8.8.4.4:53: i/o timeout ======================= > > I created a Launchpad to track down this behavior > > https://bugs.launchpad.net/starlingx/+bug/1814968 > > Regards, > José > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Thu Feb 7 17:36:40 2019 From: sgw at linux.intel.com (Saul Wold) Date: Thu, 7 Feb 2019 09:36:40 -0800 Subject: [Starlingx-discuss] [Containers] Not able to boot controller-1 on Duplex installation In-Reply-To: <0A5D9A624DF90343892F8F3FE7DE525A2A9389F8@fmsmsx101.amr.corp.intel.com> References: <0A5D9A624DF90343892F8F3FE7DE525A2A9386E5@fmsmsx101.amr.corp.intel.com> <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA62CF3@ALA-MBD.corp.ad.wrs.com> <0A5D9A624DF90343892F8F3FE7DE525A2A9389F8@fmsmsx101.amr.corp.intel.com> Message-ID: Jose, Can you also talk with Memo as he is looking into this while Mingyuan is on break. Sau! On 2/7/19 9:30 AM, Perez Carranza, Jose wrote: > Thanks Bart > > Yes, I have connectivity actually the proxies are set correctly on "/etc/systemd/system/docker.service.d/http-proxy.conf" but seems like are not well recognized by controller-1. > > If I run below commands the URL is reachable. I'll update the Launchpad with this info. > > $ https_proxy= curl https://kubernetes-charts.storage.googleapis.com > $ https_proxy= helm init > > > Regards, > José > > >> -----Original Message----- >> From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] >> Sent: Thursday, February 7, 2019 11:11 AM >> To: Perez Carranza, Jose ; starlingx- >> discuss at lists.starlingx.io >> Subject: RE: [Starlingx-discuss] [Containers] Not able to boot controller-1 on >> Duplex installation >> >> José, >> >> The proxy configuration you supply during config_controller should be >> propagated to controller-1. If you are sure your controller-1 has connectivity to >> your proxy, then you will need help from Mingyuan Qi who implemented the >> proxy support. 
>> >> Bart >> >> -----Original Message----- >> From: Perez Carranza, Jose [mailto:jose.perez.carranza at intel.com] >> Sent: February 6, 2019 5:21 PM >> To: starlingx-discuss at lists.starlingx.io >> Subject: [Starlingx-discuss] [Containers] Not able to boot controller-1 on >> Duplex installation >> >> Hi, >> >> I'm installing a Duplex with containers and I'm getting below error when >> unlocking controller-1 ======== >> >> | task | Configuration Failed, re-enabling | >> | task | Rebooting | >> | task | Booting | >> ... >> ... >> ... >> | 200.011 | controller-1 experienced a configuration failure. | >> | host=controller-1 | critical | 2019-02-06T13:44:19 | controller >> ======== >> >> then controller tries to boot again until but below failure appears and the >> controller satays on failure status. >> =================== >> | task | Configuration failure, threshold reached, Lock/Unlock to retry >> | | >> ====================== >> >> If I made a lock/unlock cycle I get with the same issues. I was searching on the >> puppet.log of controller-1 and I found below errors, seems like the repository >> is not reachable, my question is if I used a proxy on the config_controller those >> proxies are also propagated to controller-1 when is automatically installed? >> >> ==================== >> 2019-02-06T13:52:15.219 Notice: 2019-02-06 13:52:15 +0000 >> /Stage[main]/Platform::Helm/Exec[initialize helm]/returns: Error: Looks like >> "https://kubernetes-charts.storage.googleapis.com" is not a valid chart >> repository or cannot be reached: Get https://kubernetes- >> charts.storage.googleapis.com/index.yaml: dial tcp: lookup kubernetes- >> charts.storage.googleapis.com on 8.8.4.4:53: read udp 10.10.10.4:52260- >>> 8.8.4.4:53: i/o timeout ======================= >> >> I created a Launchpad to track down this behavior >> >> https://bugs.launchpad.net/starlingx/+bug/1814968 >> >> Regards, >> José >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Tao.Liu at windriver.com Thu Feb 7 19:43:29 2019 From: Tao.Liu at windriver.com (Liu, Tao) Date: Thu, 7 Feb 2019 19:43:29 +0000 Subject: [Starlingx-discuss] Host HTTP/HTTPS port number changed Message-ID: <7242A3DC72E453498E3D783BBB134C3E9DDB6F3F@ALA-MBD.corp.ad.wrs.com> Hi All, In order to avoid conflicts with containerized services, the default HTTP(80) and HTTPS(443) port numbers have been changed to 8080 and 8443. 
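A quick way to confirm the new ports on a running controller (a sketch using standard Linux tools, nothing StarlingX-specific):
$ ss -lnt | grep -E ':(8080|8443) '
Both ports should show a LISTEN socket once the change is in place.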
The platform horizon UI is now available at http://:8080 To configure a different port for http or https using the system CLI:
system service-parameter-list --service http
+--------------------------------------+---------+---------+------------+-------+-------------+----------+
| uuid                                 | service | section | name       | value | personality | resource |
+--------------------------------------+---------+---------+------------+-------+-------------+----------+
| 4fc7e8f5-4621-4a07-a207-86c40c6b05fa | http    | config  | http_port  | 8080  | None        | None     |
| 96180927-50f6-4f26-af78-f0cb7c1d4b37 | http    | config  | https_port | 8443  | None        | None     |
+--------------------------------------+---------+---------+------------+-------+-------------+----------+
system service-parameter-modify http config http_port=8090
system service-parameter-modify http config https_port=9443
system service-parameter-apply http
Containerized deployment: for custom apps that use a hard-coded location in the manifest file, the URL needs to be updated accordingly. For example:
Location: http://172.17.0.1/helm_charts/hello-kitty.tgz
Change to
Location: http://172.17.0.1:8080/helm_charts/hello-kitty.tgz
Tao Liu, Member of Technical Staff, Engineering, Wind River direct 613.963.1413 fax: 613.492.7870 skype: tao_at_home 350 Terry Fox Drive, Suite 200, Ottawa, Ontario, K2K 2W5 -------------- next part -------------- An HTML attachment was scrubbed... URL: From mario.alfredo.c.arevalo at intel.com Thu Feb 7 20:12:46 2019 From: mario.alfredo.c.arevalo at intel.com (Arevalo, Mario Alfredo C) Date: Thu, 7 Feb 2019 20:12:46 +0000 Subject: [Starlingx-discuss] [Containers] Background info on helm charts In-Reply-To: <6594B51DBE477C48AAE23675314E6C4664568846@fmsmsx107.amr.corp.intel.com> References: <6594B51DBE477C48AAE23675314E6C466456672F@fmsmsx107.amr.corp.intel.com> , <6703202FD9FDFF4A8DA9ACF104AE129FBA42F3BA@ALA-MBD.corp.ad.wrs.com>, <6594B51DBE477C48AAE23675314E6C4664568846@fmsmsx107.amr.corp.intel.com> Message-ID: <6594B51DBE477C48AAE23675314E6C466456A983@fmsmsx107.amr.corp.intel.com> Hi team, There is a pair of issues that need to be solved in order to create an "fm rest api" docker image that works correctly. Issue 1: ======= The fm-rest-api requires a configuration file called "api-paste.ini"[0]. It seems that this file is created during the deployment process, in this section, by puppet: https://git.starlingx.io/cgit/stx-config/tree/puppet-modules-wrs/puppet-fm/src/fm/manifests/init.pp#n109 Proposal: I noticed how the devstack process deals with this: first, the "api-paste.ini" file was added manually inside the devstack/files directory[1]; then, during the devstack execution, the plugin just copies it into the environment: https://git.starlingx.io/cgit/stx-fault/tree/devstack/lib/stx-fault#n173 I wonder if the best approach is to do something similar using the "CUSTOMIZATION" variable of loci, with something like:
cat > /etc/fm/api-paste.ini << EOF
config lines
EOF
or maybe cloning the stx-fault repository and copying the file available in the devstack directory[0] to "/etc/fm". Issue 2: ======= The "fm rest api" requires a shared object called "libfmcommon.so"; this library is built in the fm-common project, but it is not available in any wheel package.
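For illustration, a quick way to confirm this against the wheels tarball published on the CENGN mirror (a sketch; the wheel name is the one that appears there, and a wheel is a plain zip archive):
$ unzip -l fm_core-1.0-cp27-cp27mu-linux_x86_64.whl | grep -i fmcommon
$ # no matches: the shared object is not packaged in any wheel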
Proposal: I am not sure if something like this, inside the setup.py[2], would be enough: data_files=[('/usr/lib64', ['libfmcommon.so'])] Furthermore, I tried to add it from the specfile: https://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm-common.spec#n54 however, I think this is not a good option. What do you think is the best approach to tackle these issues? Any comments are welcome. Best regards. Mario. [0] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files/api-paste.ini [1] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files [2] https://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/setup.py ________________________________________ From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Sent: Friday, February 01, 2019 7:54 PM To: Penney, Don; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Hi Folks, This a short update about this task (I will be in holiday for our next containerization meeting). During this week I have been exploring the scripts and tools involved in the containerization building process. This activity and the wiki page shared by Don Penny have allowed me to get a better understating of the work-flow. I have sent a PR[1] with the required files to create an image for fm-rest-api service. At this moment it is WIP due to I need to do testing. Possibly it will require more dependencies which are not available in the wheels.cfg/tarball, I will continue working on this. During the building tools exploration I had some issues related to network due to I am working behind a proxy, I sent a patch[2] to set it in the docker build/run commands and avoid manually modification efforts. Any comments, feel free to contact me. [1] https://review.openstack.org/#/c/634540/ [2] https://review.openstack.org/#/c/634542/ Best regards. Mario. ________________________________________ From: Penney, Don [Don.Penney at windriver.com] Sent: Wednesday, January 30, 2019 7:24 AM To: Arevalo, Mario Alfredo C; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages -----Original Message----- From: Penney, Don Sent: Tuesday, January 29, 2019 10:34 AM To: 'Arevalo, Mario Alfredo C'; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Mario, I'm making "writing a wiki" one of my top priorities. I'm hoping to get started on it today, and try to get something published in the next couple of days, barring other issues coming along. Cheers, Don. -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Monday, January 28, 2019 4:15 PM To: Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don Subject: RE: [Containers] Background info on helm charts Hi Frank, Thank you for the information, however my doubts are oriented more to the work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] for FM service in the last week, nevertheless, the starlingx work-flow includes some parts which I have not digested completely yet. Don Penny have helped me sending useful information related to this (Thanks for that), the interaction between OpenStack/loci system and python/wheels.
However I have some gaps, for example, as staring point I have to create the docker image which will be consumed by the chart. There are some build scripts and notes that talks about this, but precisely, there are a pair of lines in the build-tools/README which makes reference to a exposed service that is not specified [2,3]. For that reason I asked for a little more detailed information about the work-flow during our meeting today. Thanks for your attention. Best regards. Mario. [1] https://github.com/MarioCarrilloA/chart-playground [2] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L4 [3] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L9 From: Miller, Frank [Frank.Miller at windriver.com] Sent: Monday, January 28, 2019 11:19 AM To: Arevalo, Mario Alfredo C Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: [Containers] Background info on helm charts Mario: On the containers community call this morning we took an action to identify information about helm charts. Irina Mihai identified four references that she used when working on the cinder helm chart overrides. See [1] to [4] below. Also as you work on creating a helm chart for the FM service, you should look at examples of existing helm charts for a reference. 2 good examples are: * Cinder helm chart which is available in the upstream openstack-helm project [5] and uses certain defaults. We have added StarlingX specific overrides which are generated from code we added [6]. * Nova-api-proxy which is a StarlingX specific service and hence was created from scratch [7,8] Frank [1] https://docs.helm.sh/developing_charts/ [2] https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221 [3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/ [4] Helm chart install order https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29 [5] Cinder helm chart in upstream openstack-helm: https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder [6] StarlingX cinder overrides: See cinder.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm [7] Helm chart for nova_api_proxy: https://git.openstack.org/cgit/openstack/stx-config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/nova-api-proxy [8] nova_api_proxy override code: See nova_api_proxy.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Al.Bailey at windriver.com Thu Feb 7 20:37:59 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Thu, 7 Feb 2019 20:37:59 +0000 Subject: [Starlingx-discuss] [Containers] Background info on helm charts In-Reply-To: <6594B51DBE477C48AAE23675314E6C466456A983@fmsmsx107.amr.corp.intel.com> References: <6594B51DBE477C48AAE23675314E6C466456672F@fmsmsx107.amr.corp.intel.com> , <6703202FD9FDFF4A8DA9ACF104AE129FBA42F3BA@ALA-MBD.corp.ad.wrs.com>, <6594B51DBE477C48AAE23675314E6C4664568846@fmsmsx107.amr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C466456A983@fmsmsx107.amr.corp.intel.com> Message-ID: For issue 1) Other openstack components write the paste like this Here's the template for writing the api-paste.ini 
https://github.com/openstack/openstack-helm/blob/master/heat/templates/configmap-etc.yaml#L139 Here's example data being written to it: https://github.com/openstack/openstack-helm/blob/master/heat/values.yaml#L275 For issue 2) You should be able to include the wheel for fm_core, which is defined here: https://github.com/openstack/stx-fault/blob/master/fm-common/sources/setup.py It's in the fm-common folder, but it's called fm_core, and it is included in the wheels tarball: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels/ fm_core-1.0-cp27-cp27mu-linux_x86_64.whl The wheel contains C code, so it is an architecture-specific wheel. Al -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Thursday, February 07, 2019 3:13 PM To: Penney, Don; Wold, Saul; Bailey, Henry Albert (Al); Troyer, Dean Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi team, There is a pair of issues which needs to be solved in order to create a "fm rest api" docker image that can work correctly. Issue 1: ======= The fm-rest-api requires a configuration file called "api-paste.ini"[0]. It seems that this file is created in the deployment process in this section by puppet: https://git.starlingx.io/cgit/stx-config/tree/puppet-modules-wrs/puppet-fm/src/fm/manifests/init.pp#n109 proposal: I noticed how devstack process deal with this, firstly the "api-paste.ini" file was added manually inside devstack/files directory[1], then during the devstack execution process, the plugin just makes a copy to the environment: https://git.starlingx.io/cgit/stx-fault/tree/devstack/lib/stx-fault#n173 I wonder if it is the best approach to do something similar using the variable "CUSTOMIZATION" of loci with something like this: "cat > /etc/fm/api-paste.ini << EOF config lines EOF" or maybe cloning the stx-fault repository and copying the available file in the devstack directory[0] to "/etc/fm". Issue 2: ======= The "fm rest api" requires a shared object called "libfmcommon.so", this library is created in fm-common project, however it is not available in any wheel package. Proposal: I am not sure if something like this should be enough: data_files=[('/usr/lib64', ['libfmcommon.so'])], inside the setup.py[2] Furthermore I tried to add it from the specfile: https://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm-common.spec#n54 however I think this is not a good option. What do you think is the best approach to tackle these issues? Any comments are welcome. Best regards. Mario. [0] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files/api-paste.ini [1] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files [2] https://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/setup.py ________________________________________ From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Sent: Friday, February 01, 2019 7:54 PM To: Penney, Don; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Hi Folks, This a short update about this task (I will be in holiday for our next containerization meeting). During this week I have been exploring the scripts and tools involved in the containerization building process. This activity and the wiki page shared by Don Penny have allowed me to get a better understating of the work-flow.
I have sent a PR[1] with the required files to create an image for fm-rest-api service. At this moment it is WIP due to I need to do testing. Possibly it will require more dependencies which are not available in the wheels.cfg/tarball, I will continue working on this. During the building tools exploration I had some issues related to network due to I am working behind a proxy, I sent a patch[2] to set it in the docker build/run commands and avoid manually modification efforts. Any comments, feel free to contact me. [1] https://review.openstack.org/#/c/634540/ [2] https://review.openstack.org/#/c/634542/ Best regards. Mario. ________________________________________ From: Penney, Don [Don.Penney at windriver.com] Sent: Wednesday, January 30, 2019 7:24 AM To: Arevalo, Mario Alfredo C; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages -----Original Message----- From: Penney, Don Sent: Tuesday, January 29, 2019 10:34 AM To: 'Arevalo, Mario Alfredo C'; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Mario, I'm making "writing a wiki" one of my top priorities. I'm hoping to get started on it today, and try to get something published in the next couple of days, barring other issues coming along. Cheers, Don. -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Monday, January 28, 2019 4:15 PM To: Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don Subject: RE: [Containers] Background info on helm charts Hi Frank, Thank you for the information, however my doubts are oriented more to the work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] for FM service in the last week, nevertheless, the starlingx work-flow includes some parts which I have not digested completely yet. Don Penny have helped me sending useful information related to this (Thanks for that), the interaction between OpenStack/loci system and python/wheels. However I have some gaps, for example, as staring point I have to create the docker image which will be consumed by the chart. There are some build scripts and notes that talks about this, but precisely, there are a pair of lines in the build-tools/README which makes reference to a exposed service that is not specified [2,3]. For that reason I asked for a little more detailed information about the work-flow during our meeting today. Thanks for your attention. Best regards. Mario. [1] https://github.com/MarioCarrilloA/chart-playground [2] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L4 [3] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L9 From: Miller, Frank [Frank.Miller at windriver.com] Sent: Monday, January 28, 2019 11:19 AM To: Arevalo, Mario Alfredo C Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: [Containers] Background info on helm charts Mario: On the containers community call this morning we took an action to identify information about helm charts. Irina Mihai identified four references that she used when working on the cinder helm chart overrides. See [1] to [4] below. Also as you work on creating a helm chart for the FM service, you should look at examples of existing helm charts for a reference. 
2 good examples are: * Cinder helm chart which is available in the upstream openstack-helm project [5] and uses certain defaults. We have added StarlingX specific overrides which are generated from code we added [6]. * Nova-api-proxy which is a StarlingX specific service and hence was created from scratch [7,8] Frank [1] https://docs.helm.sh/developing_charts/ [2] https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221 [3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/ [4] Helm chart install order https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29 [5] Cinder helm chart in upstream openstack-helm: https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder [6] StarlingX cinder overrides: See cinder.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm [7] Helm chart for nova_api_proxy: https://git.openstack.org/cgit/openstack/stx-config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/nova-api-proxy [8] nova_api_proxy override code: See nova_api_proxy.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cesar.lara at intel.com Thu Feb 7 20:50:54 2019 From: cesar.lara at intel.com (Lara, Cesar) Date: Thu, 7 Feb 2019 20:50:54 +0000 Subject: [Starlingx-discuss] [multios][meetings] MultiOS team meeting Agenda for 2/11/2019 Message-ID: <0B566C62EC792145B40E29EFEBF1AB47105D6499@fmsmsx104.amr.corp.intel.com> MultiOS team meeting Agenda for 2/11/2019 - Continue Discussion on uploaded specs around MultiOS Multi-OS overview specification - https://review.openstack.org/#/c/619801/ Reorganize Flock Services Source Code repositories - https://review.openstack.org/#/c/631288/ Example of repo based on this spec - https://github.com/starlingx-staging/stx-packaging - MultiOS build system - Opens Regards Cesar Lara Software Engineering Manager OpenSource Technology Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From mario.alfredo.c.arevalo at intel.com Thu Feb 7 21:01:06 2019 From: mario.alfredo.c.arevalo at intel.com (Arevalo, Mario Alfredo C) Date: Thu, 7 Feb 2019 21:01:06 +0000 Subject: [Starlingx-discuss] [Containers] Background info on helm charts In-Reply-To: References: <6594B51DBE477C48AAE23675314E6C466456672F@fmsmsx107.amr.corp.intel.com> , <6703202FD9FDFF4A8DA9ACF104AE129FBA42F3BA@ALA-MBD.corp.ad.wrs.com>, <6594B51DBE477C48AAE23675314E6C4664568846@fmsmsx107.amr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C466456A983@fmsmsx107.amr.corp.intel.com>, Message-ID: <6594B51DBE477C48AAE23675314E6C466456A9E8@fmsmsx107.amr.corp.intel.com> Hi Al, Thanks for your answer. I have done a tests with the fm_core-1.0-cp27-cp27mu-linux_x86_64.whl package that you mention, however it does not include libfmcommon.so library. I downloaded the last wheels taball and I unzip all packages and I could not find that library. Best regards. Mario. 
________________________________________ From: Bailey, Henry Albert (Al) [Al.Bailey at windriver.com] Sent: Thursday, February 07, 2019 12:37 PM To: Arevalo, Mario Alfredo C; Penney, Don; Wold, Saul; Troyer, Dean Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts For issue 1) Other openstack components write the paste like this Here's the template for writing the api-paste.ini https://github.com/openstack/openstack-helm/blob/master/heat/templates/configmap-etc.yaml#L139 Here's example data being written to it: https://github.com/openstack/openstack-helm/blob/master/heat/values.yaml#L275 For issue 2) You should be able to include the wheel for fm_core, which is defined here https://github.com/openstack/stx-fault/blob/master/fm-common/sources/setup.py it's in fm-common folder, but its called fm_core and it is included in the wheels tarball http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels/ fm_core-1.0-cp27-cp27mu-linux_x86_64.whl The wheel contains C code, so it is an architecture specific wheel Al -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Thursday, February 07, 2019 3:13 PM To: Penney, Don; Wold, Saul; Bailey, Henry Albert (Al); Troyer, Dean Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi team, There is a pair of issues which needs to be solved in order to create a "fm rest api" docker image that can work correctly. Issue 1: ======= The fm-rest-api requires a configuration file called "api-paste.ini"[0]. It seems that this file is created in the deployment process in this section by puppet: https://git.starlingx.io/cgit/stx-config/tree/puppet-modules-wrs/puppet-fm/src/fm/manifests/init.pp#n109 proposal: I noticed how devstack process deal with this, firstly the "api-paste.ini" file was added manually inside devstack/files directory[1], then during the devstack execution process, the plugin just makes a copy to the environment: https://git.starlingx.io/cgit/stx-fault/tree/devstack/lib/stx-fault#n173 I wonder if it is the best approach to do something similar using the variable "CUSTOMIZATION" of loci with something like this: "cat > /etc/fm/api-paste.ini << EOF config lines EOF" or maybe cloning the stx-fault repository and copying the available file in the devstack directory[0] to "/etc/fm". Issue 2: ======= The "fm rest api" requires a shared object called "libfmcommon.so", this library is created in fm-common project, however it is not available in any wheel package. Proposal: I am not sure if something like this should be enough: data_files=[('/usr/lib64', ['libfmcommon.so'])], inside the setup.py[2] Furthermore I tried to add it from the specfile: https://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm-common.spec#n54 however I think this is not a good option. What do you think is the best approach to tackle these issues? Any comments are welcome. Best regards. Mario. 
[0] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files/api-paste.ini [1] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files [2] https://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/setup.py ________________________________________ From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Sent: Friday, February 01, 2019 7:54 PM To: Penney, Don; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Hi Folks, This a short update about this task (I will be in holiday for our next containerization meeting). During this week I have been exploring the scripts and tools involved in the containerization building process. This activity and the wiki page shared by Don Penny have allowed me to get a better understating of the work-flow. I have sent a PR[1] with the required files to create an image for fm-rest-api service. At this moment it is WIP due to I need to do testing. Possibly it will require more dependencies which are not available in the wheels.cfg/tarball, I will continue working on this. During the building tools exploration I had some issues related to network due to I am working behind a proxy, I sent a patch[2] to set it in the docker build/run commands and avoid manually modification efforts. Any comments, feel free to contact me. [1] https://review.openstack.org/#/c/634540/ [2] https://review.openstack.org/#/c/634542/ Best regards. Mario. ________________________________________ From: Penney, Don [Don.Penney at windriver.com] Sent: Wednesday, January 30, 2019 7:24 AM To: Arevalo, Mario Alfredo C; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages -----Original Message----- From: Penney, Don Sent: Tuesday, January 29, 2019 10:34 AM To: 'Arevalo, Mario Alfredo C'; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Mario, I'm making "writing a wiki" one of my top priorities. I'm hoping to get started on it today, and try to get something published in the next couple of days, barring other issues coming along. Cheers, Don. -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Monday, January 28, 2019 4:15 PM To: Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don Subject: RE: [Containers] Background info on helm charts Hi Frank, Thank you for the information, however my doubts are oriented more to the work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] for FM service in the last week, nevertheless, the starlingx work-flow includes some parts which I have not digested completely yet. Don Penny have helped me sending useful information related to this (Thanks for that), the interaction between OpenStack/loci system and python/wheels. However I have some gaps, for example, as staring point I have to create the docker image which will be consumed by the chart. There are some build scripts and notes that talks about this, but precisely, there are a pair of lines in the build-tools/README which makes reference to a exposed service that is not specified [2,3]. For that reason I asked for a little more detailed information about the work-flow during our meeting today. Thanks for your attention. Best regards. Mario. 
[1] https://github.com/MarioCarrilloA/chart-playground [2] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L4 [3] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L9 From: Miller, Frank [Frank.Miller at windriver.com] Sent: Monday, January 28, 2019 11:19 AM To: Arevalo, Mario Alfredo C Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: [Containers] Background info on helm charts Mario: On the containers community call this morning we took an action to identify information about helm charts. Irina Mihai identified four references that she used when working on the cinder helm chart overrides. See [1] to [4] below. Also as you work on creating a helm chart for the FM service, you should look at examples of existing helm charts for a reference. 2 good examples are: * Cinder helm chart which is available in the upstream openstack-helm project [5] and uses certain defaults. We have added StarlingX specific overrides which are generated from code we added [6]. * Nova-api-proxy which is a StarlingX specific service and hence was created from scratch [7,8] Frank [1] https://docs.helm.sh/developing_charts/ [2] https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221 [3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/ [4] Helm chart install order https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29 [5] Cinder helm chart in upstream openstack-helm: https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder [6] StarlingX cinder overrides: See cinder.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm [7] Helm chart for nova_api_proxy: https://git.openstack.org/cgit/openstack/stx-config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/nova-api-proxy [8] nova_api_proxy override code: See nova_api_proxy.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Thu Feb 7 21:12:15 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Thu, 7 Feb 2019 22:12:15 +0100 Subject: [Starlingx-discuss] OpenStack Foundation 2018 Annual Report Message-ID: <2113C627-45E5-44B0-8AC3-D42386EECB4A@gmail.com> Hi, I would like to draw your attention to the OpenStack Foundation 2018 Annual Report [1], that we have published today, which is a yearly report highlighting the incredible work and advancements being achieved by the community. You can find information about news and activities that happened under the OpenStack Foundation umbrella including a summary on StarlingX. Read the latest on: • The Foundation’s latest initiatives to support Open Infrastructure • Project updates from the OpenStack, Airship, Kata Containers, StarlingX, and Zuul communities • Highlights from OpenStack Workings Groups and SIGs • Community programs including OpenStack Upstream Institute, the Travel Support Program, Outreachy Internship Programs, and Contributor recognition • OpenStack Foundation events including PTGs, Forums, OpenStack / OpenInfra Days, and the OpenStack Summit With almost 100,000 individual members, our community accomplished a lot last year. 
If you would like to continue to stay updated on the latest Foundation and project news, subscribe to the bi-weekly Open Infrastructure newsletter [2]. We look forward to another successful year in 2019! Thanks, Ildikó [1] https://www.openstack.org/foundation/2018-openstack-foundation-annual-report [2] https://www.openstack.org/community/email-signup From Al.Bailey at windriver.com Thu Feb 7 21:12:55 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Thu, 7 Feb 2019 21:12:55 +0000 Subject: [Starlingx-discuss] [Containers] Background info on helm charts In-Reply-To: <6594B51DBE477C48AAE23675314E6C466456A9E8@fmsmsx107.amr.corp.intel.com> References: <6594B51DBE477C48AAE23675314E6C466456672F@fmsmsx107.amr.corp.intel.com> , <6703202FD9FDFF4A8DA9ACF104AE129FBA42F3BA@ALA-MBD.corp.ad.wrs.com>, <6594B51DBE477C48AAE23675314E6C4664568846@fmsmsx107.amr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C466456A983@fmsmsx107.amr.corp.intel.com>, <6594B51DBE477C48AAE23675314E6C466456A9E8@fmsmsx107.amr.corp.intel.com> Message-ID: Looks like the wheel is not complete. The regular RPM is the only place that installs that shared lib: fm-common-1.0-8.tis.x86_64.rpm, i.e.:
/usr/lib64/libfmcommon.so.1
/usr/lib64/libfmcommon.so.1.0
I think you need to add this line in your build file: DIST_PACKAGES="fm-common" Sort of like what gnocchi does for rados: https://github.com/openstack/stx-upstream/blob/master/openstack/python-gnocchi/centos/stx-gnocchi.pike_docker_image#L7 (a rough sketch of a full build file along these lines is included below) Al -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Thursday, February 07, 2019 4:01 PM To: Bailey, Henry Albert (Al); Penney, Don; Wold, Saul; Troyer, Dean Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Al, Thanks for your answer. I have done a tests with the fm_core-1.0-cp27-cp27mu-linux_x86_64.whl package that you mention, however it does not include libfmcommon.so library. I downloaded the last wheels taball and I unzip all packages and I could not find that library. Best regards. Mario.
[0] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files/api-paste.ini [1] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files [2] https://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/setup.py ________________________________________ From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Sent: Friday, February 01, 2019 7:54 PM To: Penney, Don; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Hi Folks, This a short update about this task (I will be in holiday for our next containerization meeting). During this week I have been exploring the scripts and tools involved in the containerization building process. This activity and the wiki page shared by Don Penny have allowed me to get a better understating of the work-flow. I have sent a PR[1] with the required files to create an image for fm-rest-api service. At this moment it is WIP due to I need to do testing. Possibly it will require more dependencies which are not available in the wheels.cfg/tarball, I will continue working on this. During the building tools exploration I had some issues related to network due to I am working behind a proxy, I sent a patch[2] to set it in the docker build/run commands and avoid manually modification efforts. Any comments, feel free to contact me. [1] https://review.openstack.org/#/c/634540/ [2] https://review.openstack.org/#/c/634542/ Best regards. Mario. ________________________________________ From: Penney, Don [Don.Penney at windriver.com] Sent: Wednesday, January 30, 2019 7:24 AM To: Arevalo, Mario Alfredo C; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages -----Original Message----- From: Penney, Don Sent: Tuesday, January 29, 2019 10:34 AM To: 'Arevalo, Mario Alfredo C'; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Mario, I'm making "writing a wiki" one of my top priorities. I'm hoping to get started on it today, and try to get something published in the next couple of days, barring other issues coming along. Cheers, Don. -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Monday, January 28, 2019 4:15 PM To: Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don Subject: RE: [Containers] Background info on helm charts Hi Frank, Thank you for the information, however my doubts are oriented more to the work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] for FM service in the last week, nevertheless, the starlingx work-flow includes some parts which I have not digested completely yet. Don Penny have helped me sending useful information related to this (Thanks for that), the interaction between OpenStack/loci system and python/wheels. However I have some gaps, for example, as staring point I have to create the docker image which will be consumed by the chart. There are some build scripts and notes that talks about this, but precisely, there are a pair of lines in the build-tools/README which makes reference to a exposed service that is not specified [2,3]. For that reason I asked for a little more detailed information about the work-flow during our meeting today. Thanks for your attention. Best regards. Mario. 
[1] https://github.com/MarioCarrilloA/chart-playground [2] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L4 [3] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L9 From: Miller, Frank [Frank.Miller at windriver.com] Sent: Monday, January 28, 2019 11:19 AM To: Arevalo, Mario Alfredo C Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: [Containers] Background info on helm charts Mario: On the containers community call this morning we took an action to identify information about helm charts. Irina Mihai identified four references that she used when working on the cinder helm chart overrides. See [1] to [4] below. Also as you work on creating a helm chart for the FM service, you should look at examples of existing helm charts for a reference. 2 good examples are: * Cinder helm chart which is available in the upstream openstack-helm project [5] and uses certain defaults. We have added StarlingX specific overrides which are generated from code we added [6]. * Nova-api-proxy which is a StarlingX specific service and hence was created from scratch [7,8] Frank [1] https://docs.helm.sh/developing_charts/ [2] https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221 [3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/ [4] Helm chart install order https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29 [5] Cinder helm chart in upstream openstack-helm: https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder [6] StarlingX cinder overrides: See cinder.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm [7] Helm chart for nova_api_proxy: https://git.openstack.org/cgit/openstack/stx-config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/nova-api-proxy [8] nova_api_proxy override code: See nova_api_proxy.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Thu Feb 7 21:26:03 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 7 Feb 2019 21:26:03 +0000 Subject: [Starlingx-discuss] [Containers] Background info on helm charts In-Reply-To: References: <6594B51DBE477C48AAE23675314E6C466456672F@fmsmsx107.amr.corp.intel.com> , <6703202FD9FDFF4A8DA9ACF104AE129FBA42F3BA@ALA-MBD.corp.ad.wrs.com>, <6594B51DBE477C48AAE23675314E6C4664568846@fmsmsx107.amr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C466456A983@fmsmsx107.amr.corp.intel.com>, <6594B51DBE477C48AAE23675314E6C466456A9E8@fmsmsx107.amr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA435C02@ALA-MBD.corp.ad.wrs.com> The wheel includes the fm_core.so library, which is compiled from the python source in fm-common. libfmcommon.so, however, is not python code, so would not be part of a wheel. It would need to be installed via the DIST_PACKAGES entry, as Al describes. -----Original Message----- From: Bailey, Henry Albert (Al) Sent: Thursday, February 07, 2019 4:13 PM To: Arevalo, Mario Alfredo C; Penney, Don; Wold, Saul; Troyer, Dean Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Looks like the wheel is not complete. the regular RPM is the only place that installs that shared lib. 
fm-common-1.0-8.tis.x86_64.rpm ie: /usr/lib64/libfmcommon.so.1 /usr/lib64/libfmcommon.so.1.0 I think you need to add this line in your build file DIST_PACKAGES="fm-common" Sort of this gnocchi does for rados https://github.com/openstack/stx-upstream/blob/master/openstack/python-gnocchi/centos/stx-gnocchi.pike_docker_image#L7 Al -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Thursday, February 07, 2019 4:01 PM To: Bailey, Henry Albert (Al); Penney, Don; Wold, Saul; Troyer, Dean Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Al, Thanks for your answer. I have done a tests with the fm_core-1.0-cp27-cp27mu-linux_x86_64.whl package that you mention, however it does not include libfmcommon.so library. I downloaded the last wheels taball and I unzip all packages and I could not find that library. Best regards. Mario. ________________________________________ From: Bailey, Henry Albert (Al) [Al.Bailey at windriver.com] Sent: Thursday, February 07, 2019 12:37 PM To: Arevalo, Mario Alfredo C; Penney, Don; Wold, Saul; Troyer, Dean Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts For issue 1) Other openstack components write the paste like this Here's the template for writing the api-paste.ini https://github.com/openstack/openstack-helm/blob/master/heat/templates/configmap-etc.yaml#L139 Here's example data being written to it: https://github.com/openstack/openstack-helm/blob/master/heat/values.yaml#L275 For issue 2) You should be able to include the wheel for fm_core, which is defined here https://github.com/openstack/stx-fault/blob/master/fm-common/sources/setup.py it's in fm-common folder, but its called fm_core and it is included in the wheels tarball http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels/ fm_core-1.0-cp27-cp27mu-linux_x86_64.whl The wheel contains C code, so it is an architecture specific wheel Al -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Thursday, February 07, 2019 3:13 PM To: Penney, Don; Wold, Saul; Bailey, Henry Albert (Al); Troyer, Dean Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi team, There is a pair of issues which needs to be solved in order to create a "fm rest api" docker image that can work correctly. Issue 1: ======= The fm-rest-api requires a configuration file called "api-paste.ini"[0]. It seems that this file is created in the deployment process in this section by puppet: https://git.starlingx.io/cgit/stx-config/tree/puppet-modules-wrs/puppet-fm/src/fm/manifests/init.pp#n109 proposal: I noticed how devstack process deal with this, firstly the "api-paste.ini" file was added manually inside devstack/files directory[1], then during the devstack execution process, the plugin just makes a copy to the environment: https://git.starlingx.io/cgit/stx-fault/tree/devstack/lib/stx-fault#n173 I wonder if it is the best approach to do something similar using the variable "CUSTOMIZATION" of loci with something like this: "cat > /etc/fm/api-paste.ini << EOF config lines EOF" or maybe cloning the stx-fault repository and copying the available file in the devstack directory[0] to "/etc/fm". 
Issue 2: ======= The "fm rest api" requires a shared object called "libfmcommon.so", this library is created in fm-common project, however it is not available in any wheel package. Proposal: I am not sure if something like this should be enough: data_files=[('/usr/lib64', ['libfmcommon.so'])], inside the setup.py[2] Furthermore I tried to add it from the specfile: https://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm-common.spec#n54 however I think this is not a good option. What do you think is the best approach to tackle these issues? Any comments are welcome. Best regards. Mario. [0] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files/api-paste.ini [1] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files [2] https://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/setup.py ________________________________________ From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Sent: Friday, February 01, 2019 7:54 PM To: Penney, Don; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Hi Folks, This a short update about this task (I will be in holiday for our next containerization meeting). During this week I have been exploring the scripts and tools involved in the containerization building process. This activity and the wiki page shared by Don Penny have allowed me to get a better understating of the work-flow. I have sent a PR[1] with the required files to create an image for fm-rest-api service. At this moment it is WIP due to I need to do testing. Possibly it will require more dependencies which are not available in the wheels.cfg/tarball, I will continue working on this. During the building tools exploration I had some issues related to network due to I am working behind a proxy, I sent a patch[2] to set it in the docker build/run commands and avoid manually modification efforts. Any comments, feel free to contact me. [1] https://review.openstack.org/#/c/634540/ [2] https://review.openstack.org/#/c/634542/ Best regards. Mario. ________________________________________ From: Penney, Don [Don.Penney at windriver.com] Sent: Wednesday, January 30, 2019 7:24 AM To: Arevalo, Mario Alfredo C; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages -----Original Message----- From: Penney, Don Sent: Tuesday, January 29, 2019 10:34 AM To: 'Arevalo, Mario Alfredo C'; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Mario, I'm making "writing a wiki" one of my top priorities. I'm hoping to get started on it today, and try to get something published in the next couple of days, barring other issues coming along. Cheers, Don. -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Monday, January 28, 2019 4:15 PM To: Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don Subject: RE: [Containers] Background info on helm charts Hi Frank, Thank you for the information, however my doubts are oriented more to the work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] for FM service in the last week, nevertheless, the starlingx work-flow includes some parts which I have not digested completely yet. 
Don Penney has helped me by sending useful information related to this (thanks for that), on the interaction between the OpenStack/loci system and python/wheels. However, I still have some gaps. For example, as a starting point I have to create the docker image which will be consumed by the chart. There are some build scripts and notes that talk about this, but there are a pair of lines in the build-tools/README which make reference to an exposed service that is not specified [2,3]. For that reason I asked for a little more detailed information about the work-flow during our meeting today.

Thanks for your attention.

Best regards.
Mario.

[1] https://github.com/MarioCarrilloA/chart-playground
[2] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L4
[3] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L9

From: Miller, Frank [Frank.Miller at windriver.com]
Sent: Monday, January 28, 2019 11:19 AM
To: Arevalo, Mario Alfredo C
Cc: Saul Wold; starlingx-discuss at lists.starlingx.io
Subject: [Containers] Background info on helm charts

Mario:

On the containers community call this morning we took an action to identify information about helm charts. Irina Mihai identified four references that she used when working on the cinder helm chart overrides. See [1] to [4] below.

Also, as you work on creating a helm chart for the FM service, you should look at examples of existing helm charts for reference. Two good examples are:
* The cinder helm chart, which is available in the upstream openstack-helm project [5] and uses certain defaults. We have added StarlingX-specific overrides which are generated from code we added [6].
* Nova-api-proxy, which is a StarlingX-specific service and hence was created from scratch [7,8].

Frank

[1] https://docs.helm.sh/developing_charts/
[2] https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221
[3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/
[4] Helm chart install order: https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29
[5] Cinder helm chart in upstream openstack-helm: https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder
[6] StarlingX cinder overrides: see cinder.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm
[7] Helm chart for nova_api_proxy: https://git.openstack.org/cgit/openstack/stx-config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/nova-api-proxy
[8] nova_api_proxy override code: see nova_api_proxy.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From mario.alfredo.c.arevalo at intel.com  Thu Feb  7 21:46:19 2019
From: mario.alfredo.c.arevalo at intel.com (Arevalo, Mario Alfredo C)
Date: Thu, 7 Feb 2019 21:46:19 +0000
Subject: [Starlingx-discuss] [Containers] Background info on helm charts
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA435C02@ALA-MBD.corp.ad.wrs.com>
References: <6594B51DBE477C48AAE23675314E6C466456672F@fmsmsx107.amr.corp.intel.com> , <6703202FD9FDFF4A8DA9ACF104AE129FBA42F3BA@ALA-MBD.corp.ad.wrs.com>, <6594B51DBE477C48AAE23675314E6C4664568846@fmsmsx107.amr.corp.intel.com> <6594B51DBE477C48AAE23675314E6C466456A983@fmsmsx107.amr.corp.intel.com>,
<6594B51DBE477C48AAE23675314E6C466456A9E8@fmsmsx107.amr.corp.intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA435C02@ALA-MBD.corp.ad.wrs.com> Message-ID: <6594B51DBE477C48AAE23675314E6C466456AA6F@fmsmsx107.amr.corp.intel.com> Sounds good, thanks team!. Best Regards. Mario. > -----Original Message----- > From: Penney, Don [mailto:Don.Penney at windriver.com] > Sent: Thursday, February 7, 2019 3:26 PM > To: Bailey, Henry Albert (Al) ; Arevalo, Mario > Alfredo C ; Wold, Saul > ; Troyer, Dean > Cc: starlingx-discuss at lists.starlingx.io > Subject: RE: [Containers] Background info on helm charts > > The wheel includes the fm_core.so library, which is compiled from the python > source in fm-common. > > libfmcommon.so, however, is not python code, so would not be part of a wheel. > It would need to be installed via the DIST_PACKAGES entry, as Al describes. > > > -----Original Message----- > From: Bailey, Henry Albert (Al) > Sent: Thursday, February 07, 2019 4:13 PM > To: Arevalo, Mario Alfredo C; Penney, Don; Wold, Saul; Troyer, Dean > Cc: starlingx-discuss at lists.starlingx.io > Subject: RE: [Containers] Background info on helm charts > > Looks like the wheel is not complete. > > the regular RPM is the only place that installs that shared lib. > fm-common-1.0-8.tis.x86_64.rpm > > ie: > /usr/lib64/libfmcommon.so.1 > /usr/lib64/libfmcommon.so.1.0 > > > I think you need to add this line in your build file DIST_PACKAGES="fm- > common" > > Sort of this gnocchi does for rados > https://github.com/openstack/stx-upstream/blob/master/openstack/python- > gnocchi/centos/stx-gnocchi.pike_docker_image#L7 > > Al > > -----Original Message----- > From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] > Sent: Thursday, February 07, 2019 4:01 PM > To: Bailey, Henry Albert (Al); Penney, Don; Wold, Saul; Troyer, Dean > Cc: starlingx-discuss at lists.starlingx.io > Subject: RE: [Containers] Background info on helm charts > > Hi Al, > > Thanks for your answer. > > I have done a tests with the fm_core-1.0-cp27-cp27mu-linux_x86_64.whl > package that you mention, however it does not include libfmcommon.so library. > I downloaded the last wheels taball and I unzip all packages and I could not find > that library. > > Best regards. > Mario. 
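Putting Al's and Don's quoted advice together, the image build file ends up needing one extra line. Apart from DIST_PACKAGES, which is confirmed in the thread, the surrounding fields below are assumptions modeled on the stx-gnocchi example rather than a copy of any real file:

    BUILDER=loci                # assumed; mirrors the stx-gnocchi build file
    LABEL=stx-fm-rest-api       # hypothetical image label
    PROJECT=fault               # hypothetical project name
    DIST_PACKAGES="fm-common"   # installs the RPM that ships /usr/lib64/libfmcommon.so.1*

The reason the wheel alone is not enough, per Don's note, is that fm_core is a compiled C extension that links against libfmcommon at run time; the shared object itself is shipped only by the fm-common RPM, so it has to come in as a distro package.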
> ________________________________________ > From: Bailey, Henry Albert (Al) [Al.Bailey at windriver.com] > Sent: Thursday, February 07, 2019 12:37 PM > To: Arevalo, Mario Alfredo C; Penney, Don; Wold, Saul; Troyer, Dean > Cc: starlingx-discuss at lists.starlingx.io > Subject: RE: [Containers] Background info on helm charts > > For issue 1) > Other openstack components write the paste like this Here's the template for > writing the api-paste.ini > https://github.com/openstack/openstack- > helm/blob/master/heat/templates/configmap-etc.yaml#L139 > Here's example data being written to it: > https://github.com/openstack/openstack- > helm/blob/master/heat/values.yaml#L275 > > For issue 2) > You should be able to include the wheel for fm_core, which is defined here > https://github.com/openstack/stx-fault/blob/master/fm- > common/sources/setup.py > > it's in fm-common folder, but its called fm_core and it is included in the wheels > tarball > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_ > image_build/outputs/wheels/ > > fm_core-1.0-cp27-cp27mu-linux_x86_64.whl > > The wheel contains C code, so it is an architecture specific wheel > > Al > > > -----Original Message----- > From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] > Sent: Thursday, February 07, 2019 3:13 PM > To: Penney, Don; Wold, Saul; Bailey, Henry Albert (Al); Troyer, Dean > Cc: starlingx-discuss at lists.starlingx.io > Subject: RE: [Containers] Background info on helm charts > > Hi team, > > There is a pair of issues which needs to be solved in order to create a "fm rest > api" docker image that can work correctly. > > Issue 1: > ======= > > The fm-rest-api requires a configuration file called "api-paste.ini"[0]. It seems > that this file is created in the deployment process in this section by puppet: > https://git.starlingx.io/cgit/stx-config/tree/puppet-modules-wrs/puppet- > fm/src/fm/manifests/init.pp#n109 > > proposal: I noticed how devstack process deal with this, firstly the "api-paste.ini" > file was added manually inside devstack/files directory[1], then during the > devstack execution process, the plugin just makes a copy to the environment: > https://git.starlingx.io/cgit/stx-fault/tree/devstack/lib/stx-fault#n173 > > I wonder if it is the best approach to do something similar using the variable > "CUSTOMIZATION" of loci with something like this: "cat > /etc/fm/api-paste.ini > << EOF config lines EOF" or maybe cloning the stx-fault repository and copying > the available file in the devstack directory[0] to "/etc/fm". > > Issue 2: > ======= > > The "fm rest api" requires a shared object called "libfmcommon.so", this library > is created in fm-common project, however it is not available in any wheel > package. > > Proposal: I am not sure if something like this should be enough: > data_files=[('/usr/lib64', ['libfmcommon.so'])], inside the setup.py[2] > > Furthermore I tried to add it from the specfile: > https://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm- > common.spec#n54 > however I think this is not a good option. > > What do you think is the best approach to tackle these issues? > > Any comments are welcome. > > Best regards. > Mario. 
> > [0] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files/api-paste.ini > [1] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files > [2] https://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/setup.py > > > > ________________________________________ > From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] > Sent: Friday, February 01, 2019 7:54 PM > To: Penney, Don; Miller, Frank > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts > > Hi Folks, > > This a short update about this task (I will be in holiday for our next > containerization meeting). > During this week I have been exploring the scripts and tools involved in the > containerization building process. This activity and the wiki page shared by Don > Penny have allowed me to get a better understating of the work-flow. I have > sent a PR[1] with the required files to create an image for fm-rest-api service. At > this moment it is WIP due to I need to do testing. Possibly it will require more > dependencies which are not available in the wheels.cfg/tarball, I will continue > working on this. > During the building tools exploration I had some issues related to network due > to I am working behind a proxy, I sent a patch[2] to set it in the docker > build/run commands and avoid manually modification efforts. > > Any comments, feel free to contact me. > > [1] https://review.openstack.org/#/c/634540/ > [2] https://review.openstack.org/#/c/634542/ > > Best regards. > Mario. > > ________________________________________ > From: Penney, Don [Don.Penney at windriver.com] > Sent: Wednesday, January 30, 2019 7:24 AM > To: Arevalo, Mario Alfredo C; Miller, Frank > Cc: Saul Wold; starlingx-discuss at lists.starlingx.io > Subject: RE: [Containers] Background info on helm charts > > https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages > > > -----Original Message----- > From: Penney, Don > Sent: Tuesday, January 29, 2019 10:34 AM > To: 'Arevalo, Mario Alfredo C'; Miller, Frank > Cc: Saul Wold; starlingx-discuss at lists.starlingx.io > Subject: RE: [Containers] Background info on helm charts > > Hi Mario, > > I'm making "writing a wiki" one of my top priorities. I'm hoping to get started on > it today, and try to get something published in the next couple of days, barring > other issues coming along. > > Cheers, > Don. > > -----Original Message----- > From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] > Sent: Monday, January 28, 2019 4:15 PM > To: Miller, Frank > Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don > Subject: RE: [Containers] Background info on helm charts > > Hi Frank, > > Thank you for the information, however my doubts are oriented more to the > work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] > for FM service in the last week, nevertheless, the starlingx work-flow includes > some parts which I have not digested completely yet. Don Penny have helped > me sending useful information related to this (Thanks for that), the interaction > between OpenStack/loci system and python/wheels. However I have some > gaps, for example, as staring point I have to create the docker image which will > be consumed by the chart. There are some build scripts and notes that talks > about this, but precisely, there are a pair of lines in the build-tools/README > which makes reference to a exposed service that is not specified [2,3]. 
> For that reason I asked for a little more detailed information about the work- > flow during our meeting today. > > Thanks for your attention. > > Best regards. > Mario. > > [1] https://github.com/MarioCarrilloA/chart-playground > [2] https://github.com/openstack/stx-root/blob/master/build-tools/build- > docker-images/README#L4 > [3] https://github.com/openstack/stx-root/blob/master/build-tools/build- > docker-images/README#L9 > > > > > > > > From: Miller, Frank [Frank.Miller at windriver.com] > > Sent: Monday, January 28, 2019 11:19 AM > > To: Arevalo, Mario Alfredo C > > Cc: Saul Wold; starlingx-discuss at lists.starlingx.io > > Subject: [Containers] Background info on helm charts > > > > > > > > Mario: > > On the containers community call this morning we took an action to identify > information about helm charts. Irina Mihai identified four references that she > used when working on the cinder helm chart overrides. See [1] to [4] below. > > Also as you work on creating a helm chart for the FM service, you should look at > examples of existing helm charts for a reference. 2 good examples are: > * > Cinder helm chart which is available in the upstream openstack-helm project [5] > and uses certain defaults. We have added StarlingX specific overrides which are > generated from code we added [6]. > * > Nova-api-proxy which is a StarlingX specific service and hence was created from > scratch [7,8] > > Frank > > [1] > > https://docs.helm.sh/developing_charts/ > [2] https://medium.com/containerum/how-to-make-and-share-your-own- > helm-package-50ae40f6c221 > [3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/ > [4] Helm chart install order https://github.com/helm/helm/blob/release- > 2.10/pkg/tiller/kind_sorter.go#L29 > [5] Cinder helm chart in upstream openstack-helm: > > https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder > [6] StarlingX cinder overrides: See cinder.py in > > https://github.com/openstack/stx- > config/tree/master/sysinv/sysinv/sysinv/sysinv/helm > > [7] Helm chart for nova_api_proxy: > > https://git.openstack.org/cgit/openstack/stx- > config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx- > openstack-helm/nova-api-proxy > > [8] nova_api_proxy override code: See nova_api_proxy.py in > https://github.com/openstack/stx- > config/tree/master/sysinv/sysinv/sysinv/sysinv/helm > > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Thu Feb 7 21:52:01 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 7 Feb 2019 21:52:01 +0000 Subject: [Starlingx-discuss] Docs mega spec up for review Message-ID: <9A85D2917C58154C960D95352B22818BBFD1D64E@fmsmsx123.amr.corp.intel.com> The Docs team has been working on a spec for an overhaul of our formal project documentation, and the spec has been posted for review. You can find it at https://review.openstack.org/#/c/635641. Feedback and comments graciously accepted! brucej -------------- next part -------------- An HTML attachment was scrubbed... 
From Tao.Liu at windriver.com  Thu Feb  7 22:20:03 2019
From: Tao.Liu at windriver.com (Liu, Tao)
Date: Thu, 7 Feb 2019 22:20:03 +0000
Subject: [Starlingx-discuss] [Containers] Background info on helm
Message-ID: <7242A3DC72E453498E3D783BBB134C3E9DDB70F7@ALA-MBD.corp.ad.wrs.com>

Hi Mario,

With regard to issue #2: the FM rest api (located under stx/stx-fault/fm-rest-api) does not use a shared object called “libfmcommon.so”, unless you are referring to the FM application API (located under /stx/stx-fault/fm-api); correct me if I am wrong. The application API is provided as a python package, and it is used by the platform applications to raise/clear alarms or generate logs. The rest API provides client interfaces and an API server for communicating with other applications (both internal and external) that request to review alarms/logs or to suppress alarms (from viewing).

In the pre-k8s world, VIM uses the application API to raise/clear alarms and generate logs; this api is imported as a python package. For orchestration, VIM uses the rest api to monitor system alarms/logs.

With regard to issue #1: you are referring to the FM rest api paste config file. I assume that you are containerizing the FM rest api, since the application API is not a message-based interface.

Regards,
Tao

-----------------------------------------------------------------------------------------------------------------------------------------------------

Message: 3
Date: Thu, 7 Feb 2019 20:12:46 +0000
From: "Arevalo, Mario Alfredo C"
To: "Penney, Don", "Wold, Saul", "al.bailey at windriver.com", "Troyer, Dean"
Cc: "starlingx-discuss at lists.starlingx.io"
Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts
Message-ID: <6594B51DBE477C48AAE23675314E6C466456A983 at fmsmsx107.amr.corp.intel.com>
Content-Type: text/plain; charset="Windows-1252"

Hi team,

There is a pair of issues which need to be solved in order to create an “fm rest api” docker image that works correctly.

Issue 1:
=======

The fm-rest-api service requires a configuration file called "api-paste.ini"[0]. It seems that this file is created during the deployment process, in this section, by puppet:
https://git.starlingx.io/cgit/stx-config/tree/puppet-modules-wrs/puppet-fm/src/fm/manifests/init.pp#n109

Proposal: I looked at how the devstack process deals with this. First, the “api-paste.ini” file was added manually to the devstack/files directory[1]; then, during devstack execution, the plugin simply copies it into the environment:
https://git.starlingx.io/cgit/stx-fault/tree/devstack/lib/stx-fault#n173

I wonder whether the best approach is to do something similar using the loci “CUSTOMIZATION” variable, with something like “cat > /etc/fm/api-paste.ini << EOF config lines EOF”, or maybe to clone the stx-fault repository and copy the file available in the devstack directory[0] to “/etc/fm”.

Issue 2:
=======

The “fm rest api” requires a shared object called “libfmcommon.so”. This library is built in the fm-common project, but it is not available in any wheel package.

Proposal: I am not sure whether something like this would be enough inside setup.py[2]:

data_files=[('/usr/lib64', ['libfmcommon.so'])],

Furthermore, I tried to add it from the specfile:
https://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm-common.spec#n54
however, I think this is not a good option.

What do you think is the best approach to tackle these issues?

Any comments are welcome.

Best regards.
Mario.
[0] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files/api-paste.ini [1] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files [2] https://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/setup.py ________________________________________ From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Sent: Friday, February 01, 2019 7:54 PM To: Penney, Don; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Hi Folks, This a short update about this task (I will be in holiday for our next containerization meeting). During this week I have been exploring the scripts and tools involved in the containerization building process. This activity and the wiki page shared by Don Penny have allowed me to get a better understating of the work-flow. I have sent a PR[1] with the required files to create an image for fm-rest-api service. At this moment it is WIP due to I need to do testing. Possibly it will require more dependencies which are not available in the wheels.cfg/tarball, I will continue working on this. During the building tools exploration I had some issues related to network due to I am working behind a proxy, I sent a patch[2] to set it in the docker build/run commands and avoid manually modification efforts. Any comments, feel free to contact me. [1] https://review.openstack.org/#/c/634540/ [2] https://review.openstack.org/#/c/634542/ Best regards. Mario. ________________________________________ From: Penney, Don [Don.Penney at windriver.com] Sent: Wednesday, January 30, 2019 7:24 AM To: Arevalo, Mario Alfredo C; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages -----Original Message----- From: Penney, Don Sent: Tuesday, January 29, 2019 10:34 AM To: 'Arevalo, Mario Alfredo C'; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Mario, I'm making "writing a wiki" one of my top priorities. I'm hoping to get started on it today, and try to get something published in the next couple of days, barring other issues coming along. Cheers, Don. -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Monday, January 28, 2019 4:15 PM To: Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don Subject: RE: [Containers] Background info on helm charts Hi Frank, Thank you for the information, however my doubts are oriented more to the work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] for FM service in the last week, nevertheless, the starlingx work-flow includes some parts which I have not digested completely yet. Don Penny have helped me sending useful information related to this (Thanks for that), the interaction between OpenStack/loci system and python/wheels. However I have some gaps, for example, as staring point I have to create the docker image which will be consumed by the chart. There are some build scripts and notes that talks about this, but precisely, there are a pair of lines in the build-tools/README which makes reference to a exposed service that is not specified [2,3]. For that reason I asked for a little more detailed information about the work-flow during our meeting today. Thanks for your attention. Best regards. Mario. 
[1] https://github.com/MarioCarrilloA/chart-playground [2] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L4 [3] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L9 From: Miller, Frank [Frank.Miller at windriver.com] Sent: Monday, January 28, 2019 11:19 AM To: Arevalo, Mario Alfredo C Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: [Containers] Background info on helm charts Mario: On the containers community call this morning we took an action to identify information about helm charts. Irina Mihai identified four references that she used when working on the cinder helm chart overrides. See [1] to [4] below. Also as you work on creating a helm chart for the FM service, you should look at examples of existing helm charts for a reference. 2 good examples are: * Cinder helm chart which is available in the upstream openstack-helm project [5] and uses certain defaults. We have added StarlingX specific overrides which are generated from code we added [6]. * Nova-api-proxy which is a StarlingX specific service and hence was created from scratch [7,8] Frank [1] https://docs.helm.sh/developing_charts/ [2] https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221 [3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/ [4] Helm chart install order https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29 [5] Cinder helm chart in upstream openstack-helm: https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder [6] StarlingX cinder overrides: See cinder.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm [7] Helm chart for nova_api_proxy: https://git.openstack.org/cgit/openstack/stx-config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/nova-api-proxy [8] nova_api_proxy override code: See nova_api_proxy.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss ------------------------------ Message: 4 Date: Thu, 7 Feb 2019 20:37:59 +0000 From: "Bailey, Henry Albert (Al)" To: "Arevalo, Mario Alfredo C" , "Penney, Don" , "Wold, Saul" , "Troyer, Dean" Cc: "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Message-ID: Content-Type: text/plain; charset="us-ascii" For issue 1) Other openstack components write the paste like this Here's the template for writing the api-paste.ini https://github.com/openstack/openstack-helm/blob/master/heat/templates/configmap-etc.yaml#L139 Here's example data being written to it: https://github.com/openstack/openstack-helm/blob/master/heat/values.yaml#L275 For issue 2) You should be able to include the wheel for fm_core, which is defined here https://github.com/openstack/stx-fault/blob/master/fm-common/sources/setup.py it's in fm-common folder, but its called fm_core and it is included in the wheels tarball http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels/ fm_core-1.0-cp27-cp27mu-linux_x86_64.whl The wheel contains C code, so it is an architecture specific wheel Al -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Thursday, February 
07, 2019 3:13 PM To: Penney, Don; Wold, Saul; Bailey, Henry Albert (Al); Troyer, Dean Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi team, There is a pair of issues which needs to be solved in order to create a "fm rest api" docker image that can work correctly. Issue 1: ======= The fm-rest-api requires a configuration file called "api-paste.ini"[0]. It seems that this file is created in the deployment process in this section by puppet: https://git.starlingx.io/cgit/stx-config/tree/puppet-modules-wrs/puppet-fm/src/fm/manifests/init.pp#n109 proposal: I noticed how devstack process deal with this, firstly the "api-paste.ini" file was added manually inside devstack/files directory[1], then during the devstack execution process, the plugin just makes a copy to the environment: https://git.starlingx.io/cgit/stx-fault/tree/devstack/lib/stx-fault#n173 I wonder if it is the best approach to do something similar using the variable "CUSTOMIZATION" of loci with something like this: "cat > /etc/fm/api-paste.ini << EOF config lines EOF" or maybe cloning the stx-fault repository and copying the available file in the devstack directory[0] to "/etc/fm". Issue 2: ======= The "fm rest api" requires a shared object called "libfmcommon.so", this library is created in fm-common project, however it is not available in any wheel package. Proposal: I am not sure if something like this should be enough: data_files=[('/usr/lib64', ['libfmcommon.so'])], inside the setup.py[2] Furthermore I tried to add it from the specfile: https://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm-common.spec#n54 however I think this is not a good option. What do you think is the best approach to tackle these issues? Any comments are welcome. Best regards. Mario. [0] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files/api-paste.ini [1] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files [2] https://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/setup.py ________________________________________ From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Sent: Friday, February 01, 2019 7:54 PM To: Penney, Don; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Hi Folks, This a short update about this task (I will be in holiday for our next containerization meeting). During this week I have been exploring the scripts and tools involved in the containerization building process. This activity and the wiki page shared by Don Penny have allowed me to get a better understating of the work-flow. I have sent a PR[1] with the required files to create an image for fm-rest-api service. At this moment it is WIP due to I need to do testing. Possibly it will require more dependencies which are not available in the wheels.cfg/tarball, I will continue working on this. During the building tools exploration I had some issues related to network due to I am working behind a proxy, I sent a patch[2] to set it in the docker build/run commands and avoid manually modification efforts. Any comments, feel free to contact me. [1] https://review.openstack.org/#/c/634540/ [2] https://review.openstack.org/#/c/634542/ Best regards. Mario. 
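Al's issue-1 pointers sketch out roughly as follows. This is a loose sketch modeled on the heat chart, assuming helm-toolkit's to_ini helper renders the paste sections the way the heat configmap template does; the section names and resource names here are placeholders, not the real FM pipeline:

    # values.yaml (sketch)
    conf:
      paste:
        pipeline:main:
          pipeline: request_id authtoken api_v1   # placeholder, not the real FM pipeline

    # templates/configmap-etc.yaml (sketch)
    apiVersion: v1
    kind: ConfigMap
    metadata:
      name: fm-etc
    data:
      api-paste.ini: |
    {{ include "helm-toolkit.utils.to_ini" .Values.conf.paste | indent 4 }}

Keeping the paste contents in values.yaml means the image stays generic; the chart, or a StarlingX override, owns the configuration instead of puppet or the image build.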
________________________________________ From: Penney, Don [Don.Penney at windriver.com] Sent: Wednesday, January 30, 2019 7:24 AM To: Arevalo, Mario Alfredo C; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages -----Original Message----- From: Penney, Don Sent: Tuesday, January 29, 2019 10:34 AM To: 'Arevalo, Mario Alfredo C'; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Mario, I'm making "writing a wiki" one of my top priorities. I'm hoping to get started on it today, and try to get something published in the next couple of days, barring other issues coming along. Cheers, Don. -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Monday, January 28, 2019 4:15 PM To: Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don Subject: RE: [Containers] Background info on helm charts Hi Frank, Thank you for the information, however my doubts are oriented more to the work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] for FM service in the last week, nevertheless, the starlingx work-flow includes some parts which I have not digested completely yet. Don Penny have helped me sending useful information related to this (Thanks for that), the interaction between OpenStack/loci system and python/wheels. However I have some gaps, for example, as staring point I have to create the docker image which will be consumed by the chart. There are some build scripts and notes that talks about this, but precisely, there are a pair of lines in the build-tools/README which makes reference to a exposed service that is not specified [2,3]. For that reason I asked for a little more detailed information about the work-flow during our meeting today. Thanks for your attention. Best regards. Mario. [1] https://github.com/MarioCarrilloA/chart-playground [2] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L4 [3] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L9 From: Miller, Frank [Frank.Miller at windriver.com] Sent: Monday, January 28, 2019 11:19 AM To: Arevalo, Mario Alfredo C Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: [Containers] Background info on helm charts Mario: On the containers community call this morning we took an action to identify information about helm charts. Irina Mihai identified four references that she used when working on the cinder helm chart overrides. See [1] to [4] below. Also as you work on creating a helm chart for the FM service, you should look at examples of existing helm charts for a reference. 2 good examples are: * Cinder helm chart which is available in the upstream openstack-helm project [5] and uses certain defaults. We have added StarlingX specific overrides which are generated from code we added [6]. 
* Nova-api-proxy which is a StarlingX specific service and hence was created from scratch [7,8] Frank [1] https://docs.helm.sh/developing_charts/ [2] https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221 [3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/ [4] Helm chart install order https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29 [5] Cinder helm chart in upstream openstack-helm: https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder [6] StarlingX cinder overrides: See cinder.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm [7] Helm chart for nova_api_proxy: https://git.openstack.org/cgit/openstack/stx-config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/nova-api-proxy [8] nova_api_proxy override code: See nova_api_proxy.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss ------------------------------ Subject: Digest Footer _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss ------------------------------ End of Starlingx-discuss Digest, Vol 9, Issue 34 ************************************************ From mario.alfredo.c.arevalo at intel.com Thu Feb 7 22:54:54 2019 From: mario.alfredo.c.arevalo at intel.com (Arevalo, Mario Alfredo C) Date: Thu, 7 Feb 2019 22:54:54 +0000 Subject: [Starlingx-discuss] [Containers] Background info on helm In-Reply-To: <7242A3DC72E453498E3D783BBB134C3E9DDB70F7@ALA-MBD.corp.ad.wrs.com> References: <7242A3DC72E453498E3D783BBB134C3E9DDB70F7@ALA-MBD.corp.ad.wrs.com> Message-ID: <6594B51DBE477C48AAE23675314E6C466456AAE3@fmsmsx107.amr.corp.intel.com> Hi Tao, That is correct, I was making reference to "/stx/stx-fault/fm-api", and yeah I am containerizing FM rest api, thanks for your answer. Best regards. Mario. ________________________________________ From: Liu, Tao [Tao.Liu at windriver.com] Sent: Thursday, February 07, 2019 2:20 PM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Background info on helm Hi Mario, With regards to issues # 2, FM rest api (located under stx/stx-fault/fm-rest-api) does not use a shared object called “libfmcommon.so”. Unless you are referring to FM application API (located under /stx/stx-fault/fm-api), correct me if I am wrong. The application API is provided as a python package and it is used by the platform applications to raise/clear alarms, or generate logs. The rest API provides client interfaces, and a API server for communicating with other applications(both internal and external) that request to review alarms/logs, or suppress alarms (from viewing). In the pre-k8s world, VIM uses the application API to raise/clear alarms, and generates logs; this api is imported as a python package. For orchestration, VIM uses the rest api to monitor system alarms/logs. With regards to the issue #1, you are referring to FM rest api paste config file. I assume that you are containerizing FM rest api, since the application API is not a message-based interface . 
Regards, Tao ----------------------------------------------------------------------------------------------------------------------------------------------------- Message: 3 Date: Thu, 7 Feb 2019 20:12:46 +0000 From: "Arevalo, Mario Alfredo C" To: "Penney, Don" , "Wold, Saul" , "al.bailey at windriver.com" , "Troyer, Dean" Cc: "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Message-ID: <6594B51DBE477C48AAE23675314E6C466456A983 at fmsmsx107.amr.corp.intel.com> Content-Type: text/plain; charset="Windows-1252" Hi team, There is a pair of issues which needs to be solved in order to create a “fm rest api” docker image that can work correctly. Issue 1: ======= The fm-rest-api requires a configuration file called "api-paste.ini"[0]. It seems that this file is created in the deployment process in this section by puppet: https://git.starlingx.io/cgit/stx-config/tree/puppet-modules-wrs/puppet-fm/src/fm/manifests/init.pp#n109 proposal: I noticed how devstack process deal with this, firstly the “api-paste.ini” file was added manually inside devstack/files directory[1], then during the devstack execution process, the plugin just makes a copy to the environment: https://git.starlingx.io/cgit/stx-fault/tree/devstack/lib/stx-fault#n173 I wonder if it is the best approach to do something similar using the variable “CUSTOMIZATION” of loci with something like this: “cat > /etc/fm/api-paste.ini << EOF config lines EOF” or maybe cloning the stx-fault repository and copying the available file in the devstack directory[0] to “/etc/fm”. Issue 2: ======= The “fm rest api” requires a shared object called “libfmcommon.so”, this library is created in fm-common project, however it is not available in any wheel package. Proposal: I am not sure if something like this should be enough: data_files=[('/usr/lib64', ['libfmcommon.so'])], inside the setup.py[2] Furthermore I tried to add it from the specfile: https://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm-common.spec#n54 however­ I think this is not a good option. What do you think is the best approach to tackle these issues? Any comments are welcome. Best regards. Mario. [0] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files/api-paste.ini [1] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files [2] https://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/setup.py ________________________________________ From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Sent: Friday, February 01, 2019 7:54 PM To: Penney, Don; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Hi Folks, This a short update about this task (I will be in holiday for our next containerization meeting). During this week I have been exploring the scripts and tools involved in the containerization building process. This activity and the wiki page shared by Don Penny have allowed me to get a better understating of the work-flow. I have sent a PR[1] with the required files to create an image for fm-rest-api service. At this moment it is WIP due to I need to do testing. Possibly it will require more dependencies which are not available in the wheels.cfg/tarball, I will continue working on this. During the building tools exploration I had some issues related to network due to I am working behind a proxy, I sent a patch[2] to set it in the docker build/run commands and avoid manually modification efforts. 
Any comments, feel free to contact me. [1] https://review.openstack.org/#/c/634540/ [2] https://review.openstack.org/#/c/634542/ Best regards. Mario. ________________________________________ From: Penney, Don [Don.Penney at windriver.com] Sent: Wednesday, January 30, 2019 7:24 AM To: Arevalo, Mario Alfredo C; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages -----Original Message----- From: Penney, Don Sent: Tuesday, January 29, 2019 10:34 AM To: 'Arevalo, Mario Alfredo C'; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Mario, I'm making "writing a wiki" one of my top priorities. I'm hoping to get started on it today, and try to get something published in the next couple of days, barring other issues coming along. Cheers, Don. -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Monday, January 28, 2019 4:15 PM To: Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don Subject: RE: [Containers] Background info on helm charts Hi Frank, Thank you for the information, however my doubts are oriented more to the work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] for FM service in the last week, nevertheless, the starlingx work-flow includes some parts which I have not digested completely yet. Don Penny have helped me sending useful information related to this (Thanks for that), the interaction between OpenStack/loci system and python/wheels. However I have some gaps, for example, as staring point I have to create the docker image which will be consumed by the chart. There are some build scripts and notes that talks about this, but precisely, there are a pair of lines in the build-tools/README which makes reference to a exposed service that is not specified [2,3]. For that reason I asked for a little more detailed information about the work-flow during our meeting today. Thanks for your attention. Best regards. Mario. [1] https://github.com/MarioCarrilloA/chart-playground [2] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L4 [3] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L9 From: Miller, Frank [Frank.Miller at windriver.com] Sent: Monday, January 28, 2019 11:19 AM To: Arevalo, Mario Alfredo C Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: [Containers] Background info on helm charts Mario: On the containers community call this morning we took an action to identify information about helm charts. Irina Mihai identified four references that she used when working on the cinder helm chart overrides. See [1] to [4] below. Also as you work on creating a helm chart for the FM service, you should look at examples of existing helm charts for a reference. 2 good examples are: * Cinder helm chart which is available in the upstream openstack-helm project [5] and uses certain defaults. We have added StarlingX specific overrides which are generated from code we added [6]. 
* Nova-api-proxy which is a StarlingX specific service and hence was created from scratch [7,8] Frank [1] https://docs.helm.sh/developing_charts/ [2] https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221 [3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/ [4] Helm chart install order https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29 [5] Cinder helm chart in upstream openstack-helm: https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder [6] StarlingX cinder overrides: See cinder.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm [7] Helm chart for nova_api_proxy: https://git.openstack.org/cgit/openstack/stx-config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/nova-api-proxy [8] nova_api_proxy override code: See nova_api_proxy.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss ------------------------------ Message: 4 Date: Thu, 7 Feb 2019 20:37:59 +0000 From: "Bailey, Henry Albert (Al)" To: "Arevalo, Mario Alfredo C" , "Penney, Don" , "Wold, Saul" , "Troyer, Dean" Cc: "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Message-ID: Content-Type: text/plain; charset="us-ascii" For issue 1) Other openstack components write the paste like this Here's the template for writing the api-paste.ini https://github.com/openstack/openstack-helm/blob/master/heat/templates/configmap-etc.yaml#L139 Here's example data being written to it: https://github.com/openstack/openstack-helm/blob/master/heat/values.yaml#L275 For issue 2) You should be able to include the wheel for fm_core, which is defined here https://github.com/openstack/stx-fault/blob/master/fm-common/sources/setup.py it's in fm-common folder, but its called fm_core and it is included in the wheels tarball http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/wheels/ fm_core-1.0-cp27-cp27mu-linux_x86_64.whl The wheel contains C code, so it is an architecture specific wheel Al -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Thursday, February 07, 2019 3:13 PM To: Penney, Don; Wold, Saul; Bailey, Henry Albert (Al); Troyer, Dean Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi team, There is a pair of issues which needs to be solved in order to create a "fm rest api" docker image that can work correctly. Issue 1: ======= The fm-rest-api requires a configuration file called "api-paste.ini"[0]. 
It seems that this file is created in the deployment process in this section by puppet: https://git.starlingx.io/cgit/stx-config/tree/puppet-modules-wrs/puppet-fm/src/fm/manifests/init.pp#n109 proposal: I noticed how devstack process deal with this, firstly the "api-paste.ini" file was added manually inside devstack/files directory[1], then during the devstack execution process, the plugin just makes a copy to the environment: https://git.starlingx.io/cgit/stx-fault/tree/devstack/lib/stx-fault#n173 I wonder if it is the best approach to do something similar using the variable "CUSTOMIZATION" of loci with something like this: "cat > /etc/fm/api-paste.ini << EOF config lines EOF" or maybe cloning the stx-fault repository and copying the available file in the devstack directory[0] to "/etc/fm". Issue 2: ======= The "fm rest api" requires a shared object called "libfmcommon.so", this library is created in fm-common project, however it is not available in any wheel package. Proposal: I am not sure if something like this should be enough: data_files=[('/usr/lib64', ['libfmcommon.so'])], inside the setup.py[2] Furthermore I tried to add it from the specfile: https://git.starlingx.io/cgit/stx-fault/tree/fm-common/centos/fm-common.spec#n54 however I think this is not a good option. What do you think is the best approach to tackle these issues? Any comments are welcome. Best regards. Mario. [0] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files/api-paste.ini [1] https://git.starlingx.io/cgit/stx-fault/tree/devstack/files [2] https://git.starlingx.io/cgit/stx-fault/tree/fm-common/sources/setup.py ________________________________________ From: Arevalo, Mario Alfredo C [mario.alfredo.c.arevalo at intel.com] Sent: Friday, February 01, 2019 7:54 PM To: Penney, Don; Miller, Frank Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Background info on helm charts Hi Folks, This a short update about this task (I will be in holiday for our next containerization meeting). During this week I have been exploring the scripts and tools involved in the containerization building process. This activity and the wiki page shared by Don Penny have allowed me to get a better understating of the work-flow. I have sent a PR[1] with the required files to create an image for fm-rest-api service. At this moment it is WIP due to I need to do testing. Possibly it will require more dependencies which are not available in the wheels.cfg/tarball, I will continue working on this. During the building tools exploration I had some issues related to network due to I am working behind a proxy, I sent a patch[2] to set it in the docker build/run commands and avoid manually modification efforts. Any comments, feel free to contact me. [1] https://review.openstack.org/#/c/634540/ [2] https://review.openstack.org/#/c/634542/ Best regards. Mario. ________________________________________ From: Penney, Don [Don.Penney at windriver.com] Sent: Wednesday, January 30, 2019 7:24 AM To: Arevalo, Mario Alfredo C; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts https://wiki.openstack.org/wiki/StarlingX/Containers/BuildingImages -----Original Message----- From: Penney, Don Sent: Tuesday, January 29, 2019 10:34 AM To: 'Arevalo, Mario Alfredo C'; Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: RE: [Containers] Background info on helm charts Hi Mario, I'm making "writing a wiki" one of my top priorities. 
I'm hoping to get started on it today, and try to get something published in the next couple of days, barring other issues coming along. Cheers, Don. -----Original Message----- From: Arevalo, Mario Alfredo C [mailto:mario.alfredo.c.arevalo at intel.com] Sent: Monday, January 28, 2019 4:15 PM To: Miller, Frank Cc: Saul Wold; starlingx-discuss at lists.starlingx.io; Penney, Don Subject: RE: [Containers] Background info on helm charts Hi Frank, Thank you for the information, however my doubts are oriented more to the work-flow than the Dockerfile/helm/chart/ tools. Even I created a dockerfile[1] for FM service in the last week, nevertheless, the starlingx work-flow includes some parts which I have not digested completely yet. Don Penny have helped me sending useful information related to this (Thanks for that), the interaction between OpenStack/loci system and python/wheels. However I have some gaps, for example, as staring point I have to create the docker image which will be consumed by the chart. There are some build scripts and notes that talks about this, but precisely, there are a pair of lines in the build-tools/README which makes reference to a exposed service that is not specified [2,3]. For that reason I asked for a little more detailed information about the work-flow during our meeting today. Thanks for your attention. Best regards. Mario. [1] https://github.com/MarioCarrilloA/chart-playground [2] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L4 [3] https://github.com/openstack/stx-root/blob/master/build-tools/build-docker-images/README#L9 From: Miller, Frank [Frank.Miller at windriver.com] Sent: Monday, January 28, 2019 11:19 AM To: Arevalo, Mario Alfredo C Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: [Containers] Background info on helm charts Mario: On the containers community call this morning we took an action to identify information about helm charts. Irina Mihai identified four references that she used when working on the cinder helm chart overrides. See [1] to [4] below. Also as you work on creating a helm chart for the FM service, you should look at examples of existing helm charts for a reference. 2 good examples are: * Cinder helm chart which is available in the upstream openstack-helm project [5] and uses certain defaults. We have added StarlingX specific overrides which are generated from code we added [6]. 
* Nova-api-proxy, which is a StarlingX-specific service and hence was created from scratch [7,8].

Frank

[1] https://docs.helm.sh/developing_charts/
[2] https://medium.com/containerum/how-to-make-and-share-your-own-helm-package-50ae40f6c221
[3] https://docs.bitnami.com/kubernetes/how-to/create-your-first-helm-chart/
[4] Helm chart install order: https://github.com/helm/helm/blob/release-2.10/pkg/tiller/kind_sorter.go#L29
[5] Cinder helm chart in upstream openstack-helm: https://git.openstack.org/cgit/openstack/openstack-helm/tree/cinder
[6] StarlingX cinder overrides: see cinder.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm
[7] Helm chart for nova_api_proxy: https://git.openstack.org/cgit/openstack/stx-config/tree/kubernetes/applications/stx-openstack/stx-openstack-helm/stx-openstack-helm/nova-api-proxy
[8] nova_api_proxy override code: see nova_api_proxy.py in https://github.com/openstack/stx-config/tree/master/sysinv/sysinv/sysinv/sysinv/helm

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

------------------------------

Subject: Digest Footer

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

------------------------------

End of Starlingx-discuss Digest, Vol 9, Issue 34
************************************************

From yang.liu at windriver.com  Fri Feb  8 18:08:32 2019
From: yang.liu at windriver.com (Liu, Yang)
Date: Fri, 8 Feb 2019 18:08:32 +0000
Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked
Message-ID: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com>

Hi folks,

Here's an update on CentOS7.6 testing. We are currently blocked because pxeboot from controller-0 does not work for EFI (#1814360). We will continue after that issue is resolved.

System: Dedicated storage
  NICs (mgmt; infra; data): X540-AT2; X540-AT2; Fortville
  Special configs: IPv6
  Test coverage after install and config: sanity, nova
  Status/issues: Completed. New issues logged:
    #1814336 CentOS7.6: Unable to launch vm directly from virsh
    #1814335 CentOS7.6: Unable to launch vm with UEFI boot

System: One node system
  NICs (mgmt; infra; data): none; none; X522/X577-AT
  Special configs: none
  Test coverage after install and config: sanity, basic regression
  Status/issues: Completed. Passed.

System: Two node system
  NICs (mgmt; infra; data): Fortville; Fortville; Fortville
  Special configs: tboot, TPM, https, extended security profile
  Test coverage after install and config: sanity, security
  Status/issues: Blocked by #1814360

System: Multi-node system
  NICs (mgmt; infra; data): BCM5720; Niantic; Niantic
  Special configs: SR-IOV (Niantic), PCI passthrough (Niantic)
  Test coverage after install and config: sanity, networking
  Status/issues: Completed. Passed.

System: Two node system
  NICs (mgmt; infra; data): Fortville; none; Fortville
  Special configs: low latency, UEFI
  Test coverage after install and config: sanity, basic regression, cyclictest
  Status/issues: Blocked by #1814360

System: Two node system
  NICs (mgmt; infra; data): Fortville; none; Fortville
  Special configs: secure boot
  Test coverage after install and config: sanity, security
  Status/issues: Blocked by #1814360

System: Multi-node system
  NICs (mgmt; infra; data): I350; Niantic/CX3; CX3
  Special configs: pxeboot script
  Test coverage after install and config: sanity
  Status/issues: Completed. Passed. Only compute-0 was used, since compute-1 has a CX3 data NIC.

System: ??
  NICs: CX4 on infra or mgmt, but NOT data
  Status/issues: Won't test. We don't have a system with the required NICs.

BR,
yang
From juan.carlos.alonso at intel.com  Fri Feb  8 22:31:54 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Fri, 8 Feb 2019 22:31:54 +0000
Subject: [Starlingx-discuss] Sanity Test - ISO 20190208
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C8FA1D@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for the latest CENGN ISO: bootimage.iso from 2019-Feb-08 (link)

Sanity Test executed in a Bare Metal Environment

Status: GREEN

Simplex
  Setup          Manual  [PASS]
  Provisioning   01 TCs  [PASS]
  Sanity         42 TCs  [PASS]
  TOTAL: [ 43 TCs PASS ]

===========================================

Sanity Test executed in a Virtual Environment

Status: GREEN

Simplex
  Setup          04 TCs  [PASS]
  Provisioning   01 TCs  [PASS]
  Sanity         42 TCs  [PASS]
  TOTAL: [ 47 TCs PASS ]

Duplex
  Setup          04 TCs  [PASS]
  Provisioning   01 TCs  [PASS]
  Sanity         45 TCs  [PASS]
  TOTAL: [ 50 TCs PASS ]

Multinode Controller Storage
  Setup          04 TCs  [PASS]
  Provisioning   01 TCs  [PASS]
  Sanity         22 TCs  [PASS]
  TOTAL: [ 50 TCs PASS ]

Multinode Dedicated Storage
  Setup          04 TCs  [PASS]
  Provisioning   01 TCs  [PASS]
  Sanity         45 TCs  [PASS]
  TOTAL: [ 50 TCs PASS ]

------------------------------------------------------------------

Regards.
Juan Carlos Alonso

From erich.cordoba.malibran at intel.com  Sat Feb  9 00:38:09 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Sat, 9 Feb 2019 00:38:09 +0000
Subject: [Starlingx-discuss] Reverting commit due to load breakage
In-Reply-To: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA62A97@ALA-MBD.corp.ad.wrs.com>
References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA62A97@ALA-MBD.corp.ad.wrs.com>
Message-ID:

Sorry about this; there was a bug in the installation path in the modified specfiles. I sent the new review with this change, and config_controller ran without issues: https://review.openstack.org/#/c/635990/1

-Erich

On Thu, 2019-02-07 at 13:30 +0000, Wensley, Barton wrote:
> The following commit appears to have broken installations and is
> being reverted:
> https://git.openstack.org/cgit/openstack/stx-config/commit/?id=24fd045f6883dc060d6e09b5444080cae5196847
>
> The failure happens during config_controller at step 02:
> Failed at Step 02 . . . Failed to execute bootstrap manifest.
>
> The following error is seen in /var/log/puppet/latest/puppet.log:
> 2019-02-07T00:03:07.915 Debug: 2019-02-07 00:03:06 +0000 Executing:
> '/usr/bin/openstack complete'
> 2019-02-07T00:03:08.827 Debug: 2019-02-07 00:03:08 +0000 importing
> '/usr/share/puppet/modules/platform/manifests/sysinv.pp' in
> environment production
> 2019-02-07T00:03:08.849 Debug: 2019-02-07 00:03:08 +0000
> Automatically imported platform::sysinv::bootstrap from
> platform/sysinv into production
> 2019-02-07T00:03:08.851 Error: 2019-02-07 00:03:08 +0000 Evaluation
> Error: Error while evaluating a Function Call, Could not find class
> ::sysinv::db::postgresql for localhost at
> /usr/share/puppet/modules/platform/manifests/sysinv.pp:160:3 on node
> localhost
>
> Due to the build failure last night, this commit does not yet appear
> in a public build.
> > Bart Wensley, Member of Technical Staff, Wind River > direct 613.963.1385 > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From cindy.xie at intel.com Sat Feb 9 01:27:32 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Sat, 9 Feb 2019 01:27:32 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> Hi, Yang, Thanks for the report. Are the "two node system" below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. Th.x - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked due to pxeboot from controller-0 does not work for EFI. (#1814360) We will continue after that issue is resolved. System NICs Mgmt;infra;data Special Configs Test coverage after Install and Config Status/Issues Dedicated storage X540-AT2; X540-AT2; fortville IPv6 Sanity, nova Completed. New issues logged. #1814336 CentOS7.6: Unable to launch vm directly from virsh #1814335 CentOS7.6: Unable to launch vm with UEFI boot One node system none; none; X522/X577-AT Sanity, basic regression Completed. Passed. Two node system fortville; fortville; fortville tboot, tpm, https, extended security profile Sanity, security Blocked by #1814360 Multi-node system BCM5720; Niantic; Niantic Sriov(niantic),pcipt(niantic) Sanity, networking Completed. Passed. Two node system Fortville; none; Fortville Low latency, UEFI Sanity, basic regression, cyclictest Blocked by #1814360 Two node system Fortville; none; Fortville Secure boot Sanity, security Blocked by #1814360 Multi-node system I350; Niantic/cx3; cx3 Pxeboot script Sanity Completed. Passed. Only compute-0 was used, since compute-1 has CX3 data nic. ?? CX4 on infra or mgmt, but NOT data Won't test. We don't have a system have required nics. BR, yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From yang.liu at windriver.com Sat Feb 9 02:18:00 2019 From: yang.liu at windriver.com (Liu, Yang) Date: Sat, 9 Feb 2019 02:18:00 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> Message-ID: <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the "two node system" below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. 
Th.x - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked due to pxeboot from controller-0 does not work for EFI. (#1814360) We will continue after that issue is resolved. System NICs Mgmt;infra;data Special Configs Test coverage after Install and Config Status/Issues Dedicated storage X540-AT2; X540-AT2; fortville IPv6 Sanity, nova Completed. New issues logged. #1814336 CentOS7.6: Unable to launch vm directly from virsh #1814335 CentOS7.6: Unable to launch vm with UEFI boot One node system none; none; X522/X577-AT Sanity, basic regression Completed. Passed. Two node system fortville; fortville; fortville tboot, tpm, https, extended security profile Sanity, security Blocked by #1814360 Multi-node system BCM5720; Niantic; Niantic Sriov(niantic),pcipt(niantic) Sanity, networking Completed. Passed. Two node system Fortville; none; Fortville Low latency, UEFI Sanity, basic regression, cyclictest Blocked by #1814360 Two node system Fortville; none; Fortville Secure boot Sanity, security Blocked by #1814360 Multi-node system I350; Niantic/cx3; cx3 Pxeboot script Sanity Completed. Passed. Only compute-0 was used, since compute-1 has CX3 data nic. ?? CX4 on infra or mgmt, but NOT data Won't test. We don't have a system have required nics. BR, yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Sat Feb 9 16:13:34 2019 From: sgw at linux.intel.com (Saul Wold) Date: Sat, 9 Feb 2019 08:13:34 -0800 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> Message-ID: <1f26a897-8f51-da70-2af1-8bbefdad3c6c@linux.intel.com> Cindy, I asked Ada to rebuild and re-test based on the patch that Martin provided before CNY, the 20190202 ISO failed the Duplex testing, but when manually patched succeeded. I wondered if the 20190202 ISO was not rebuilt correctly. This is why I have not merged Martin's patch yet, I did not find out about this failure mode until Thursday. They started that build on Friday and should be able to test it on Monday. If your team is back already and can also rebuild with the patch included and confirm it's correct, then we can work to unblock testing of CentOS-76 work. Sau! On 2/8/19 6:18 PM, Liu, Yang wrote: > Correct. > > BR, > > Yang > > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* February-08-19 8:28 PM > *To:* Liu, Yang; starlingx-discuss at lists.starlingx.io > *Subject:* RE: CentOS7.6 testing status - blocked > > Hi, Yang, > > Thanks for the report. > > Are the “two node system” below referring to Duplex? Just want to > confirm because #1814360 we have a patch pending and we do want to > ensure it works on Duplex as well. > > Th.x - cindy > > *From:* Liu, Yang [mailto:yang.liu at windriver.com] > *Sent:* Saturday, February 9, 2019 2:09 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] CentOS7.6 testing status - blocked > > Hi folks, > > Here’s an update for CentOS7.6 testing. 
> > We are currently blocked due to pxeboot from controller-0 does not work > for EFI. (#1814360) > > We will continue after that issue is resolved. > > System > > > > NICs > > Mgmt;infra;data > > > > Special Configs > > > > Test coverage after Install and Config > > > > Status/Issues > > Dedicated storage > > > > X540-AT2; X540-AT2; fortville > > > > IPv6 > > > > Sanity, nova > > > > Completed. New issues logged. > > #1814336 CentOS7.6: Unable to launch vm directly from virsh > > > #1814335 CentOS7.6: Unable to launch vm with UEFI boot > > > One node system > > > > none; none; X522/X577-AT > > > > > > Sanity, basic regression > > > > Completed. Passed. > > Two node system > > > > fortville; fortville; fortville > > > > tboot, tpm, https, > > extended security profile > > > > Sanity, security > > > > Blocked by #1814360 > > Multi-node system > > > > BCM5720; Niantic; Niantic > > > > Sriov(niantic),pcipt(niantic) > > > > Sanity, networking > > > > Completed. Passed. > > Two node system > > > > Fortville; none; Fortville > > > > Low latency, UEFI > > > > Sanity, basic regression, cyclictest > > > > Blocked by #1814360 > > Two node system > > > > Fortville; none; Fortville > > > > Secure boot > > > > Sanity, security > > > > Blocked by #1814360 > > Multi-node system > > > > I350; Niantic/cx3; cx3 > > > > Pxeboot script > > > > Sanity > > > > Completed. Passed. > > Only compute-0 was used, since compute-1 has CX3 data nic. > > ?? > > > > CX4 on infra or mgmt, but NOT data > > > > Won’t test. We don’t have a system have required nics. > > BR, > > yang > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From build.starlingx at gmail.com Sun Feb 10 06:00:35 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 10 Feb 2019 01:00:35 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 165 - Failure! Message-ID: <962573924.2.1549778437065.JavaMail.javamailuser@localhost> Project: STX_DL_container_setup Build #: 165 Status: Failure Timestamp: 20190210T060031Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190210T060000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190210T060000Z DOCKER_DL_ID: jenkins-master-20190210T060000Z-downloader PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190210T060000Z/logs DOCKER_DL_TAG: master-20190210T060000Z-downloader-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190210T060000Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Sun Feb 10 06:00:39 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 10 Feb 2019 01:00:39 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 133 - Failure! 
Message-ID: <897966598.5.1549778440552.JavaMail.javamailuser@localhost> Project: STX_build_master_pike Build #: 133 Status: Failure Timestamp: 20190210T060000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190210T060000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS: false From cindy.xie at intel.com Sun Feb 10 13:29:12 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Sun, 10 Feb 2019 13:29:12 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <1f26a897-8f51-da70-2af1-8bbefdad3c6c@linux.intel.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> <1f26a897-8f51-da70-2af1-8bbefdad3c6c@linux.intel.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E824FE@SHSMSX104.ccr.corp.intel.com> Thank you, Saul - it's might be possible that the ISO is not consistent with the patch that supposed to be built in. It will be very valuable that Ada continue the testing of the re-built image with the patch cherry-pick in. Thx. - cindy -----Original Message----- From: Saul Wold [mailto:sgw at linux.intel.com] Sent: Sunday, February 10, 2019 12:14 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Cindy, I asked Ada to rebuild and re-test based on the patch that Martin provided before CNY, the 20190202 ISO failed the Duplex testing, but when manually patched succeeded. I wondered if the 20190202 ISO was not rebuilt correctly. This is why I have not merged Martin's patch yet, I did not find out about this failure mode until Thursday. They started that build on Friday and should be able to test it on Monday. If your team is back already and can also rebuild with the patch included and confirm it's correct, then we can work to unblock testing of CentOS-76 work. Sau! On 2/8/19 6:18 PM, Liu, Yang wrote: > Correct. > > BR, > > Yang > > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* February-08-19 8:28 PM > *To:* Liu, Yang; starlingx-discuss at lists.starlingx.io > *Subject:* RE: CentOS7.6 testing status - blocked > > Hi, Yang, > > Thanks for the report. > > Are the “two node system” below referring to Duplex? Just want to > confirm because #1814360 we have a patch pending and we do want to > ensure it works on Duplex as well. > > Th.x - cindy > > *From:* Liu, Yang [mailto:yang.liu at windriver.com] > *Sent:* Saturday, February 9, 2019 2:09 AM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] CentOS7.6 testing status - blocked > > Hi folks, > > Here’s an update for CentOS7.6 testing. > > We are currently blocked due to pxeboot from controller-0 does not > work for EFI. (#1814360) > > We will continue after that issue is resolved. > > System > > > > NICs > > Mgmt;infra;data > > > > Special Configs > > > > Test coverage after Install and Config > > > > Status/Issues > > Dedicated storage > > > > X540-AT2; X540-AT2; fortville > > > > IPv6 > > > > Sanity, nova > > > > Completed. New issues logged. > > #1814336 CentOS7.6: Unable to launch vm directly from virsh > > > #1814335 CentOS7.6: Unable to launch vm with UEFI boot > > > One node system > > > > none; none; X522/X577-AT > > > > > > Sanity, basic regression > > > > Completed. Passed. 
> > Two node system > > > > fortville; fortville; fortville > > > > tboot, tpm, https, > > extended security profile > > > > Sanity, security > > > > Blocked by #1814360 > > Multi-node system > > > > BCM5720; Niantic; Niantic > > > > Sriov(niantic),pcipt(niantic) > > > > Sanity, networking > > > > Completed. Passed. > > Two node system > > > > Fortville; none; Fortville > > > > Low latency, UEFI > > > > Sanity, basic regression, cyclictest > > > > Blocked by #1814360 > > Two node system > > > > Fortville; none; Fortville > > > > Secure boot > > > > Sanity, security > > > > Blocked by #1814360 > > Multi-node system > > > > I350; Niantic/cx3; cx3 > > > > Pxeboot script > > > > Sanity > > > > Completed. Passed. > > Only compute-0 was used, since compute-1 has CX3 data nic. > > ?? > > > > CX4 on infra or mgmt, but NOT data > > > > Won’t test. We don’t have a system have required nics. > > BR, > > yang > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Mon Feb 11 06:00:34 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 11 Feb 2019 01:00:34 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_DL_container_setup - Build # 166 - Still Failing! In-Reply-To: <639788786.0.1549778432919.JavaMail.javamailuser@localhost> References: <639788786.0.1549778432919.JavaMail.javamailuser@localhost> Message-ID: <482644970.8.1549864835541.JavaMail.javamailuser@localhost> Project: STX_DL_container_setup Build #: 166 Status: Still Failing Timestamp: 20190211T060029Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190211T060000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190211T060000Z DOCKER_DL_ID: jenkins-master-20190211T060000Z-downloader PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190211T060000Z/logs DOCKER_DL_TAG: master-20190211T060000Z-downloader-image PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190211T060000Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Mon Feb 11 06:00:37 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 11 Feb 2019 01:00:37 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 134 - Still Failing! 
In-Reply-To: <1878626154.3.1549778437936.JavaMail.javamailuser@localhost> References: <1878626154.3.1549778437936.JavaMail.javamailuser@localhost> Message-ID: <1883447254.11.1549864838898.JavaMail.javamailuser@localhost> Project: STX_build_master_pike Build #: 134 Status: Still Failing Timestamp: 20190211T060000Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190211T060000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS: false From Ian.Jolliffe at windriver.com Mon Feb 11 14:52:40 2019 From: Ian.Jolliffe at windriver.com (Jolliffe, Ian) Date: Mon, 11 Feb 2019 14:52:40 +0000 Subject: [Starlingx-discuss] [TSC]MInutes - February 7, 2019 Message-ID: <351CFA9E-A2D3-4CBC-8AE1-965B517CD250@windriver.com> Hi all; TSC attendees: Brent, Curtis, Dean, Ian, Miguel, Saul First Contact SIG/Group (ildikov) * Came up on the community planning call to explore the idea of creating a group to help new comers to participate in the community * Group in OpenStack: https://wiki.openstack.org/wiki/First_Contact_SIG * Focus is helping new contributors * Coordinate office hours, other ways to help new users, developers encourage community norms on boarding new developers and potentially new orgs help dev's in different time zones * and people who don't work full time on stx and can't attend meetings as they are usually during NA working hours. * The SIG developed doc's to help getting started in the community * Do we want a similar group within STX community? It could be a way to move forward ideas from F2F. Curtis - need for a structured group, don't need to adopt carte blanche, how helpful is office hours? Liason list might help Table for today - what items are important to the project - get feedback from ML - Bruce put on community call agenda for next week. Community planning call to happen every other week. * May want to change the name of the call, bit confusing to have a "community" call and a "community planning" call, which is really open source biz dev-type stuff Pharos labs - testing opportunity (Ian) * tie in with OPNFV test suites – Functest, storperf, Dovetail, Yardstick (which also pull in Rally from OpenStack) * One Pharos lab is running StarlingX release 1 – very cool * can we leverage these resources * Tie in with Ci/CD pipelines * Miguel - talked to OPNFV QA project leader - will work to connect dots STX policies (Curtis) * Some discussion of security policy in community meeting around making policy immutable by storing text in images...should come to some kind of governance conclusion on how to store policies, and whether or not immutability is even an issue. Of course enforcing polices is another gray area... :) * once one is written - where does it live and how does it evolve. Immutablity - that change happens but it is reviewed and can be reverted as required stx/gov and stx/specs are places for those reviews, some could go into docs if required * gerrit review process works for policy management/updates * TSC should approve policy - and changes - what is the threshold for change approval we want * we need think what the threshold should be based on some examples ( simple majority vs super majority ) * Do we need a policy repo? * Close on path forward at Feb 14th meeting. Regards; Ian -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juan.carlos.alonso at intel.com Mon Feb 11 15:30:25 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Mon, 11 Feb 2019 15:30:25 +0000 Subject: [Starlingx-discuss] [Container] Is 'nova boot' command still supported? Message-ID: <8557B550001AFB46A43A0CCC314BF85153C95ECF@FMSMSX108.amr.corp.intel.com> Hi, In order to launch instances from a volume snapshot, I use nova boot command. After deploy an STX containerized system this command is returning below error: [wrsroot at controller-0 ~(keystone_admin)]$ nova boot -flavor --snapshot --nic ERROR (ConnectFailure): Unable to establish connection to http://192.168.204.2:8774/v2.1/a4952fba408146b9b6cbe2e028da2708: HTTPConnectionPool(host='192.168.204.2', port=8774): Max retries exceeded with url: /v2.1/a4952fba408146b9b6cbe2e028da2708 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) I tried with 'source /etc/nova/openrc' and 'source /etc/platform/openrc' authentication, both got the same issue. Is this nova boot command still supported with containers? Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From Al.Bailey at windriver.com Mon Feb 11 15:41:07 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Mon, 11 Feb 2019 15:41:07 +0000 Subject: [Starlingx-discuss] [Container] Is 'nova boot' command still supported? In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C95ECF@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C95ECF@FMSMSX108.amr.corp.intel.com> Message-ID: I believe you are authenticating with the keystone that is on the controller, rather than the one that is running in the container. Refer to this section of the wiki https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Verify_the_cluster_endpoints Once you are pointing at the nova that is running in the container, those nova commands should work as expected. Al From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Monday, February 11, 2019 10:30 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] [Container] Is 'nova boot' command still supported? Hi, In order to launch instances from a volume snapshot, I use nova boot command. After deploy an STX containerized system this command is returning below error: [wrsroot at controller-0 ~(keystone_admin)]$ nova boot -flavor --snapshot --nic ERROR (ConnectFailure): Unable to establish connection to http://192.168.204.2:8774/v2.1/a4952fba408146b9b6cbe2e028da2708: HTTPConnectionPool(host='192.168.204.2', port=8774): Max retries exceeded with url: /v2.1/a4952fba408146b9b6cbe2e028da2708 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) I tried with 'source /etc/nova/openrc' and 'source /etc/platform/openrc' authentication, both got the same issue. Is this nova boot command still supported with containers? Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Mon Feb 11 15:49:09 2019 From: scott.little at windriver.com (Scott Little) Date: Mon, 11 Feb 2019 10:49:09 -0500 Subject: [Starlingx-discuss] Access to mirror.starlingx.cengn.ca Message-ID: Access to mirror.starlingx.cengn.ca will be subject to a few interruptions throughout the day. I am working with the folks at CENGN to restore normal operations after the maintenance outage of the past weekend. 
Apologies for the inconvenience. Scott From juan.carlos.alonso at intel.com Mon Feb 11 16:45:35 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Mon, 11 Feb 2019 16:45:35 +0000 Subject: [Starlingx-discuss] [Container] Is 'nova boot' command still supported? Message-ID: <8557B550001AFB46A43A0CCC314BF85153C96F0C@FMSMSX108.amr.corp.intel.com> Hi, Yes, export OS_CLOUD=openstack_helm already applied. Openstack commands work correctly but nova, glance, cinder are asking for user name, user id, project name, tenant id, etc.. I think some this parameters are defined on /etc/openstack/clouds.yaml, right? Does clouds.yaml need extra parameters? That's why also I am asking if nova, glance, cinder, etc commands won't be supported? In order to update to openstack command. Regards. Juan Carlos Alonso From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Monday, February 11, 2019 9:41 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] [Container] Is 'nova boot' command still supported? I believe you are authenticating with the keystone that is on the controller, rather than the one that is running in the container. Refer to this section of the wiki https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Verify_the_cluster_endpoints Once you are pointing at the nova that is running in the container, those nova commands should work as expected. Al From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Monday, February 11, 2019 10:30 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] [Container] Is 'nova boot' command still supported? Hi, In order to launch instances from a volume snapshot, I use nova boot command. After deploy an STX containerized system this command is returning below error: [wrsroot at controller-0 ~(keystone_admin)]$ nova boot -flavor --snapshot --nic ERROR (ConnectFailure): Unable to establish connection to http://192.168.204.2:8774/v2.1/a4952fba408146b9b6cbe2e028da2708: HTTPConnectionPool(host='192.168.204.2', port=8774): Max retries exceeded with url: /v2.1/a4952fba408146b9b6cbe2e028da2708 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) I tried with 'source /etc/nova/openrc' and 'source /etc/platform/openrc' authentication, both got the same issue. Is this nova boot command still supported with containers? Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From yang.liu at windriver.com Mon Feb 11 17:00:52 2019 From: yang.liu at windriver.com (Liu, Yang) Date: Mon, 11 Feb 2019 17:00:52 +0000 Subject: [Starlingx-discuss] [Container] Is 'nova boot' command still supported? In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C96F0C@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C96F0C@FMSMSX108.amr.corp.intel.com> Message-ID: <19C65A6E92EA384D809B1772130CD7F86218307E@ALA-MBD.corp.ad.wrs.com> If you ever sourced to openrc file in the same shell, then the auth url in the openrc file will be used. After sourcing to /etc/nova/openrc, if you want to switch back to containerized openstack cli's, you can export the os_auth_url to make sure the correct keystone is used. 
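(A hedged aside on the clouds.yaml question above: a minimal /etc/openstack/clouds.yaml entry for the containerized keystone could look like the sketch below, where the credential values are assumptions and the real password is the one set during provisioning:

    clouds:
      openstack_helm:
        region_name: RegionOne
        identity_api_version: 3
        auth:
          username: admin
          password: <ADMIN_PASSWORD>   # assumption -- use the admin password set at provisioning
          project_name: admin
          user_domain_name: default
          project_domain_name: default
          auth_url: http://keystone.openstack.svc.cluster.local/v3

With such an entry, export OS_CLOUD=openstack_helm is enough for plain openstack commands, while the legacy nova/glance/cinder clients do not read clouds.yaml and still need the OS_* environment variables, including the auth_url exported below.)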
export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3 BR, Yang From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: February-11-19 11:46 AM To: Bailey, Henry Albert (Al); 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] [Container] Is 'nova boot' command still supported? Hi, Yes, export OS_CLOUD=openstack_helm already applied. Openstack commands work correctly but nova, glance, cinder are asking for user name, user id, project name, tenant id, etc.. I think some this parameters are defined on /etc/openstack/clouds.yaml, right? Does clouds.yaml need extra parameters? That's why also I am asking if nova, glance, cinder, etc commands won't be supported? In order to update to openstack command. Regards. Juan Carlos Alonso From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Monday, February 11, 2019 9:41 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: Re: [Starlingx-discuss] [Container] Is 'nova boot' command still supported? I believe you are authenticating with the keystone that is on the controller, rather than the one that is running in the container. Refer to this section of the wiki https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Verify_the_cluster_endpoints Once you are pointing at the nova that is running in the container, those nova commands should work as expected. Al From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: Monday, February 11, 2019 10:30 AM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] [Container] Is 'nova boot' command still supported? Hi, In order to launch instances from a volume snapshot, I use nova boot command. After deploy an STX containerized system this command is returning below error: [wrsroot at controller-0 ~(keystone_admin)]$ nova boot -flavor --snapshot --nic ERROR (ConnectFailure): Unable to establish connection to http://192.168.204.2:8774/v2.1/a4952fba408146b9b6cbe2e028da2708: HTTPConnectionPool(host='192.168.204.2', port=8774): Max retries exceeded with url: /v2.1/a4952fba408146b9b6cbe2e028da2708 (Caused by NewConnectionError(': Failed to establish a new connection: [Errno 111] Connection refused',)) I tried with 'source /etc/nova/openrc' and 'source /etc/platform/openrc' authentication, both got the same issue. Is this nova boot command still supported with containers? Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Mon Feb 11 18:54:09 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 11 Feb 2019 13:54:09 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 141 - Failure! 
Message-ID: <2099069497.15.1549911250970.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 141 Status: Failure Timestamp: 20190211T165925Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190211T161653Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190211T161653Z DOCKER_BUILD_ID: jenkins-master-20190211T161653Z-builder MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190211T161653Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190211T161653Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Mon Feb 11 18:54:13 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 11 Feb 2019 13:54:13 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 135 - Still Failing! In-Reply-To: <457906627.9.1549864836334.JavaMail.javamailuser@localhost> References: <457906627.9.1549864836334.JavaMail.javamailuser@localhost> Message-ID: <202765377.18.1549911254637.JavaMail.javamailuser@localhost> Project: STX_build_master_pike Build #: 135 Status: Still Failing Timestamp: 20190211T161653Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190211T161653Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS: false From bruce.e.jones at intel.com Tue Feb 12 01:09:54 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Tue, 12 Feb 2019 01:09:54 +0000 Subject: [Starlingx-discuss] distro.openstack call this week Message-ID: <9A85D2917C58154C960D95352B22818BBFD2611B@fmsmsx123.amr.corp.intel.com> Agenda Review tracking sheet for the latest updates Checkpoint progress and discuss options -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Feb 12 01:10:01 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 12 Feb 2019 01:10:01 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the "two node system" below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. 
Th.x - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked due to pxeboot from controller-0 does not work for EFI. (#1814360) We will continue after that issue is resolved. System NICs Mgmt;infra;data Special Configs Test coverage after Install and Config Status/Issues Dedicated storage X540-AT2; X540-AT2; fortville IPv6 Sanity, nova Completed. New issues logged. #1814336 CentOS7.6: Unable to launch vm directly from virsh #1814335 CentOS7.6: Unable to launch vm with UEFI boot One node system none; none; X522/X577-AT Sanity, basic regression Completed. Passed. Two node system fortville; fortville; fortville tboot, tpm, https, extended security profile Sanity, security Blocked by #1814360 Multi-node system BCM5720; Niantic; Niantic Sriov(niantic),pcipt(niantic) Sanity, networking Completed. Passed. Two node system Fortville; none; Fortville Low latency, UEFI Sanity, basic regression, cyclictest Blocked by #1814360 Two node system Fortville; none; Fortville Secure boot Sanity, security Blocked by #1814360 Multi-node system I350; Niantic/cx3; cx3 Pxeboot script Sanity Completed. Passed. Only compute-0 was used, since compute-1 has CX3 data nic. ?? CX4 on infra or mgmt, but NOT data Won't test. We don't have a system have required nics. BR, yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Tue Feb 12 01:22:10 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Tue, 12 Feb 2019 01:22:10 +0000 Subject: [Starlingx-discuss] [ Test ] meeting agenda - 02/12/2019 Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD72264@FMSMSX114.amr.corp.intel.com> Agenda for 02/12/2019 1. Release verification plan check - Ada - 15 min 2. Repo submissions - Numan - 15 min 3. Release notes generation (for testing repo) - Abraham/Cristopher - 15 min 4. Opens - all Regards Ada From kyle.oh95 at gmail.com Tue Feb 12 05:54:46 2019 From: kyle.oh95 at gmail.com (Jaewook Oh) Date: Tue, 12 Feb 2019 14:54:46 +0900 Subject: [Starlingx-discuss] [deploy-fail] Failed to deploy All-in-one simplex configuration Message-ID: Hello starlingx team, I deployed Starlingx All-in-one simplex setting on my server, it seems deployment was done well. *However I cannot access to the dashboard.* I used "2019-*Feb-08* 03:40:29 bootimage.iso". 
Following log is the service that are now enabled, [wrsroot at controller-0 ~(keystone_admin)]$ system service-list +-----+-------------------------------+--------------+----------------+ | id | service_name | hostname | state | +-----+-------------------------------+--------------+----------------+ | 68 | aodh-api | controller-0 | enabled-active | | 69 | aodh-evaluator | controller-0 | enabled-active | | 70 | aodh-listener | controller-0 | enabled-active | | 71 | aodh-notifier | controller-0 | enabled-active | | 102 | barbican-api | controller-0 | enabled-active | | 103 | barbican-keystone-listener | controller-0 | enabled-active | | 104 | barbican-worker | controller-0 | enabled-active | | 43 | ceilometer-agent-notification | controller-0 | enabled-active | | 14 | cgcs-export-fs | controller-0 | enabled-active | | 10 | cgcs-fs | controller-0 | enabled-active | | 16 | cgcs-nfs-ip | controller-0 | enabled-active | | 35 | cinder-api | controller-0 | enabled-active | | 38 | cinder-backup | controller-0 | enabled-active | | 58 | cinder-ip | controller-0 | enabled-active | | 56 | cinder-lvm | controller-0 | enabled-active | | 36 | cinder-scheduler | controller-0 | enabled-active | | 37 | cinder-volume | controller-0 | enabled-active | | 23 | dnsmasq | controller-0 | enabled-active | | 5 | drbd-cgcs | controller-0 | enabled-active | | 55 | drbd-cinder | controller-0 | enabled-active | | 75 | drbd-extension | controller-0 | enabled-active | | 3 | drbd-pg | controller-0 | enabled-active | | 6 | drbd-platform | controller-0 | enabled-active | | 4 | drbd-rabbit | controller-0 | enabled-active | | 77 | extension-export-fs | controller-0 | enabled-active | | 76 | extension-fs | controller-0 | enabled-active | | 24 | fm-mgr | controller-0 | enabled-active | | 27 | glance-api | controller-0 | enabled-active | | 26 | glance-registry | controller-0 | enabled-active | | 108 | gnocchi-api | controller-0 | enabled-active | | 109 | gnocchi-metricd | controller-0 | enabled-active | | 62 | guest-agent | controller-0 | enabled-active | | 64 | haproxy | controller-0 | enabled-active | | 45 | heat-api | controller-0 | enabled-active | | 46 | heat-api-cfn | controller-0 | enabled-active | | 47 | heat-api-cloudwatch | controller-0 | enabled-active | | 44 | heat-engine | controller-0 | enabled-active | | 51 | horizon | controller-0 | enabled-active | | 22 | hw-mon | controller-0 | enabled-active | | 57 | iscsi | controller-0 | enabled-active | | 25 | keystone | controller-0 | enabled-active | | 50 | lighttpd | controller-0 | enabled-active | | 2 | management-ip | controller-0 | enabled-active | | 20 | mtc-agent | controller-0 | enabled-active | | 28 | neutron-server | controller-0 | enabled-active | | 9 | nfs-mgmt | controller-0 | enabled-active | | 29 | nova-api | controller-0 | enabled-active | | 63 | nova-api-proxy | controller-0 | enabled-active | | 31 | nova-conductor | controller-0 | enabled-active | | 33 | nova-console-auth | controller-0 | enabled-active | | 34 | nova-novnc | controller-0 | enabled-active | | 83 | nova-placement-api | controller-0 | enabled-active | | 30 | nova-scheduler | controller-0 | enabled-active | | 48 | open-ldap | controller-0 | enabled-active | | 82 | panko-api | controller-0 | enabled-active | | 52 | patch-alarm-manager | controller-0 | enabled-active | | 7 | pg-fs | controller-0 | enabled-active | | 15 | platform-export-fs | controller-0 | enabled-active | | 11 | platform-fs | controller-0 | enabled-active | | 17 | platform-nfs-ip | controller-0 | enabled-active | | 12 | postgres 
| controller-0 | enabled-active | | 65 | pxeboot-ip | controller-0 | enabled-active | | 13 | rabbit | controller-0 | enabled-active | | 8 | rabbit-fs | controller-0 | enabled-active | | 49 | snmp | controller-0 | enabled-active | | 19 | sysinv-conductor | controller-0 | enabled-active | | 18 | sysinv-inv | controller-0 | enabled-active | | 59 | vim | controller-0 | enabled-active | | 60 | vim-api | controller-0 | enabled-active | | 61 | vim-webserver | controller-0 | enabled-active | +-----+-------------------------------+--------------+----------------+ As you can see, *I cannot find #1 service (oam-ip)*, and I think this is one of the main reasons. *And also I had a problem when I followed the installation instruction.* [wrsroot at controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan [wrsroot at controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a [wrsroot at controller-0 ~(keystone_admin)]$ system host-if-modify -c data controller-0 eth1000 -p providernet-a Above instruction was not naturally worked, I mean the third command, 'host-if-modify' failed with the message "*DataNetwork providernet-a could not be found*." So I googled and I figured out I should command $ *system datanetwork-add providernet-a vlan* first. But I remember when my first deployment succeed, I didn't need to use that command at all. (The deployment was implemented on Jan.) Any help is appreciated, thank you for your help in advance. Best Regards, Jaewook. ================================================ *Jaewook Oh* (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Tue Feb 12 06:19:34 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Tue, 12 Feb 2019 06:19:34 +0000 Subject: [Starlingx-discuss] Agenda: Weekly StarlingX non-OpenStack Distro meeting, 2/13 Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E842BE@SHSMSX104.ccr.corp.intel.com> Agenda for 2/13 meeting: 1. CentOS 7.6 upgrade, test status (Shuicheng, Ada/Numan) 2. Ceph upgrade status (Frank/Ovidiu, Changcheng) 3. Bug triage (Cindy) 4. Opens (All) -----Original Appointment----- From: Xie, Cindy Sent: Monday, November 5, 2018 2:27 PM To: Xie, Cindy; 'Khalil, Ghada'; Sun, Austin; Somerville, Jim; 'Rowsell, Brent'; Liu, ZhipengS; Wold, Saul; starlingx-discuss at lists.starlingx.io; Shang, Dehao; Waheed, Numan; Troyer, Dean; Jones, Bruce E; Lin, Shuicheng; Zhu, Vivian; Hu, Yong Cc: Hu, Wei W; 'Seiler, Glenn'; Gomez, Juan P; 'Chen, Jacky'; Perez Rodriguez, Humberto I; 'Young, Ken'; Cobbley, David A; 'Waines, Greg'; Arce Moreno, Abraham; 'Eslimi, Dariush'; Lara, Cesar; Perez Carranza, Jose; 'Hellmann, Gil'; Armstrong, Robert H; Martinez Landa, Hayde; Martinez Monroy, Elio; Fang, Liang A Subject: Weekly StarlingX non-OpenStack Distro meeting When: Wednesday, February 13, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada). Where: https://zoom.us/j/342730236 . Cadence and time slot: o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM) . Call Details: o Zoom link: https://zoom.us/j/342730236 o Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o International numbers available: https://zoom.us/u/ed95sU7aQ . 
Meeting Agenda and Minutes: o https://etherpad.openstack.org/p/stx-distro-other From austin.sun at intel.com Tue Feb 12 08:02:41 2019 From: austin.sun at intel.com (Sun, Austin) Date: Tue, 12 Feb 2019 08:02:41 +0000 Subject: [Starlingx-discuss] [deploy-fail] Failed to deploy All-in-one simplex configuration In-Reply-To: References: Message-ID: Hi Jaewook: 1) are you deploy on Virtual env or bare-metal ? 2) can you ping oam-ip ? and do you have any proxy setting in your web browser ? 3) for system datanetwork-add, please refer http://lists.starlingx.io/pipermail/starlingx-discuss/2019-January/002788.html for more detail. Thanks. BR Austin Sun. From: Jaewook Oh [mailto:kyle.oh95 at gmail.com] Sent: Tuesday, February 12, 2019 1:55 PM To: starlingx-discuss at lists.starlingx.io Cc: 오재욱 Subject: [Starlingx-discuss] [deploy-fail] Failed to deploy All-in-one simplex configuration Hello starlingx team, I deployed Starlingx All-in-one simplex setting on my server, it seems deployment was done well. However I cannot access to the dashboard. I used "2019-Feb-08 03:40:29 bootimage.iso". Following log is the service that are now enabled, [wrsroot at controller-0 ~(keystone_admin)]$ system service-list +-----+-------------------------------+--------------+----------------+ | id | service_name | hostname | state | +-----+-------------------------------+--------------+----------------+ | 68 | aodh-api | controller-0 | enabled-active | | 69 | aodh-evaluator | controller-0 | enabled-active | | 70 | aodh-listener | controller-0 | enabled-active | | 71 | aodh-notifier | controller-0 | enabled-active | | 102 | barbican-api | controller-0 | enabled-active | | 103 | barbican-keystone-listener | controller-0 | enabled-active | | 104 | barbican-worker | controller-0 | enabled-active | | 43 | ceilometer-agent-notification | controller-0 | enabled-active | | 14 | cgcs-export-fs | controller-0 | enabled-active | | 10 | cgcs-fs | controller-0 | enabled-active | | 16 | cgcs-nfs-ip | controller-0 | enabled-active | | 35 | cinder-api | controller-0 | enabled-active | | 38 | cinder-backup | controller-0 | enabled-active | | 58 | cinder-ip | controller-0 | enabled-active | | 56 | cinder-lvm | controller-0 | enabled-active | | 36 | cinder-scheduler | controller-0 | enabled-active | | 37 | cinder-volume | controller-0 | enabled-active | | 23 | dnsmasq | controller-0 | enabled-active | | 5 | drbd-cgcs | controller-0 | enabled-active | | 55 | drbd-cinder | controller-0 | enabled-active | | 75 | drbd-extension | controller-0 | enabled-active | | 3 | drbd-pg | controller-0 | enabled-active | | 6 | drbd-platform | controller-0 | enabled-active | | 4 | drbd-rabbit | controller-0 | enabled-active | | 77 | extension-export-fs | controller-0 | enabled-active | | 76 | extension-fs | controller-0 | enabled-active | | 24 | fm-mgr | controller-0 | enabled-active | | 27 | glance-api | controller-0 | enabled-active | | 26 | glance-registry | controller-0 | enabled-active | | 108 | gnocchi-api | controller-0 | enabled-active | | 109 | gnocchi-metricd | controller-0 | enabled-active | | 62 | guest-agent | controller-0 | enabled-active | | 64 | haproxy | controller-0 | enabled-active | | 45 | heat-api | controller-0 | enabled-active | | 46 | heat-api-cfn | controller-0 | enabled-active | | 47 | heat-api-cloudwatch | controller-0 | enabled-active | | 44 | heat-engine | controller-0 | enabled-active | | 51 | horizon | controller-0 | enabled-active | | 22 | hw-mon | controller-0 | enabled-active | | 57 | iscsi | controller-0 | enabled-active | | 
25 | keystone | controller-0 | enabled-active | | 50 | lighttpd | controller-0 | enabled-active | | 2 | management-ip | controller-0 | enabled-active | | 20 | mtc-agent | controller-0 | enabled-active | | 28 | neutron-server | controller-0 | enabled-active | | 9 | nfs-mgmt | controller-0 | enabled-active | | 29 | nova-api | controller-0 | enabled-active | | 63 | nova-api-proxy | controller-0 | enabled-active | | 31 | nova-conductor | controller-0 | enabled-active | | 33 | nova-console-auth | controller-0 | enabled-active | | 34 | nova-novnc | controller-0 | enabled-active | | 83 | nova-placement-api | controller-0 | enabled-active | | 30 | nova-scheduler | controller-0 | enabled-active | | 48 | open-ldap | controller-0 | enabled-active | | 82 | panko-api | controller-0 | enabled-active | | 52 | patch-alarm-manager | controller-0 | enabled-active | | 7 | pg-fs | controller-0 | enabled-active | | 15 | platform-export-fs | controller-0 | enabled-active | | 11 | platform-fs | controller-0 | enabled-active | | 17 | platform-nfs-ip | controller-0 | enabled-active | | 12 | postgres | controller-0 | enabled-active | | 65 | pxeboot-ip | controller-0 | enabled-active | | 13 | rabbit | controller-0 | enabled-active | | 8 | rabbit-fs | controller-0 | enabled-active | | 49 | snmp | controller-0 | enabled-active | | 19 | sysinv-conductor | controller-0 | enabled-active | | 18 | sysinv-inv | controller-0 | enabled-active | | 59 | vim | controller-0 | enabled-active | | 60 | vim-api | controller-0 | enabled-active | | 61 | vim-webserver | controller-0 | enabled-active | +-----+-------------------------------+--------------+----------------+ As you can see, I cannot find #1 service (oam-ip), and I think this is one of the main reasons. And also I had a problem when I followed the installation instruction. [wrsroot at controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan [wrsroot at controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a [wrsroot at controller-0 ~(keystone_admin)]$ system host-if-modify -c data controller-0 eth1000 -p providernet-a Above instruction was not naturally worked, I mean the third command, 'host-if-modify' failed with the message "DataNetwork providernet-a could not be found." So I googled and I figured out I should command $ system datanetwork-add providernet-a vlan first. But I remember when my first deployment succeed, I didn't need to use that command at all. (The deployment was implemented on Jan.) Any help is appreciated, thank you for your help in advance. Best Regards, Jaewook. ================================================ Jaewook Oh (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea -------------- next part -------------- An HTML attachment was scrubbed... 
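A hedged sketch of the revised data-interface provisioning flow referenced in the thread above (the first command is the step that was missing; the final assign command reflects the newer CLI split and is an assumption to verify against system help on your load):

    # create the data network first -- without this, host-if-modify fails with
    # "DataNetwork providernet-a could not be found"
    system datanetwork-add providernet-a vlan
    # then attach the interface to it
    system host-if-modify -c data controller-0 eth1000 -p providernet-a
    # on loads where the -p option has been retired, the binding is a separate step:
    system interface-datanetwork-assign controller-0 eth1000 providernet-a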
URL: From yang.liu at windriver.com Tue Feb 12 14:00:12 2019 From: yang.liu at windriver.com (Liu, Yang) Date: Tue, 12 Feb 2019 14:00:12 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> Message-ID: <19C65A6E92EA384D809B1772130CD7F8621832F3@ALA-MBD.corp.ad.wrs.com> Thanks Cindy. Will do. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the "two node system" below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. Th.x - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked due to pxeboot from controller-0 does not work for EFI. (#1814360) We will continue after that issue is resolved. System NICs Mgmt;infra;data Special Configs Test coverage after Install and Config Status/Issues Dedicated storage X540-AT2; X540-AT2; fortville IPv6 Sanity, nova Completed. New issues logged. #1814336 CentOS7.6: Unable to launch vm directly from virsh #1814335 CentOS7.6: Unable to launch vm with UEFI boot One node system none; none; X522/X577-AT Sanity, basic regression Completed. Passed. Two node system fortville; fortville; fortville tboot, tpm, https, extended security profile Sanity, security Blocked by #1814360 Multi-node system BCM5720; Niantic; Niantic Sriov(niantic),pcipt(niantic) Sanity, networking Completed. Passed. Two node system Fortville; none; Fortville Low latency, UEFI Sanity, basic regression, cyclictest Blocked by #1814360 Two node system Fortville; none; Fortville Secure boot Sanity, security Blocked by #1814360 Multi-node system I350; Niantic/cx3; cx3 Pxeboot script Sanity Completed. Passed. Only compute-0 was used, since compute-1 has CX3 data nic. ?? CX4 on infra or mgmt, but NOT data Won't test. We don't have a system have required nics. BR, yang -------------- next part -------------- An HTML attachment was scrubbed... 
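For anyone who needs a fix like https://review.openstack.org/#/c/634559/ in a local build before or right after it merges, the standard gerrit download works; a sketch in which the repo path is an assumption about which project carries the change:

    cd $MY_REPO_ROOT/cgcs-root/stx/stx-metal   # assumption: the repo hosting the fix
    git review -d 634559                       # fetch the change onto a local branch
    # rebuild as usual; once the change has merged, a plain 'repo sync' picks it up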
URL: From Tao.Liu at windriver.com Tue Feb 12 15:49:14 2019 From: Tao.Liu at windriver.com (Liu, Tao) Date: Tue, 12 Feb 2019 15:49:14 +0000 Subject: [Starlingx-discuss] [deploy-fail] Failed to deploy All-in-one Message-ID: <7242A3DC72E453498E3D783BBB134C3E9DDBD1EC@ALA-MBD.corp.ad.wrs.com> Hi Jaewook, In regard to *However I cannot access to the dashboard.* Per Feb 7 discussion (link below), the platform horizon UI is now available at http://:8080. http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/003017.html If you use VirtualBox Nat Networking, you will need to update the port forwarding for dashboard per wiki. https://wiki.openstack.org/wiki/StarlingX/Containers/Installation Regards, Tao ------------------------------------------------------------------------------------------------------------------------------------ Message: 2 Date: Tue, 12 Feb 2019 14:54:46 +0900 From: Jaewook Oh To: starlingx-discuss at lists.starlingx.io Cc: 오재욱 Subject: [Starlingx-discuss] [deploy-fail] Failed to deploy All-in-one simplex configuration Message-ID: Content-Type: text/plain; charset="utf-8" Hello starlingx team, I deployed Starlingx All-in-one simplex setting on my server, it seems deployment was done well. *However I cannot access to the dashboard.* I used "2019-*Feb-08* 03:40:29 bootimage.iso". Following log is the service that are now enabled, [wrsroot at controller-0 ~(keystone_admin)]$ system service-list +-----+-------------------------------+--------------+----------------+ | id | service_name | hostname | state | +-----+-------------------------------+--------------+----------------+ | 68 | aodh-api | controller-0 | enabled-active | | 69 | aodh-evaluator | controller-0 | enabled-active | | 70 | aodh-listener | controller-0 | enabled-active | | 71 | aodh-notifier | controller-0 | enabled-active | | 102 | barbican-api | controller-0 | enabled-active | | 103 | barbican-keystone-listener | controller-0 | enabled-active | | 104 | barbican-worker | controller-0 | enabled-active | | 43 | ceilometer-agent-notification | controller-0 | enabled-active | | 14 | cgcs-export-fs | controller-0 | enabled-active | | 10 | cgcs-fs | controller-0 | enabled-active | | 16 | cgcs-nfs-ip | controller-0 | enabled-active | | 35 | cinder-api | controller-0 | enabled-active | | 38 | cinder-backup | controller-0 | enabled-active | | 58 | cinder-ip | controller-0 | enabled-active | | 56 | cinder-lvm | controller-0 | enabled-active | | 36 | cinder-scheduler | controller-0 | enabled-active | | 37 | cinder-volume | controller-0 | enabled-active | | 23 | dnsmasq | controller-0 | enabled-active | | 5 | drbd-cgcs | controller-0 | enabled-active | | 55 | drbd-cinder | controller-0 | enabled-active | | 75 | drbd-extension | controller-0 | enabled-active | | 3 | drbd-pg | controller-0 | enabled-active | | 6 | drbd-platform | controller-0 | enabled-active | | 4 | drbd-rabbit | controller-0 | enabled-active | | 77 | extension-export-fs | controller-0 | enabled-active | | 76 | extension-fs | controller-0 | enabled-active | | 24 | fm-mgr | controller-0 | enabled-active | | 27 | glance-api | controller-0 | enabled-active | | 26 | glance-registry | controller-0 | enabled-active | | 108 | gnocchi-api | controller-0 | enabled-active | | 109 | gnocchi-metricd | controller-0 | enabled-active | | 62 | guest-agent | controller-0 | enabled-active | | 64 | haproxy | controller-0 | enabled-active | | 45 | heat-api | controller-0 | enabled-active | | 46 | heat-api-cfn | controller-0 | enabled-active | | 47 | heat-api-cloudwatch 
| controller-0 | enabled-active | | 44 | heat-engine | controller-0 | enabled-active | | 51 | horizon | controller-0 | enabled-active | | 22 | hw-mon | controller-0 | enabled-active | | 57 | iscsi | controller-0 | enabled-active | | 25 | keystone | controller-0 | enabled-active | | 50 | lighttpd | controller-0 | enabled-active | | 2 | management-ip | controller-0 | enabled-active | | 20 | mtc-agent | controller-0 | enabled-active | | 28 | neutron-server | controller-0 | enabled-active | | 9 | nfs-mgmt | controller-0 | enabled-active | | 29 | nova-api | controller-0 | enabled-active | | 63 | nova-api-proxy | controller-0 | enabled-active | | 31 | nova-conductor | controller-0 | enabled-active | | 33 | nova-console-auth | controller-0 | enabled-active | | 34 | nova-novnc | controller-0 | enabled-active | | 83 | nova-placement-api | controller-0 | enabled-active | | 30 | nova-scheduler | controller-0 | enabled-active | | 48 | open-ldap | controller-0 | enabled-active | | 82 | panko-api | controller-0 | enabled-active | | 52 | patch-alarm-manager | controller-0 | enabled-active | | 7 | pg-fs | controller-0 | enabled-active | | 15 | platform-export-fs | controller-0 | enabled-active | | 11 | platform-fs | controller-0 | enabled-active | | 17 | platform-nfs-ip | controller-0 | enabled-active | | 12 | postgres | controller-0 | enabled-active | | 65 | pxeboot-ip | controller-0 | enabled-active | | 13 | rabbit | controller-0 | enabled-active | | 8 | rabbit-fs | controller-0 | enabled-active | | 49 | snmp | controller-0 | enabled-active | | 19 | sysinv-conductor | controller-0 | enabled-active | | 18 | sysinv-inv | controller-0 | enabled-active | | 59 | vim | controller-0 | enabled-active | | 60 | vim-api | controller-0 | enabled-active | | 61 | vim-webserver | controller-0 | enabled-active | +-----+-------------------------------+--------------+----------------+ As you can see, *I cannot find #1 service (oam-ip)*, and I think this is one of the main reasons. *And also I had a problem when I followed the installation instruction.* [wrsroot at controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan [wrsroot at controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a [wrsroot at controller-0 ~(keystone_admin)]$ system host-if-modify -c data controller-0 eth1000 -p providernet-a Above instruction was not naturally worked, I mean the third command, 'host-if-modify' failed with the message "*DataNetwork providernet-a could not be found*." So I googled and I figured out I should command $ *system datanetwork-add providernet-a vlan* first. But I remember when my first deployment succeed, I didn't need to use that command at all. (The deployment was implemented on Jan.) Any help is appreciated, thank you for your help in advance. Best Regards, Jaewook. ================================================ *Jaewook Oh* (오재욱) IISTRC - Internet Infra System Technology Research Center 369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea -------------- next part -------------- An HTML attachment was scrubbed... 
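Following up Tao's port-forwarding note above: for a VirtualBox NAT-network setup, one way to reach the platform horizon port is a NAT forward rule; a sketch using stock VBoxManage syntax, where the network name and the guest OAM address (10.10.10.3 here) are assumptions to substitute from your config_controller answers:

    VBoxManage natnetwork modify --netname NatNetwork \
      --port-forward-4 "horizon-platform:tcp:[]:8080:[10.10.10.3]:8080"

After that, http://<host-ip>:8080 from the host browser should land on the dashboard Tao described.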
------------------------------

Subject: Digest Footer

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

------------------------------

End of Starlingx-discuss Digest, Vol 9, Issue 44
************************************************

From Bin.Qian at windriver.com  Tue Feb 12 16:09:19 2019
From: Bin.Qian at windriver.com (Qian, Bin)
Date: Tue, 12 Feb 2019 16:09:19 +0000
Subject: [Starlingx-discuss] [deploy-fail] Failed to deploy All-in-one simplex configuration
In-Reply-To: 
References: ,
Message-ID: 

Hi, Jaewook,

The oam-ip service is not provisioned for an All-in-one simplex system. It looks like all services are running fine. You probably want to focus on the connectivity of the oam interface.

Regards,
Bin

________________________________
From: Sun, Austin [austin.sun at intel.com]
Sent: Tuesday, February 12, 2019 12:02 AM
To: Jaewook Oh; starlingx-discuss at lists.starlingx.io
Cc: 오재욱
Subject: Re: [Starlingx-discuss] [deploy-fail] Failed to deploy All-in-one simplex configuration

Hi Jaewook:

1) Are you deploying on a virtual env or bare metal?
2) Can you ping the oam-ip? And do you have any proxy setting in your web browser?
3) For system datanetwork-add, please refer to http://lists.starlingx.io/pipermail/starlingx-discuss/2019-January/002788.html for more detail.

Thanks.
BR
Austin Sun.

From: Jaewook Oh [mailto:kyle.oh95 at gmail.com]
Sent: Tuesday, February 12, 2019 1:55 PM
To: starlingx-discuss at lists.starlingx.io
Cc: 오재욱
Subject: [Starlingx-discuss] [deploy-fail] Failed to deploy All-in-one simplex configuration

Hello starlingx team,

I deployed a StarlingX All-in-one simplex setup on my server, and the deployment itself seems to have completed. However, I cannot access the dashboard.

I used "2019-Feb-08 03:40:29 bootimage.iso".
The following log shows the services that are now enabled:

[wrsroot at controller-0 ~(keystone_admin)]$ system service-list
[... output snipped - identical to the service list quoted earlier in this digest ...]

As you can see, the #1 service (oam-ip) is missing, and I think this is one of the main reasons.

I also had a problem when I followed the installation instructions.

[wrsroot at controller-0 ~(keystone_admin)]$ neutron providernet-create providernet-a --type=vlan
[wrsroot at controller-0 ~(keystone_admin)]$ neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
[wrsroot at controller-0 ~(keystone_admin)]$ system host-if-modify -c data controller-0 eth1000 -p providernet-a

The instructions above did not work as written: the third command, 'host-if-modify', failed with the message "DataNetwork providernet-a could not be found." So I searched around and figured out that I should first run $ system datanetwork-add providernet-a vlan. But I remember that when my first deployment succeeded, I did not need that command at all. (That deployment was done in January.)

Any help is appreciated; thank you in advance.

Best Regards,
Jaewook.

================================================
Jaewook Oh (오재욱)
IISTRC - Internet Infra System Technology Research Center
369 Sangdo-ro, Dongjak-gu, 06978, Seoul, Republic of Korea
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 
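Following Bin's suggestion to focus on OAM connectivity, a rough checking sketch; the placeholder <oam-floating-ip> stands for whatever address config_controller was given, and "system oam-show" is assumed to be available on this load:

    # On the controller: confirm the configured OAM addresses and
    # which interface carries the platform OAM network.
    source /etc/nova/openrc
    system oam-show
    system host-if-list controller-0
    ip addr    # the OAM IP should appear on the listed interface

    # From an external machine: verify basic reachability, then try
    # the dashboard on port 8080.
    ping <oam-floating-ip>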
From kennelson11 at gmail.com  Tue Feb 12 17:06:20 2019
From: kennelson11 at gmail.com (Kendall Nelson)
Date: Tue, 12 Feb 2019 09:06:20 -0800
Subject: [Starlingx-discuss] Denver PTG Attending Teams
Message-ID: 

Hello!

The results are in! Here is the list of teams that are planning to attend the upcoming PTG in Denver, following the summit. Hopefully we are getting it to you soon enough to plan travel. If you haven't already registered, you can do that here[1]. If you haven't booked your hotel yet, please please please use our hotel block here[2].

-----------------------------------------
Pilot Projects:
- Airship
- Kata Containers
- StarlingX

OpenStack Components:
- Barbican
- Charms
- Cinder
- Cyborg
- Docs/I18n
- Glance
- Heat
- Horizon
- Infrastructure
- Ironic
- Keystone
- LOCI
- Manila
- Monasca
- Neutron
- Nova
- Octavia
- OpenStack Ansible
- OpenStack QA
- OpenStackClient
- Oslo
- Placement
- Release Management
- Requirements
- Swift
- Tacker
- TripleO
- Vitrage
- OpenStack-Helm

SIGs:
- API-SIG
- AutoScaling SIG
- Edge Computing Group
- Extended Maintenance SIG
- First Contact SIG
- Interop WG/RefStack
- K8s SIG
- Scientific SIG
- Security SIG
- Self-healing SIG
------------------------------------------

If your team is missing from this list, it's because I didn't get a 'yes' response from your PTL/Chair/Contact Person. Have them contact me and we can try to work something out. Now that we have this list, we will start putting together a draft schedule.

See you all in Denver!

-Kendall (diablo_rojo)

[1] https://www.eventbrite.com/e/open-infrastructure-summit-project-teams-gathering-tickets-52606153421
[2] https://www.hyatt.com/en-US/group-booking/DENCC/G-FNTE
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From Bill.Zvonar at windriver.com  Tue Feb 12 18:26:24 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Tue, 12 Feb 2019 18:26:24 +0000
Subject: [Starlingx-discuss] Docs mega spec up for review
In-Reply-To: <9A85D2917C58154C960D95352B22818BBFD1D64E@fmsmsx123.amr.corp.intel.com>
References: <9A85D2917C58154C960D95352B22818BBFD1D64E@fmsmsx123.amr.corp.intel.com>
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC09FE60D@ALA-MBD.corp.ad.wrs.com>

Hi Bruce,

To help with the documentation effort, we've dug back and found some onboarding documents that can be used as seed documents (topics like Networking, Security, Storage, etc.). We've noted in each what needs to be changed to bring it up to date for StarlingX.

I've put them up in the shared StarlingX folder: https://drive.google.com/open?id=1YlAlWT7FtSFNyYDdJ2hbFFW4aNGQXCHY

Another great source of input for these documents would be the SDL documents that were created previously.

Bill...

From: Jones, Bruce E
Sent: Thursday, February 7, 2019 4:52 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Docs mega spec up for review

The Docs team has been working on a spec for an overhaul of our formal project documentation, and the spec has been posted for review. You can find it at https://review.openstack.org/#/c/635641. Feedback and comments graciously accepted!

     brucej

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From cesar.lara at intel.com  Tue Feb 12 20:36:13 2019
From: cesar.lara at intel.com (Lara, Cesar)
Date: Tue, 12 Feb 2019 20:36:13 +0000
Subject: [Starlingx-discuss] [multiOS][meetings] MultiOS team meeting minutes 2/11/2019
Message-ID: <0B566C62EC792145B40E29EFEBF1AB47105EA368@fmsmsx104.amr.corp.intel.com>

MultiOS team meeting

Agenda for 2/11/2019
- Presentation about separation of monolithic build
- Continue discussion on uploaded specs around MultiOS
  Multi-OS overview specification - https://review.openstack.org/#/c/619801/
  Reorganize Flock Services Source Code repositories - https://review.openstack.org/#/c/631288/
  Example of repo based on this spec - https://github.com/starlingx-staging/stx-packaging
- MultiOS build system
- Opens

Notes:

Presentation about separation of monolithic build
- Ken to share these slides; a high-level presentation about the multiOS challenge and some implementation details.
- Topics covered: separating the OS build from the rest of the StarlingX-specific services and artifacts, and building only specific elements and the non-OS piece.

Continue discussion on uploaded specs around MultiOS
- Still no spec around the generation of .deb files and the generation of an Ubuntu version of StarlingX. There has been some integration and testing on this, but it needs to be integrated as part of a new spec. We might want to do the same for newer versions of CentOS. This team has to evaluate the option to build each component of the flock services and their 3rd-party components and ship those independently.
- Still no clear scope for multiOS activities for the May release - we need to clearly state what we are trying to achieve for the next release in the multiOS space.
- There are some activities happening around the cleanup of mocks and dependencies of spec files. We are targeting those to be included as part of laying the groundwork for multiOS.
- The team is on hold with a few tasks regarding directory structure, adding repos and the build system while the meta spec is approved, so it can continue with some of these "ground work" tasks.
- For the May release the multiOS efforts should be focused around these main topics:
  Source code reorganization
  Directory layout
  Dependency cleanup
- We are waiting for these topics to be described in formal specs so we can start assessing the implementation details and discussing the wins around these for the project.

AR - start writing the specs focused only on May release topics
AR - come back with an updated proposal
AR - need to start working on the next level of multiOS and details around the implementation so we can discuss them

MultiOS build system - this topic was discussed along with the specs discussion, although I wanted to restate that there are no changes planned to the current build system. To support other OSes we will bring up tooling and infrastructure native to the newly supported OS; that, plus the groundwork, will set us up for supporting this multiOS effort.

Regards

Cesar Lara
Software Engineering Manager
OpenSource Technology Center
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From bruce.e.jones at intel.com  Tue Feb 12 20:55:07 2019
From: bruce.e.jones at intel.com (Jones, Bruce E)
Date: Tue, 12 Feb 2019 20:55:07 +0000
Subject: [Starlingx-discuss] Docs mega spec up for review
In-Reply-To: <586E8B730EA0DA4A9D6A80A10E486BC09FE60D@ALA-MBD.corp.ad.wrs.com>
References: <9A85D2917C58154C960D95352B22818BBFD1D64E@fmsmsx123.amr.corp.intel.com>
	<586E8B730EA0DA4A9D6A80A10E486BC09FE60D@ALA-MBD.corp.ad.wrs.com>
Message-ID: <9A85D2917C58154C960D95352B22818BBFD26959@fmsmsx123.amr.corp.intel.com>

Wow, thank you! This is very cool. Michael, can we please put this topic on the Docs team agenda for this week?

     brucej

From: Zvonar, Bill [mailto:Bill.Zvonar at windriver.com]
Sent: Tuesday, February 12, 2019 10:26 AM
To: Jones, Bruce E; starlingx-discuss at lists.starlingx.io
Subject: RE: Docs mega spec up for review

[... quoted text snipped - identical to Bill's message above ...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From michael.l.tullis at intel.com  Tue Feb 12 21:08:21 2019
From: michael.l.tullis at intel.com (Tullis, Michael L)
Date: Tue, 12 Feb 2019 21:08:21 +0000
Subject: [Starlingx-discuss] Docs mega spec up for review
In-Reply-To: <9A85D2917C58154C960D95352B22818BBFD26959@fmsmsx123.amr.corp.intel.com>
References: <9A85D2917C58154C960D95352B22818BBFD1D64E@fmsmsx123.amr.corp.intel.com>
	<586E8B730EA0DA4A9D6A80A10E486BC09FE60D@ALA-MBD.corp.ad.wrs.com>
	<9A85D2917C58154C960D95352B22818BBFD26959@fmsmsx123.amr.corp.intel.com>
Message-ID: <3808363B39586544A6839C76CF81445EA1AB4C41@ORSMSX104.amr.corp.intel.com>

Fantastic! Yes, this is on the agenda.

From: Jones, Bruce E
Sent: Tuesday, February 12, 2019 1:55 PM
To: Zvonar, Bill; Tullis, Michael L
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: Docs mega spec up for review

[... quoted thread snipped - identical to the messages above ...]

-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From ada.cabrales at intel.com  Tue Feb 12 23:07:15 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Tue, 12 Feb 2019 23:07:15 +0000
Subject: [Starlingx-discuss] [ Test ] meeting minutes - 02/12/2019
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD72C2C@FMSMSX114.amr.corp.intel.com>

Notes from 02/12/2019

Attendees: Abraham, Fernando, Maria, Cristopher, Elio, JC, JP, Numan, Victor, Ada, Ghada, Bruce, Richo

1. Release verification plan check - Ada - 15 min
- Performance testing - define our set of initial metrics and take our baseline; that would be our May release.
- Victor checking OPNFV suites.
- Have OPNFV as the first set of metrics for the May release and continue growing the performance framework. Communicate this in the community meeting tomorrow.
  + Spec ready: 03/08
  + Metrics for May release defined by: 02/28
  + First set of metrics taken: 03/15
- For automation - define which test cases should be automated first. Ada and Numan to work on it.

2. Repo submissions - Numan - 15 min
- Opt 1 - each test case as a single file
- Opt 2 - all the test cases for a domain, in a single file
- Opt 3 - a single file for each sub-domain (area). For example: nova_livemigration in one file.
- Numan to send the proposal for this one. Looks more manageable. Send your comments by EOW.
  + https://drive.google.com/file/d/1EJnQrz-JSafVW9ZNYBbGdo0TgwcrfI9w/view?usp=sharing

3. Release notes generation (for testing repo) - Abraham/Cristopher - 15 min
- Presentation: https://drive.google.com/file/d/14DNcnISH1hUfb36kUruQhzutJnGRKY7f/view?usp=sharing
- doc and releasenotes are populated automatically with every commit.
- Do we need release notes for the test cases? Abraham to help us check what other teams related to OpenStack are doing regarding release notes for tests.

4. Opens - all
N/A

> -----Original Message-----
> From: Cabrales, Ada [mailto:ada.cabrales at intel.com]
> Sent: Monday, February 11, 2019 7:22 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] [ Test ] meeting agenda - 02/12/2019
>
> Agenda for 02/12/2019
>
> 1. Release verification plan check - Ada - 15 min
>
> 2. Repo submissions - Numan - 15 min
>
> 3. Release notes generation (for testing repo) - Abraham/Cristopher - 15 min
>
> 4. Opens - all
>
>
> Regards
> Ada
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From juan.carlos.alonso at intel.com  Wed Feb 13 00:03:21 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Wed, 13 Feb 2019 00:03:21 +0000
Subject: [Starlingx-discuss] Sanity Test - ISO 20190212
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C9A431@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Feb-12 (link)

Sanity Test is executed in a Bare Metal Environment

Status: GREEN

Simplex
Setup Manual [PASS]
Provisioning 01 TCs [PASS]
Sanity 42 TCs [PASS]
TOTAL: [ 43 TCs PASS ]

===========================================

Sanity Test is executed in a Virtual Environment

Status: GREEN

Simplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 42 TCs [PASS]
TOTAL: [ 47 TCs PASS ]

Duplex
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 45 TCs [PASS]
TOTAL: [ 50 TCs PASS ]

Multinode Controller Storage
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 22 TCs [PASS]
TOTAL: [ 50 TCs PASS ]

Multinode Dedicated Storage
Setup 04 TCs [PASS]
Provisioning 01 TCs [PASS]
Sanity 45 TCs [PASS]
TOTAL: [ 50 TCs PASS ]

------------------------------------------------------------------

Regards.
Juan Carlos Alonso
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From dtroyer at gmail.com  Wed Feb 13 00:31:05 2019
From: dtroyer at gmail.com (Dean Troyer)
Date: Tue, 12 Feb 2019 18:31:05 -0600
Subject: [Starlingx-discuss] [Container] Is 'nova boot' command still supported?
In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C96F0C@FMSMSX108.amr.corp.intel.com>
References: <8557B550001AFB46A43A0CCC314BF85153C96F0C@FMSMSX108.amr.corp.intel.com>
Message-ID: 

On Mon, Feb 11, 2019 at 10:46 AM Alonso, Juan Carlos wrote:
> Openstack commands work correctly but nova, glance, cinder are asking for user name, user id, project name, tenant id, etc. I think some of these parameters are defined in /etc/openstack/clouds.yaml, right?
> Does clouds.yaml need extra parameters?

A modern python-novaclient (and friends) has support for clouds.yaml; the Pike versions currently in the build do not. Aside from Python dependency issues, you can use modern CLIs against old API servers. (Also excepting any WRS changes to the clients; there are a few.)

> That's why I am also asking if the nova, glance, cinder, etc. commands won't be supported, in order to update to the openstack command.

Are there things you need to use the old clients for that OSC does not support?

dt

--
Dean Troyer
dtroyer at gmail.com
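To illustrate Dean's point about clouds.yaml support in modern clients, a minimal sketch of an entry OSC can use today; every value below is a placeholder, not something generated by config_controller:

    # /etc/openstack/clouds.yaml
    clouds:
      mycloud:
        region_name: RegionOne
        identity_api_version: 3
        auth:
          auth_url: http://<keystone-host>/v3
          username: admin
          password: <admin-password>
          project_name: admin
          user_domain_name: Default
          project_domain_name: Default

The cloud is selected per command with, e.g., "openstack --os-cloud mycloud endpoint list", or by exporting OS_CLOUD=mycloud.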
From sgw at linux.intel.com  Wed Feb 13 05:25:23 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Tue, 12 Feb 2019 21:25:23 -0800
Subject: [Starlingx-discuss] [TSC] Request TSC Members please review and comment on MultiOS Specs
Message-ID: <77c29f01-10e8-3690-663c-dad3042f4861@linux.intel.com>

TSC Members,

There have been some additional changes and updates to the MultiOS-related specifications [0]. I would request that TSC members review these specifications ahead of Thursday's TSC meeting so we can have some discussion on them in order to move them forward.

Thanks for your contributions

Sau!

[0] https://review.openstack.org/#/q/topic:multi-os

From marcel at schaible-consulting.de  Wed Feb 13 11:56:44 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Wed, 13 Feb 2019 12:56:44 +0100 (CET)
Subject: [Starlingx-discuss] Build for Duplex Configuration
In-Reply-To: 
References: 
Message-ID: <601521071.413736.1550059004249@communicator.strato.com>

Hi,

since I am still struggling with the installation of a duplex configuration: which build do you currently recommend for testing?

Thanks

Marcel

From cindy.xie at intel.com  Wed Feb 13 14:31:54 2019
From: cindy.xie at intel.com (Xie, Cindy)
Date: Wed, 13 Feb 2019 14:31:54 +0000
Subject: [Starlingx-discuss] Notes: Weekly StarlingX non-OpenStack Distro meeting, 2/13
Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E862A8@SHSMSX104.ccr.corp.intel.com>

Agenda & Notes for 2/13 meeting:

1. CentOS 7.6 upgrade, test status (Shuicheng, Ada/Numan)
- 2 bugs reported, and 2 fixes merged into f/CentOS7.6; branch rebased from master to feature branch. New test images built from f/CentOS7.6 have been provided.
- One more test failure reported from GDC:
  >>> 1 Test Case Failed: Reject changing interface MTU size to values smaller than MTU of provider network / The system accepts MTU values smaller than MTU of provider network.
  Details to be clarified with test reproduction steps, and we need to check if the same issue can be seen in the latest master.
- Numan is taking the day off today. They already have one image today and should be able to re-start the testing; #1814360, which was blocking the WR testing, has been resolved.
- stx-upstream has several conflicts between f/Stein and f/CentOS76: 6 packages in stx-upstream are being upgraded. 4 of them were not changed in the Stein branch; 2 of them were changed from Pike to Stein: python-heatclient and python-openstackclient. The proposed solution is to take those components from f/Stein (it will land into master earlier than CentOS 7.6).
- Still plan to have the feature branch merged into master on Feb 22nd, with risk on test results.

2. Ceph upgrade status (Frank, Changcheng)
- Ovidiu successfully built the ISO and installed the storage nodes, but has not provisioned the OSDs yet. Daniel is debugging the OSD issue by reviewing the patches.
- Ovidiu went through all commits and has a high-level view of what needs to be changed. He will summarize that for Changcheng and will meet with Changcheng, Vivian and Yong to give recommendations for the next steps. Suggestion to split the work between the two teams.
- Yong: may have some hints regarding the issues creating OSDs. Will work with Ovidiu offline to share the workaround.
- Rough plan: good idea of the remaining tasks; will provide an ETA after the meeting.

3. Bug triage (Cindy)
- #1814595: to be assigned
- #1814360: patch merged to f/CentOS76
- #1814345: assigned to Haitao
- #1814335: patch merged to f/CentOS76

4. Opens (All)
- None

-----Original Appointment-----
From: Xie, Cindy
Sent: Monday, November 5, 2018 2:27 PM
To: 'Khalil, Ghada'; Sun, Austin; Somerville, Jim; 'Rowsell, Brent'; Liu, ZhipengS; Wold, Saul; starlingx-discuss at lists.starlingx.io; Shang, Dehao; Waheed, Numan; Troyer, Dean; Jones, Bruce E; Lin, Shuicheng; Zhu, Vivian; Hu, Yong; Xie, Cindy
Cc: Hu, Wei W; 'Seiler, Glenn'; Gomez, Juan P; 'Chen, Jacky'; Perez Rodriguez, Humberto I; 'Young, Ken'; Cobbley, David A; 'Waines, Greg'; Arce Moreno, Abraham; 'Eslimi, Dariush'; Lara, Cesar; Perez Carranza, Jose; 'Hellmann, Gil'; Armstrong, Robert H; Martinez Landa, Hayde; Martinez Monroy, Elio; Fang, Liang A; Poncea, Ovidiu
Subject: Weekly StarlingX non-OpenStack Distro meeting
When: Wednesday, February 13, 2019 6:00 AM-7:00 AM (UTC-08:00) Pacific Time (US & Canada).
Where: https://zoom.us/j/342730236

. Cadence and time slot:
  o Wednesday 9AM Winter EDT (10PM China time, US PDT Winter time 6AM)
. Call Details:
  o Zoom link: https://zoom.us/j/342730236
  o Dialing in from phone:
  o Dial (for higher quality, dial a number based on your current location):
    US: +1 669 900 6833 or +1 646 876 9923
  o Meeting ID: 342 730 236
  o International numbers available: https://zoom.us/u/ed95sU7aQ
. Meeting Agenda and Minutes:
  o https://etherpad.openstack.org/p/stx-distro-other

From marcel at schaible-consulting.de  Wed Feb 13 15:28:24 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Wed, 13 Feb 2019 16:28:24 +0100 (CET)
Subject: [Starlingx-discuss] neutron: cannot create a providernet
Message-ID: <1317027184.431577.1550071704349@communicator.strato.com>

Hi,

I am getting the following error when initially creating a providernet:

DataNetwork providernet-a could not be found.

I am following the installation guide from here:
https://docs.starlingx.io/installation_guide/duplex.html#duplex

Any idea is welcome!
Thanks

Marcel

----------------------------------------
The following configuration will be applied:

System Configuration
--------------------
Time Zone: Europe/Berlin
System mode: duplex

PXEBoot Network Configuration
-----------------------------
Separate PXEBoot network not configured
PXEBoot Controller floating hostname: pxecontroller

Management Network Configuration
--------------------------------
Management interface name: enp9s20f7
Management interface: enp9s20f7
Management interface MTU: 1500
Management subnet: 172.27.1.0/24
Controller floating address: 172.27.1.2
Controller 0 address: 172.27.1.3
Controller 1 address: 172.27.1.4
NFS Management Address 1: 172.27.1.5
NFS Management Address 2: 172.27.1.6
Controller floating hostname: controller
Controller hostname prefix: controller-
OAM Controller floating hostname: oamcontroller
Dynamic IP address allocation is selected
Management multicast subnet: 239.1.1.0/28

Infrastructure Network Configuration
------------------------------------
Infrastructure interface not configured

External OAM Network Configuration
----------------------------------
External OAM interface name: bond0
External OAM interface: bond0
External OAM interface MTU: 1500
External OAM ae member 0: ens1f1
External OAM ae member 1: ens1f3
External OAM ae policy : active-backup
External OAM subnet: 10.62.150.0/24
External OAM gateway address: 10.62.150.1
External OAM floating address: 10.62.150.211
External OAM 0 address: 10.62.150.212
External OAM 1 address: 10.62.150.213

Apply the above configuration? [y/n]: y

Applying configuration (this will take several minutes):

01/08: Creating bootstrap configuration ... DONE
02/08: Applying bootstrap manifest ... DONE
03/08: Persisting local configuration ... DONE
04/08: Populating initial system inventory ... DONE
05/08: Creating system configuration ... DONE
06/08: Applying controller manifest ... DONE
07/08: Finalize controller configuration ... DONE
08/08: Waiting for service activation ... DONE

Configuration was applied

Please complete any out of service commissioning steps with system commands and unlock controller to proceed.

localhost:~# source /etc/nova/openrc
[root at localhost ~(keystone_admin)]# neutron providernet-create providernet-a --type=vlan
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new providernet:
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| description      |                                      |
| id               | e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b |
| mtu              | 1500                                 |
| name             | providernet-a                        |
| ranges           |                                      |
| status           | DOWN                                 |
| type             | vlan                                 |
| vlan_transparent | False                                |
+------------------+--------------------------------------+
[root at localhost ~(keystone_admin)]# neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
Created a new providernet_range:
+------------------+--------------------------------------+
| Field            | Value                                |
+------------------+--------------------------------------+
| description      |                                      |
| id               | ce8bba6a-5bb0-4ddb-b430-bd216912a5c6 |
| maximum          | 400                                  |
| minimum          | 100                                  |
| name             | providernet-a-range1                 |
| project_id       | 4f59503b008e49be9f43e393bf89a19e     |
| providernet_id   | e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b |
| providernet_name | providernet-a                        |
| providernet_type | vlan                                 |
| shared           | False                                |
| tenant_id        | 4f59503b008e49be9f43e393bf89a19e     |
+------------------+--------------------------------------+
[root at localhost ~(keystone_admin)]# system host-if-list -a controller-0
+--------------------------------------+-----------+----------+----------+---------+----------------+------------------------+-------------+---------------------------------+---------------+
| uuid                                 | name      | class    | type     | vlan id | ports          | uses i/f               | used by i/f | attributes                      | data networks |
+--------------------------------------+-----------+----------+----------+---------+----------------+------------------------+-------------+---------------------------------+---------------+
| 04665f98-8318-4855-a591-1307231a40b3 | bond0     | platform | ae       | None    | []             | [u'ens1f1', u'ens1f3'] | []          | MTU=1500,AE_MODE=active_standby | []            |
| 2a6b70cb-53ad-43c3-adc7-b92c4c44470d | enp9s20f6 | None     | ethernet | None    | [u'enp9s20f6'] | []                     | []          | MTU=1500                        | []            |
| 2dcbe277-530e-400c-b156-43545b4507c8 | ens1f1    | None     | ethernet | None    | [u'ens1f1']    | []                     | [u'bond0']  | MTU=1500                        | []            |
| 3acd41aa-aeb0-400c-86ba-79f0fdb24829 | ens1f3    | None     | ethernet | None    | [u'ens1f3']    | []                     | [u'bond0']  | MTU=1500                        | []            |
| 86b8867a-2aee-4cb9-a04a-7587ff8d833d | enp3s0f0  | None     | ethernet | None    | [u'enp3s0f0']  | []                     | []          | MTU=1500                        | []            |
| ce78e71f-4300-4a6b-ae0a-43e3139a5515 | enp3s0f1  | None     | ethernet | None    | [u'enp3s0f1']  | []                     | []          | MTU=1500                        | []            |
| d3e101a0-a873-4c76-b7b5-83368626acd4 | enp9s21f0 | None     | ethernet | None    | [u'enp9s21f0'] | []                     | []          | MTU=1500                        | []            |
| e93f1d26-e6ee-4c7f-84ef-8b68e312ef16 | enp9s21f1 | None     | ethernet | None    | [u'enp9s21f1'] | []                     | []          | MTU=1500                        | []            |
| ed1b47c8-0311-4e39-8840-9931c69c147e | ens1f0    | None     | ethernet | None    | [u'ens1f0']    | []                     | []          | MTU=1500                        | []            |
| ee4ce037-1408-4695-b05d-e5491f30e274 | enp9s20f7 | platform | ethernet | None    | [u'enp9s20f7'] | []                     | []          | MTU=1500                        | []            |
+--------------------------------------+-----------+----------+----------+---------+----------------+------------------------+-------------+---------------------------------+---------------+
[root at localhost ~(keystone_admin)]# neutron providernet-list
neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead.
+--------------------------------------+---------------+------+------+------------------------------------------------------------------+
| id                                   | name          | type | mtu  | ranges                                                           |
+--------------------------------------+---------------+------+------+------------------------------------------------------------------+
| e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b | providernet-a | vlan | 1500 | {"minimum": 100, "maximum": 400, "name": "providernet-a-range1"} |
+--------------------------------------+---------------+------+------+------------------------------------------------------------------+
[root at localhost ~(keystone_admin)]# system host-if-modify -c data controller-0 enp9s21f1 -p providernet-a
DataNetwork providernet-a could not be found.
[root at localhost ~(keystone_admin)]#
[root at localhost ~(keystone_admin)]# ip addr
1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host
       valid_lft forever preferred_lft forever
2: enp3s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ec:9e:cd:1f:7e:b0 brd ff:ff:ff:ff:ff:ff
3: ens1f0: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether b4:96:91:1a:01:60 brd ff:ff:ff:ff:ff:ff
4: ens1f1: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
    link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff
5: enp3s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether ec:9e:cd:1f:7e:b1 brd ff:ff:ff:ff:ff:ff
6: ens1f3: mtu 1500 qdisc mq master bond0 state DOWN group default qlen 1000
    link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff
7: enp9s20f6: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 02:01:00:10:01:15 brd ff:ff:ff:ff:ff:ff
8: enp9s20f7: mtu 1500 qdisc htb state UP group default qlen 1000
    link/ether 02:01:00:10:02:15 brd ff:ff:ff:ff:ff:ff
    inet 172.27.1.3/24 brd 172.27.1.255 scope global enp9s20f7
       valid_lft forever preferred_lft forever
    inet 169.254.202.2/24 scope global enp9s20f7
       valid_lft forever preferred_lft forever
    inet 172.27.1.2/24 scope global secondary enp9s20f7
       valid_lft forever preferred_lft forever
    inet 172.27.1.5/24 brd 172.27.1.255 scope global secondary enp9s20f7
       valid_lft forever preferred_lft forever
    inet 172.27.1.6/24 brd 172.27.1.255 scope global secondary enp9s20f7
       valid_lft forever preferred_lft forever
    inet6 fe80::1:ff:fe10:215/64 scope link
       valid_lft forever preferred_lft forever
9: enp9s21f0: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 02:01:00:10:01:16 brd ff:ff:ff:ff:ff:ff
10: enp9s21f1: mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 02:01:00:10:02:16 brd ff:ff:ff:ff:ff:ff
11: bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff
    inet 10.62.150.211/24 scope global bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::b696:91ff:fe1a:161/64 scope link
       valid_lft forever preferred_lft forever
[root at localhost ~(keystone_admin)]#

From Brent.Rowsell at windriver.com  Wed Feb 13 15:35:39 2019
From: Brent.Rowsell at windriver.com (Rowsell, Brent)
Date: Wed, 13 Feb 2019 15:35:39 +0000
Subject: [Starlingx-discuss] neutron: cannot create a providernet
In-Reply-To: <1317027184.431577.1550071704349@communicator.strato.com>
References: <1317027184.431577.1550071704349@communicator.strato.com>
Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB3D1B7C@ALA-MBD.corp.ad.wrs.com>

Hi,

This has been changed. See the following mail post:

http://lists.starlingx.io/pipermail/starlingx-discuss/2019-January/002788.html

Brent

-----Original Message-----
From: Marcel Schaible [mailto:marcel at schaible-consulting.de]
Sent: Wednesday, February 13, 2019 10:28 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] neutron: cannot create a providernet

Hi,

I am getting the following error when initially creating a providernet:

DataNetwork providernet-a could not be found.

I am following the installation guide from here:
https://docs.starlingx.io/installation_guide/duplex.html#duplex

Any idea is welcome!
Thanks

Marcel

[... configuration summary and console output snipped - identical to Marcel's original message above ...]

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From marcel at schaible-consulting.de  Wed Feb 13 16:10:01 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Wed, 13 Feb 2019 17:10:01 +0100 (CET)
Subject: [Starlingx-discuss] neutron: cannot create a providernet
In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB3D1B7C@ALA-MBD.corp.ad.wrs.com>
References: <1317027184.431577.1550071704349@communicator.strato.com>
	<2588653EBDFFA34B982FAF00F1B4844EBB3D1B7C@ALA-MBD.corp.ad.wrs.com>
Message-ID: <365038342.434792.1550074201286@communicator.strato.com>

Thanks Brent for your quick response.

Before going through the disk configuration part again, does this look correct:
Before going again thru the disk configuration part, is this looking correct: [root at controller-0 ~(keystone_admin)]# system datanetwork-list +--------------------------------------+---------------+----------+------+ | uuid | name | network_ | mtu | | | | type | | +--------------------------------------+---------------+----------+------+ | 4cb64a85-801e-4a30-aa10-0a073357b81e | providernet-a | vlan | 1500 | +--------------------------------------+---------------+----------+------+ [root at controller-0 ~(keystone_admin)]# system datanetwork-show 4cb64a85-801e-4a3... +--------------+--------------------------------------+ | Property | Value | +--------------+--------------------------------------+ | id | 2 | | uuid | 4cb64a85-801e-4a30-aa10-0a073357b81e | | name | providernet-a | | network_type | vlan | | mtu | 1500 | | description | None | +--------------+--------------------------------------+ Thanks Marcel > "Rowsell, Brent" hat am 13. Februar 2019 um 16:35 geschrieben: > > > Hi, > > This has been changed. See the following mail post > > http://lists.starlingx.io/pipermail/starlingx-discuss/2019-January/002788.html > > Brent > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, February 13, 2019 10:28 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] neutron: cannot create a providernet > > Hi, > > I am getting the following error when initially creating a providernet: > > DataNetwork providernet-a could not be found. > > I am following the installation guide from here: > https://docs.starlingx.io/installation_guide/duplex.html#duplex > > Any idea is welcome! > > Thanks > > Marcel > > ---------------------------------------- > The following configuration will be applied: > > System Configuration > -------------------- > Time Zone: Europe/Berlin > System mode: duplex > > PXEBoot Network Configuration > ----------------------------- > Separate PXEBoot network not configured > PXEBoot Controller floating hostname: pxecontroller > > Management Network Configuration > -------------------------------- > Management interface name: enp9s20f7 > Management interface: enp9s20f7 > Management interface MTU: 1500 > Management subnet: 172.27.1.0/24 > Controller floating address: 172.27.1.2 > Controller 0 address: 172.27.1.3 > Controller 1 address: 172.27.1.4 > NFS Management Address 1: 172.27.1.5 > NFS Management Address 2: 172.27.1.6 > Controller floating hostname: controller Controller hostname prefix: controller- OAM Controller floating hostname: oamcontroller Dynamic IP address allocation is selected Management multicast subnet: 239.1.1.0/28 > > Infrastructure Network Configuration > ------------------------------------ > Infrastructure interface not configured > > External OAM Network Configuration > ---------------------------------- > External OAM interface name: bond0 > External OAM interface: bond0 > External OAM interface MTU: 1500 > External OAM ae member 0: ens1f1 > External OAM ae member 1: ens1f3 > External OAM ae policy : active-backup > External OAM subnet: 10.62.150.0/24 > External OAM gateway address: 10.62.150.1 External OAM floating address: 10.62.150.211 External OAM 0 address: 10.62.150.212 External OAM 1 address: 10.62.150.213 > > Apply the above configuration? [y/n]: y > > Applying configuration (this will take several minutes): > > 01/08: Creating bootstrap configuration ... DONE > 02/08: Applying bootstrap manifest ... DONE > 03/08: Persisting local configuration ... 
DONE > 04/08: Populating initial system inventory ... DONE > 05/08: Creating system configuration ... DONE > 06/08: Applying controller manifest ... DONE > 07/08: Finalize controller configuration ... DONE > 08/08: Waiting for service activation ... DONE > > Configuration was applied > > Please complete any out of service commissioning steps with system commands and unlock controller to proceed. > localhost:~# source /etc/nova/openrc > [root at localhost ~(keystone_admin)]# neutron providernet-create providernet-a --type=vlan neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. > Created a new providernet: > +------------------+--------------------------------------+ > | Field | Value | > +------------------+--------------------------------------+ > | description | | > | id | e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b | > | mtu | 1500 | > | name | providernet-a | > | ranges | | > | status | DOWN | > | type | vlan | > | vlan_transparent | False | > +------------------+--------------------------------------+ > [root at localhost ~(keystone_admin)]# neutron providernet-range-create --name providernet-a-range1 --range 100-400 providernet-a neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. > Created a new providernet_range: > +------------------+--------------------------------------+ > | Field | Value | > +------------------+--------------------------------------+ > | description | | > | id | ce8bba6a-5bb0-4ddb-b430-bd216912a5c6 | > | maximum | 400 | > | minimum | 100 | > | name | providernet-a-range1 | > | project_id | 4f59503b008e49be9f43e393bf89a19e | > | providernet_id | e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b | > | providernet_name | providernet-a | > | providernet_type | vlan | > | shared | False | > | tenant_id | 4f59503b008e49be9f43e393bf89a19e | > +------------------+--------------------------------------+ > [root at localhost ~(keystone_admin)]# system host-if-list -a controller-0 > +--------------------------------------+-----------+----------+----------+---------+----------------+------------------------+-------------+---------------------------------+---------------+ > | uuid | name | class | type | vlan id | ports | uses i/f | used by i/f | attributes | data networks | > +--------------------------------------+-----------+----------+----------+---------+----------------+------------------------+-------------+---------------------------------+---------------+ > | 04665f98-8318-4855-a591-1307231a40b3 | bond0 | platform | ae | None | [] | [u'ens1f1', u'ens1f3'] | [] | MTU=1500,AE_MODE=active_standby | [] | > | 2a6b70cb-53ad-43c3-adc7-b92c4c44470d | enp9s20f6 | None | ethernet | None | [u'enp9s20f6'] | [] | [] | MTU=1500 | [] | > | 2dcbe277-530e-400c-b156-43545b4507c8 | ens1f1 | None | ethernet | None | [u'ens1f1'] | [] | [u'bond0'] | MTU=1500 | [] | > | 3acd41aa-aeb0-400c-86ba-79f0fdb24829 | ens1f3 | None | ethernet | None | [u'ens1f3'] | [] | [u'bond0'] | MTU=1500 | [] | > | 86b8867a-2aee-4cb9-a04a-7587ff8d833d | enp3s0f0 | None | ethernet | None | [u'enp3s0f0'] | [] | [] | MTU=1500 | [] | > | ce78e71f-4300-4a6b-ae0a-43e3139a5515 | enp3s0f1 | None | ethernet | None | [u'enp3s0f1'] | [] | [] | MTU=1500 | [] | > | d3e101a0-a873-4c76-b7b5-83368626acd4 | enp9s21f0 | None | ethernet | None | [u'enp9s21f0'] | [] | [] | MTU=1500 | [] | > | e93f1d26-e6ee-4c7f-84ef-8b68e312ef16 | enp9s21f1 | None | ethernet | None | [u'enp9s21f1'] | [] | [] | MTU=1500 | [] | > | ed1b47c8-0311-4e39-8840-9931c69c147e | ens1f0 | None | ethernet | 
None | [u'ens1f0'] | [] | [] | MTU=1500 | [] | > | ee4ce037-1408-4695-b05d-e5491f30e274 | enp9s20f7 | platform | ethernet | None | [u'enp9s20f7'] | [] | [] | MTU=1500 | [] | > +--------------------------------------+-----------+----------+----------+---------+----------------+------------------------+-------------+---------------------------------+---------------+ > [root at localhost ~(keystone_admin)]# neutron providernet-list neutron CLI is deprecated and will be removed in the future. Use openstack CLI instead. > +--------------------------------------+---------------+------+------+------------------------------------------------------------------+ > | id | name | type | mtu | ranges | > +--------------------------------------+---------------+------+------+------------------------------------------------------------------+ > | e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b | providernet-a | vlan | 1500 | > | {"minimum": 100, "maximum": 400, "name": "providernet-a-range1"} | > +--------------------------------------+---------------+------+------+------------------------------------------------------------------+ > enp9s21f1 -p providernet-a_admin)]# system host-if-modify -c data controller-0 DataNetwork providernet-a could not be found. > [root at localhost ~(keystone_admin)]# > [root at localhost ~(keystone_admin)]# ip addr > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: enp3s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether ec:9e:cd:1f:7e:b0 brd ff:ff:ff:ff:ff:ff > 3: ens1f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether b4:96:91:1a:01:60 brd ff:ff:ff:ff:ff:ff > 4: ens1f1: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000 > link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff > 5: enp3s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether ec:9e:cd:1f:7e:b1 brd ff:ff:ff:ff:ff:ff > 6: ens1f3: mtu 1500 qdisc mq master bond0 state DOWN group default qlen 1000 > link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff > 7: enp9s20f6: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 02:01:00:10:01:15 brd ff:ff:ff:ff:ff:ff > 8: enp9s20f7: mtu 1500 qdisc htb state UP group default qlen 1000 > link/ether 02:01:00:10:02:15 brd ff:ff:ff:ff:ff:ff > inet 172.27.1.3/24 brd 172.27.1.255 scope global enp9s20f7 > valid_lft forever preferred_lft forever > inet 169.254.202.2/24 scope global enp9s20f7 > valid_lft forever preferred_lft forever > inet 172.27.1.2/24 scope global secondary enp9s20f7 > valid_lft forever preferred_lft forever > inet 172.27.1.5/24 brd 172.27.1.255 scope global secondary enp9s20f7 > valid_lft forever preferred_lft forever > inet 172.27.1.6/24 brd 172.27.1.255 scope global secondary enp9s20f7 > valid_lft forever preferred_lft forever > inet6 fe80::1:ff:fe10:215/64 scope link > valid_lft forever preferred_lft forever > 9: enp9s21f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 02:01:00:10:01:16 brd ff:ff:ff:ff:ff:ff > 10: enp9s21f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 02:01:00:10:02:16 brd ff:ff:ff:ff:ff:ff > 11: bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 > link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff > inet 10.62.150.211/24 scope global bond0 > valid_lft forever preferred_lft forever > inet6 
> [root at localhost ~(keystone_admin)]# ip addr > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > inet 127.0.0.1/8 scope host lo > valid_lft forever preferred_lft forever > inet6 ::1/128 scope host > valid_lft forever preferred_lft forever > 2: enp3s0f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether ec:9e:cd:1f:7e:b0 brd ff:ff:ff:ff:ff:ff > 3: ens1f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether b4:96:91:1a:01:60 brd ff:ff:ff:ff:ff:ff > 4: ens1f1: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000 > link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff > 5: enp3s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether ec:9e:cd:1f:7e:b1 brd ff:ff:ff:ff:ff:ff > 6: ens1f3: mtu 1500 qdisc mq master bond0 state DOWN group default qlen 1000 > link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff > 7: enp9s20f6: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 02:01:00:10:01:15 brd ff:ff:ff:ff:ff:ff > 8: enp9s20f7: mtu 1500 qdisc htb state UP group default qlen 1000 > link/ether 02:01:00:10:02:15 brd ff:ff:ff:ff:ff:ff > inet 172.27.1.3/24 brd 172.27.1.255 scope global enp9s20f7 > valid_lft forever preferred_lft forever > inet 169.254.202.2/24 scope global enp9s20f7 > valid_lft forever preferred_lft forever > inet 172.27.1.2/24 scope global secondary enp9s20f7 > valid_lft forever preferred_lft forever > inet 172.27.1.5/24 brd 172.27.1.255 scope global secondary enp9s20f7 > valid_lft forever preferred_lft forever > inet 172.27.1.6/24 brd 172.27.1.255 scope global secondary enp9s20f7 > valid_lft forever preferred_lft forever > inet6 fe80::1:ff:fe10:215/64 scope link > valid_lft forever preferred_lft forever > 9: enp9s21f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 02:01:00:10:01:16 brd ff:ff:ff:ff:ff:ff > 10: enp9s21f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 > link/ether 02:01:00:10:02:16 brd ff:ff:ff:ff:ff:ff > 11: bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 > link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff > inet 10.62.150.211/24 scope global bond0 > valid_lft forever preferred_lft forever > inet6 fe80::b696:91ff:fe1a:161/64 scope link > valid_lft forever preferred_lft forever [root at localhost ~(keystone_admin)]# > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcel at schaible-consulting.de Wed Feb 13 16:28:00 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Wed, 13 Feb 2019 17:28:00 +0100 (CET) Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot Message-ID: <1722106541.435943.1550075280383@communicator.strato.com> Hi, when I unlock the controller-0 after installation following this tutorial https://docs.starlingx.io/installation_guide/duplex.html#duplex the system is rebooting automatically with a lot of errors from drbd. Controller-1 is not configured at this point in time. Must I configure the secondary controller before unlocking controller-0? Thanks Marcel From Don.Penney at windriver.com Wed Feb 13 16:41:44 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 13 Feb 2019 16:41:44 +0000 Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot In-Reply-To: <1722106541.435943.1550075280383@communicator.strato.com> References: <1722106541.435943.1550075280383@communicator.strato.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D211@ALA-MBD.corp.ad.wrs.com> Yes, this is the correct behavior. As for the drbd warnings, there are some state change failures that occur as the system shuts down, but aren't a problem (aside from the noise). I've managed to clean up most of these in a recent update. What build are you using?
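For anyone chasing the same "Device is held open by someone" noise, the holder of a DRBD device can usually be identified directly on the node. A diagnostic sketch, assuming the DRBD 8.x userspace tools shipped with CentOS 7 and /dev/drbd0 as an example device:

    cat /proc/drbd        # connection state and role of each DRBD resource
    drbd-overview         # same data, one line per resource (if drbd-utils provides the helper)
    fuser -mv /dev/drbd0  # processes or mounts still holding the device open
    lsof /dev/drbd0       # alternative view of open handles on the device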
> > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, February 13, 2019 11:28 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > Hi, > > when I unlock the controller-0 after installation following this tutorial > > https://docs.starlingx.io/installation_guide/duplex.html#duplex > > the system is rebooting automatically with a lot of errors from drdb. > > Controller-1 is not configured at this point in time. > > Must I configure the secondary controller before unlocking controller-0? > > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From mario.alfredo.c.arevalo at intel.com Wed Feb 13 16:55:35 2019 From: mario.alfredo.c.arevalo at intel.com (Arevalo, Mario Alfredo C) Date: Wed, 13 Feb 2019 16:55:35 +0000 Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot In-Reply-To: <1722106541.435943.1550075280383@communicator.strato.com> References: <1722106541.435943.1550075280383@communicator.strato.com> Message-ID: <6594B51DBE477C48AAE23675314E6C466457ACDE@fmsmsx107.amr.corp.intel.com> Hi Marcel Can you please share us the output of the drbd error? Thanks. Best regards. Mario. ________________________________________ From: Marcel Schaible [marcel at schaible-consulting.de] Sent: Wednesday, February 13, 2019 8:28 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot Hi, when I unlock the controller-0 after installation following this tutorial https://docs.starlingx.io/installation_guide/duplex.html#duplex the system is rebooting automatically with a lot of errors from drdb. Controller-1 is not configured at this point in time. Must I configure the secondary controller before unlocking controller-0? Thanks Marcel _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Wed Feb 13 17:02:18 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 13 Feb 2019 17:02:18 +0000 Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot In-Reply-To: <1898058568.437106.1550076460788@communicator.strato.com> References: <1722106541.435943.1550075280383@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D211@ALA-MBD.corp.ad.wrs.com> <1898058568.437106.1550076460788@communicator.strato.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D23F@ALA-MBD.corp.ad.wrs.com> Ok, my updates related to the DRBD state change failures merged Feb 8th. I wouldn't expect to see an additional reboot. So what you're seeing, just to confirm, is: * install node, run config_controller, do config steps * system host-unlock controller-0 * system reboots * system reboots a second time? At what point is it rebooting a second time? Maybe a reboot is being triggered due to config changes made when the puppet manifests apply, which I think we could confirm from the puppet logs. 
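Checking the puppet logs Don refers to is straightforward on the node itself; a minimal sketch, assuming the standard /var/log/puppet location (the per-run directory layout is an assumption):

    ls -lrt /var/log/puppet/           # one timestamped directory per configuration run; newest last (assumed layout)
    grep -rn "Error" /var/log/puppet/  # any manifest apply failures across all runs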
-----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Wednesday, February 13, 2019 11:48 AM To: Penney, Don; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot I am using the build centos-20190206T060000Z. The main problem, beside of the noise is, that the system comes up and reboots immediately again. > "Penney, Don" hat am 13. Februar 2019 um 17:41 geschrieben: > > > Yes, this is the correct behavior. > > As for the drbd warnings, there are some state change failures that occur as the system shuts down, but aren't a problem (aside from the noise). I've managed to clean up most of these in a recent update. What build are you using? > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, February 13, 2019 11:28 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > Hi, > > when I unlock the controller-0 after installation following this tutorial > > https://docs.starlingx.io/installation_guide/duplex.html#duplex > > the system is rebooting automatically with a lot of errors from drdb. > > Controller-1 is not configured at this point in time. > > Must I configure the secondary controller before unlocking controller-0? > > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcel at schaible-consulting.de Wed Feb 13 17:17:39 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Wed, 13 Feb 2019 18:17:39 +0100 (CET) Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D23F@ALA-MBD.corp.ad.wrs.com> References: <1722106541.435943.1550075280383@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D211@ALA-MBD.corp.ad.wrs.com> <1898058568.437106.1550076460788@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D23F@ALA-MBD.corp.ad.wrs.com> Message-ID: <23037907.438773.1550078259297@communicator.strato.com> At the moment the system seems to stablize after 2 reboots. I'll keep an eye on that. Is the alarm list looking reasonable after the controller-0 is unlocked and rebooted? If yes, I would start to configure the controller-1. root at controller-0 ~(keystone_admin)]# fm alarm-list +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ | Alarm ID | Reason Text | Entity ID | Severity | Time Stamp | +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ | 200.011 | controller-0 experienced a configuration failure. 
| host=controller-0 | critical | 2019-02-13T18:57:13.772108 | | 400.002 | Service group controller-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=controller-services | major | 2019-02-13T18:56:15.215083 | | 400.002 | Service group vim-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=vim-services | major | 2019-02-13T18:56:15.174083 | | 400.002 | Service group cloud-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=cloud-services | major | 2019-02-13T18:56:12.213065 | | 400.002 | Service group oam-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=oam-services | major | 2019-02-13T18:56:05.330099 | | 400.002 | Service group patching-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=patching-services | major | 2019-02-13T18:56:04.276424 | | 400.002 | Service group directory-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=directory-services | major | 2019-02-13T18:56:04.195081 | | 400.002 | Service group web-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=web-services | major | 2019-02-13T18:56:04.114064 | | 400.005 | Communication failure detected with peer over port enp9s20f7 on host controller-0 | host=controller-0.network=mgmt | major | 2019-02-13T18:56:03.790070 | | 100.106 | 'OAM' Port failed. | host=controller-0.port=cd202069-f5a0-43bb-a445-d3b9b58ce631 | major | 2019-02-13T18:56:00.563067 | | 100.107 | 'OAM' Interface degraded. | host=controller-0.interface=oam | major | 2019-02-13T18:56:00.515836 | | 300.004 | No enabled compute host with connectivity to provider network. | service=networking.providernet=e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b | major | 2019-02-13T17:20:21.110405 | +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ [root at controller-0 ~(keystone_admin)]# > "Penney, Don" hat am 13. Februar 2019 um 18:02 geschrieben: > > > Ok, my updates related to the DRBD state change failures merged Feb 8th. > > I wouldn't expect to see an additional reboot. So what you're seeing, just to confirm, is: > * install node, run config_controller, do config steps > * system host-unlock controller-0 > * system reboots > * system reboots a second time? Yes, exactly. > > At what point is it rebooting a second time? Maybe a reboot is being triggered due to config changes made when the puppet manifests apply, which I think we could confirm from the puppet logs. The second reboot starts in about 1 minute after the login prompt is shown. > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, February 13, 2019 11:48 AM > To: Penney, Don; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > I am using the build centos-20190206T060000Z. 
> > The main problem, beside of the noise is, that the system comes up and reboots immediately again. > > > > > "Penney, Don" hat am 13. Februar 2019 um 17:41 geschrieben: > > > > > > Yes, this is the correct behavior. > > > > As for the drbd warnings, there are some state change failures that occur as the system shuts down, but aren't a problem (aside from the noise). I've managed to clean up most of these in a recent update. What build are you using? > > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Wednesday, February 13, 2019 11:28 AM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > Hi, > > > > when I unlock the controller-0 after installation following this tutorial > > > > https://docs.starlingx.io/installation_guide/duplex.html#duplex > > > > the system is rebooting automatically with a lot of errors from drdb. > > > > Controller-1 is not configured at this point in time. > > > > Must I configure the secondary controller before unlocking controller-0? > > > > Thanks > > > > Marcel > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From felipe.de.jesus.ruiz.garcia at intel.com Wed Feb 13 17:19:13 2019 From: felipe.de.jesus.ruiz.garcia at intel.com (Ruiz Garcia, Felipe De Jesus) Date: Wed, 13 Feb 2019 17:19:13 +0000 Subject: [Starlingx-discuss] build-pkgs behavior: packages always compile Message-ID: <6454B6BCFEF18140B07C3AF1FA04DC89496B37FD@FMSMSX108.amr.corp.intel.com> Hi Scott I am working on the bug 1804687, and I have realized that some packages are rebuilt when "build-pkgs" is executed even if the package does not have new changes. The packages mentioned are: sm-common-debuginfo-1.0.0-20.tis.x86_64.rpm sm-common-dev-1.0.0-20.tis.x86_64.rpm sm-common-libs-1.0.0-20.tis.x86_64.rpm sm-eru-1.0.0-20.tis.x86_64.rpm sm-common-1.0.0-20.tis.src.rpm sm-common-1.0.0-20.tis.x86_64.rpm build-info-1.0-4.tis.src.rpm build-info-1.0-4.tis.x86_64.rpm build-info-dev-1.0-4.tis.x86_64.rpm Is this behavior a bug? otherwise why are those packages built every time if they have the same version? Note: The fix for the bug is ready for review https://review.openstack.org/#/c/634513/ Regards Felipe Ruiz / Pipo / Tranzemc Before anything else, preparation is the key to success. ( Alexander Graham Bell ) -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From marcel at schaible-consulting.de Wed Feb 13 17:20:33 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Wed, 13 Feb 2019 18:20:33 +0100 (CET) Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot In-Reply-To: <6594B51DBE477C48AAE23675314E6C466457ACDE@fmsmsx107.amr.corp.intel.com> References: <1722106541.435943.1550075280383@communicator.strato.com> <6594B51DBE477C48AAE23675314E6C466457ACDE@fmsmsx107.amr.corp.intel.com> Message-ID: <92325439.438880.1550078433683@communicator.strato.com> controller-0 login: wrsroot Password: [ 447.539473] block drbd5: State change failed: Device is held open by someone [ 447.547353] block drbd5: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 447.557656] block drbd5: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 447.616216] block drbd1: State change failed: Device is held open by someone [ 447.624093] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 447.625520] block drbd2: State change failed: Device is held open by someone [ 447.625523] block drbd2: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 447.625524] block drbd2: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 447.663063] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 447.695930] block drbd0: State change failed: Device is held open by someone [ 447.703809] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 447.714117] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 447.717735] block drbd5: State change failed: Device is held open by someone [ 447.717737] block drbd5: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 447.717739] block drbd5: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 447.744998] block drbd2: State change failed: Device is held open by someone [ 447.745001] block drbd2: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 447.745002] block drbd2: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 447.790092] block drbd1: State change failed: Device is held open by someone [ 447.792927] block drbd0: State change failed: Device is held open by someone [ 447.792930] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 447.792931] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 447.826631] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 447.826633] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 448.813520] block drbd2: State change failed: Device is held open by someone [ 448.816397] block drbd0: State change failed: Device is held open by someone [ 448.816399] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 448.816400] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 448.850061] block drbd2: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 448.850062] block drbd2: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 448.851141] block drbd1: State change failed: Device is held open by someone [ 448.851143] block 
drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 448.851144] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 449.902198] block drbd0: State change failed: Device is held open by someone [ 449.910077] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 449.920380] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 449.933114] block drbd1: State change failed: Device is held open by someone [ 449.935475] block drbd2: State change failed: Device is held open by someone [ 449.935478] block drbd2: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 449.935479] block drbd2: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 449.969656] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 449.969658] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 450.975871] block drbd2: State change failed: Device is held open by someone [ 450.983748] block drbd2: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 450.994052] block drbd2: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 451.015768] block drbd0: State change failed: Device is held open by someone [ 451.023643] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 451.033947] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 451.038700] block drbd1: State change failed: Device is held open by someone [ 451.038702] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 451.038703] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 451.954434] block drbd3: State change failed: Device is held open by someone [ 451.962309] block drbd3: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 451.972608] block drbd3: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 452.007079] block drbd3: State change failed: Device is held open by someone [ 452.012542] block drbd2: State change failed: Device is held open by someone [ 452.012544] block drbd2: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 452.012545] block drbd2: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 452.043617] block drbd3: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 452.043619] block drbd3: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 452.074582] block drbd1: State change failed: Device is held open by someone [ 452.080401] block drbd0: State change failed: Device is held open by someone [ 452.080403] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 452.080404] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 452.111118] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 452.111120] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 453.045601] block drbd2: State change failed: Device is held open by someone [ 453.053478] block drbd2: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 453.063780] block drbd2: wanted = { 
cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 453.096723] block drbd3: State change failed: Device is held open by someone [ 453.104599] block drbd3: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 453.114898] block drbd3: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 453.137325] block drbd0: State change failed: Device is held open by someone [ 453.142791] block drbd1: State change failed: Device is held open by someone [ 453.142793] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 453.142794] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 453.173863] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 453.173864] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 454.091972] block drbd2: State change failed: Device is held open by someone [ 454.099849] block drbd2: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 454.110150] block drbd2: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 454.145690] block drbd3: State change failed: Device is held open by someone [ 454.153561] block drbd3: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 454.163867] block drbd3: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 454.176301] block drbd1: State change failed: Device is held open by someone [ 454.184182] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 454.184184] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 454.207939] block drbd0: State change failed: Device is held open by someone [ 454.215814] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 454.226114] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 455.139180] block drbd2: State change failed: Device is held open by someone [ 455.147055] block drbd2: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 455.157356] block drbd2: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 455.187247] block drbd3: State change failed: Device is held open by someone [ 455.195124] block drbd3: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 455.205430] block drbd3: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 455.206796] block drbd1: State change failed: Device is held open by someone [ 455.206798] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 455.206800] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 455.254455] block drbd0: State change failed: Device is held open by someone [ 455.262332] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 455.272633] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 456.186309] block drbd2: State change failed: Device is held open by someone [ 456.194185] block drbd2: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 456.204488] block drbd2: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 456.223846] block drbd1: State change failed: Device 
is held open by someone [ 456.224096] block drbd3: State change failed: Device is held open by someone [ 456.224098] block drbd3: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 456.224099] block drbd3: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 456.260386] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 456.260388] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 456.300789] block drbd0: State change failed: Device is held open by someone [ 456.308664] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 456.318965] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 457.242948] block drbd3: State change failed: Device is held open by someone [ 457.250834] block drbd3: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 457.250835] block drbd3: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 457.279143] block drbd1: State change failed: Device is held open by someone [ 457.287019] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 457.297319] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 457.346954] block drbd0: State change failed: Device is held open by someone [ 457.354829] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 457.365129] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 458.310997] block drbd3: State change failed: Device is held open by someone [ 458.318872] block drbd3: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 458.326017] block drbd1: State change failed: Device is held open by someone [ 458.326019] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 458.326020] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 458.357835] block drbd3: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 458.393913] block drbd0: State change failed: Device is held open by someone [ 458.401786] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 458.412085] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 459.346078] block drbd1: State change failed: Device is held open by someone [ 459.353953] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 459.364252] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 459.375826] block drbd3: State change failed: Device is held open by someone [ 459.383716] block drbd3: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 459.383717] block drbd3: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 459.440444] block drbd0: State change failed: Device is held open by someone [ 459.448319] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 459.458619] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 460.394649] block drbd1: State change failed: Device is held open by someone [ 460.402529] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown 
r----- } [ 460.402530] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 460.498012] block drbd0: State change failed: Device is held open by someone [ 460.505890] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 460.505891] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 461.458950] block drbd1: State change failed: Device is held open by someone [ 461.466830] block drbd1: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 461.466831] block drbd1: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } [ 461.529731] block drbd0: State change failed: Device is held open by someone [ 461.537611] block drbd0: state = { cs:WFConnection ro:Primary/Unknown ds:UpToDate/DUnknown r----- } [ 461.537612] block drbd0: wanted = { cs:WFConnection ro:Secondary/Unknown ds:UpToDate/DUnknown r----- } > "Arevalo, Mario Alfredo C" hat am 13. Februar 2019 um 17:55 geschrieben: > > > Hi Marcel > > Can you please share us the output of the drbd error? > > Thanks. > > Best regards. > Mario. > ________________________________________ > From: Marcel Schaible [marcel at schaible-consulting.de] > Sent: Wednesday, February 13, 2019 8:28 AM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > Hi, > > when I unlock the controller-0 after installation following this tutorial > > https://docs.starlingx.io/installation_guide/duplex.html#duplex > > the system is rebooting automatically with a lot of errors from drdb. > > Controller-1 is not configured at this point in time. > > Must I configure the secondary controller before unlocking controller-0? > > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Wed Feb 13 17:20:11 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 13 Feb 2019 17:20:11 +0000 Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot In-Reply-To: <23037907.438773.1550078259297@communicator.strato.com> References: <1722106541.435943.1550075280383@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D211@ALA-MBD.corp.ad.wrs.com> <1898058568.437106.1550076460788@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D23F@ALA-MBD.corp.ad.wrs.com> <23037907.438773.1550078259297@communicator.strato.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D27F@ALA-MBD.corp.ad.wrs.com> I would suggest checking the puppet logs for a failure.... grep for Error. The redundancy alarms are because you don't have controller-1 yet, and can be ignored. The port failure alarm is concerning, I think... maybe that's related to the configuration error. -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Wednesday, February 13, 2019 12:18 PM To: Penney, Don; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot At the moment the system seems to stablize after 2 reboots. I'll keep an eye on that. Is the alarm list looking reasonable after the controller-0 is unlocked and rebooted? If yes, I would start to configure the controller-1. 
root at controller-0 ~(keystone_admin)]# fm alarm-list +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ | Alarm ID | Reason Text | Entity ID | Severity | Time Stamp | +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ | 200.011 | controller-0 experienced a configuration failure. | host=controller-0 | critical | 2019-02-13T18:57:13.772108 | | 400.002 | Service group controller-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=controller-services | major | 2019-02-13T18:56:15.215083 | | 400.002 | Service group vim-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=vim-services | major | 2019-02-13T18:56:15.174083 | | 400.002 | Service group cloud-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=cloud-services | major | 2019-02-13T18:56:12.213065 | | 400.002 | Service group oam-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=oam-services | major | 2019-02-13T18:56:05.330099 | | 400.002 | Service group patching-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=patching-services | major | 2019-02-13T18:56:04.276424 | | 400.002 | Service group directory-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=directory-services | major | 2019-02-13T18:56:04.195081 | | 400.002 | Service group web-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=web-services | major | 2019-02-13T18:56:04.114064 | | 400.005 | Communication failure detected with peer over port enp9s20f7 on host controller-0 | host=controller-0.network=mgmt | major | 2019-02-13T18:56:03.790070 | | 100.106 | 'OAM' Port failed. | host=controller-0.port=cd202069-f5a0-43bb-a445-d3b9b58ce631 | major | 2019-02-13T18:56:00.563067 | | 100.107 | 'OAM' Interface degraded. | host=controller-0.interface=oam | major | 2019-02-13T18:56:00.515836 | | 300.004 | No enabled compute host with connectivity to provider network. | service=networking.providernet=e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b | major | 2019-02-13T17:20:21.110405 | +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ [root at controller-0 ~(keystone_admin)]# > "Penney, Don" hat am 13. Februar 2019 um 18:02 geschrieben: > > > Ok, my updates related to the DRBD state change failures merged Feb 8th. > > I wouldn't expect to see an additional reboot. So what you're seeing, just to confirm, is: > * install node, run config_controller, do config steps > * system host-unlock controller-0 > * system reboots > * system reboots a second time? Yes, exactly. 
> > At what point is it rebooting a second time? Maybe a reboot is being triggered due to config changes made when the puppet manifests apply, which I think we could confirm from the puppet logs. The second reboot starts in about 1 minute after the login prompt is shown. > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, February 13, 2019 11:48 AM > To: Penney, Don; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > I am using the build centos-20190206T060000Z. > > The main problem, beside of the noise is, that the system comes up and reboots immediately again. > > > > > "Penney, Don" hat am 13. Februar 2019 um 17:41 geschrieben: > > > > > > Yes, this is the correct behavior. > > > > As for the drbd warnings, there are some state change failures that occur as the system shuts down, but aren't a problem (aside from the noise). I've managed to clean up most of these in a recent update. What build are you using? > > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Wednesday, February 13, 2019 11:28 AM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > Hi, > > > > when I unlock the controller-0 after installation following this tutorial > > > > https://docs.starlingx.io/installation_guide/duplex.html#duplex > > > > the system is rebooting automatically with a lot of errors from drdb. > > > > Controller-1 is not configured at this point in time. > > > > Must I configure the secondary controller before unlocking controller-0? > > > > Thanks > > > > Marcel > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Wed Feb 13 17:23:26 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 13 Feb 2019 17:23:26 +0000 Subject: [Starlingx-discuss] build-pkgs behavior: packages always compile In-Reply-To: <6454B6BCFEF18140B07C3AF1FA04DC89496B37FD@FMSMSX108.amr.corp.intel.com> References: <6454B6BCFEF18140B07C3AF1FA04DC89496B37FD@FMSMSX108.amr.corp.intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D2A8@ALA-MBD.corp.ad.wrs.com> The build-info package rebuilds every time, as it generates the /etc/build-info file with info about the build. You can skip this by using "build-pkgs --no-build-info". SM rebuilds because it has a dependency on build-info. From: Ruiz Garcia, Felipe De Jesus [mailto:felipe.de.jesus.ruiz.garcia at intel.com] Sent: Wednesday, February 13, 2019 12:19 PM To: Little, Scott Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] build-pkgs behavior: packages always compile Hi Scott I am working on the bug 1804687, and I have realized that some packages are rebuilt when "build-pkgs" is executed even if the package does not have new changes. 
The packages mentioned are: sm-common-debuginfo-1.0.0-20.tis.x86_64.rpm sm-common-dev-1.0.0-20.tis.x86_64.rpm sm-common-libs-1.0.0-20.tis.x86_64.rpm sm-eru-1.0.0-20.tis.x86_64.rpm sm-common-1.0.0-20.tis.src.rpm sm-common-1.0.0-20.tis.x86_64.rpm build-info-1.0-4.tis.src.rpm build-info-1.0-4.tis.x86_64.rpm build-info-dev-1.0-4.tis.x86_64.rpm Is this behavior a bug? otherwise why are those packages built every time if they have the same version? Note: The fix for the bug is ready for review https://review.openstack.org/#/c/634513/ Regards Felipe Ruiz / Pipo / Tranzemc Before anything else, preparation is the key to success. ( Alexander Graham Bell ) -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcel at schaible-consulting.de Wed Feb 13 17:27:01 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Wed, 13 Feb 2019 18:27:01 +0100 (CET) Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D27F@ALA-MBD.corp.ad.wrs.com> References: <1722106541.435943.1550075280383@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D211@ALA-MBD.corp.ad.wrs.com> <1898058568.437106.1550076460788@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D23F@ALA-MBD.corp.ad.wrs.com> <23037907.438773.1550078259297@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D27F@ALA-MBD.corp.ad.wrs.com> Message-ID: <1103410681.439174.1550078821148@communicator.strato.com> Ok. The latest puppet logs from controller-0 do not contain any errors. Should I try to install a newer image? > "Penney, Don" hat am 13. Februar 2019 um 18:20 geschrieben: > > > I would suggest checking the puppet logs for a failure.... grep for Error. The redundancy alarms are because you don't have controller-1 yet, and can be ignored. The port failure alarm is concerning, I think... maybe that's related to the configuration error. > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, February 13, 2019 12:18 PM > To: Penney, Don; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > At the moment the system seems to stablize after 2 reboots. I'll keep an eye on that. > > Is the alarm list looking reasonable after the controller-0 is unlocked and rebooted? > If yes, I would start to configure the controller-1. > > > root at controller-0 ~(keystone_admin)]# fm alarm-list > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > | Alarm ID | Reason Text | Entity ID | Severity | Time Stamp | > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > | 200.011 | controller-0 experienced a configuration failure. 
| host=controller-0 | critical | 2019-02-13T18:57:13.772108 | > | 400.002 | Service group controller-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=controller-services | major | 2019-02-13T18:56:15.215083 | > | 400.002 | Service group vim-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=vim-services | major | 2019-02-13T18:56:15.174083 | > | 400.002 | Service group cloud-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=cloud-services | major | 2019-02-13T18:56:12.213065 | > | 400.002 | Service group oam-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=oam-services | major | 2019-02-13T18:56:05.330099 | > | 400.002 | Service group patching-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=patching-services | major | 2019-02-13T18:56:04.276424 | > | 400.002 | Service group directory-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=directory-services | major | 2019-02-13T18:56:04.195081 | > | 400.002 | Service group web-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=web-services | major | 2019-02-13T18:56:04.114064 | > | 400.005 | Communication failure detected with peer over port enp9s20f7 on host controller-0 | host=controller-0.network=mgmt | major | 2019-02-13T18:56:03.790070 | > | 100.106 | 'OAM' Port failed. | host=controller-0.port=cd202069-f5a0-43bb-a445-d3b9b58ce631 | major | 2019-02-13T18:56:00.563067 | > | 100.107 | 'OAM' Interface degraded. | host=controller-0.interface=oam | major | 2019-02-13T18:56:00.515836 | > | 300.004 | No enabled compute host with connectivity to provider network. | service=networking.providernet=e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b | major | 2019-02-13T17:20:21.110405 | > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > [root at controller-0 ~(keystone_admin)]# > > > > > "Penney, Don" hat am 13. Februar 2019 um 18:02 geschrieben: > > > > > > Ok, my updates related to the DRBD state change failures merged Feb 8th. > > > > I wouldn't expect to see an additional reboot. So what you're seeing, just to confirm, is: > > * install node, run config_controller, do config steps > > * system host-unlock controller-0 > > * system reboots > > * system reboots a second time? > > Yes, exactly. > > > > > At what point is it rebooting a second time? Maybe a reboot is being triggered due to config changes made when the puppet manifests apply, which I think we could confirm from the puppet logs. > > The second reboot starts in about 1 minute after the login prompt is shown. 
> > > > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Wednesday, February 13, 2019 11:48 AM > > To: Penney, Don; starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > I am using the build centos-20190206T060000Z. > > > > The main problem, beside of the noise is, that the system comes up and reboots immediately again. > > > > > > > > > "Penney, Don" hat am 13. Februar 2019 um 17:41 geschrieben: > > > > > > > > > Yes, this is the correct behavior. > > > > > > As for the drbd warnings, there are some state change failures that occur as the system shuts down, but aren't a problem (aside from the noise). I've managed to clean up most of these in a recent update. What build are you using? > > > > > > -----Original Message----- > > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > > Sent: Wednesday, February 13, 2019 11:28 AM > > > To: starlingx-discuss at lists.starlingx.io > > > Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > > > Hi, > > > > > > when I unlock the controller-0 after installation following this tutorial > > > > > > https://docs.starlingx.io/installation_guide/duplex.html#duplex > > > > > > the system is rebooting automatically with a lot of errors from drdb. > > > > > > Controller-1 is not configured at this point in time. > > > > > > Must I configure the secondary controller before unlocking controller-0? > > > > > > Thanks > > > > > > Marcel > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Don.Penney at windriver.com Wed Feb 13 17:31:21 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 13 Feb 2019 17:31:21 +0000 Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot In-Reply-To: <1103410681.439174.1550078821148@communicator.strato.com> References: <1722106541.435943.1550075280383@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D211@ALA-MBD.corp.ad.wrs.com> <1898058568.437106.1550076460788@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D23F@ALA-MBD.corp.ad.wrs.com> <23037907.438773.1550078259297@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D27F@ALA-MBD.corp.ad.wrs.com> <1103410681.439174.1550078821148@communicator.strato.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D2C9@ALA-MBD.corp.ad.wrs.com> You'd need to look at the puppet log from the first reboot to see if some failure occurred there, or whether the reboot was triggered by a config change. I don't know if the configuration failure alarm is related to puppet or not. Given the OAM port alarm, you should double-check your OAM interface and configuration. Maybe John Kung has better advice on what to check... John? -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Wednesday, February 13, 2019 12:27 PM To: Penney, Don; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot Ok. 
The latest puppet logs from controller-0 do not contain any errors. Should I try to install a newer image? > "Penney, Don" hat am 13. Februar 2019 um 18:20 geschrieben: > > > I would suggest checking the puppet logs for a failure.... grep for Error. The redundancy alarms are because you don't have controller-1 yet, and can be ignored. The port failure alarm is concerning, I think... maybe that's related to the configuration error. > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, February 13, 2019 12:18 PM > To: Penney, Don; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > At the moment the system seems to stablize after 2 reboots. I'll keep an eye on that. > > Is the alarm list looking reasonable after the controller-0 is unlocked and rebooted? > If yes, I would start to configure the controller-1. > > > root at controller-0 ~(keystone_admin)]# fm alarm-list > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > | Alarm ID | Reason Text | Entity ID | Severity | Time Stamp | > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > | 200.011 | controller-0 experienced a configuration failure. | host=controller-0 | critical | 2019-02-13T18:57:13.772108 | > | 400.002 | Service group controller-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=controller-services | major | 2019-02-13T18:56:15.215083 | > | 400.002 | Service group vim-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=vim-services | major | 2019-02-13T18:56:15.174083 | > | 400.002 | Service group cloud-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=cloud-services | major | 2019-02-13T18:56:12.213065 | > | 400.002 | Service group oam-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=oam-services | major | 2019-02-13T18:56:05.330099 | > | 400.002 | Service group patching-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=patching-services | major | 2019-02-13T18:56:04.276424 | > | 400.002 | Service group directory-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=directory-services | major | 2019-02-13T18:56:04.195081 | > | 400.002 | Service group web-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=web-services | major | 2019-02-13T18:56:04.114064 | > | 400.005 | Communication failure detected with peer over port enp9s20f7 on host controller-0 | host=controller-0.network=mgmt | major | 2019-02-13T18:56:03.790070 | > | 100.106 | 'OAM' Port failed. 
| host=controller-0.port=cd202069-f5a0-43bb-a445-d3b9b58ce631 | major | 2019-02-13T18:56:00.563067 | > | 100.107 | 'OAM' Interface degraded. | host=controller-0.interface=oam | major | 2019-02-13T18:56:00.515836 | > | 300.004 | No enabled compute host with connectivity to provider network. | service=networking.providernet=e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b | major | 2019-02-13T17:20:21.110405 | > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > [root at controller-0 ~(keystone_admin)]# > > > > > "Penney, Don" hat am 13. Februar 2019 um 18:02 geschrieben: > > > > > > Ok, my updates related to the DRBD state change failures merged Feb 8th. > > > > I wouldn't expect to see an additional reboot. So what you're seeing, just to confirm, is: > > * install node, run config_controller, do config steps > > * system host-unlock controller-0 > > * system reboots > > * system reboots a second time? > > Yes, exactly. > > > > > At what point is it rebooting a second time? Maybe a reboot is being triggered due to config changes made when the puppet manifests apply, which I think we could confirm from the puppet logs. > > The second reboot starts in about 1 minute after the login prompt is shown. > > > > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Wednesday, February 13, 2019 11:48 AM > > To: Penney, Don; starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > I am using the build centos-20190206T060000Z. > > > > The main problem, beside of the noise is, that the system comes up and reboots immediately again. > > > > > > > > > "Penney, Don" hat am 13. Februar 2019 um 17:41 geschrieben: > > > > > > > > > Yes, this is the correct behavior. > > > > > > As for the drbd warnings, there are some state change failures that occur as the system shuts down, but aren't a problem (aside from the noise). I've managed to clean up most of these in a recent update. What build are you using? > > > > > > -----Original Message----- > > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > > Sent: Wednesday, February 13, 2019 11:28 AM > > > To: starlingx-discuss at lists.starlingx.io > > > Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > > > Hi, > > > > > > when I unlock the controller-0 after installation following this tutorial > > > > > > https://docs.starlingx.io/installation_guide/duplex.html#duplex > > > > > > the system is rebooting automatically with a lot of errors from drdb. > > > > > > Controller-1 is not configured at this point in time. > > > > > > Must I configure the secondary controller before unlocking controller-0? 
> > > > > > Thanks > > > > > > Marcel > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Wed Feb 13 18:05:51 2019 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 13 Feb 2019 19:05:51 +0100 Subject: [Starlingx-discuss] Open Infrastructure Summit and PTG planning Message-ID: <1F2C4A3F-7E11-4C9E-8993-14BD20BB25D1@gmail.com> Hi StarlingX Community, As we are getting closer to the Open Infrastructure Summit it is time to start to think about planning for the working sessions such as the Forum and PTG. The first priority at this point is the Forum as the session proposal phase opens on February 22. You can find the full timeline here: https://wiki.openstack.org/wiki/Forum I created an etherpad to brainstorm about topics to cover so we don’t have duplications when we submit session ideas: https://etherpad.openstack.org/p/stx-forum-preparation-denver-2019 Our team is currently working on scheduling for the PTG which will happen right after the Summit. You can see in a previous email the teams who applied for space which information we can use when we plan cross-project sessions: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/003057.html I created another etherpad to plan PTG topics as we get closer to the event: https://etherpad.openstack.org/p/stx-ptg-preparation-denver-2019 Please let me know if you have any questions. Thanks and Best Regards, Ildikó From John.Kung at windriver.com Wed Feb 13 18:27:48 2019 From: John.Kung at windriver.com (Kung, John) Date: Wed, 13 Feb 2019 18:27:48 +0000 Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes Message-ID: Marcel, controller-0 should be unlocked successfully for the first time before attempting to provision/configure controller-1 As noted, the presence of 'Error' logs on controller-0 (under directory /var/log/puppet ) would indicate the reason for the configuration failure (200.011). In regards to the OAM interface configured, please check 'system host-if-list -a controller-0' and 'system host-ethernet-port-list controller-0' to determine the current interface configuration. Furthermore, after the 'host-unlock controller-0', the configured OAM interface IP should be reachable from your OAM network 'system host-addr-list controller-0' to see the addresses allocated for that host. John ---------------------------------------------------------------------- Message: 1 Date: Wed, 13 Feb 2019 17:02:18 +0000 From: "Penney, Don" To: Marcel Schaible , "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D23F at ALA-MBD.corp.ad.wrs.com> Content-Type: text/plain; charset="utf-8" Ok, my updates related to the DRBD state change failures merged Feb 8th. I wouldn't expect to see an additional reboot. So what you're seeing, just to confirm, is: * install node, run config_controller, do config steps * system host-unlock controller-0 * system reboots * system reboots a second time? At what point is it rebooting a second time? 
Maybe a reboot is being triggered due to config changes made when the puppet manifests apply, which I think we could confirm from the puppet logs.

-----Original Message-----
From: Marcel Schaible [mailto:marcel at schaible-consulting.de]
Sent: Wednesday, February 13, 2019 11:48 AM
To: Penney, Don; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot

I am using the build centos-20190206T060000Z.

The main problem, besides the noise, is that the system comes up and immediately reboots again.

> "Penney, Don" wrote on 13 February 2019 at 17:41:
>
>
> Yes, this is the correct behavior.
>
> As for the drbd warnings, there are some state change failures that occur as the system shuts down, but aren't a problem (aside from the noise). I've managed to clean up most of these in a recent update. What build are you using?
>
> -----Original Message-----
> From: Marcel Schaible [mailto:marcel at schaible-consulting.de]
> Sent: Wednesday, February 13, 2019 11:28 AM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot
>
> Hi,
>
> when I unlock controller-0 after installation, following this tutorial
>
> https://docs.starlingx.io/installation_guide/duplex.html#duplex
>
> the system reboots automatically with a lot of errors from DRBD.
>
> Controller-1 is not configured at this point in time.
>
> Must I configure the secondary controller before unlocking controller-0?
>
> Thanks
>
> Marcel
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

------------------------------

Message: 2
Date: Wed, 13 Feb 2019 18:17:39 +0100 (CET)
From: Marcel Schaible
To: "Penney, Don" , starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot
Message-ID: <23037907.438773.1550078259297 at communicator.strato.com>
Content-Type: text/plain; charset=UTF-8

At the moment the system seems to stabilize after 2 reboots. I'll keep an eye on that.

Is the alarm list looking reasonable after controller-0 is unlocked and rebooted?
If yes, I would start to configure controller-1.

[root at controller-0 ~(keystone_admin)]# fm alarm-list
+----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+
| Alarm ID | Reason Text | Entity ID | Severity | Time Stamp |
+----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+
| 200.011 | controller-0 experienced a configuration failure. | host=controller-0 | critical | 2019-02-13T18:57:13.772108 |
| 400.002 | Service group controller-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=controller-services | major | 2019-02-13T18:56:15.215083 |
| 400.002 | Service group vim-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=vim-services | major | 2019-02-13T18:56:15.174083 |
| 400.002 | Service group cloud-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=cloud-services | major | 2019-02-13T18:56:12.213065 |
| 400.002 | Service group oam-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=oam-services | major | 2019-02-13T18:56:05.330099 |
| 400.002 | Service group patching-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=patching-services | major | 2019-02-13T18:56:04.276424 |
| 400.002 | Service group directory-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=directory-services | major | 2019-02-13T18:56:04.195081 |
| 400.002 | Service group web-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=web-services | major | 2019-02-13T18:56:04.114064 |
| 400.005 | Communication failure detected with peer over port enp9s20f7 on host controller-0 | host=controller-0.network=mgmt | major | 2019-02-13T18:56:03.790070 |
| 100.106 | 'OAM' Port failed. | host=controller-0.port=cd202069-f5a0-43bb-a445-d3b9b58ce631 | major | 2019-02-13T18:56:00.563067 |
| 100.107 | 'OAM' Interface degraded. | host=controller-0.interface=oam | major | 2019-02-13T18:56:00.515836 |
| 300.004 | No enabled compute host with connectivity to provider network. | service=networking.providernet=e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b | major | 2019-02-13T17:20:21.110405 |
+----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+
[root at controller-0 ~(keystone_admin)]#

> "Penney, Don" wrote on 13 February 2019 at 18:02:
>
>
> Ok, my updates related to the DRBD state change failures merged Feb 8th.
>
> I wouldn't expect to see an additional reboot. So what you're seeing, just to confirm, is:
> * install node, run config_controller, do config steps
> * system host-unlock controller-0
> * system reboots
> * system reboots a second time?

Yes, exactly.

>
> At what point is it rebooting a second time? Maybe a reboot is being triggered due to config changes made when the puppet manifests apply, which I think we could confirm from the puppet logs.

The second reboot starts in about 1 minute after the login prompt is shown.

>
> -----Original Message-----
> From: Marcel Schaible [mailto:marcel at schaible-consulting.de]
> Sent: Wednesday, February 13, 2019 11:48 AM
> To: Penney, Don; starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot
>
> I am using the build centos-20190206T060000Z.
> > The main problem, beside of the noise is, that the system comes up and reboots immediately again. > > > > > "Penney, Don" hat am 13. Februar 2019 um 17:41 geschrieben: > > > > > > Yes, this is the correct behavior. > > > > As for the drbd warnings, there are some state change failures that occur as the system shuts down, but aren't a problem (aside from the noise). I've managed to clean up most of these in a recent update. What build are you using? > > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Wednesday, February 13, 2019 11:28 AM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > Hi, > > > > when I unlock the controller-0 after installation following this tutorial > > > > https://docs.starlingx.io/installation_guide/duplex.html#duplex > > > > the system is rebooting automatically with a lot of errors from drdb. > > > > Controller-1 is not configured at this point in time. > > > > Must I configure the secondary controller before unlocking controller-0? > > > > Thanks > > > > Marcel > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From John.Kung at windriver.com Wed Feb 13 18:36:07 2019 From: John.Kung at windriver.com (Kung, John) Date: Wed, 13 Feb 2019 18:36:07 +0000 Subject: [Starlingx-discuss] Starlingx-discuss Digest, Vol 9, Issue 54 In-Reply-To: References: Message-ID: Marcel, controller-0 should be unlocked successfully for the first time before attempting to provision/configure controller-1 As noted, the presence of 'Error' logs on controller-0 (under directory /var/log/puppet ) would indicate the reason for the configuration failure (200.011). In regards to the OAM interface configured, please check 'system host-if-list -a controller-0' and 'system host-ethernet-port-list controller-0' to determine the current interface configuration. Furthermore, after the 'host-unlock controller-0', the configured OAM interface IP should be reachable from your OAM network 'system host-addr-list controller-0' to see the addresses allocated for that host. John ---------------------------------------------------------------------- Message: 1 Date: Wed, 13 Feb 2019 17:31:21 +0000 From: "Penney, Don" To: Marcel Schaible , "starlingx-discuss at lists.starlingx.io" Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D2C9 at ALA-MBD.corp.ad.wrs.com> Content-Type: text/plain; charset="utf-8" You'd need to look at the puppet log from the first reboot to see if some failure occurred there, or whether the reboot was triggered by a config change. I don't know if the configuration failure alarm is related to puppet or not. Given the OAM port alarm, you should double-check your OAM interface and configuration. Maybe John Kung has better advice on what to check... John? 
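The checks Don and John describe above can be run back to back from the active controller. A minimal sketch -- the commands and the /var/log/puppet path come from their messages, while the grep pattern and the ordering are only one way to do it:

  grep -ri error /var/log/puppet/               # puppet failures behind the 200.011 configuration alarm
  system host-if-list -a controller-0           # which interface carries the OAM class
  system host-ethernet-port-list controller-0   # physical port behind the 100.106 'OAM' Port failed alarm
  system host-addr-list controller-0            # addresses allocated to the host

The system commands assume keystone_admin credentials are loaded, as in the [root at controller-0 ~(keystone_admin)]# prompts shown in this thread.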
-----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Wednesday, February 13, 2019 12:27 PM To: Penney, Don; starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot Ok. The latest puppet logs from controller-0 do not contain any errors. Should I try to install a newer image? > "Penney, Don" hat am 13. Februar 2019 um 18:20 geschrieben: > > > I would suggest checking the puppet logs for a failure.... grep for Error. The redundancy alarms are because you don't have controller-1 yet, and can be ignored. The port failure alarm is concerning, I think... maybe that's related to the configuration error. > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, February 13, 2019 12:18 PM > To: Penney, Don; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > At the moment the system seems to stablize after 2 reboots. I'll keep an eye on that. > > Is the alarm list looking reasonable after the controller-0 is unlocked and rebooted? > If yes, I would start to configure the controller-1. > > > root at controller-0 ~(keystone_admin)]# fm alarm-list > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > | Alarm ID | Reason Text | Entity ID | Severity | Time Stamp | > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > | 200.011 | controller-0 experienced a configuration failure. 
| host=controller-0 | critical | 2019-02-13T18:57:13.772108 | > | 400.002 | Service group controller-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=controller-services | major | 2019-02-13T18:56:15.215083 | > | 400.002 | Service group vim-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=vim-services | major | 2019-02-13T18:56:15.174083 | > | 400.002 | Service group cloud-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=cloud-services | major | 2019-02-13T18:56:12.213065 | > | 400.002 | Service group oam-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=oam-services | major | 2019-02-13T18:56:05.330099 | > | 400.002 | Service group patching-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=patching-services | major | 2019-02-13T18:56:04.276424 | > | 400.002 | Service group directory-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=directory-services | major | 2019-02-13T18:56:04.195081 | > | 400.002 | Service group web-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=web-services | major | 2019-02-13T18:56:04.114064 | > | 400.005 | Communication failure detected with peer over port enp9s20f7 on host controller-0 | host=controller-0.network=mgmt | major | 2019-02-13T18:56:03.790070 | > | 100.106 | 'OAM' Port failed. | host=controller-0.port=cd202069-f5a0-43bb-a445-d3b9b58ce631 | major | 2019-02-13T18:56:00.563067 | > | 100.107 | 'OAM' Interface degraded. | host=controller-0.interface=oam | major | 2019-02-13T18:56:00.515836 | > | 300.004 | No enabled compute host with connectivity to provider network. | service=networking.providernet=e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b | major | 2019-02-13T17:20:21.110405 | > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > [root at controller-0 ~(keystone_admin)]# > > > > > "Penney, Don" hat am 13. Februar 2019 um 18:02 geschrieben: > > > > > > Ok, my updates related to the DRBD state change failures merged Feb 8th. > > > > I wouldn't expect to see an additional reboot. So what you're seeing, just to confirm, is: > > * install node, run config_controller, do config steps > > * system host-unlock controller-0 > > * system reboots > > * system reboots a second time? > > Yes, exactly. > > > > > At what point is it rebooting a second time? Maybe a reboot is being triggered due to config changes made when the puppet manifests apply, which I think we could confirm from the puppet logs. > > The second reboot starts in about 1 minute after the login prompt is shown. 
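One quick way to distinguish a configuration-driven reboot from a crash is to line the reboot times up against the puppet runs. A rough sketch, assuming only standard tools plus the /var/log/puppet directory John Kung mentions above:

  last -x reboot | head            # timestamps of recent reboots
  ls -lt /var/log/puppet/          # puppet runs closest to those timestamps
  grep -ri error /var/log/puppet/  # any manifest failures around the same time

If the second reboot lines up with a manifest apply and no errors are logged, that points at the config-change theory Don describes rather than a failure.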
> > > > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Wednesday, February 13, 2019 11:48 AM > > To: Penney, Don; starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > I am using the build centos-20190206T060000Z. > > > > The main problem, beside of the noise is, that the system comes up and reboots immediately again. > > > > > > > > > "Penney, Don" hat am 13. Februar 2019 um 17:41 geschrieben: > > > > > > > > > Yes, this is the correct behavior. > > > > > > As for the drbd warnings, there are some state change failures that occur as the system shuts down, but aren't a problem (aside from the noise). I've managed to clean up most of these in a recent update. What build are you using? > > > > > > -----Original Message----- > > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > > Sent: Wednesday, February 13, 2019 11:28 AM > > > To: starlingx-discuss at lists.starlingx.io > > > Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > > > Hi, > > > > > > when I unlock the controller-0 after installation following this tutorial > > > > > > https://docs.starlingx.io/installation_guide/duplex.html#duplex > > > > > > the system is rebooting automatically with a lot of errors from drdb. > > > > > > Controller-1 is not configured at this point in time. > > > > > > Must I configure the secondary controller before unlocking controller-0? > > > > > > Thanks > > > > > > Marcel > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yang.liu at windriver.com Wed Feb 13 19:01:35 2019 From: yang.liu at windriver.com (Liu, Yang) Date: Wed, 13 Feb 2019 19:01:35 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> Message-ID: <19C65A6E92EA384D809B1772130CD7F8621838AE@ALA-MBD.corp.ad.wrs.com> Hi Cindy, We are still seeing the same file permission issue for grubx64.efi under pxeboot/EFI, causing UEFI pxeboot to fail. We need the grubx64.efi to be readable by others as well. ../pxeboot/EFI/ total 1220 drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 . drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 .. drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 centos -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi The patch seems to have changed the dir permission for centos from 700 to 755, but not grubx64.efi. For the dir permission for centos, I believe the original 700 should be sufficient (@Scott, please correct me if it's wrong). BR, Yang From: Liu, Yang Sent: February-12-19 9:00 AM To: 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Thanks Cindy. Will do. 
BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy ; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the "two node system" below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. Th.x - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked due to pxeboot from controller-0 does not work for EFI. (#1814360) We will continue after that issue is resolved. System NICs Mgmt;infra;data Special Configs Test coverage after Install and Config Status/Issues Dedicated storage X540-AT2; X540-AT2; fortville IPv6 Sanity, nova Completed. New issues logged. #1814336 CentOS7.6: Unable to launch vm directly from virsh #1814335 CentOS7.6: Unable to launch vm with UEFI boot One node system none; none; X522/X577-AT Sanity, basic regression Completed. Passed. Two node system fortville; fortville; fortville tboot, tpm, https, extended security profile Sanity, security Blocked by #1814360 Multi-node system BCM5720; Niantic; Niantic Sriov(niantic),pcipt(niantic) Sanity, networking Completed. Passed. Two node system Fortville; none; Fortville Low latency, UEFI Sanity, basic regression, cyclictest Blocked by #1814360 Two node system Fortville; none; Fortville Secure boot Sanity, security Blocked by #1814360 Multi-node system I350; Niantic/cx3; cx3 Pxeboot script Sanity Completed. Passed. Only compute-0 was used, since compute-1 has CX3 data nic. ?? CX4 on infra or mgmt, but NOT data Won't test. We don't have a system have required nics. BR, yang -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From juan.carlos.alonso at intel.com Wed Feb 13 19:21:48 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Wed, 13 Feb 2019 19:21:48 +0000 Subject: [Starlingx-discuss] Sanity Test - ISO 20190213 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C9B6A2@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Feb-13 (link) Sanity Test is executed in a Bare Metal Environment Status: GREEN Simplex Setup Manual [PASS] Provisioning 01 TCs [PASS] Sanity 42 TCs [PASS] TOTAL: [ 43 TCs PASS ] =========================================== Sanity Test is executed in a Virtual Environment Status: GREEN Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 42 TCs [PASS] TOTAL: [ 47 TCs PASS ] Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 45 TCs [PASS] TOTAL: [ 50 TCs PASS ] Multinode Controller Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 22 TCs [PASS] TOTAL: [ 50 TCs PASS ] Multinode Dedicated Storage Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity 45 TCs [PASS] TOTAL: [ 50 TCs PASS ] ------------------------------------------------------------------ Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Wed Feb 13 20:21:05 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Wed, 13 Feb 2019 20:21:05 +0000 Subject: [Starlingx-discuss] ceilometer event-list Message-ID: <8557B550001AFB46A43A0CCC314BF85153C9B703@FMSMSX108.amr.corp.intel.com> Hi, I am trying to get the output of ceilometer event-list command, from an STX container system, but I am getting the below error: [wrsroot at controller-0 ~(keystone_admin)]$ ceilometer event-list internalURL endpoint for metering service in RegionOne region not found I used source /etc/platform/openrc authentication. I also use the keystone in the container https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Verify_the_cluster_endpoints, but get the same error. Do you know how to resolve this issue? The command have been updated? Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From yang.liu at windriver.com Wed Feb 13 20:43:09 2019 From: yang.liu at windriver.com (Liu, Yang) Date: Wed, 13 Feb 2019 20:43:09 +0000 Subject: [Starlingx-discuss] ceilometer event-list In-Reply-To: <8557B550001AFB46A43A0CCC314BF85153C9B703@FMSMSX108.amr.corp.intel.com> References: <8557B550001AFB46A43A0CCC314BF85153C9B703@FMSMSX108.amr.corp.intel.com> Message-ID: <19C65A6E92EA384D809B1772130CD7F862183920@ALA-MBD.corp.ad.wrs.com> If you sourced to /etc/platform/openrc, then even if you do export OS_CLOUD=openstack_helm later on, the OS_AUTH_URL in the openrc will still be used. You have two options: 1. source /etc/platform/openrc; export OS_AUTH_URL=http://keystone.openstack.svc.cluster.local/v3; ceilometer event-list 2. 
open a new shell (DO NOT source to platform openrc) > export OS_CLOUD=openstack_helm; ceilometer event-list (or openstack event list) BR, Yang From: Alonso, Juan Carlos [mailto:juan.carlos.alonso at intel.com] Sent: February-13-19 3:21 PM To: 'starlingx-discuss at lists.starlingx.io' Subject: [Starlingx-discuss] ceilometer event-list Hi, I am trying to get the output of ceilometer event-list command, from an STX container system, but I am getting the below error: [wrsroot at controller-0 ~(keystone_admin)]$ ceilometer event-list internalURL endpoint for metering service in RegionOne region not found I used source /etc/platform/openrc authentication. I also use the keystone in the container https://wiki.openstack.org/wiki/StarlingX/Containers/Installation#Verify_the_cluster_endpoints, but get the same error. Do you know how to resolve this issue? The command have been updated? Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Wed Feb 13 20:58:25 2019 From: scott.little at windriver.com (Scott Little) Date: Wed, 13 Feb 2019 15:58:25 -0500 Subject: [Starlingx-discuss] build-pkgs behavior: packages always compile In-Reply-To: <6454B6BCFEF18140B07C3AF1FA04DC89496B37FD@FMSMSX108.amr.corp.intel.com> References: <6454B6BCFEF18140B07C3AF1FA04DC89496B37FD@FMSMSX108.amr.corp.intel.com> Message-ID: <151a4f37-f239-5ed1-e5e2-d2cc317b3e62@windriver.com> build-info is always rebuilt, unless the --no-build-info flag is used. the others likely have a BuildRequires on build-info.  We do that in case there is a .h change that might alter the compilation of 'descendent' packages.  You can suppress that behaviour with  --no-descendants. Scott On 2019-02-13 12:19 p.m., Ruiz Garcia, Felipe De Jesus wrote: > > Hi Scott > > I am working on the bug 1804687 > , and I have > realized that some packages are rebuilt when "build-pkgs" is executed > even if the package does not have new changes. > > The packages mentioned are: > > sm-common-debuginfo-1.0.0-20.tis.x86_64.rpm > > sm-common-dev-1.0.0-20.tis.x86_64.rpm > > sm-common-libs-1.0.0-20.tis.x86_64.rpm > > sm-eru-1.0.0-20.tis.x86_64.rpm > > sm-common-1.0.0-20.tis.src.rpm > > sm-common-1.0.0-20.tis.x86_64.rpm > > build-info-1.0-4.tis.src.rpm > > build-info-1.0-4.tis.x86_64.rpm > > build-info-dev-1.0-4.tis.x86_64.rpm > > Is this behavior a bug? otherwise why are those packages built every > time if they have the same version? > > Note: > > The fix for the bug is ready for review > https://review.openstack.org/#/c/634513/ > > > Regards > *Felipe Ruiz / Pipo / Tranzemc* > /Before anything else, preparation is the key to success. (  Alexander > Graham Bell  )/ > -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Wed Feb 13 21:45:44 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Wed, 13 Feb 2019 21:45:44 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <19C65A6E92EA384D809B1772130CD7F8621838AE@ALA-MBD.corp.ad.wrs.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> ,<19C65A6E92EA384D809B1772130CD7F8621838AE@ALA-MBD.corp.ad.wrs.com> Message-ID: <40048C47-DECC-4096-844D-7B8C3047F51B@intel.com> Hi, Yang Sorry about the issue! 
It’s interesting as I did have my engineer tested the scenarios. There must be something missing from my side. We will redo the patch and test. In the same time, can you manually change the file permissions as temporarily workaround and unblock the test cycle? Thanks! Cindy Sent from my iPhone On Feb 14, 2019, at 3:02 AM, Liu, Yang > wrote: Hi Cindy, We are still seeing the same file permission issue for grubx64.efi under pxeboot/EFI, causing UEFI pxeboot to fail. We need the grubx64.efi to be readable by others as well. ../pxeboot/EFI/ total 1220 drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 . drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 .. drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 centos -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi The patch seems to have changed the dir permission for centos from 700 to 755, but not grubx64.efi. For the dir permission for centos, I believe the original 700 should be sufficient (@Scott, please correct me if it’s wrong). BR, Yang From: Liu, Yang Sent: February-12-19 9:00 AM To: 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Thanks Cindy. Will do. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the “two node system” below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. Th.x - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here’s an update for CentOS7.6 testing. We are currently blocked due to pxeboot from controller-0 does not work for EFI. (#1814360) We will continue after that issue is resolved. System NICs Mgmt;infra;data Special Configs Test coverage after Install and Config Status/Issues Dedicated storage X540-AT2; X540-AT2; fortville IPv6 Sanity, nova Completed. New issues logged. #1814336 CentOS7.6: Unable to launch vm directly from virsh #1814335 CentOS7.6: Unable to launch vm with UEFI boot One node system none; none; X522/X577-AT Sanity, basic regression Completed. Passed. Two node system fortville; fortville; fortville tboot, tpm, https, extended security profile Sanity, security Blocked by #1814360 Multi-node system BCM5720; Niantic; Niantic Sriov(niantic),pcipt(niantic) Sanity, networking Completed. Passed. 
Two node system Fortville; none; Fortville Low latency, UEFI Sanity, basic regression, cyclictest Blocked by #1814360 Two node system Fortville; none; Fortville Secure boot Sanity, security Blocked by #1814360 Multi-node system I350; Niantic/cx3; cx3 Pxeboot script Sanity Completed. Passed. Only compute-0 was used, since compute-1 has CX3 data nic. ?? CX4 on infra or mgmt, but NOT data Won’t test. We don’t have a system have required nics. BR, yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Wed Feb 13 22:00:53 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Wed, 13 Feb 2019 22:00:53 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <40048C47-DECC-4096-844D-7B8C3047F51B@intel.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> ,<19C65A6E92EA384D809B1772130CD7F8621838AE@ALA-MBD.corp.ad.wrs.com> <40048C47-DECC-4096-844D-7B8C3047F51B@intel.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454@ALA-MBD.corp.ad.wrs.com> Hi Cindy, In a successful case, you should see a TFTP log in daemon.log on the active controller indicating the file was transferred, such as: 2019-02-12T13:20:45.000 controller-0 dnsmasq-tftp[200877]: info sent /pxeboot/EFI/grubx64.efi to 192.168.204.4 I would suggest doing something like "tail -f /var/log/daemon.log | grep -i tftp" while doing the installation of nodes from the active controller, to verify the expected file is getting transferred. If the host installs and you don't see this file transferred, I'd recommend reconfirming that the node is installing via UEFI. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, February 13, 2019 4:46 PM To: Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi, Yang Sorry about the issue! It's interesting as I did have my engineer tested the scenarios. There must be something missing from my side. We will redo the patch and test. In the same time, can you manually change the file permissions as temporarily workaround and unblock the test cycle? Thanks! Cindy Sent from my iPhone On Feb 14, 2019, at 3:02 AM, Liu, Yang > wrote: Hi Cindy, We are still seeing the same file permission issue for grubx64.efi under pxeboot/EFI, causing UEFI pxeboot to fail. We need the grubx64.efi to be readable by others as well. ../pxeboot/EFI/ total 1220 drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 . drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 .. drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 centos -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi The patch seems to have changed the dir permission for centos from 700 to 755, but not grubx64.efi. For the dir permission for centos, I believe the original 700 should be sufficient (@Scott, please correct me if it's wrong). BR, Yang From: Liu, Yang Sent: February-12-19 9:00 AM To: 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Thanks Cindy. Will do. 
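For anyone re-running this scenario, Don's check above condenses to the following. Both the command and the expected log line are taken verbatim from his message; only the node address will differ per lab:

  # on the active controller, while the node is pxebooting in UEFI mode
  tail -f /var/log/daemon.log | grep -i tftp

  # a successful UEFI install should show a transfer such as:
  # dnsmasq-tftp[...]: info sent /pxeboot/EFI/grubx64.efi to 192.168.204.4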
BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the "two node system" below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. Th.x - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked due to pxeboot from controller-0 does not work for EFI. (#1814360) We will continue after that issue is resolved. System NICs Mgmt;infra;data Special Configs Test coverage after Install and Config Status/Issues Dedicated storage X540-AT2; X540-AT2; fortville IPv6 Sanity, nova Completed. New issues logged. #1814336 CentOS7.6: Unable to launch vm directly from virsh #1814335 CentOS7.6: Unable to launch vm with UEFI boot One node system none; none; X522/X577-AT Sanity, basic regression Completed. Passed. Two node system fortville; fortville; fortville tboot, tpm, https, extended security profile Sanity, security Blocked by #1814360 Multi-node system BCM5720; Niantic; Niantic Sriov(niantic),pcipt(niantic) Sanity, networking Completed. Passed. Two node system Fortville; none; Fortville Low latency, UEFI Sanity, basic regression, cyclictest Blocked by #1814360 Two node system Fortville; none; Fortville Secure boot Sanity, security Blocked by #1814360 Multi-node system I350; Niantic/cx3; cx3 Pxeboot script Sanity Completed. Passed. Only compute-0 was used, since compute-1 has CX3 data nic. ?? CX4 on infra or mgmt, but NOT data Won't test. We don't have a system have required nics. BR, yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Wed Feb 13 23:22:20 2019 From: serverascode at gmail.com (Curtis) Date: Wed, 13 Feb 2019 18:22:20 -0500 Subject: [Starlingx-discuss] Open Infrastructure Summit and PTG planning In-Reply-To: <1F2C4A3F-7E11-4C9E-8993-14BD20BB25D1@gmail.com> References: <1F2C4A3F-7E11-4C9E-8993-14BD20BB25D1@gmail.com> Message-ID: On Wed, Feb 13, 2019 at 1:07 PM Ildiko Vancsa wrote: > Hi StarlingX Community, > > As we are getting closer to the Open Infrastructure Summit it is time to > start to think about planning for the working sessions such as the Forum > and PTG. > > The first priority at this point is the Forum as the session proposal > phase opens on February 22. 
You can find the full timeline here: > https://wiki.openstack.org/wiki/Forum > > I created an etherpad to brainstorm about topics to cover so we don’t have > duplications when we submit session ideas: > https://etherpad.openstack.org/p/stx-forum-preparation-denver-2019 I added a potential session regarding OpenStack Operators, and if there is anyone else in the community who would like to help moderate this potential session, I'm happy to have help. When I say "operator" I don't mean that in the telecommunication sense, but rather "OpenStack Operators"--people charged with day to day management of an OpenStack cloud. Thanks, Curtis PS. There is an upcoming OpenStack Operators meetup in Berlin: http://lists.openstack.org/pipermail/openstack-discuss/2019-February/002436.html should there be anyone on this list capable of attending. I wish I could. I love Berlin. :) > > > > Our team is currently working on scheduling for the PTG which will happen > right after the Summit. You can see in a previous email the teams who > applied for space which information we can use when we plan cross-project > sessions: > > http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/003057.html > > I created another etherpad to plan PTG topics as we get closer to the > event: https://etherpad.openstack.org/p/stx-ptg-preparation-denver-2019 > > > Please let me know if you have any questions. > > Thanks and Best Regards, > Ildikó > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From cesar.lara at intel.com Wed Feb 13 23:57:52 2019 From: cesar.lara at intel.com (Lara, Cesar) Date: Wed, 13 Feb 2019 23:57:52 +0000 Subject: [Starlingx-discuss] [build][meetings] Build team meeting agenda 2/14/2019 Message-ID: <0B566C62EC792145B40E29EFEBF1AB47105EC160@fmsmsx104.amr.corp.intel.com> Build team meeting Agenda for 2/14/2019 - Cengn update - CVE scan integration - Opens Regards Cesar Lara Software Engineering Manager OpenSource Technology Center -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Feb 14 00:22:25 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 13 Feb 2019 19:22:25 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_wheels - Build # 46 - Failure! Message-ID: <336072122.23.1550103749380.JavaMail.javamailuser@localhost> Project: STX_build_wheels Build #: 46 Status: Failure Timestamp: 20190214T000905Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190213T213623Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190213T213623Z OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190213T213623Z/logs OPENSTACK_RELEASE: pike OS_VERSION: 7.5.1804 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190213T213623Z/logs From build.starlingx at gmail.com Thu Feb 14 00:22:31 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 13 Feb 2019 19:22:31 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_docker_images - Build # 46 - Failure! 
Message-ID: <1430595795.26.1550103752545.JavaMail.javamailuser@localhost> Project: STX_build_docker_images Build #: 46 Status: Failure Timestamp: 20190214T000721Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190213T213623Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20190213T213623Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190213T213623Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190213T213623Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos PUBLISH_TIMESTAMP: 20190213T213623Z DOCKER_BUILD_ID: jenkins-master-20190213T213623Z-builder OPENSTACK_RELEASE: pike TIMESTAMP: 20190213T213623Z OS_VERSION: 7.5.1804 PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/20190213T213623Z/inputs PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/20190213T213623Z/outputs From build.starlingx at gmail.com Thu Feb 14 00:22:35 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 13 Feb 2019 19:22:35 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 138 - Failure! Message-ID: <442856396.29.1550103755915.JavaMail.javamailuser@localhost> Project: STX_build_master_pike Build #: 138 Status: Failure Timestamp: 20190213T213623Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190213T213623Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS: true From michael.l.tullis at intel.com Thu Feb 14 00:29:01 2019 From: michael.l.tullis at intel.com (Tullis, Michael L) Date: Thu, 14 Feb 2019 00:29:01 +0000 Subject: [Starlingx-discuss] [docs][meetings] docs team meeting minutes 2/13/2019 Message-ID: <3808363B39586544A6839C76CF81445EA1AB6766@ORSMSX104.amr.corp.intel.com> For notes and new action items from our docs team meeting today, see our etherpad: https://etherpad.openstack.org/p/stx-documentation Join us if you have interest in StarlingX docs! We meet Wednesdays, and call logistics are here: https://wiki.openstack.org/wiki/Starlingx/Meetings. -- Mike -------------- next part -------------- An HTML attachment was scrubbed... URL: From cindy.xie at intel.com Thu Feb 14 00:53:54 2019 From: cindy.xie at intel.com (Xie, Cindy) Date: Thu, 14 Feb 2019 00:53:54 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454@ALA-MBD.corp.ad.wrs.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> ,<19C65A6E92EA384D809B1772130CD7F8621838AE@ALA-MBD.corp.ad.wrs.com> <40048C47-DECC-4096-844D-7B8C3047F51B@intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454@ALA-MBD.corp.ad.wrs.com> Message-ID: <2FD5DDB5A04D264C80D42CA35194914F35E86C0A@SHSMSX104.ccr.corp.intel.com> Thanks, Don for the details. We will do as instructed and ensure the test is done correctly. Thx. 
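The manual workaround Cindy asks for above amounts to restoring the expected mode on the pxeboot copy of the loader. A hedged sketch -- the path and the 755 target mode come from this thread (the spec installs the file with 'install -D -m 755'), while the chmod itself is only an assumption about how to apply it:

  # on the affected controller, as root
  chmod 755 /pxeboot/EFI/grubx64.efi
  ls -l /pxeboot/EFI/grubx64.efi   # expect -rwxr-xr-x, as in the working deployment quoted below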
- cindy From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, February 14, 2019 6:01 AM To: Xie, Cindy ; Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan ; Little, Scott Subject: RE: CentOS7.6 testing status - blocked Hi Cindy, In a successful case, you should see a TFTP log in daemon.log on the active controller indicating the file was transferred, such as: 2019-02-12T13:20:45.000 controller-0 dnsmasq-tftp[200877]: info sent /pxeboot/EFI/grubx64.efi to 192.168.204.4 I would suggest doing something like "tail -f /var/log/daemon.log | grep -i tftp" while doing the installation of nodes from the active controller, to verify the expected file is getting transferred. If the host installs and you don't see this file transferred, I'd recommend reconfirming that the node is installing via UEFI. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, February 13, 2019 4:46 PM To: Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi, Yang Sorry about the issue! It's interesting as I did have my engineer tested the scenarios. There must be something missing from my side. We will redo the patch and test. In the same time, can you manually change the file permissions as temporarily workaround and unblock the test cycle? Thanks! Cindy Sent from my iPhone On Feb 14, 2019, at 3:02 AM, Liu, Yang > wrote: Hi Cindy, We are still seeing the same file permission issue for grubx64.efi under pxeboot/EFI, causing UEFI pxeboot to fail. We need the grubx64.efi to be readable by others as well. ../pxeboot/EFI/ total 1220 drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 . drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 .. drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 centos -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi The patch seems to have changed the dir permission for centos from 700 to 755, but not grubx64.efi. For the dir permission for centos, I believe the original 700 should be sufficient (@Scott, please correct me if it's wrong). BR, Yang From: Liu, Yang Sent: February-12-19 9:00 AM To: 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Thanks Cindy. Will do. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the "two node system" below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. 
Th.x - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked due to pxeboot from controller-0 does not work for EFI. (#1814360) We will continue after that issue is resolved. System NICs Mgmt;infra;data Special Configs Test coverage after Install and Config Status/Issues Dedicated storage X540-AT2; X540-AT2; fortville IPv6 Sanity, nova Completed. New issues logged. #1814336 CentOS7.6: Unable to launch vm directly from virsh #1814335 CentOS7.6: Unable to launch vm with UEFI boot One node system none; none; X522/X577-AT Sanity, basic regression Completed. Passed. Two node system fortville; fortville; fortville tboot, tpm, https, extended security profile Sanity, security Blocked by #1814360 Multi-node system BCM5720; Niantic; Niantic Sriov(niantic),pcipt(niantic) Sanity, networking Completed. Passed. Two node system Fortville; none; Fortville Low latency, UEFI Sanity, basic regression, cyclictest Blocked by #1814360 Two node system Fortville; none; Fortville Secure boot Sanity, security Blocked by #1814360 Multi-node system I350; Niantic/cx3; cx3 Pxeboot script Sanity Completed. Passed. Only compute-0 was used, since compute-1 has CX3 data nic. ?? CX4 on infra or mgmt, but NOT data Won't test. We don't have a system have required nics. BR, yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Thu Feb 14 07:31:33 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Thu, 14 Feb 2019 07:31:33 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454@ALA-MBD.corp.ad.wrs.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> ,<19C65A6E92EA384D809B1772130CD7F8621838AE@ALA-MBD.corp.ad.wrs.com> <40048C47-DECC-4096-844D-7B8C3047F51B@intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454@ALA-MBD.corp.ad.wrs.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FE84643@SHSMSX101.ccr.corp.intel.com> Hi Yang/Don, We double checked the issue today. Here is our finding: 1. I try to revert the fix [0], then do build-pkgs and build-iso, the "grubx64.efi" in "export/dist/isolinux/pxeboot/EFI/" is with 700 permission mode. Add the fix [0] back, then build-pkgs and build-iso, the "grubx64.efi" is changed to 755 permission mode. I also checked the grubx64.efi file in both ISO image, it has the same mode as upper file. 2. Martin confirmed there is tftp log in the deployment: " 2019-02-11T00:36:52.000 controller-0 dnsmasq-tftp[8262]: info sent /pxeboot/EFI/grubx64.efi to 169.254.202.76 controller-0:/var/log$ ls /pxeboot/EFI/grubx64.efi -l -rwxr-xr-x. 1 root root 1234192 Feb 3 06:52 /pxeboot/EFI/grubx64.efi " 3. Austin confirmed "install -D -m 755" will set the grubx64.efi with 755 permission mode. " -m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x " 4. I try to go through the build log. 
Here is the log from grub2's build.log " + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi.unsigned + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi.unsigned + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi " And in build-iso script, the file will be extracted and copied to the EFI folder: " extract_pkg_from_local_repo ${MY_YUM_CONF} ${STD_REPO_ID} grub2-efi-x64-pxeboot ... \cp --preserve=all pxeboot/EFI/grubx64.efi $OUTPUT_DIST_DIR/isolinux/pxeboot/EFI/ " Due to we cannot reproduce the issue, we are not sure which step cause the issue yet. So could you help me have a check with below step to narrow down the issue? Thanks. 1. Please help check whether there is "install -D -m 755 grubx64.efi" in the "loadbuild/std/results/slin14-starlingx-tis-r5-pike-std/grub2-2.02-0.76.el7.centos.tis.12/build.log" or not. 2. Please help extract "grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm" in "loadbuild/std/rpmbuild/RPMS", and check whether the grubx64.efi file is with 755 mode or not. Extract cmd: rpm2cpio grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm | cpio -idmv 3. Please help check the "grubx64.efi" in "export/dist/isolinux/pxeboot/EFI/" folder is with 755 mode or not. [0]: https://review.openstack.org/634559 Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, February 14, 2019 6:01 AM To: Xie, Cindy ; Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan ; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi Cindy, In a successful case, you should see a TFTP log in daemon.log on the active controller indicating the file was transferred, such as: 2019-02-12T13:20:45.000 controller-0 dnsmasq-tftp[200877]: info sent /pxeboot/EFI/grubx64.efi to 192.168.204.4 I would suggest doing something like "tail -f /var/log/daemon.log | grep -i tftp" while doing the installation of nodes from the active controller, to verify the expected file is getting transferred. If the host installs and you don't see this file transferred, I'd recommend reconfirming that the node is installing via UEFI. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, February 13, 2019 4:46 PM To: Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi, Yang Sorry about the issue! It's interesting as I did have my engineer tested the scenarios. There must be something missing from my side. We will redo the patch and test. In the same time, can you manually change the file permissions as temporarily workaround and unblock the test cycle? Thanks! Cindy Sent from my iPhone On Feb 14, 2019, at 3:02 AM, Liu, Yang > wrote: Hi Cindy, We are still seeing the same file permission issue for grubx64.efi under pxeboot/EFI, causing UEFI pxeboot to fail. We need the grubx64.efi to be readable by others as well. ../pxeboot/EFI/ total 1220 drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 . 
drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 .. drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 centos -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi The patch seems to have changed the dir permission for centos from 700 to 755, but not grubx64.efi. For the dir permission for centos, I believe the original 700 should be sufficient (@Scott, please correct me if it's wrong). BR, Yang From: Liu, Yang Sent: February-12-19 9:00 AM To: 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Thanks Cindy. Will do. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the "two node system" below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. Th.x - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked due to pxeboot from controller-0 does not work for EFI. (#1814360) We will continue after that issue is resolved. System NICs Mgmt;infra;data Special Configs Test coverage after Install and Config Status/Issues Dedicated storage X540-AT2; X540-AT2; fortville IPv6 Sanity, nova Completed. New issues logged. #1814336 CentOS7.6: Unable to launch vm directly from virsh #1814335 CentOS7.6: Unable to launch vm with UEFI boot One node system none; none; X522/X577-AT Sanity, basic regression Completed. Passed. Two node system fortville; fortville; fortville tboot, tpm, https, extended security profile Sanity, security Blocked by #1814360 Multi-node system BCM5720; Niantic; Niantic Sriov(niantic),pcipt(niantic) Sanity, networking Completed. Passed. Two node system Fortville; none; Fortville Low latency, UEFI Sanity, basic regression, cyclictest Blocked by #1814360 Two node system Fortville; none; Fortville Secure boot Sanity, security Blocked by #1814360 Multi-node system I350; Niantic/cx3; cx3 Pxeboot script Sanity Completed. Passed. Only compute-0 was used, since compute-1 has CX3 data nic. ?? CX4 on infra or mgmt, but NOT data Won't test. We don't have a system have required nics. BR, yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From cheng1.li at intel.com Thu Feb 14 08:06:10 2019 From: cheng1.li at intel.com (Li, Cheng1) Date: Thu, 14 Feb 2019 08:06:10 +0000 Subject: [Starlingx-discuss] STX ovs containerization Message-ID: Hi Matt, Joseph, I have updated the patch per your comments. 
https://review.openstack.org/633924

Regarding per-host overrides of network information (network -> auto_bridge_add), it seems openstack-helm doesn't support them. I prefer to leave the auto_bridge_add scope as it is until openstack-helm supports per-host overrides of network info. If we changed it now, we would need to add the NIC to the bridge by hand instead of via the initContainer. I believe Joseph is trying to find an approach for auto_bridge_add; let's wait for the result.

Thanks,
Cheng
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marcel at schaible-consulting.de Thu Feb 14 12:39:14 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Thu, 14 Feb 2019 13:39:14 +0100 (CET)
Subject: [Starlingx-discuss] Starlingx-discuss Digest, Vol 9, Issue 54
In-Reply-To:
References:
Message-ID: <1274355318.481180.1550147954209@communicator.strato.com>

Hi John,

thanks for your help! At the moment controller-0 is stable and no longer spontaneously rebooting.

The box is not fully reachable through our lab net at the IP 10.62.150.211: it responds to ICMP at this address, but the web server is not reachable.

The network config looks like this. Any ideas what is wrong with it?

[root at controller-0 ~(keystone_admin)]# system host-if-list -a controller-0
+--------------------------------------+-----------+----------+----------+---------+----------------+------------------------+-------------+---------------------------------+--------------------+
| uuid | name | class | type | vlan id | ports | uses i/f | used by i/f | attributes | data networks |
+--------------------------------------+-----------+----------+----------+---------+----------------+------------------------+-------------+---------------------------------+--------------------+
| 04665f98-8318-4855-a591-1307231a40b3 | bond0 | platform | ae | None | [] | [u'ens1f3', u'ens1f1'] | [] | MTU=1500,AE_MODE=active_standby | [] |
| 2a6b70cb-53ad-43c3-adc7-b92c4c44470d | enp9s20f6 | None | ethernet | None | [u'enp9s20f6'] | [] | [] | MTU=1500 | [] |
| 2dcbe277-530e-400c-b156-43545b4507c8 | ens1f1 | None | ethernet | None | [u'ens1f1'] | [] | [u'bond0'] | MTU=1500 | [] |
| 3acd41aa-aeb0-400c-86ba-79f0fdb24829 | ens1f3 | None | ethernet | None | [u'ens1f3'] | [] | [u'bond0'] | MTU=1500 | [] |
| 86b8867a-2aee-4cb9-a04a-7587ff8d833d | enp3s0f0 | None | ethernet | None | [u'enp3s0f0'] | [] | [] | MTU=1500 | [] |
| ce78e71f-4300-4a6b-ae0a-43e3139a5515 | enp3s0f1 | None | ethernet | None | [u'enp3s0f1'] | [] | [] | MTU=1500 | [] |
| d3e101a0-a873-4c76-b7b5-83368626acd4 | enp9s21f0 | None | ethernet | None | [u'enp9s21f0'] | [] | [] | MTU=1500 | [] |
| e93f1d26-e6ee-4c7f-84ef-8b68e312ef16 | enp9s21f1 | data | ethernet | None | [u'enp9s21f1'] | [] | [] | MTU=1500,accelerated=True | [u'providernet-a'] |
| ed1b47c8-0311-4e39-8840-9931c69c147e | ens1f0 | None | ethernet | None | [u'ens1f0'] | [] | [] | MTU=1500 | [] |
| ee4ce037-1408-4695-b05d-e5491f30e274 | enp9s20f7 | platform | ethernet | None | [u'enp9s20f7'] | [] | [] | MTU=1500 | [] |
+--------------------------------------+-----------+----------+----------+---------+----------------+------------------------+-------------+---------------------------------+--------------------+

[root at controller-0 ~(keystone_admin)]# system host-ethernet-port-list controller-0
+--------------------------------------+-----------+-------------------+--------------+-----------+----------+--------------------------------------------+----------+
| uuid | name | mac address | pci address | processor | auto neg | device type |
boot i/f | +--------------------------------------+-----------+-------------------+--------------+-----------+----------+--------------------------------------------+----------+ | b93a396b-0fcb-4f51-b121-9e85ded4b49d | enp3s0f0 | ec:9e:cd:1f:7e:b0 | 0000:03:00.0 | 0 | Yes | Ethernet Connection X552 10 GbE Backplane | False | | e24e7cca-9e78-48a5-8d84-dd563fea2626 | enp3s0f1 | ec:9e:cd:1f:7e:b1 | 0000:03:00.1 | 0 | Yes | Ethernet Connection X552 10 GbE Backplane | False | | 41a17a4d-f5eb-4dea-9bcb-0decb10137a4 | enp9s20f6 | 02:01:00:10:01:15 | 0000:09:14.6 | 0 | Yes | 82599 Ethernet Controller Virtual Function | False | | 9d159107-2199-40bb-9e6b-189d6285ca4d | enp9s20f7 | 02:01:00:10:02:15 | 0000:09:14.7 | 0 | Yes | 82599 Ethernet Controller Virtual Function | True | | c0bab514-5c77-48a7-97e5-8a9790df1584 | enp9s21f0 | 02:01:00:10:01:16 | 0000:09:15.0 | 0 | Yes | 82599 Ethernet Controller Virtual Function | False | | 55d31ed0-3370-4b5b-822f-d2766d0b769b | enp9s21f1 | 02:01:00:10:02:16 | 0000:09:15.1 | 0 | Yes | 82599 Ethernet Controller Virtual Function | False | | f4105b07-041e-4a75-90c5-f9f097c33616 | ens1f0 | b4:96:91:1a:01:60 | 0000:07:00.0 | 0 | Yes | I350 Gigabit Network Connection | False | | 000f730d-626e-4087-9a3b-33ac673d688c | ens1f1 | b4:96:91:1a:01:61 | 0000:07:00.1 | 0 | Yes | I350 Gigabit Network Connection | False | | cd202069-f5a0-43bb-a445-d3b9b58ce631 | ens1f3 | b4:96:91:1a:01:63 | 0000:07:00.3 | 0 | Yes | I350 Gigabit Network Connection | False | +--------------------------------------+-----------+-------------------+--------------+-----------+----------+--------------------------------------------+----------+ [root at controller-0 ~(keystone_admin)]# [root at controller-0 ~(keystone_admin)]# ip addr 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: enp3s0f0: mtu 1500 qdisc mq state UP group default qlen 1000 link/ether ec:9e:cd:1f:7e:b0 brd ff:ff:ff:ff:ff:ff inet6 fe80::ee9e:cdff:fe1f:7eb0/64 scope link valid_lft forever preferred_lft forever 3: ens1f0: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether b4:96:91:1a:01:60 brd ff:ff:ff:ff:ff:ff 4: ens1f1: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000 link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff 5: enp3s0f1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ec:9e:cd:1f:7e:b1 brd ff:ff:ff:ff:ff:ff 6: ens1f3: mtu 1500 qdisc mq master bond0 state DOWN group default qlen 1000 link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff 7: enp9s20f6: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether 02:01:00:10:01:15 brd ff:ff:ff:ff:ff:ff 8: enp9s20f7: mtu 1500 qdisc htb state UP group default qlen 1000 link/ether 02:01:00:10:02:15 brd ff:ff:ff:ff:ff:ff inet 172.27.1.3/24 brd 172.27.1.255 scope global enp9s20f7 valid_lft forever preferred_lft forever inet 169.254.202.2/24 scope global enp9s20f7 valid_lft forever preferred_lft forever inet 172.27.1.2/24 scope global secondary enp9s20f7 valid_lft forever preferred_lft forever inet 172.27.1.5/24 brd 172.27.1.255 scope global secondary enp9s20f7 valid_lft forever preferred_lft forever inet 172.27.1.6/24 brd 172.27.1.255 scope global secondary enp9s20f7 valid_lft forever preferred_lft forever inet6 fe80::1:ff:fe10:215/64 scope link valid_lft forever preferred_lft forever 9: enp9s21f0: mtu 1500 qdisc mq state DOWN group 
default qlen 1000 link/ether 02:01:00:10:01:16 brd ff:ff:ff:ff:ff:ff 11: bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether b4:96:91:1a:01:61 brd ff:ff:ff:ff:ff:ff inet 10.62.150.211/24 scope global bond0 valid_lft forever preferred_lft forever inet6 fe80::b696:91ff:fe1a:161/64 scope link valid_lft forever preferred_lft forever 12: ovs-netdev: mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000 link/ether a2:a0:71:9e:02:81 brd ff:ff:ff:ff:ff:ff 13: br-int: mtu 1500 qdisc pfifo_fast state DOWN group default qlen 1000 link/ether 06:dc:56:31:96:42 brd ff:ff:ff:ff:ff:ff 16: br-phy0: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether 56:e3:6b:8a:ff:45 brd ff:ff:ff:ff:ff:ff inet6 fe80::54e3:6bff:fe8a:ff45/64 scope link valid_lft forever preferred_lft forever 17: lldp55d31ed0-33: mtu 1500 qdisc pfifo_fast state UNKNOWN group default qlen 1000 link/ether e6:e6:7b:46:1a:b2 brd ff:ff:ff:ff:ff:ff inet6 fe80::e4e6:7bff:fe46:1ab2/64 scope link valid_lft forever preferred_lft forever [root at controller-0 ~(keystone_admin)]# Thanks Marcel > "Kung, John" hat am 13. Februar 2019 um 19:36 geschrieben: > > > Marcel, > > controller-0 should be unlocked successfully for the first time before attempting to provision/configure controller-1 > > As noted, the presence of 'Error' logs on controller-0 (under directory /var/log/puppet ) would indicate the reason for the configuration failure (200.011). > > In regards to the OAM interface configured, please check 'system host-if-list -a controller-0' and 'system host-ethernet-port-list controller-0' to determine the current interface configuration. Furthermore, after the 'host-unlock controller-0', the configured OAM interface IP should be reachable from your OAM network 'system host-addr-list controller-0' to see the addresses allocated for that host. > > > John > ---------------------------------------------------------------------- > > Message: 1 > Date: Wed, 13 Feb 2019 17:31:21 +0000 > From: "Penney, Don" > To: Marcel Schaible , > "starlingx-discuss at lists.starlingx.io" > > Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of > controller-0 causes reboot > Message-ID: > <6703202FD9FDFF4A8DA9ACF104AE129FBA43D2C9 at ALA-MBD.corp.ad.wrs.com> > Content-Type: text/plain; charset="utf-8" > > You'd need to look at the puppet log from the first reboot to see if some failure occurred there, or whether the reboot was triggered by a config change. > > I don't know if the configuration failure alarm is related to puppet or not. Given the OAM port alarm, you should double-check your OAM interface and configuration. > > Maybe John Kung has better advice on what to check... John? > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Wednesday, February 13, 2019 12:27 PM > To: Penney, Don; starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > Ok. The latest puppet logs from controller-0 do not contain any errors. > > Should I try to install a newer image? > > > > "Penney, Don" hat am 13. Februar 2019 um 18:20 geschrieben: > > > > > > I would suggest checking the puppet logs for a failure.... grep for Error. The redundancy alarms are because you don't have controller-1 yet, and can be ignored. The port failure alarm is concerning, I think... maybe that's related to the configuration error. 
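> > A quick way to scan them (a minimal sketch, assuming the default /var/log/puppet log location):
> >
> >   grep -rn -i error /var/log/puppet/    # recurse through the puppet apply logs
> >   ls -lt /var/log/puppet/               # most recent runs first
> >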
> > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Wednesday, February 13, 2019 12:18 PM > > To: Penney, Don; starlingx-discuss at lists.starlingx.io > > Subject: RE: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > At the moment the system seems to stablize after 2 reboots. I'll keep an eye on that. > > > > Is the alarm list looking reasonable after the controller-0 is unlocked and rebooted? > > If yes, I would start to configure the controller-1. > > > > > > root at controller-0 ~(keystone_admin)]# fm alarm-list > > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > > | Alarm ID | Reason Text | Entity ID | Severity | Time Stamp | > > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > > | 200.011 | controller-0 experienced a configuration failure. | host=controller-0 | critical | 2019-02-13T18:57:13.772108 | > > | 400.002 | Service group controller-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=controller-services | major | 2019-02-13T18:56:15.215083 | > > | 400.002 | Service group vim-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=vim-services | major | 2019-02-13T18:56:15.174083 | > > | 400.002 | Service group cloud-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=cloud-services | major | 2019-02-13T18:56:12.213065 | > > | 400.002 | Service group oam-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=oam-services | major | 2019-02-13T18:56:05.330099 | > > | 400.002 | Service group patching-services loss of redundancy; expected 1 standby member but no standby members available | service_domain=controller.service_group=patching-services | major | 2019-02-13T18:56:04.276424 | > > | 400.002 | Service group directory-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=directory-services | major | 2019-02-13T18:56:04.195081 | > > | 400.002 | Service group web-services loss of redundancy; expected 2 active members but only 1 active member available | service_domain=controller.service_group=web-services | major | 2019-02-13T18:56:04.114064 | > > | 400.005 | Communication failure detected with peer over port enp9s20f7 on host controller-0 | host=controller-0.network=mgmt | major | 2019-02-13T18:56:03.790070 | > > | 100.106 | 'OAM' Port failed. | host=controller-0.port=cd202069-f5a0-43bb-a445-d3b9b58ce631 | major | 2019-02-13T18:56:00.563067 | > > | 100.107 | 'OAM' Interface degraded. | host=controller-0.interface=oam | major | 2019-02-13T18:56:00.515836 | > > | 300.004 | No enabled compute host with connectivity to provider network. 
| service=networking.providernet=e9eef26b-d266-4b3f-b1b7-6d9c2cd3a86b | major | 2019-02-13T17:20:21.110405 | > > +----------+-------------------------------------------------------------------------------------------------------------------+---------------------------------------------------------------------+----------+----------------------------+ > > [root at controller-0 ~(keystone_admin)]# > > > > > > > > > "Penney, Don" hat am 13. Februar 2019 um 18:02 geschrieben: > > > > > > > > > Ok, my updates related to the DRBD state change failures merged Feb 8th. > > > > > > I wouldn't expect to see an additional reboot. So what you're seeing, just to confirm, is: > > > * install node, run config_controller, do config steps > > > * system host-unlock controller-0 > > > * system reboots > > > * system reboots a second time? > > > > Yes, exactly. > > > > > > > > At what point is it rebooting a second time? Maybe a reboot is being triggered due to config changes made when the puppet manifests apply, which I think we could confirm from the puppet logs. > > > > The second reboot starts in about 1 minute after the login prompt is shown. > > > > > > > > > > -----Original Message----- > > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > > Sent: Wednesday, February 13, 2019 11:48 AM > > > To: Penney, Don; starlingx-discuss at lists.starlingx.io > > > Subject: Re: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > > > I am using the build centos-20190206T060000Z. > > > > > > The main problem, beside of the noise is, that the system comes up and reboots immediately again. > > > > > > > > > > > > > "Penney, Don" hat am 13. Februar 2019 um 17:41 geschrieben: > > > > > > > > > > > > Yes, this is the correct behavior. > > > > > > > > As for the drbd warnings, there are some state change failures that occur as the system shuts down, but aren't a problem (aside from the noise). I've managed to clean up most of these in a recent update. What build are you using? > > > > > > > > -----Original Message----- > > > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > > > Sent: Wednesday, February 13, 2019 11:28 AM > > > > To: starlingx-discuss at lists.starlingx.io > > > > Subject: [Starlingx-discuss] Duplex Configuration: Unlock of controller-0 causes reboot > > > > > > > > Hi, > > > > > > > > when I unlock the controller-0 after installation following this tutorial > > > > > > > > https://docs.starlingx.io/installation_guide/duplex.html#duplex > > > > > > > > the system is rebooting automatically with a lot of errors from drdb. > > > > > > > > Controller-1 is not configured at this point in time. > > > > > > > > Must I configure the secondary controller before unlocking controller-0? 
> > > > > > > > Thanks > > > > > > > > Marcel > > > > > > > > _______________________________________________ > > > > Starlingx-discuss mailing list > > > > Starlingx-discuss at lists.starlingx.io > > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From yang.liu at windriver.com Thu Feb 14 13:36:46 2019 From: yang.liu at windriver.com (Liu, Yang) Date: Thu, 14 Feb 2019 13:36:46 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE84643@SHSMSX101.ccr.corp.intel.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> ,<19C65A6E92EA384D809B1772130CD7F8621838AE@ALA-MBD.corp.ad.wrs.com> <40048C47-DECC-4096-844D-7B8C3047F51B@intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FE84643@SHSMSX101.ccr.corp.intel.com> Message-ID: <19C65A6E92EA384D809B1772130CD7F862183A40@ALA-MBD.corp.ad.wrs.com> Hi Shuicheng, I checked the 3 items as per your instructions, build log does contain the expected step, however the results are different. @ Scott/Don, any thoughts on this? 1. + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi 2. -rwx------ 1 yliu12 users 1233016 Feb 12 16:37 ./pxeboot/EFI/grubx64.efi 3. -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi BR, Yang From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: February-14-19 2:32 AM To: Penney, Don; Xie, Cindy; Liu, Yang; Chen, Haochuan Z; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: RE: CentOS7.6 testing status - blocked Hi Yang/Don, We double checked the issue today. Here is our finding: 1. I try to revert the fix [0], then do build-pkgs and build-iso, the "grubx64.efi" in "export/dist/isolinux/pxeboot/EFI/" is with 700 permission mode. Add the fix [0] back, then build-pkgs and build-iso, the "grubx64.efi" is changed to 755 permission mode. I also checked the grubx64.efi file in both ISO image, it has the same mode as upper file. 2. Martin confirmed there is tftp log in the deployment: " 2019-02-11T00:36:52.000 controller-0 dnsmasq-tftp[8262]: info sent /pxeboot/EFI/grubx64.efi to 169.254.202.76 controller-0:/var/log$ ls /pxeboot/EFI/grubx64.efi -l -rwxr-xr-x. 1 root root 1234192 Feb 3 06:52 /pxeboot/EFI/grubx64.efi " 3. Austin confirmed "install -D -m 755" will set the grubx64.efi with 755 permission mode. " -m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x " 4. I try to go through the build log. 
Here is the log from grub2's build.log " + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi.unsigned + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi.unsigned + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi " And in build-iso script, the file will be extracted and copied to the EFI folder: " extract_pkg_from_local_repo ${MY_YUM_CONF} ${STD_REPO_ID} grub2-efi-x64-pxeboot ... \cp --preserve=all pxeboot/EFI/grubx64.efi $OUTPUT_DIST_DIR/isolinux/pxeboot/EFI/ " Due to we cannot reproduce the issue, we are not sure which step cause the issue yet. So could you help me have a check with below step to narrow down the issue? Thanks. 1. Please help check whether there is "install -D -m 755 grubx64.efi" in the "loadbuild/std/results/slin14-starlingx-tis-r5-pike-std/grub2-2.02-0.76.el7.centos.tis.12/build.log" or not. 2. Please help extract "grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm" in "loadbuild/std/rpmbuild/RPMS", and check whether the grubx64.efi file is with 755 mode or not. Extract cmd: rpm2cpio grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm | cpio -idmv 3. Please help check the "grubx64.efi" in "export/dist/isolinux/pxeboot/EFI/" folder is with 755 mode or not. [0]: https://review.openstack.org/634559 Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, February 14, 2019 6:01 AM To: Xie, Cindy ; Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan ; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi Cindy, In a successful case, you should see a TFTP log in daemon.log on the active controller indicating the file was transferred, such as: 2019-02-12T13:20:45.000 controller-0 dnsmasq-tftp[200877]: info sent /pxeboot/EFI/grubx64.efi to 192.168.204.4 I would suggest doing something like "tail -f /var/log/daemon.log | grep -i tftp" while doing the installation of nodes from the active controller, to verify the expected file is getting transferred. If the host installs and you don't see this file transferred, I'd recommend reconfirming that the node is installing via UEFI. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, February 13, 2019 4:46 PM To: Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi, Yang Sorry about the issue! It's interesting as I did have my engineer tested the scenarios. There must be something missing from my side. We will redo the patch and test. In the same time, can you manually change the file permissions as temporarily workaround and unblock the test cycle? Thanks! Cindy Sent from my iPhone On Feb 14, 2019, at 3:02 AM, Liu, Yang > wrote: Hi Cindy, We are still seeing the same file permission issue for grubx64.efi under pxeboot/EFI, causing UEFI pxeboot to fail. We need the grubx64.efi to be readable by others as well. ../pxeboot/EFI/ total 1220 drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 . 
drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 .. drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 centos -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi The patch seems to have changed the dir permission for centos from 700 to 755, but not grubx64.efi. For the dir permission for centos, I believe the original 700 should be sufficient (@Scott, please correct me if it's wrong). BR, Yang From: Liu, Yang Sent: February-12-19 9:00 AM To: 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Thanks Cindy. Will do. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the "two node system" below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. Th.x - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked due to pxeboot from controller-0 does not work for EFI. (#1814360) We will continue after that issue is resolved. System NICs Mgmt;infra;data Special Configs Test coverage after Install and Config Status/Issues Dedicated storage X540-AT2; X540-AT2; fortville IPv6 Sanity, nova Completed. New issues logged. #1814336 CentOS7.6: Unable to launch vm directly from virsh #1814335 CentOS7.6: Unable to launch vm with UEFI boot One node system none; none; X522/X577-AT Sanity, basic regression Completed. Passed. Two node system fortville; fortville; fortville tboot, tpm, https, extended security profile Sanity, security Blocked by #1814360 Multi-node system BCM5720; Niantic; Niantic Sriov(niantic),pcipt(niantic) Sanity, networking Completed. Passed. Two node system Fortville; none; Fortville Low latency, UEFI Sanity, basic regression, cyclictest Blocked by #1814360 Two node system Fortville; none; Fortville Secure boot Sanity, security Blocked by #1814360 Multi-node system I350; Niantic/cx3; cx3 Pxeboot script Sanity Completed. Passed. Only compute-0 was used, since compute-1 has CX3 data nic. ?? CX4 on infra or mgmt, but NOT data Won't test. We don't have a system have required nics. BR, yang -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From Numan.Waheed at windriver.com Thu Feb 14 14:33:26 2019
From: Numan.Waheed at windriver.com (Waheed, Numan)
Date: Thu, 14 Feb 2019 14:33:26 +0000
Subject: [Starlingx-discuss] Gnocchi Test Cases
In-Reply-To: <0483622846A57742B81A944248DD69042FC015C7@fmsmsx101.amr.corp.intel.com>
References: <0483622846A57742B81A944248DD69042FC0053E@fmsmsx101.amr.corp.intel.com> <0483622846A57742B81A944248DD69042FC015C7@fmsmsx101.amr.corp.intel.com>
Message-ID: <3CAA827B7A79BA46B15B280EC82088FE48273C77@ALA-MBD.corp.ad.wrs.com>

Hi Juan,

I have posted the Gnocchi test plan on the following shared drive. I hope you will find it helpful.

https://drive.google.com/drive/folders/1nab5AW18HIxpbkjAR6-dwk-kt1rv_8HQ

Thanks,

Numan.

From: Gomez, Juan P
Sent: February-13-19 12:33 PM
To: Waheed, Numan
Cc: Cabrales, Ada
Subject: RE: Gnocchi Test Cases

Hi Numan,
Any update on this?
Thanks,
JP

From: Gomez, Juan P
Sent: Monday, February 11, 2019 5:08 PM
To: Waheed, Numan >
Cc: Cabrales, Ada >
Subject: Gnocchi Test Cases

Hi Numan,
Could you shed some light on the Gnocchi test cases? Are there any existing test cases that we can port to cover Gnocchi? Which components integrate with Gnocchi? Is there any relation between Ceilometer and Gnocchi?
Thanks and Best Regards,
JP
Juan Pablo Gomez
Software Quality Assurance Engineer
OTC Edge Computing
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From marcel at schaible-consulting.de Thu Feb 14 15:38:11 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Thu, 14 Feb 2019 16:38:11 +0100 (CET)
Subject: [Starlingx-discuss] Change of OAM Floating IP on active controller
Message-ID: <1325149833.495158.1550158691943@communicator.strato.com>

Hi,

I must change the OAM floating IP on our active controller-0 (duplex configuration; controller-1 is not configured). I have changed the IP via the WebUI and must now lock and unlock controller-0, which is not possible for the active controller. When I reboot the box, the old OAM IP is applied again.

What is the recommended procedure for doing this?

Thanks

Marcel

From juan.carlos.alonso at intel.com Thu Feb 14 15:47:29 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Thu, 14 Feb 2019 15:47:29 +0000
Subject: [Starlingx-discuss] Change of OAM Floating IP on active controller
In-Reply-To: <1325149833.495158.1550158691943@communicator.strato.com>
References: <1325149833.495158.1550158691943@communicator.strato.com>
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C9B8C5@FMSMSX108.amr.corp.intel.com>

Hi,

I think you need to configure controller-1 first, then perform a controller swact to make controller-0 standby and controller-1 the active one. You can lock/unlock controller-0 then.

Regards.
Juan Carlos Alonso
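Concretely, once controller-1 is provisioned, the sequence would look roughly like this (a sketch only; the oam-modify field names may differ by release, and the address is a placeholder):

    system oam-modify oam_floating_ip=10.62.150.220   # stage the new OAM floating IP
    system host-swact controller-0                    # controller-1 becomes the active controller
    system host-lock controller-0
    system host-unlock controller-0                   # staged OAM config is applied on unlock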
-----Original Message-----
From: Marcel Schaible [mailto:marcel at schaible-consulting.de]
Sent: Thursday, February 14, 2019 9:38 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Change of OAM Floating IP on active controller

Hi,

I must change the OAM floating IP on our active controller-0 (duplex configuration; controller-1 is not configured). I have changed the IP via the WebUI and must now lock and unlock controller-0, which is not possible for the active controller. When I reboot the box, the old OAM IP is applied again.

What is the recommended procedure for doing this?

Thanks

Marcel

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From serverascode at gmail.com Thu Feb 14 15:53:58 2019
From: serverascode at gmail.com (Curtis)
Date: Thu, 14 Feb 2019 10:53:58 -0500
Subject: [Starlingx-discuss] Packet.com baremetal cloud opportunity
In-Reply-To:
References:
Message-ID:

Hi All,

If you are interested in this potential project, we are working on having a meeting in the near future for next steps, so please add your email to the bottom of the etherpad [1] if you'd like to be included. We are still moving forward with initial investigations, and would like to come up with some concrete next steps. :)

Thanks,
Curtis

[1]: https://etherpad.openstack.org/p/starlingx-packet-edge-pilot

On Thu, Jan 31, 2019 at 10:59 AM Curtis wrote:
> Hi All,
> There is an opportunity to work with the Packet.com cloud in terms of them providing cloud resources to the STX community in a couple of different ways, but you can read about all that in the etherpad [1] and add any comments/questions/ideas, etc. :)
> Obviously there is some due diligence and information gathering to be completed, but overall, from my own perspective, having been on a few related calls, it seems like the STX TSC and other community members that have had input are thus far positive towards this opportunity.
> Do let us know what you think!
> Thanks kindly,
> Curtis
> [1]: https://etherpad.openstack.org/p/starlingx-packet-edge-pilot

--
Blog: serverascode.com
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Volker.Hoesslin at swsn.de Thu Feb 14 16:23:09 2019
From: Volker.Hoesslin at swsn.de (von Hoesslin, Volker)
Date: Thu, 14 Feb 2019 16:23:09 +0000
Subject: [Starlingx-discuss] Nova instance - Change system product name
Message-ID:

hi, is there any given glance image metadata to change the instance product name? i want to change this xml entry in the instance's libvirt.xml; by default it looks like this:

<entry name='product'>OpenStack Nova</entry>

thx, volker...
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Numan.Waheed at windriver.com Thu Feb 14 16:26:53 2019
From: Numan.Waheed at windriver.com (Waheed, Numan)
Date: Thu, 14 Feb 2019 16:26:53 +0000
Subject: [Starlingx-discuss] contact for OPNFV
In-Reply-To: <0F0AD23D-45CF-4F11-A22D-2FC63DCDF549@intel.com>
References: <0EECF16D-278B-46E9-848E-E138060D906C@intel.com> <0F0AD23D-45CF-4F11-A22D-2FC63DCDF549@intel.com>
Message-ID: <3CAA827B7A79BA46B15B280EC82088FE48273E9B@ALA-MBD.corp.ad.wrs.com>

Hi Victor,

Peng from my team can help you with any questions regarding OPNFV and the RefStack test suite.

Thanks,

Numan.

-----Original Message-----
From: Rodriguez Bahena, Victor
Sent: February-14-19 10:57 AM
To: Waheed, Numan
Subject: Re: contact for OPNFV

Hi

Friendly reminder

-----Original Message-----
From: "Rodriguez Bahena, Victor"
Date: Tuesday, February 12, 2019 at 2:33 PM
To: "Numan.Waheed at windriver.com"
Subject: contact for OPNFV

Hi Numan

I was wondering if you have the information about the steps to run the performance tests from OPNFV?

Regards

Victor Rodriguez

From sgw at linux.intel.com Thu Feb 14 17:50:33 2019
From: sgw at linux.intel.com (Saul Wold)
Date: Thu, 14 Feb 2019 09:50:33 -0800
Subject: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0
Message-ID:

Folks,

I was doing some experimentation with an un-patched CentOS and running config_controller. One of the main issues I found is that the initial installation and execution turned up many unresolved runtime requirements. I will start sending some pull requests to fault, metal, and config with more detailed "Requires:" statements.
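For illustration, the shape such a change takes in a package spec file (the dependency names below are placeholders, not the actual pull requests):

    # in the affected .spec file: declare runtime dependencies explicitly so
    # an unpatched CentOS install pulls them in at RPM install time
    Requires: python-six
    Requires: python-requests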
Another item: we are rebuilding openstack-keystone, among other OpenStack-related packages, with additional configuration and scripts that are needed for controller-0. In the stx-integ (base OS) case, we re-factored many of the packages to move configuration and additional scripts into a separate package. I would like to see something similar here for the packages that are needed for controller-0 (i.e. the things we are not installing from PyPi directly).

What I saw is that we include the CentOS-Openstack RPM repo along with, of course, our StarlingX RPM repo. Why can't we use the CentOS-Openstack packages directly, along with some StarlingX-specific additions in a separate package, rather than creating a new package with both upstream and StarlingX content?

Thoughts,

Sau!

From vm.rod25 at gmail.com Thu Feb 14 18:04:56 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Thu, 14 Feb 2019 12:04:56 -0600
Subject: Re: [Starlingx-discuss] contact for OPNFV
In-Reply-To: <3CAA827B7A79BA46B15B280EC82088FE48273E9B@ALA-MBD.corp.ad.wrs.com>
References: <0EECF16D-278B-46E9-848E-E138060D906C@intel.com> <0F0AD23D-45CF-4F11-A22D-2FC63DCDF549@intel.com> <3CAA827B7A79BA46B15B280EC82088FE48273E9B@ALA-MBD.corp.ad.wrs.com>
Message-ID:

Thanks a lot, Numan.

Peng, Numan mentioned at the last testing meeting a wiki with the steps to run the OPNFV performance framework on top of STX; could you please share those steps?

Regards

On Thu, Feb 14, 2019 at 10:27 AM Waheed, Numan wrote:
> Hi Victor,
> Peng from my team can help you with any questions regarding OPNFV and the RefStack test suite.
> Thanks,
> Numan.
> -----Original Message-----
> From: Rodriguez Bahena, Victor
> Sent: February-14-19 10:57 AM
> To: Waheed, Numan
> Subject: Re: contact for OPNFV
> Hi
> Friendly reminder
> -----Original Message-----
> From: "Rodriguez Bahena, Victor"
> Date: Tuesday, February 12, 2019 at 2:33 PM
> To: "Numan.Waheed at windriver.com"
> Subject: contact for OPNFV
> Hi Numan
> I was wondering if you have the information about the steps to run the performance tests from OPNFV?
> Regards
> Victor Rodriguez
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From marcel at schaible-consulting.de Thu Feb 14 18:39:35 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Thu, 14 Feb 2019 19:39:35 +0100 (CET)
Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout
Message-ID: <701275276.504630.1550169575394@communicator.strato.com>

Hi,

I am trying to install controller-1 in a duplex configuration (bare metal) and I am getting the following error:

Console log:
=============

[ 201.379136] dracut-initqueue[730]: Warning: Could not boot.
[ OK ] Started Show Plymouth Boot Screen.
[ OK ] Started Device-Mapper Multipath Device Controller. Starting Open-iSCSI... [ OK ] Reached target Paths. [ OK ] Reached target Basic System. [ OK ] Started Open-iSCSI. Starting dracut initqueue hook... [ 140.683067] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts [ 141.198294] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts ... [ 195.770491] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts [ 196.280235] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts [Warning: /dev/root does not exist Generating "/run/initramfs/rdsosreport.txt" Entering emergency mode. Exit the shell to continue. Type "journalctl" to view system logs. You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot after mounting them and attach it to a bug report. dracut:/# Kernel parameter: ================== dracut:/# journalctl | grep -i boot Jan 09 00:16:06 localhost kernel: Command line: BOOT_IMAGE=rel-19.01/installer-bzImage bootifonly=1 devfs=nomount inst.repo=http://pxecontroller/feed/rel-19.01/ inst.ks=http://pxecontroller/feed/rel-19.01/net_smallsystem_ks.cfg usbcore.autosuspend=-1 biosdevname=0 rd.net.timeout.dhcp=120 ksdevice=02:01:00:10:02:06 BOOTIF=02:01:00:10:02:06 boot_device=nvme0n1 rootfs_device=nvme0n1 inst.text console=ttyS0,115200 tisnotify=http://pxecontroller:6385/v1/ihosts/00273dcb-25fa-4204-98de-64fed0bfabfe/install_progress inst.gpt user_namespace.enable=1 security_profile=standard nopti nospectre_v2 ======= The message "[Warning: /dev/root does not exist" make me nervous. What does that mean? Any idea is welcome! Thanks Marcel From Don.Penney at windriver.com Thu Feb 14 18:50:15 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 14 Feb 2019 18:50:15 +0000 Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout In-Reply-To: <701275276.504630.1550169575394@communicator.strato.com> References: <701275276.504630.1550169575394@communicator.strato.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D9CA@ALA-MBD.corp.ad.wrs.com> This means the initrd was unable to download the squashfs.img from the active controller. This could be a couple of things: * problems with the lighttpd server on the active controller * NICs that are unsupported by the initrd kernel modules * some other comms issue What load are you using? There was a recent update around http port config that moved lighttpd to listen to port 8080 instead of 80, but your boot cmdline is referencing http://pxecontroller/ -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Thursday, February 14, 2019 1:40 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout Hi, I am trying to install controller-1 in a duplex configuration (bare metal) and getting the following error: Console log: ============= [ 201.379136] dracut-initqueue[730]: Warning: Could not boot. [ OK ] Started Show Plymouth Boot Screen. [ OK ] Started Device-Mapper Multipath Device Controller. Starting Open-iSCSI... [ OK ] Reached target Paths. [ OK ] Reached target Basic System. [ OK ] Started Open-iSCSI. Starting dracut initqueue hook... 
[ 140.683067] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts [ 141.198294] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts ... [ 195.770491] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts [ 196.280235] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts [Warning: /dev/root does not exist Generating "/run/initramfs/rdsosreport.txt" Entering emergency mode. Exit the shell to continue. Type "journalctl" to view system logs. You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot after mounting them and attach it to a bug report. dracut:/# Kernel parameter: ================== dracut:/# journalctl | grep -i boot Jan 09 00:16:06 localhost kernel: Command line: BOOT_IMAGE=rel-19.01/installer-bzImage bootifonly=1 devfs=nomount inst.repo=http://pxecontroller/feed/rel-19.01/ inst.ks=http://pxecontroller/feed/rel-19.01/net_smallsystem_ks.cfg usbcore.autosuspend=-1 biosdevname=0 rd.net.timeout.dhcp=120 ksdevice=02:01:00:10:02:06 BOOTIF=02:01:00:10:02:06 boot_device=nvme0n1 rootfs_device=nvme0n1 inst.text console=ttyS0,115200 tisnotify=http://pxecontroller:6385/v1/ihosts/00273dcb-25fa-4204-98de-64fed0bfabfe/install_progress inst.gpt user_namespace.enable=1 security_profile=standard nopti nospectre_v2 ======= The message "[Warning: /dev/root does not exist" make me nervous. What does that mean? Any idea is welcome! Thanks Marcel _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcel at schaible-consulting.de Thu Feb 14 18:57:18 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Thu, 14 Feb 2019 19:57:18 +0100 (CET) Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D9CA@ALA-MBD.corp.ad.wrs.com> References: <701275276.504630.1550169575394@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D9CA@ALA-MBD.corp.ad.wrs.com> Message-ID: <1434107517.505254.1550170638899@communicator.strato.com> I am using the release ISO centos-20190206T060000Z. > "Penney, Don" hat am 14. Februar 2019 um 19:50 geschrieben: > > > This means the initrd was unable to download the squashfs.img from the active controller. This could be a couple of things: > * problems with the lighttpd server on the active controller > * NICs that are unsupported by the initrd kernel modules > * some other comms issue > > What load are you using? There was a recent update around http port config that moved lighttpd to listen to port 8080 instead of 80, but your boot cmdline is referencing http://pxecontroller/ > > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Thursday, February 14, 2019 1:40 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout > > Hi, > > I am trying to install controller-1 in a duplex configuration (bare metal) and getting the following error: > > Console log: > ============= > > [ 201.379136] dracut-initqueue[730]: Warning: Could not boot. > [ OK ] Started Show Plymouth Boot Screen. > [ OK ] Started Device-Mapper Multipath Device Controller. > Starting Open-iSCSI... > [ OK ] Reached target Paths. > [ OK ] Reached target Basic System. 
> [ OK ] Started Open-iSCSI. > Starting dracut initqueue hook... > [ 140.683067] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > [ 141.198294] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > ... > [ 195.770491] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > [ 196.280235] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > [Warning: /dev/root does not exist > > Generating "/run/initramfs/rdsosreport.txt" > > Entering emergency mode. Exit the shell to continue. > Type "journalctl" to view system logs. > You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot > after mounting them and attach it to a bug report. > > dracut:/# > > Kernel parameter: > ================== > > dracut:/# journalctl | grep -i boot > Jan 09 00:16:06 localhost kernel: Command line: BOOT_IMAGE=rel-19.01/installer-bzImage bootifonly=1 devfs=nomount inst.repo=http://pxecontroller/feed/rel-19.01/ inst.ks=http://pxecontroller/feed/rel-19.01/net_smallsystem_ks.cfg usbcore.autosuspend=-1 biosdevname=0 rd.net.timeout.dhcp=120 ksdevice=02:01:00:10:02:06 BOOTIF=02:01:00:10:02:06 boot_device=nvme0n1 rootfs_device=nvme0n1 inst.text console=ttyS0,115200 tisnotify=http://pxecontroller:6385/v1/ihosts/00273dcb-25fa-4204-98de-64fed0bfabfe/install_progress inst.gpt user_namespace.enable=1 security_profile=standard nopti nospectre_v2 > ======= > > The message "[Warning: /dev/root does not exist" make me nervous. What does that mean? > > Any idea is welcome! > > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From bruce.e.jones at intel.com Thu Feb 14 18:57:29 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 14 Feb 2019 18:57:29 +0000 Subject: [Starlingx-discuss] upstream Nova NUMA live migration work Message-ID: <9A85D2917C58154C960D95352B22818BC3AD0D3C@fmsmsx121.amr.corp.intel.com> It looks like there is work moving forward in the Nova community on this topic. There are active and recent code submissions in review here: [0]. Brent, Chris - I have some questions for you. Are you aware of these reviews? Are you planning to review the code? More importantly, does this code meet StarlingX's needs - is it heading in the right direction for us? If not, how and where might we want to influence where the work is going? I have one other ask, for the Test team. Are there existing test cases that we can run to test live migration, to make sure that it 1) works in stx.2019.10 and fails in the current builds? Thanks! Brucej [0] https://review.openstack.org/#/q/topic:bp/numa-aware-live-migration+(status:open+OR+status:merged) -------------- next part -------------- An HTML attachment was scrubbed... URL: From ada.cabrales at intel.com Thu Feb 14 19:24:53 2019 From: ada.cabrales at intel.com (Cabrales, Ada) Date: Thu, 14 Feb 2019 19:24:53 +0000 Subject: [Starlingx-discuss] upstream Nova NUMA live migration work In-Reply-To: <9A85D2917C58154C960D95352B22818BC3AD0D3C@fmsmsx121.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BC3AD0D3C@fmsmsx121.amr.corp.intel.com> Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD73BA9@FMSMSX114.amr.corp.intel.com> Yes, we have tests that can verify live migration + NUMA. 
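For context, the scenario those tests exercise looks roughly like this (a sketch only; the flavor, image, network and server names are placeholders, not the actual test code):

    openstack flavor create --vcpus 2 --ram 1024 --disk 1 numa.pinned
    openstack flavor set numa.pinned --property hw:numa_nodes=1 --property hw:cpu_policy=dedicated
    openstack server create --flavor numa.pinned --image cirros --network tenant-net numa-vm
    nova live-migration numa-vm   # expected to fail on builds without the NUMA-aware patches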
I didn't understand the last part of your question: you want to run those test cases (on a containers config) and verify those don't work, then try those into the stx.2019.10 (October this year)? A. From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, February 14, 2019 12:57 PM To: Rowsell, Brent ; Friesen, Chris Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] upstream Nova NUMA live migration work It looks like there is work moving forward in the Nova community on this topic. There are active and recent code submissions in review here: [0]. Brent, Chris - I have some questions for you. Are you aware of these reviews? Are you planning to review the code? More importantly, does this code meet StarlingX's needs - is it heading in the right direction for us? If not, how and where might we want to influence where the work is going? I have one other ask, for the Test team. Are there existing test cases that we can run to test live migration, to make sure that it 1) works in stx.2019.10 and fails in the current builds? Thanks! Brucej [0] https://review.openstack.org/#/q/topic:bp/numa-aware-live-migration+(status:open+OR+status:merged) -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Thu Feb 14 19:38:00 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Thu, 14 Feb 2019 19:38:00 +0000 Subject: [Starlingx-discuss] upstream Nova NUMA live migration work In-Reply-To: <4F6AACE4B0F173488D033B02A8BB5B7E7CD73BA9@FMSMSX114.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BC3AD0D3C@fmsmsx121.amr.corp.intel.com> <4F6AACE4B0F173488D033B02A8BB5B7E7CD73BA9@FMSMSX114.amr.corp.intel.com> Message-ID: <9A85D2917C58154C960D95352B22818BC3AD0F04@fmsmsx121.amr.corp.intel.com> I fat fingered the release number in my note below :(. I'd like to know if those tests pass on the 2018.10 release - the one that has the StarlingX NUMA live migration patches. And I'd like to know what tests fail and how when we run the same tests today against the Stein feature branch that does not have those patches. I'd like to also know that if/when the current upstream Nova work completes, do we pass the tests when we run a version of StarlingX that includes those changes. brucej From: Cabrales, Ada Sent: Thursday, February 14, 2019 11:25 AM To: Jones, Bruce E ; Rowsell, Brent ; Friesen, Chris Cc: starlingx-discuss at lists.starlingx.io Subject: RE: upstream Nova NUMA live migration work Yes, we have tests that can verify live migration + NUMA. I didn't understand the last part of your question: you want to run those test cases (on a containers config) and verify those don't work, then try those into the stx.2019.10 (October this year)? A. From: Jones, Bruce E [mailto:bruce.e.jones at intel.com] Sent: Thursday, February 14, 2019 12:57 PM To: Rowsell, Brent >; Friesen, Chris > Cc: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] upstream Nova NUMA live migration work It looks like there is work moving forward in the Nova community on this topic. There are active and recent code submissions in review here: [0]. Brent, Chris - I have some questions for you. Are you aware of these reviews? Are you planning to review the code? More importantly, does this code meet StarlingX's needs - is it heading in the right direction for us? If not, how and where might we want to influence where the work is going? I have one other ask, for the Test team. 
Are there existing test cases that we can run to test live migration, to make sure that it 1) works in stx.2019.10 and fails in the current builds? Thanks! Brucej [0] https://review.openstack.org/#/q/topic:bp/numa-aware-live-migration+(status:open+OR+status:merged) -------------- next part -------------- An HTML attachment was scrubbed... URL: From marcel at schaible-consulting.de Thu Feb 14 19:40:32 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Thu, 14 Feb 2019 20:40:32 +0100 Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D9CA@ALA-MBD.corp.ad.wrs.com> References: <701275276.504630.1550169575394@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D9CA@ALA-MBD.corp.ad.wrs.com> Message-ID: <543FEB76-F032-4865-A12F-B45C56A3B0B7@schaible-consulting.de> Hi Don, which build do you recommend? Is there a workaround so I can keep my installation for now? Thanks Marcel Von meinem iPhone gesendet > Am 14.02.2019 um 19:50 schrieb Penney, Don : > > This means the initrd was unable to download the squashfs.img from the active controller. This could be a couple of things: > * problems with the lighttpd server on the active controller > * NICs that are unsupported by the initrd kernel modules > * some other comms issue > > What load are you using? There was a recent update around http port config that moved lighttpd to listen to port 8080 instead of 80, but your boot cmdline is referencing http://pxecontroller/ > > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Thursday, February 14, 2019 1:40 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout > > Hi, > > I am trying to install controller-1 in a duplex configuration (bare metal) and getting the following error: > > Console log: > ============= > > [ 201.379136] dracut-initqueue[730]: Warning: Could not boot. > [ OK ] Started Show Plymouth Boot Screen. > [ OK ] Started Device-Mapper Multipath Device Controller. > Starting Open-iSCSI... > [ OK ] Reached target Paths. > [ OK ] Reached target Basic System. > [ OK ] Started Open-iSCSI. > Starting dracut initqueue hook... > [ 140.683067] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > [ 141.198294] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > ... > [ 195.770491] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > [ 196.280235] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > [Warning: /dev/root does not exist > > Generating "/run/initramfs/rdsosreport.txt" > > Entering emergency mode. Exit the shell to continue. > Type "journalctl" to view system logs. > You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot > after mounting them and attach it to a bug report. 
>
> dracut:/#
>
> Kernel parameter:
> ==================
>
> dracut:/# journalctl | grep -i boot
> Jan 09 00:16:06 localhost kernel: Command line: BOOT_IMAGE=rel-19.01/installer-bzImage bootifonly=1 devfs=nomount inst.repo=http://pxecontroller/feed/rel-19.01/ inst.ks=http://pxecontroller/feed/rel-19.01/net_smallsystem_ks.cfg usbcore.autosuspend=-1 biosdevname=0 rd.net.timeout.dhcp=120 ksdevice=02:01:00:10:02:06 BOOTIF=02:01:00:10:02:06 boot_device=nvme0n1 rootfs_device=nvme0n1 inst.text console=ttyS0,115200 tisnotify=http://pxecontroller:6385/v1/ihosts/00273dcb-25fa-4204-98de-64fed0bfabfe/install_progress inst.gpt user_namespace.enable=1 security_profile=standard nopti nospectre_v2
> =======
>
> The message "[Warning: /dev/root does not exist" makes me nervous. What does that mean?
>
> Any idea is welcome!
>
> Thanks
>
> Marcel
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From ada.cabrales at intel.com Thu Feb 14 19:43:40 2019
From: ada.cabrales at intel.com (Cabrales, Ada)
Date: Thu, 14 Feb 2019 19:43:40 +0000
Subject: [Starlingx-discuss] upstream Nova NUMA live migration work
In-Reply-To: <9A85D2917C58154C960D95352B22818BC3AD0F04@fmsmsx121.amr.corp.intel.com>
References: <9A85D2917C58154C960D95352B22818BC3AD0D3C@fmsmsx121.amr.corp.intel.com>
 <4F6AACE4B0F173488D033B02A8BB5B7E7CD73BA9@FMSMSX114.amr.corp.intel.com>
 <9A85D2917C58154C960D95352B22818BC3AD0F04@fmsmsx121.amr.corp.intel.com>
Message-ID: <4F6AACE4B0F173488D033B02A8BB5B7E7CD73BF1@FMSMSX114.amr.corp.intel.com>

That's better :) Yes, we can run the tests on a stx.2018.10 image. We haven't tried a setup with Stein: we've been working with master. However, this is next on the list. Would it be OK to begin with that next week?

A.

From: Jones, Bruce E
Sent: Thursday, February 14, 2019 1:38 PM
To: Cabrales, Ada; Rowsell, Brent; Friesen, Chris
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: upstream Nova NUMA live migration work

I fat fingered the release number in my note below :(. I'd like to know if those tests pass on the 2018.10 release - the one that has the StarlingX NUMA live migration patches. And I'd like to know what tests fail and how when we run the same tests today against the Stein feature branch that does not have those patches. I'd like to also know that if/when the current upstream Nova work completes, do we pass the tests when we run a version of StarlingX that includes those changes.

brucej

From: Cabrales, Ada
Sent: Thursday, February 14, 2019 11:25 AM
To: Jones, Bruce E; Rowsell, Brent; Friesen, Chris
Cc: starlingx-discuss at lists.starlingx.io
Subject: RE: upstream Nova NUMA live migration work

Yes, we have tests that can verify live migration + NUMA. I didn't understand the last part of your question: you want us to run those test cases (on a containers config), verify that they don't work, and then run them again on stx.2019.10 (October this year)?

A.

From: Jones, Bruce E [mailto:bruce.e.jones at intel.com]
Sent: Thursday, February 14, 2019 12:57 PM
To: Rowsell, Brent; Friesen, Chris
Cc: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] upstream Nova NUMA live migration work

It looks like there is work moving forward in the Nova community on this topic. There are active and recent code submissions in review here: [0]. Brent, Chris - I have some questions for you. Are you aware of these reviews? Are you planning to review the code?
More importantly, does this code meet StarlingX's needs - is it heading in the right direction for us? If not, how and where might we want to influence where the work is going? I have one other ask, for the Test team. Are there existing test cases that we can run to test live migration, to make sure that it 1) works in stx.2019.10 and fails in the current builds? Thanks! Brucej [0] https://review.openstack.org/#/q/topic:bp/numa-aware-live-migration+(status:open+OR+status:merged) -------------- next part -------------- An HTML attachment was scrubbed... URL: From Barton.Wensley at windriver.com Thu Feb 14 19:45:00 2019 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Thu, 14 Feb 2019 19:45:00 +0000 Subject: [Starlingx-discuss] Nominate Kevin Smith as stx-nfv core Message-ID: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA65301@ALA-MBD.corp.ad.wrs.com> I'd like to add Kevin Smith as a core reviewer for stx-nfv. Kevin has been contributing to the stx-nfv project since it was created and has been both a valuable reviewer and code author. I'd like confirmation (or objections) from the existing cores please... Bart -------------- next part -------------- An HTML attachment was scrubbed... URL: From Al.Bailey at windriver.com Thu Feb 14 19:45:53 2019 From: Al.Bailey at windriver.com (Bailey, Henry Albert (Al)) Date: Thu, 14 Feb 2019 19:45:53 +0000 Subject: [Starlingx-discuss] Nominate Kevin Smith as stx-nfv core In-Reply-To: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA65301@ALA-MBD.corp.ad.wrs.com> References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA65301@ALA-MBD.corp.ad.wrs.com> Message-ID: +1 From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: Thursday, February 14, 2019 2:45 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Nominate Kevin Smith as stx-nfv core I'd like to add Kevin Smith as a core reviewer for stx-nfv. Kevin has been contributing to the stx-nfv project since it was created and has been both a valuable reviewer and code author. I'd like confirmation (or objections) from the existing cores please... Bart -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Thu Feb 14 19:48:48 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 14 Feb 2019 19:48:48 +0000 Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout In-Reply-To: <543FEB76-F032-4865-A12F-B45C56A3B0B7@schaible-consulting.de> References: <701275276.504630.1550169575394@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D9CA@ALA-MBD.corp.ad.wrs.com> <543FEB76-F032-4865-A12F-B45C56A3B0B7@schaible-consulting.de> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DAC9@ALA-MBD.corp.ad.wrs.com> Your load shouldn't have the http port change, which was merged the next day. So I would suggest checking that the lighttpd server is running fine on the active controller as the first step. If it is, then if you have some shell access from the failed installation, maybe you can confirm that the boot interface is supported by the initrd and rule out comms issues. -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Thursday, February 14, 2019 2:41 PM To: Penney, Don Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout Hi Don, which build do you recommend? Is there a workaround so I can keep my installation for now? 
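To make the suggested checks concrete, a rough sketch of the commands follows. It assumes the rel-19.01 feed paths from the boot command line quoted earlier, that the pxecontroller alias resolves on the active controller, and that curl is available there; none of these exact commands come from the original thread.

    # Run on the active controller while controller-1 attempts to install.

    # 1. Is lighttpd up and serving the installer feed?
    ps -ef | grep [l]ighttpd
    curl -sI http://pxecontroller/feed/rel-19.01/ | head -1

    # 2. Can the squashfs image be fetched? (Anaconda fetches
    #    LiveOS/squashfs.img relative to inst.repo - the exact path is
    #    an assumption here.)
    curl -sI http://pxecontroller/feed/rel-19.01/LiveOS/squashfs.img | head -1

    # 3. Watch the transfers while the node boots.
    tail -f /var/log/daemon.log | grep -i tftp

If both HEAD requests succeed but the node still hits the dracut timeout, the remaining suspects are NIC support in the initrd kernel modules or some other comms issue, as per the list above.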
Thanks Marcel Von meinem iPhone gesendet > Am 14.02.2019 um 19:50 schrieb Penney, Don : > > This means the initrd was unable to download the squashfs.img from the active controller. This could be a couple of things: > * problems with the lighttpd server on the active controller > * NICs that are unsupported by the initrd kernel modules > * some other comms issue > > What load are you using? There was a recent update around http port config that moved lighttpd to listen to port 8080 instead of 80, but your boot cmdline is referencing http://pxecontroller/ > > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Thursday, February 14, 2019 1:40 PM > To: starlingx-discuss at lists.starlingx.io > Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout > > Hi, > > I am trying to install controller-1 in a duplex configuration (bare metal) and getting the following error: > > Console log: > ============= > > [ 201.379136] dracut-initqueue[730]: Warning: Could not boot. > [ OK ] Started Show Plymouth Boot Screen. > [ OK ] Started Device-Mapper Multipath Device Controller. > Starting Open-iSCSI... > [ OK ] Reached target Paths. > [ OK ] Reached target Basic System. > [ OK ] Started Open-iSCSI. > Starting dracut initqueue hook... > [ 140.683067] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > [ 141.198294] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > ... > [ 195.770491] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > [ 196.280235] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > [Warning: /dev/root does not exist > > Generating "/run/initramfs/rdsosreport.txt" > > Entering emergency mode. Exit the shell to continue. > Type "journalctl" to view system logs. > You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot > after mounting them and attach it to a bug report. > > dracut:/# > > Kernel parameter: > ================== > > dracut:/# journalctl | grep -i boot > Jan 09 00:16:06 localhost kernel: Command line: BOOT_IMAGE=rel-19.01/installer-bzImage bootifonly=1 devfs=nomount inst.repo=http://pxecontroller/feed/rel-19.01/ inst.ks=http://pxecontroller/feed/rel-19.01/net_smallsystem_ks.cfg usbcore.autosuspend=-1 biosdevname=0 rd.net.timeout.dhcp=120 ksdevice=02:01:00:10:02:06 BOOTIF=02:01:00:10:02:06 boot_device=nvme0n1 rootfs_device=nvme0n1 inst.text console=ttyS0,115200 tisnotify=http://pxecontroller:6385/v1/ihosts/00273dcb-25fa-4204-98de-64fed0bfabfe/install_progress inst.gpt user_namespace.enable=1 security_profile=standard nopti nospectre_v2 > ======= > > The message "[Warning: /dev/root does not exist" make me nervous. What does that mean? > > Any idea is welcome! > > Thanks > > Marcel > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Thu Feb 14 19:59:51 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 14 Feb 2019 14:59:51 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 149 - Failure! 
Message-ID: <1581176458.33.1550174393521.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 149 Status: Failure Timestamp: 20190214T182011Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/centos76/centos/20190214T150726Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/f-centos76/20190214T150726Z DOCKER_BUILD_ID: jenkins-f-centos76-20190214T150726Z-builder MY_REPO: /localdisk/designer/jenkins/f-centos76/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/centos76/centos/20190214T150726Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/feature/centos76/centos/20190214T150726Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/f-centos76 From build.starlingx at gmail.com Thu Feb 14 19:59:56 2019 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 14 Feb 2019 14:59:56 -0500 (EST) Subject: [Starlingx-discuss] [build-report] STX_build_centos76_pike - Build # 1 - Failure! Message-ID: <785911781.36.1550174397483.JavaMail.javamailuser@localhost> Project: STX_build_centos76_pike Build #: 1 Status: Failure Timestamp: 20190214T150726Z Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/feature/centos76/centos/20190214T150726Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS: false From Don.Penney at windriver.com Thu Feb 14 20:55:43 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 14 Feb 2019 20:55:43 +0000 Subject: [Starlingx-discuss] Nominate Kevin Smith as stx-nfv core In-Reply-To: References: <5CDBBEDBFFF82E4C9E004A2C0F42FE58BAA65301@ALA-MBD.corp.ad.wrs.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DB89@ALA-MBD.corp.ad.wrs.com> +1 from me, as well. Kevin has done a lot of work in the stx-nfv repo. From: Bailey, Henry Albert (Al) [mailto:Al.Bailey at windriver.com] Sent: Thursday, February 14, 2019 2:46 PM To: Wensley, Barton; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Nominate Kevin Smith as stx-nfv core +1 From: Wensley, Barton [mailto:Barton.Wensley at windriver.com] Sent: Thursday, February 14, 2019 2:45 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Nominate Kevin Smith as stx-nfv core I'd like to add Kevin Smith as a core reviewer for stx-nfv. Kevin has been contributing to the stx-nfv project since it was created and has been both a valuable reviewer and code author. I'd like confirmation (or objections) from the existing cores please... Bart -------------- next part -------------- An HTML attachment was scrubbed... 
URL:

From juan.carlos.alonso at intel.com Thu Feb 14 21:14:02 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Thu, 14 Feb 2019 21:14:02 +0000
Subject: [Starlingx-discuss] Sanity Test - ISO 20190214
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C9B9E0@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Feb-14 (link)

Sanity Test is executed in a Bare Metal Environment
Status: GREEN

Simplex
Setup          Manual  [PASS]
Provisioning   01 TCs  [PASS]
Sanity         42 TCs  [PASS]
TOTAL: [ 43 TCs PASS ]

===========================================

Sanity Test is executed in a Virtual Environment
Status: GREEN

Simplex
Setup          04 TCs  [PASS]
Provisioning   01 TCs  [PASS]
Sanity         42 TCs  [PASS]
TOTAL: [ 47 TCs PASS ]

Duplex
Setup          04 TCs  [PASS]
Provisioning   01 TCs  [PASS]
Sanity         45 TCs  [PASS]
TOTAL: [ 50 TCs PASS ]

Multinode Controller Storage
Setup          04 TCs  [PASS]
Provisioning   01 TCs  [PASS]
Sanity         22 TCs  [PASS]
TOTAL: [ 50 TCs PASS ]

Multinode Dedicated Storage
Setup          04 TCs  [PASS]
Provisioning   01 TCs  [PASS]
Sanity         45 TCs  [PASS]
TOTAL: [ 50 TCs PASS ]

------------------------------------------------------------------

Regards.
Juan Carlos Alonso

From Numan.Waheed at windriver.com Thu Feb 14 21:25:04 2019
From: Numan.Waheed at windriver.com (Waheed, Numan)
Date: Thu, 14 Feb 2019 21:25:04 +0000
Subject: [Starlingx-discuss] Unable to Modify StarlingX Wiki Page
Message-ID: <3CAA827B7A79BA46B15B280EC82088FE4827423B@ALA-MBD.corp.ad.wrs.com>

Hi Bruce and Abraham,

There are a few pages on the StarlingX wiki that are locked and cannot be modified. We were looking at adding some instructions on the wiki, and the pages where we want to make the changes are locked. Specifically, I would like to add some instructions under the Documentation page, and the pages under it are locked.

Can you please give me (Numan Waheed, email: numan.waheed at windriver.com) and Yang Liu (email: yang.liu at windriver.com) privileges to update these pages? If you are not the right person to provide this privilege, please let me know who can do it.

Thanks,

Numan.
-------------- next part -------------- An HTML attachment was scrubbed... URL: From juan.carlos.alonso at intel.com Thu Feb 14 23:22:25 2019 From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos) Date: Thu, 14 Feb 2019 23:22:25 +0000 Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190214 Message-ID: <8557B550001AFB46A43A0CCC314BF85153C9BA3C@FMSMSX108.amr.corp.intel.com> Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Feb-14 (link) Sanity Test is executed in a Containers - Virtual Environment Status: YELLOW Simplex Setup Manual [PASS] Provisioning Manual [PASS] Sanity OpenStack 36 TCs [PASS] | 1 TC [FAIL] Sanity Platform In Development TOTAL: [ 36 TCs PASS | 1 TC FAIL ] ------------------------------------------------------------------ Ceilometer command does not retrieve output. Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1815976 ------------------------------------------------------------------ List of test cases executed: Test Scenario Test Case Status Instances-From-Image Create Flavors for Instances PASS Create Images for Instances PASS Create Networks for Instances PASS Launch Instances PASS Suspend Resume Instances PASS Set Error Active Flags Instances PASS Pause Unpause Instances PASS Stop Start Instances PASS Lock Unlock Instances PASS Reboot Instance PASS Rebuild Instances PASS Resize Instances PASS Set Unset Properties Instances PASS Instances-From-Volume Create Flavors for Instances PASS Create Images for Instances PASS Create Networks for Instances PASS Create Volume for Instances PASS Launch Instances PASS Suspend Resume Instances PASS Set Error Active Flags Instances PASS Pause Unpause Instances PASS Stop Start Instances PASS Lock Unlock Instances PASS Reboot Instance PASS Rebuild Instances PASS Resize Instances PASS Set Unset Properties Instances PASS test_openstack_services_healthy PASS test_reapply_stx_openstack PASS test_stx_openstack_helm_override_update_and_reset PASS test_horizon_create_delete_instance PASS test_heat_template PASS test_add_host_simplex_negative PASS test_measurements_for_metric PASS test_ceilometer_meters_exist FAIL test_vm_meta_data_retrieval PASS test_nova_actions PASS Regards. Juan Carlos Alonso -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Wed Feb 13 15:56:39 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Wed, 13 Feb 2019 15:56:39 +0000 Subject: [Starlingx-discuss] Community meeting Feb 13th notes Message-ID: <9A85D2917C58154C960D95352B22818BBFD271D9@fmsmsx123.amr.corp.intel.com> Agenda and notes for the Feb 13th call * Bruce will be OOO next week. Can someone else chair this meeting? [Bill Z volunteers] * The outreach working group has the following suggestions for things we'd like to see to help community outreach * A "getting started" SIG / sub-project to help new contributors get up to speed * Office hours on IRC * Marking LP/SB entries that are simple/easy to also help new contributors ? Mark stories with "stx.easy" or similar? Use stx.help.wanted - Bill to look for bugs to label * This is something many projects do, especially in github where labelling is quite easy, eg. Kata Containers, "help wanted" - https://github.com/kata-containers/runtime/issues?q=is%3Aissue+is%3Aopen+label%3A%22help+wanted%22 * Would need people who know what issues/bugs/features in all the projects are good for new contributors to label them, and then have somewhere that those labels are linked from (perhaps off the main website?). 
  - Define new stories for easy but not yet staffed efforts, e.g. Unit Testing?
  - Posting a todo list?
  - Ask each sub-project to identify easy work items in their domains?
* A series of blog posts in the OSF blog about the new features in the May release (to be posted post May)
  - e.g. containers, distributed cloud, CentOS 7.6, Ceph 13, etc...
  - Brent to write a container blog.
* Meanwhile please do everything you can to spread the word about StarlingX within your company and communities, to welcome new community members to the project and help make them successful.
  - Abraham will help. Curtis too.
* Our wiki needs updating. Any objections to me (Bruce) doing some major refactoring? None heard :)
* Open Infrastructure Summit and PTG planning (ildikov)
  * Forum session proposal planning:
    - https://wiki.openstack.org/wiki/Forum
    - https://etherpad.openstack.org/p/stx-forum-preparation-denver-2019
  * PTG planning:
    - http://lists.starlingx.io/pipermail/starlingx-discuss/2019-February/003057.html
    - https://etherpad.openstack.org/p/stx-ptg-preparation-denver-2019
* Open ARs from Mini-PTG https://docs.google.com/spreadsheets/d/1F-JKh8_gLlUzbrUJRbsf4u65yBGVBl8HGe-RnUQoyc4/edit?usp=sharing
* Sub-project updates:
  * Release (Ghada)
    - stx.2019.05 Release Gating Bugs: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.2019.05
    - The bug trend is upward :(
    - see updated trend line at https://docs.google.com/spreadsheets/d/1DZZgqrCIL6wxv51_yFBk6Lfmtf1AqPD6z7e5hEs3prU/edit?usp=sharing
  * Containers (Frank) - are we getting a Valentine's Day present? :)
    - Team working very hard on 3 gating issues - will not make tomorrow. Day by day.
    - 4 high priority LPs to be addressed (one may close)
    - docker mirror story board needs to merge - pending on Mingyuan's review
    - Sanity results from Ada's team - looking for them to come out tomorrow.
    - please use stx.containers for any new issues so the team can find them!
  * Security (Ken)
    - work in progress on CVE scans (as per ARs above)
    - Github is doing security scans of our repos there. Ken getting them indirectly. Dean to help see if we can add recipients
  * Ceph upgrade (Vivian)
    - Team discussing how to divide and conquer the remaining work. Key team members still on CNY.
  * CentOS upgrade (Cindy)
    - First run of tests done by Ada and Numan's team. UEFI issue found which blocked some testing. Fixed, second run of testing can proceed.
    - Plan is to merge Feb 22nd but might be delayed due to test cycle.
  * Networking (Ghada)
    - Continue to push on neutron upstreaming. Working on OVS in a container.
  * Docs (Michael / Bruce)
    - Mega-spec working through the approval process. WR has contributed some additional seed documents, thank you!
  * Build (Cesar)
    - Build is fairly stable, no issues expected for container cut-over
  * Dist-Cloud (Dariush)
    - Looking for issues that need resolution that might be caused by the container changes.
  * Test (Ada)
    - See above :) Work continues to review the regression suite and decide which tests can be kept and which need to change.
  * Multi-OS (Cesar)
    - We had an _interesting_ meeting on Monday and reset expectations for what we want to do for May. Multi-OS spec has been updated and the team is looking for review and feedback!
-------------- next part --------------
An HTML attachment was scrubbed...
URL: From scott.little at windriver.com Thu Feb 14 14:56:14 2019 From: scott.little at windriver.com (Scott Little) Date: Thu, 14 Feb 2019 09:56:14 -0500 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <19C65A6E92EA384D809B1772130CD7F862183A40@ALA-MBD.corp.ad.wrs.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F8621838AE@ALA-MBD.corp.ad.wrs.com> <40048C47-DECC-4096-844D-7B8C3047F51B@intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FE84643@SHSMSX101.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862183A40@ALA-MBD.corp.ad.wrs.com> Message-ID: <7c5185d8-daaa-41de-597c-ae566b76918a@windriver.com> I think it's on our side. The in house jenkins script I cloned for the 76 build does not include an installer rebuild. Scott On 2019-02-14 8:36 a.m., Liu, Yang wrote: > > Hi Shuicheng, > > I checked the 3 items as per your instructions, build log does contain > the expected step, however the results are different. > > @ Scott/Don, any thoughts on this? > > 1.+ install -D -m 755 grubx64.efi > /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi > > 2.-rwx------ 1 yliu12 users 1233016 Feb 12 16:37 ./pxeboot/EFI/grubx64.efi > > 3.-rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi > > BR, > > Yang > > *From:*Lin, Shuicheng [mailto:shuicheng.lin at intel.com] > *Sent:* February-14-19 2:32 AM > *To:* Penney, Don; Xie, Cindy; Liu, Yang; Chen, Haochuan Z; Sun, Austin > *Cc:* starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, > Numan; Little, Scott > *Subject:* RE: CentOS7.6 testing status - blocked > > Hi Yang/Don, > > We double checked the issue today. Here is our finding: > > 1.I try to revert the fix [0], then do build-pkgs and build-iso, the > “grubx64.efi” in “export/dist/isolinux/pxeboot/EFI/” is with 700 > permission mode. > > Add the fix [0] back, then build-pkgs and build-iso, the “grubx64.efi” > is changed to 755 permission mode. > > I also checked the grubx64.efi file in both ISO image, it has the same > mode as upper file. > > 2.Martin confirmed there is tftp log in the deployment: > > “ > > 2019-02-11T00:36:52.000 controller-0 dnsmasq-tftp[8262]: info sent > /pxeboot/EFI/grubx64.efi to 169.254.202.76 > > controller-0:/var/log$ ls /pxeboot/EFI/grubx64.efi  -l > > -rwxr-xr-x. 1 root root 1234192 Feb  3 06:52 /pxeboot/EFI/grubx64.efi > > “ > > 3.Austin confirmed “install -D -m 755” will set the grubx64.efi with > 755 permission mode. > > “ > > -m, --mode=MODE > > set permission mode (as in chmod), instead of rwxr-xr-x > > “ > > 4.I try to go through the build log. 
Here is the log from grub2’s > build.log > > “ > > + install -m 700 grubx64.efi > /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi.unsigned > > + install -m 700 gcdx64.efi > /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi.unsigned > > + install -D -m 755 grubx64.efi > /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi > > + install -m 700 grubx64.efi > /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi > > + install -m 700 gcdx64.efi > /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi > > “ > > And in build-iso script, the file will be extracted and copied to the > EFI folder: > > “ > > extract_pkg_from_local_repo ${MY_YUM_CONF} ${STD_REPO_ID} > grub2-efi-x64-pxeboot > > … > > \cp --preserve=all pxeboot/EFI/grubx64.efi > $OUTPUT_DIST_DIR/isolinux/pxeboot/EFI/ > > “ > > Due to we cannot reproduce the issue, we are not sure which step cause > the issue yet. > > So could you help me have a check with below step to narrow down the > issue? Thanks. > > 1.Please help check whether there is “install -D -m 755 grubx64.efi” > in the > “loadbuild/std/results/slin14-starlingx-tis-r5-pike-std/grub2-2.02-0.76.el7.centos.tis.12/build.log” > or not. > > 2.Please help extract > “grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm” in > “loadbuild/std/rpmbuild/RPMS”, and check whether the grubx64.efi file > is with 755 mode or not. > > Extract cmd: rpm2cpio > grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm | cpio -idmv > > 3.Please help check the “grubx64.efi” in > “export/dist/isolinux/pxeboot/EFI/” folder is with 755 mode or not. > > [0]: https://review.openstack.org/634559 > > > Best Regards > > Shuicheng > > *From:* Penney, Don [mailto:Don.Penney at windriver.com] > *Sent:* Thursday, February 14, 2019 6:01 AM > *To:* Xie, Cindy ; Liu, Yang > *Cc:* starlingx-discuss at lists.starlingx.io; Cabrales, Ada > ; Waheed, Numan ; > Little, Scott > *Subject:* Re: [Starlingx-discuss] CentOS7.6 testing status - blocked > > Hi Cindy, > > In a successful case, you should see a TFTP log in daemon.log on the > active controller indicating the file was transferred, such as: > > 2019-02-12T13:20:45.000 controller-0 dnsmasq-tftp[200877]: info sent > /pxeboot/EFI/grubx64.efi to 192.168.204.4 > > I would suggest doing something like “tail -f /var/log/daemon.log | > grep -i tftp” while doing the installation of nodes from the active > controller, to verify the expected file is getting transferred. If the > host installs and you don’t see this file transferred, I’d recommend > reconfirming that the node is installing via UEFI. > > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* Wednesday, February 13, 2019 4:46 PM > *To:* Liu, Yang > *Cc:* starlingx-discuss at lists.starlingx.io > ; Cabrales, Ada; Waheed, > Numan; Little, Scott > *Subject:* Re: [Starlingx-discuss] CentOS7.6 testing status - blocked > > Hi, Yang > > Sorry about the issue! It’s interesting as I did have my engineer > tested the scenarios. There must be something missing from my side. > > We will redo the patch and test. In the same time, can you manually > change the file permissions as temporarily workaround and unblock the > test cycle? > > Thanks! 
Cindy > > Sent from my iPhone > > > On Feb 14, 2019, at 3:02 AM, Liu, Yang > wrote: > > Hi Cindy, > > We are still seeing the same file permission issue for grubx64.efi > under pxeboot/EFI, causing UEFI pxeboot to fail. > > We need the grubx64.efi to be readable by others as well. > > ../pxeboot/EFI/ > > total 1220 > > drwxrwsr-x 3 jenkins mock    4096 Feb 12 16:48 . > > drwxrwsr-x 3 jenkins mock    4096 Feb 12 16:48 .. > > drwxrwsr-x 3 jenkins mock    4096 Feb 12 16:48 centos > > -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi > > The patch seems to have changed the dir permission for centos from > 700 to 755, but not grubx64.efi. > > For the dir permission for centos, I believe the original 700 > should be sufficient (@Scott, please correct me if it’s wrong). > > BR, > > Yang > > *From:*Liu, Yang > *Sent:* February-12-19 9:00 AM > *To:* 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io > ; Cabrales, Ada; > Waheed, Numan > *Subject:* RE: CentOS7.6 testing status - blocked > > Thanks Cindy. Will do. > > BR, > > Yang > > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* February-11-19 8:10 PM > *To:* Liu, Yang; starlingx-discuss at lists.starlingx.io > ; Cabrales, Ada; > Waheed, Numan > *Subject:* RE: CentOS7.6 testing status - blocked > > Hi, Numan/Yang, > > The last pending patch (https://review.openstack.org/#/c/634559/) > which was blocking your testing (#1814360) was just merged. Please > get new build ISO from Jason so you can continue the testing. > > Thx. - cindy > > *From:* Liu, Yang [mailto:yang.liu at windriver.com] > *Sent:* Saturday, February 9, 2019 10:18 AM > *To:* Xie, Cindy >; > starlingx-discuss at lists.starlingx.io > > *Subject:* RE: CentOS7.6 testing status - blocked > > Correct. > > BR, > > Yang > > *From:*Xie, Cindy [mailto:cindy.xie at intel.com] > *Sent:* February-08-19 8:28 PM > *To:* Liu, Yang; starlingx-discuss at lists.starlingx.io > > *Subject:* RE: CentOS7.6 testing status - blocked > > Hi, Yang, > > Thanks for the report. > > Are the “two node system” below referring to Duplex? Just want to > confirm because #1814360 we have a patch pending and we do want to > ensure it works on Duplex as well. > > Th.x - cindy > > *From:* Liu, Yang [mailto:yang.liu at windriver.com] > *Sent:* Saturday, February 9, 2019 2:09 AM > *To:* starlingx-discuss at lists.starlingx.io > > *Subject:* [Starlingx-discuss] CentOS7.6 testing status - blocked > > Hi folks, > > Here’s an update for CentOS7.6 testing. > > We are currently blocked due to pxeboot from controller-0 does not > work for EFI. (#1814360) > > We will continue after that issue is resolved. > > System > > > > NICs > > Mgmt;infra;data > > > > Special Configs > > > > Test coverage after Install and Config > > > > Status/Issues > > Dedicated storage > > > > X540-AT2; X540-AT2; fortville > > > > IPv6 > > > > Sanity, nova > > > > Completed. New issues logged. > > #1814336 CentOS7.6: Unable to launch vm directly from virsh > > > #1814335 CentOS7.6: Unable to launch vm with UEFI boot > > > One node system > > > > none; none; X522/X577-AT > > > > > > Sanity, basic regression > > > > Completed. Passed. > > Two node system > > > > fortville; fortville; fortville > > > > tboot, tpm, https, > > extended security profile > > > > Sanity, security > > > > Blocked by #1814360 > > Multi-node system > > > > BCM5720; Niantic; Niantic > > > > Sriov(niantic),pcipt(niantic) > > > > Sanity, networking > > > > Completed. Passed. 
> > Two node system > > > > Fortville; none; Fortville > > > > Low latency, UEFI > > > > Sanity, basic regression, cyclictest > > > > Blocked by #1814360 > > Two node system > > > > Fortville; none; Fortville > > > > Secure boot > > > > Sanity, security > > > > Blocked by #1814360 > > Multi-node system > > > > I350; Niantic/cx3; cx3 > > > > Pxeboot script > > > > Sanity > > > > Completed. Passed. > > Only compute-0 was used, since compute-1 has CX3 data nic. > > ?? > > > > CX4 on infra or mgmt, but NOT data > > > > > > Won’t test. We don’t have a system have required nics. > > BR, > > yang > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Thu Feb 14 15:39:53 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Thu, 14 Feb 2019 15:39:53 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <7c5185d8-daaa-41de-597c-ae566b76918a@windriver.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F8621838AE@ALA-MBD.corp.ad.wrs.com> <40048C47-DECC-4096-844D-7B8C3047F51B@intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FE84643@SHSMSX101.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862183A40@ALA-MBD.corp.ad.wrs.com> <7c5185d8-daaa-41de-597c-ae566b76918a@windriver.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D7D9@ALA-MBD.corp.ad.wrs.com> Looking at our in-house jenkins build output, I see: $ rpm -qp --dump std/results/jenkins-STX_Feature_centos76_Build-2019-02-12_14-43-05-tis-r6-pike-std/grub2-2.02-0.76.el7.centos.tis.12/grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm /pxeboot/EFI/grubx64.efi 1233016 1550007456 3d0f3ae9293f23e1ebe6f56e1eb04fc6 0100700 root root 0 0 0 X Looking at the grub.macro file, which is providing the %install and %files directives, it certainly seems like this should be 755. The %defattr being set ignores the permissions. And I don’t see anything in the build.log that would indicate another chmod is happening after. From: Little, Scott Sent: Thursday, February 14, 2019 9:56 AM To: Liu, Yang; Lin, Shuicheng; Penney, Don; Xie, Cindy; Chen, Haochuan Z; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: Re: CentOS7.6 testing status - blocked I think it's on our side. The in house jenkins script I cloned for the 76 build does not include an installer rebuild. Scott On 2019-02-14 8:36 a.m., Liu, Yang wrote: Hi Shuicheng, I checked the 3 items as per your instructions, build log does contain the expected step, however the results are different. @ Scott/Don, any thoughts on this? 1. + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi 2. -rwx------ 1 yliu12 users 1233016 Feb 12 16:37 ./pxeboot/EFI/grubx64.efi 3. -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi BR, Yang From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: February-14-19 2:32 AM To: Penney, Don; Xie, Cindy; Liu, Yang; Chen, Haochuan Z; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: RE: CentOS7.6 testing status - blocked Hi Yang/Don, We double checked the issue today. 
Here is our finding: 1. I try to revert the fix [0], then do build-pkgs and build-iso, the “grubx64.efi” in “export/dist/isolinux/pxeboot/EFI/” is with 700 permission mode. Add the fix [0] back, then build-pkgs and build-iso, the “grubx64.efi” is changed to 755 permission mode. I also checked the grubx64.efi file in both ISO image, it has the same mode as upper file. 2. Martin confirmed there is tftp log in the deployment: “ 2019-02-11T00:36:52.000 controller-0 dnsmasq-tftp[8262]: info sent /pxeboot/EFI/grubx64.efi to 169.254.202.76 controller-0:/var/log$ ls /pxeboot/EFI/grubx64.efi -l -rwxr-xr-x. 1 root root 1234192 Feb 3 06:52 /pxeboot/EFI/grubx64.efi “ 3. Austin confirmed “install -D -m 755” will set the grubx64.efi with 755 permission mode. “ -m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x “ 4. I try to go through the build log. Here is the log from grub2’s build.log “ + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi.unsigned + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi.unsigned + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi “ And in build-iso script, the file will be extracted and copied to the EFI folder: “ extract_pkg_from_local_repo ${MY_YUM_CONF} ${STD_REPO_ID} grub2-efi-x64-pxeboot … \cp --preserve=all pxeboot/EFI/grubx64.efi $OUTPUT_DIST_DIR/isolinux/pxeboot/EFI/ “ Due to we cannot reproduce the issue, we are not sure which step cause the issue yet. So could you help me have a check with below step to narrow down the issue? Thanks. 1. Please help check whether there is “install -D -m 755 grubx64.efi” in the “loadbuild/std/results/slin14-starlingx-tis-r5-pike-std/grub2-2.02-0.76.el7.centos.tis.12/build.log” or not. 2. Please help extract “grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm” in “loadbuild/std/rpmbuild/RPMS”, and check whether the grubx64.efi file is with 755 mode or not. Extract cmd: rpm2cpio grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm | cpio -idmv 3. Please help check the “grubx64.efi” in “export/dist/isolinux/pxeboot/EFI/” folder is with 755 mode or not. [0]: https://review.openstack.org/634559 Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, February 14, 2019 6:01 AM To: Xie, Cindy ; Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan ; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi Cindy, In a successful case, you should see a TFTP log in daemon.log on the active controller indicating the file was transferred, such as: 2019-02-12T13:20:45.000 controller-0 dnsmasq-tftp[200877]: info sent /pxeboot/EFI/grubx64.efi to 192.168.204.4 I would suggest doing something like “tail -f /var/log/daemon.log | grep -i tftp” while doing the installation of nodes from the active controller, to verify the expected file is getting transferred. If the host installs and you don’t see this file transferred, I’d recommend reconfirming that the node is installing via UEFI. 
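Condensed into commands, the three checks above might look like the sketch below. $MY_WORKSPACE and the exact grub2 version string (2.02-0.76.el7.centos.tis.12) are assumptions that will vary per build.

    # Sketch only - adjust MY_WORKSPACE and the grub2 NVR to match your build.
    PKG=$MY_WORKSPACE/std/rpmbuild/RPMS/grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm

    # 1. Did the %install step run with -m 755?
    grep 'install -D -m 755 grubx64.efi' \
        $MY_WORKSPACE/std/results/*/grub2-2.02-0.76.el7.centos.tis.12/build.log

    # 2. What mode did the RPM record? (mode is the 5th field of --dump output)
    rpm -qp --dump "$PKG" | awk '$1 == "/pxeboot/EFI/grubx64.efi" {print $5}'
    # ...or extract the payload and inspect the file directly:
    mkdir -p /tmp/grub-check && cd /tmp/grub-check
    rpm2cpio "$PKG" | cpio -idmv
    ls -l pxeboot/EFI/grubx64.efi

    # 3. What mode landed in the ISO staging area after build-iso?
    ls -l $MY_WORKSPACE/export/dist/isolinux/pxeboot/EFI/grubx64.efi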
From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, February 13, 2019 4:46 PM To: Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi, Yang Sorry about the issue! It’s interesting as I did have my engineer tested the scenarios. There must be something missing from my side. We will redo the patch and test. In the same time, can you manually change the file permissions as temporarily workaround and unblock the test cycle? Thanks! Cindy Sent from my iPhone On Feb 14, 2019, at 3:02 AM, Liu, Yang > wrote: Hi Cindy, We are still seeing the same file permission issue for grubx64.efi under pxeboot/EFI, causing UEFI pxeboot to fail. We need the grubx64.efi to be readable by others as well. ../pxeboot/EFI/ total 1220 drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 . drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 .. drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 centos -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi The patch seems to have changed the dir permission for centos from 700 to 755, but not grubx64.efi. For the dir permission for centos, I believe the original 700 should be sufficient (@Scott, please correct me if it’s wrong). BR, Yang From: Liu, Yang Sent: February-12-19 9:00 AM To: 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Thanks Cindy. Will do. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the “two node system” below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. Th.x - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here’s an update for CentOS7.6 testing. We are currently blocked due to pxeboot from controller-0 does not work for EFI. (#1814360) We will continue after that issue is resolved. System NICs Mgmt;infra;data Special Configs Test coverage after Install and Config Status/Issues Dedicated storage X540-AT2; X540-AT2; fortville IPv6 Sanity, nova Completed. New issues logged. #1814336 CentOS7.6: Unable to launch vm directly from virsh #1814335 CentOS7.6: Unable to launch vm with UEFI boot One node system none; none; X522/X577-AT Sanity, basic regression Completed. Passed. 
Two node system fortville; fortville; fortville tboot, tpm, https, extended security profile Sanity, security Blocked by #1814360 Multi-node system BCM5720; Niantic; Niantic Sriov(niantic),pcipt(niantic) Sanity, networking Completed. Passed. Two node system Fortville; none; Fortville Low latency, UEFI Sanity, basic regression, cyclictest Blocked by #1814360 Two node system Fortville; none; Fortville Secure boot Sanity, security Blocked by #1814360 Multi-node system I350; Niantic/cx3; cx3 Pxeboot script Sanity Completed. Passed. Only compute-0 was used, since compute-1 has CX3 data nic. ?? CX4 on infra or mgmt, but NOT data Won’t test. We don’t have a system have required nics. BR, yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From bruce.e.jones at intel.com Fri Feb 15 00:05:55 2019 From: bruce.e.jones at intel.com (Jones, Bruce E) Date: Fri, 15 Feb 2019 00:05:55 +0000 Subject: [Starlingx-discuss] Proposal: Build and Multi-OS strategy Message-ID: <9A85D2917C58154C960D95352B22818BC3AD11C1@fmsmsx121.amr.corp.intel.com> I would like to start a thread that I hope will result in more focus and direction for the Build and Multi-OS teams. It's right before I disappear for vacation, so I'll ask Saul and Cesar to address any follow-ups. What I would like to propose is that we as a community take on the task of delivering support for Ubuntu as a StarlingX host OS for the November 2019 release. This would allow us to support the ~35% of the cloud ecosystem that doesn't run on RHEL or CentOS. It will require a lot of work and therefore we should start as soon as possible. What I would propose we do is: 1) The Build team to create a new and separate build system for an Ubuntu LTS hosted ISO [0] 2) The MultiOS team to review the outstanding carried patches and apply those needed to the Ubuntu packages 3) The MultiOS team to update the system as needed to use an Ubuntu installer to get controller-0 fully installed 4) Which would then lead to work in the MultiOS team to bring up the Ubuntu hosted StarlingX in Simplex mode [1]. We would then have a (kind of) working StarlingX image that will enable the broader community to contribute to all of the other work needed to deliver a fully supported Ubuntu host for the November release. That work would include bringing up the other configurations beyond Simplex, changes to the StarlingX software management and update services, the additional testing needed, and other tasks which can be parallelized. Meanwhile, on the Intel side we have received new guidance from our new management on the requirement for Clear Linux support. We will continue but slow down that work for now and focus on Ubuntu. So for November, the goal is to support 2 Host OS's, not 3. We will need support and contributions from the community to achieve this goal in time for November. The MultiOS team in particular will need help and additional contribtutors. Brucej [0] Making the build system common between Ubuntu and CentOS is hard and probably should not be attempted. We should leverage what we can, of course, from our own code and the broader ecosystem. [1] Or in which ever configuration is easier.... -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From serverascode at gmail.com Fri Feb 15 01:07:36 2019 From: serverascode at gmail.com (Curtis) Date: Thu, 14 Feb 2019 20:07:36 -0500 Subject: [Starlingx-discuss] Proposal: Build and Multi-OS strategy In-Reply-To: <9A85D2917C58154C960D95352B22818BC3AD11C1@fmsmsx121.amr.corp.intel.com> References: <9A85D2917C58154C960D95352B22818BC3AD11C1@fmsmsx121.amr.corp.intel.com> Message-ID: On Thu, Feb 14, 2019 at 7:07 PM Jones, Bruce E wrote: > I would like to start a thread that I hope will result in more focus and > direction for the Build and Multi-OS teams. It’s right before I disappear > for vacation, so I’ll ask Saul and Cesar to address any follow-ups. > > > > What I would like to propose is that we as a community take on the task of > delivering support for Ubuntu as a StarlingX host OS for the November 2019 > release. This would allow us to support the ~35% of the cloud ecosystem > that doesn’t run on RHEL or CentOS. > > > > It will require a lot of work and therefore we should start as soon as > possible. What I would propose we do is: > > 1) The Build team to create a new and separate build system for an > Ubuntu LTS hosted ISO [0] > > 2) The MultiOS team to review the outstanding carried patches and > apply those needed to the Ubuntu packages > > 3) The MultiOS team to update the system as needed to use an Ubuntu > installer to get controller-0 fully installed > > 4) Which would then lead to work in the MultiOS team to bring up the > Ubuntu hosted StarlingX in Simplex mode [1]. > > > > We would then have a (kind of) working StarlingX image that will enable > the broader community to contribute to all of the other work needed to > deliver a fully supported Ubuntu host for the November release. That > work would include bringing up the other configurations beyond Simplex, > changes to the StarlingX software management and update services, the > additional testing needed, and other tasks which can be parallelized. > > > > Meanwhile, on the Intel side we have received new guidance from our new > management on the requirement for Clear Linux support. We will continue > but slow down that work for now and focus on Ubuntu. So for November, > the goal is to support 2 Host OS’s, not 3. > > > > We will need support and contributions from the community to achieve this > goal in time for November. The MultiOS team in particular will need help > and additional contribtutors. > I think something has to change, as I personally view the multios direction as a bit risky for the project overall. I don't mean that it shouldn't be done, but that it carries some measure of risk and that risk has to be taken into consideration. (Of course, I'm not privy to any previous discussions on the topic as I'm new to the project.) That said I can't quantify the risk in any meaningful way, other than to say it's a gut reaction from somewhat of an outsider. :) I certainly like the idea of simplifying in some way. As well I think it's important to admit when things might be getting too complex--I believe there are some hidden dependencies out there. I look forward to hearing other peoples thoughts and ideas in this thread! Thanks, Curtis > > > Brucej > > > > [0] Making the build system common between Ubuntu and CentOS is hard and > probably should not be attempted. We should leverage what we can, of > course, from our own code and the broader ecosystem. > > [1] Or in which ever configuration is easier…. 
> > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Fri Feb 15 00:59:01 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Fri, 15 Feb 2019 00:59:01 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D7D9@ALA-MBD.corp.ad.wrs.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18@ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0@SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F8621838AE@ALA-MBD.corp.ad.wrs.com> <40048C47-DECC-4096-844D-7B8C3047F51B@intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454@ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FE84643@SHSMSX101.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862183A40@ALA-MBD.corp.ad.wrs.com> <7c5185d8-daaa-41de-597c-ae566b76918a@windriver.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D7D9@ALA-MBD.corp.ad.wrs.com> Message-ID: <56829C2A36C2E542B0CCB9854828E4D8561F1532@CDSMSX102.ccr.corp.intel.com> I check this command in my build machine [hellen at 5b2f0e3259aa starlingx]$ [hellen at 5b2f0e3259aa starlingx]$ rpm -qp --dump std/results/hellen-starlingx-tis-r5-pike-std/grub2-2.02-0.76.el7.centos.tis.12/grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm /pxeboot/EFI/grubx64.efi 1234192 1549176759 783bd7e57e64d8a74f1c44849e51f76a2fd3062f7b81071053edf3f3771c2d43 0100755 root root 0 0 0 X [hellen at 5b2f0e3259aa starlingx]$ When I build iso, I follow these steps, Shuicheng used to share to me. 1, build-pkgs ; build-iso ; build-srpms --installer ; build-rpms --installer ; build-iso ; update-pxe-network-installer 2, copy generated vmlinuz, squashfs.img, initrd.img to /import/mirror/CentOS/stx-installer/ (depends on pxe-network-installer.spec) 3, build-pkgs ; build-iso Maybe we could root cause, what’s the difference. BR! Martin, Chen SSP, Software Engineer 021-61164330 From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, February 14, 2019 11:40 PM To: Little, Scott ; Liu, Yang ; Lin, Shuicheng ; Xie, Cindy ; Chen, Haochuan Z ; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Looking at our in-house jenkins build output, I see: $ rpm -qp --dump std/results/jenkins-STX_Feature_centos76_Build-2019-02-12_14-43-05-tis-r6-pike-std/grub2-2.02-0.76.el7.centos.tis.12/grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm /pxeboot/EFI/grubx64.efi 1233016 1550007456 3d0f3ae9293f23e1ebe6f56e1eb04fc6 0100700 root root 0 0 0 X Looking at the grub.macro file, which is providing the %install and %files directives, it certainly seems like this should be 755. The %defattr being set ignores the permissions. And I don’t see anything in the build.log that would indicate another chmod is happening after. From: Little, Scott Sent: Thursday, February 14, 2019 9:56 AM To: Liu, Yang; Lin, Shuicheng; Penney, Don; Xie, Cindy; Chen, Haochuan Z; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: Re: CentOS7.6 testing status - blocked I think it's on our side. 
The in house jenkins script I cloned for the 76 build does not include an installer rebuild. Scott On 2019-02-14 8:36 a.m., Liu, Yang wrote: Hi Shuicheng, I checked the 3 items as per your instructions, build log does contain the expected step, however the results are different. @ Scott/Don, any thoughts on this? 1. + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi 2. -rwx------ 1 yliu12 users 1233016 Feb 12 16:37 ./pxeboot/EFI/grubx64.efi 3. -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi BR, Yang From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: February-14-19 2:32 AM To: Penney, Don; Xie, Cindy; Liu, Yang; Chen, Haochuan Z; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: RE: CentOS7.6 testing status - blocked Hi Yang/Don, We double checked the issue today. Here is our finding: 1. I try to revert the fix [0], then do build-pkgs and build-iso, the “grubx64.efi” in “export/dist/isolinux/pxeboot/EFI/” is with 700 permission mode. Add the fix [0] back, then build-pkgs and build-iso, the “grubx64.efi” is changed to 755 permission mode. I also checked the grubx64.efi file in both ISO image, it has the same mode as upper file. 2. Martin confirmed there is tftp log in the deployment: “ 2019-02-11T00:36:52.000 controller-0 dnsmasq-tftp[8262]: info sent /pxeboot/EFI/grubx64.efi to 169.254.202.76 controller-0:/var/log$ ls /pxeboot/EFI/grubx64.efi -l -rwxr-xr-x. 1 root root 1234192 Feb 3 06:52 /pxeboot/EFI/grubx64.efi “ 3. Austin confirmed “install -D -m 755” will set the grubx64.efi with 755 permission mode. “ -m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x “ 4. I try to go through the build log. Here is the log from grub2’s build.log “ + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi.unsigned + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi.unsigned + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi “ And in build-iso script, the file will be extracted and copied to the EFI folder: “ extract_pkg_from_local_repo ${MY_YUM_CONF} ${STD_REPO_ID} grub2-efi-x64-pxeboot … \cp --preserve=all pxeboot/EFI/grubx64.efi $OUTPUT_DIST_DIR/isolinux/pxeboot/EFI/ “ Due to we cannot reproduce the issue, we are not sure which step cause the issue yet. So could you help me have a check with below step to narrow down the issue? Thanks. 1. Please help check whether there is “install -D -m 755 grubx64.efi” in the “loadbuild/std/results/slin14-starlingx-tis-r5-pike-std/grub2-2.02-0.76.el7.centos.tis.12/build.log” or not. 2. Please help extract “grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm” in “loadbuild/std/rpmbuild/RPMS”, and check whether the grubx64.efi file is with 755 mode or not. Extract cmd: rpm2cpio grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm | cpio -idmv 3. Please help check the “grubx64.efi” in “export/dist/isolinux/pxeboot/EFI/” folder is with 755 mode or not. 
[0]: https://review.openstack.org/634559 Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, February 14, 2019 6:01 AM To: Xie, Cindy ; Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan ; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi Cindy, In a successful case, you should see a TFTP log in daemon.log on the active controller indicating the file was transferred, such as: 2019-02-12T13:20:45.000 controller-0 dnsmasq-tftp[200877]: info sent /pxeboot/EFI/grubx64.efi to 192.168.204.4 I would suggest doing something like “tail -f /var/log/daemon.log | grep -i tftp” while doing the installation of nodes from the active controller, to verify the expected file is getting transferred. If the host installs and you don’t see this file transferred, I’d recommend reconfirming that the node is installing via UEFI. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, February 13, 2019 4:46 PM To: Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi, Yang Sorry about the issue! It’s interesting as I did have my engineer tested the scenarios. There must be something missing from my side. We will redo the patch and test. In the same time, can you manually change the file permissions as temporarily workaround and unblock the test cycle? Thanks! Cindy Sent from my iPhone On Feb 14, 2019, at 3:02 AM, Liu, Yang > wrote: Hi Cindy, We are still seeing the same file permission issue for grubx64.efi under pxeboot/EFI, causing UEFI pxeboot to fail. We need the grubx64.efi to be readable by others as well. ../pxeboot/EFI/ total 1220 drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 . drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 .. drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 centos -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi The patch seems to have changed the dir permission for centos from 700 to 755, but not grubx64.efi. For the dir permission for centos, I believe the original 700 should be sufficient (@Scott, please correct me if it’s wrong). BR, Yang From: Liu, Yang Sent: February-12-19 9:00 AM To: 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Thanks Cindy. Will do. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the “two node system” below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. 
Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked because pxeboot from controller-0 does not work for EFI (#1814360). We will continue after that issue is resolved.
System | NICs (mgmt; infra; data) | Special configs | Test coverage after install and config | Status/Issues
Dedicated storage | X540-AT2; X540-AT2; fortville | IPv6 | Sanity, nova | Completed. New issues logged: #1814336 CentOS7.6: Unable to launch vm directly from virsh; #1814335 CentOS7.6: Unable to launch vm with UEFI boot
One node system | none; none; X522/X577-AT | - | Sanity, basic regression | Completed. Passed.
Two node system | fortville; fortville; fortville | tboot, tpm, https, extended security profile | Sanity, security | Blocked by #1814360
Multi-node system | BCM5720; Niantic; Niantic | Sriov (niantic), pcipt (niantic) | Sanity, networking | Completed. Passed.
Two node system | Fortville; none; Fortville | Low latency, UEFI | Sanity, basic regression, cyclictest | Blocked by #1814360
Two node system | Fortville; none; Fortville | Secure boot | Sanity, security | Blocked by #1814360
Multi-node system | I350; Niantic/cx3; cx3 | Pxeboot script | Sanity | Completed. Passed. Only compute-0 was used, since compute-1 has a CX3 data NIC.
?? | CX4 on infra or mgmt, but NOT data | - | - | Won't test. We don't have a system with the required NICs.
BR, yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Fri Feb 15 02:08:41 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Fri, 15 Feb 2019 02:08:41 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <7c5185d8-daaa-41de-597c-ae566b76918a at windriver.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19 at ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D at SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18 at ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0 at SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F8621838AE at ALA-MBD.corp.ad.wrs.com> <40048C47-DECC-4096-844D-7B8C3047F51B at intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454 at ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FE84643 at SHSMSX101.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862183A40 at ALA-MBD.corp.ad.wrs.com> <7c5185d8-daaa-41de-597c-ae566b76918a at windriver.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FE8486C at SHSMSX101.ccr.corp.intel.com> Hi Scott, What do you mean by "installer rebuild"? The grub2 package is not included in "update-pxe-network-installer". Could you share more details? Thanks. Best Regards Shuicheng From: Scott Little [mailto:scott.little at windriver.com] Sent: Thursday, February 14, 2019 10:56 PM To: Liu, Yang; Lin, Shuicheng; Penney, Don; Xie, Cindy; Chen, Haochuan Z; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: Re: CentOS7.6 testing status - blocked I think it's on our side. The in-house jenkins script I cloned for the 76 build does not include an installer rebuild. Scott On 2019-02-14 8:36 a.m., Liu, Yang wrote: Hi Shuicheng, I checked the 3 items as per your instructions; the build log does contain the expected step, but the results are different. @ Scott/Don, any thoughts on this? 1.
+ install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi 2. -rwx------ 1 yliu12 users 1233016 Feb 12 16:37 ./pxeboot/EFI/grubx64.efi 3. -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi BR, Yang From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: February-14-19 2:32 AM To: Penney, Don; Xie, Cindy; Liu, Yang; Chen, Haochuan Z; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: RE: CentOS7.6 testing status - blocked Hi Yang/Don, We double checked the issue today. Here is our finding: 1. I try to revert the fix [0], then do build-pkgs and build-iso, the “grubx64.efi” in “export/dist/isolinux/pxeboot/EFI/” is with 700 permission mode. Add the fix [0] back, then build-pkgs and build-iso, the “grubx64.efi” is changed to 755 permission mode. I also checked the grubx64.efi file in both ISO image, it has the same mode as upper file. 2. Martin confirmed there is tftp log in the deployment: “ 2019-02-11T00:36:52.000 controller-0 dnsmasq-tftp[8262]: info sent /pxeboot/EFI/grubx64.efi to 169.254.202.76 controller-0:/var/log$ ls /pxeboot/EFI/grubx64.efi -l -rwxr-xr-x. 1 root root 1234192 Feb 3 06:52 /pxeboot/EFI/grubx64.efi “ 3. Austin confirmed “install -D -m 755” will set the grubx64.efi with 755 permission mode. “ -m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x “ 4. I try to go through the build log. Here is the log from grub2’s build.log “ + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi.unsigned + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi.unsigned + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi “ And in build-iso script, the file will be extracted and copied to the EFI folder: “ extract_pkg_from_local_repo ${MY_YUM_CONF} ${STD_REPO_ID} grub2-efi-x64-pxeboot … \cp --preserve=all pxeboot/EFI/grubx64.efi $OUTPUT_DIST_DIR/isolinux/pxeboot/EFI/ “ Due to we cannot reproduce the issue, we are not sure which step cause the issue yet. So could you help me have a check with below step to narrow down the issue? Thanks. 1. Please help check whether there is “install -D -m 755 grubx64.efi” in the “loadbuild/std/results/slin14-starlingx-tis-r5-pike-std/grub2-2.02-0.76.el7.centos.tis.12/build.log” or not. 2. Please help extract “grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm” in “loadbuild/std/rpmbuild/RPMS”, and check whether the grubx64.efi file is with 755 mode or not. Extract cmd: rpm2cpio grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm | cpio -idmv 3. Please help check the “grubx64.efi” in “export/dist/isolinux/pxeboot/EFI/” folder is with 755 mode or not. 
[0]: https://review.openstack.org/634559 Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, February 14, 2019 6:01 AM To: Xie, Cindy ; Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan ; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi Cindy, In a successful case, you should see a TFTP log in daemon.log on the active controller indicating the file was transferred, such as: 2019-02-12T13:20:45.000 controller-0 dnsmasq-tftp[200877]: info sent /pxeboot/EFI/grubx64.efi to 192.168.204.4 I would suggest doing something like “tail -f /var/log/daemon.log | grep -i tftp” while doing the installation of nodes from the active controller, to verify the expected file is getting transferred. If the host installs and you don’t see this file transferred, I’d recommend reconfirming that the node is installing via UEFI. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, February 13, 2019 4:46 PM To: Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi, Yang Sorry about the issue! It’s interesting as I did have my engineer tested the scenarios. There must be something missing from my side. We will redo the patch and test. In the same time, can you manually change the file permissions as temporarily workaround and unblock the test cycle? Thanks! Cindy Sent from my iPhone On Feb 14, 2019, at 3:02 AM, Liu, Yang > wrote: Hi Cindy, We are still seeing the same file permission issue for grubx64.efi under pxeboot/EFI, causing UEFI pxeboot to fail. We need the grubx64.efi to be readable by others as well. ../pxeboot/EFI/ total 1220 drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 . drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 .. drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 centos -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi The patch seems to have changed the dir permission for centos from 700 to 755, but not grubx64.efi. For the dir permission for centos, I believe the original 700 should be sufficient (@Scott, please correct me if it’s wrong). BR, Yang From: Liu, Yang Sent: February-12-19 9:00 AM To: 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Thanks Cindy. Will do. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the “two node system” below referring to Duplex? Just want to confirm because #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. 
Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked because pxeboot from controller-0 does not work for EFI (#1814360). We will continue after that issue is resolved.
System | NICs (mgmt; infra; data) | Special configs | Test coverage after install and config | Status/Issues
Dedicated storage | X540-AT2; X540-AT2; fortville | IPv6 | Sanity, nova | Completed. New issues logged: #1814336 CentOS7.6: Unable to launch vm directly from virsh; #1814335 CentOS7.6: Unable to launch vm with UEFI boot
One node system | none; none; X522/X577-AT | - | Sanity, basic regression | Completed. Passed.
Two node system | fortville; fortville; fortville | tboot, tpm, https, extended security profile | Sanity, security | Blocked by #1814360
Multi-node system | BCM5720; Niantic; Niantic | Sriov (niantic), pcipt (niantic) | Sanity, networking | Completed. Passed.
Two node system | Fortville; none; Fortville | Low latency, UEFI | Sanity, basic regression, cyclictest | Blocked by #1814360
Two node system | Fortville; none; Fortville | Secure boot | Sanity, security | Blocked by #1814360
Multi-node system | I350; Niantic/cx3; cx3 | Pxeboot script | Sanity | Completed. Passed. Only compute-0 was used, since compute-1 has a CX3 data NIC.
?? | CX4 on infra or mgmt, but NOT data | - | - | Won't test. We don't have a system with the required NICs.
BR, yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From haochuan.z.chen at intel.com Fri Feb 15 02:14:15 2019 From: haochuan.z.chen at intel.com (Chen, Haochuan Z) Date: Fri, 15 Feb 2019 02:14:15 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE8486C at SHSMSX101.ccr.corp.intel.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19 at ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D at SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18 at ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0 at SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F8621838AE at ALA-MBD.corp.ad.wrs.com> <40048C47-DECC-4096-844D-7B8C3047F51B at intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454 at ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FE84643 at SHSMSX101.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862183A40 at ALA-MBD.corp.ad.wrs.com> <7c5185d8-daaa-41de-597c-ae566b76918a at windriver.com> <9700A18779F35F49AF027300A49E7C765FE8486C at SHSMSX101.ccr.corp.intel.com> Message-ID: <56829C2A36C2E542B0CCB9854828E4D8561F158A at CDSMSX102.ccr.corp.intel.com> I think Penney means we should explicitly add the file attribute in such a way:
%{expand:%%files %{1}-pxeboot} \
%defattr(-,root,root,-) \
%attr(0755,root,root)/pxeboot/EFI/%{grubefiname} \
Waiting for your comment. Thanks Martin, Chen SSP, Software Engineer 021-61164330 From: Lin, Shuicheng Sent: Friday, February 15, 2019 10:09 AM To: Scott Little; Liu, Yang; Penney, Don; Xie, Cindy; Chen, Haochuan Z; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi Scott, What do you mean by "installer rebuild"? The grub2 package is not included in "update-pxe-network-installer". Could you share more details? Thanks.
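In case it helps narrow this down: one way to confirm what build-iso actually consumed is to query the pxeboot package straight out of the local repo and look at the recorded mode. A sketch only — the wildcard path assumes the layout from the earlier mails:

rpm -qp --queryformat '[%{FILEMODES:perms} %{FILENAMES}\n]' \
    $MY_WORKSPACE/std/rpmbuild/RPMS/grub2-efi-x64-pxeboot-*.x86_64.rpm | grep grubx64.efi

If this prints -rwxr-xr-x but the file in the ISO is -rwx------, the problem would sit in the extract/copy step of build-iso rather than in the package itself.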
Best Regards Shuicheng From: Scott Little [mailto:scott.little at windriver.com] Sent: Thursday, February 14, 2019 10:56 PM To: Liu, Yang >; Lin, Shuicheng >; Penney, Don >; Xie, Cindy >; Chen, Haochuan Z >; Sun, Austin > Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada >; Waheed, Numan > Subject: Re: CentOS7.6 testing status - blocked I think it's on our side. The in house jenkins script I cloned for the 76 build does not include an installer rebuild. Scott On 2019-02-14 8:36 a.m., Liu, Yang wrote: Hi Shuicheng, I checked the 3 items as per your instructions, build log does contain the expected step, however the results are different. @ Scott/Don, any thoughts on this? 1. + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi 2. -rwx------ 1 yliu12 users 1233016 Feb 12 16:37 ./pxeboot/EFI/grubx64.efi 3. -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi BR, Yang From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: February-14-19 2:32 AM To: Penney, Don; Xie, Cindy; Liu, Yang; Chen, Haochuan Z; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: RE: CentOS7.6 testing status - blocked Hi Yang/Don, We double checked the issue today. Here is our finding: 1. I try to revert the fix [0], then do build-pkgs and build-iso, the “grubx64.efi” in “export/dist/isolinux/pxeboot/EFI/” is with 700 permission mode. Add the fix [0] back, then build-pkgs and build-iso, the “grubx64.efi” is changed to 755 permission mode. I also checked the grubx64.efi file in both ISO image, it has the same mode as upper file. 2. Martin confirmed there is tftp log in the deployment: “ 2019-02-11T00:36:52.000 controller-0 dnsmasq-tftp[8262]: info sent /pxeboot/EFI/grubx64.efi to 169.254.202.76 controller-0:/var/log$ ls /pxeboot/EFI/grubx64.efi -l -rwxr-xr-x. 1 root root 1234192 Feb 3 06:52 /pxeboot/EFI/grubx64.efi “ 3. Austin confirmed “install -D -m 755” will set the grubx64.efi with 755 permission mode. “ -m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x “ 4. I try to go through the build log. Here is the log from grub2’s build.log “ + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi.unsigned + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi.unsigned + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi “ And in build-iso script, the file will be extracted and copied to the EFI folder: “ extract_pkg_from_local_repo ${MY_YUM_CONF} ${STD_REPO_ID} grub2-efi-x64-pxeboot … \cp --preserve=all pxeboot/EFI/grubx64.efi $OUTPUT_DIST_DIR/isolinux/pxeboot/EFI/ “ Due to we cannot reproduce the issue, we are not sure which step cause the issue yet. So could you help me have a check with below step to narrow down the issue? Thanks. 1. Please help check whether there is “install -D -m 755 grubx64.efi” in the “loadbuild/std/results/slin14-starlingx-tis-r5-pike-std/grub2-2.02-0.76.el7.centos.tis.12/build.log” or not. 2. 
Please help extract “grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm” in “loadbuild/std/rpmbuild/RPMS”, and check whether the grubx64.efi file is with 755 mode or not. Extract cmd: rpm2cpio grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm | cpio -idmv 3. Please help check the “grubx64.efi” in “export/dist/isolinux/pxeboot/EFI/” folder is with 755 mode or not. [0]: https://review.openstack.org/634559 Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, February 14, 2019 6:01 AM To: Xie, Cindy ; Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan ; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi Cindy, In a successful case, you should see a TFTP log in daemon.log on the active controller indicating the file was transferred, such as: 2019-02-12T13:20:45.000 controller-0 dnsmasq-tftp[200877]: info sent /pxeboot/EFI/grubx64.efi to 192.168.204.4 I would suggest doing something like “tail -f /var/log/daemon.log | grep -i tftp” while doing the installation of nodes from the active controller, to verify the expected file is getting transferred. If the host installs and you don’t see this file transferred, I’d recommend reconfirming that the node is installing via UEFI. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, February 13, 2019 4:46 PM To: Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi, Yang Sorry about the issue! It’s interesting as I did have my engineer tested the scenarios. There must be something missing from my side. We will redo the patch and test. In the same time, can you manually change the file permissions as temporarily workaround and unblock the test cycle? Thanks! Cindy Sent from my iPhone On Feb 14, 2019, at 3:02 AM, Liu, Yang > wrote: Hi Cindy, We are still seeing the same file permission issue for grubx64.efi under pxeboot/EFI, causing UEFI pxeboot to fail. We need the grubx64.efi to be readable by others as well. ../pxeboot/EFI/ total 1220 drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 . drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 .. drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 centos -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi The patch seems to have changed the dir permission for centos from 700 to 755, but not grubx64.efi. For the dir permission for centos, I believe the original 700 should be sufficient (@Scott, please correct me if it’s wrong). BR, Yang From: Liu, Yang Sent: February-12-19 9:00 AM To: 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Thanks Cindy. Will do. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy >; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. 
BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the "two node system" entries below referring to Duplex? Just want to confirm, because for #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked because pxeboot from controller-0 does not work for EFI (#1814360). We will continue after that issue is resolved.
System | NICs (mgmt; infra; data) | Special configs | Test coverage after install and config | Status/Issues
Dedicated storage | X540-AT2; X540-AT2; fortville | IPv6 | Sanity, nova | Completed. New issues logged: #1814336 CentOS7.6: Unable to launch vm directly from virsh; #1814335 CentOS7.6: Unable to launch vm with UEFI boot
One node system | none; none; X522/X577-AT | - | Sanity, basic regression | Completed. Passed.
Two node system | fortville; fortville; fortville | tboot, tpm, https, extended security profile | Sanity, security | Blocked by #1814360
Multi-node system | BCM5720; Niantic; Niantic | Sriov (niantic), pcipt (niantic) | Sanity, networking | Completed. Passed.
Two node system | Fortville; none; Fortville | Low latency, UEFI | Sanity, basic regression, cyclictest | Blocked by #1814360
Two node system | Fortville; none; Fortville | Secure boot | Sanity, security | Blocked by #1814360
Multi-node system | I350; Niantic/cx3; cx3 | Pxeboot script | Sanity | Completed. Passed. Only compute-0 was used, since compute-1 has a CX3 data NIC.
?? | CX4 on infra or mgmt, but NOT data | - | - | Won't test. We don't have a system with the required NICs.
BR, yang -------------- next part -------------- An HTML attachment was scrubbed... URL: From shuicheng.lin at intel.com Fri Feb 15 02:21:48 2019 From: shuicheng.lin at intel.com (Lin, Shuicheng) Date: Fri, 15 Feb 2019 02:21:48 +0000 Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA43D7D9 at ALA-MBD.corp.ad.wrs.com> References: <19C65A6E92EA384D809B1772130CD7F862182B19 at ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E8182D at SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862182D18 at ALA-MBD.corp.ad.wrs.com> <2FD5DDB5A04D264C80D42CA35194914F35E83FB0 at SHSMSX104.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F8621838AE at ALA-MBD.corp.ad.wrs.com> <40048C47-DECC-4096-844D-7B8C3047F51B at intel.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D454 at ALA-MBD.corp.ad.wrs.com> <9700A18779F35F49AF027300A49E7C765FE84643 at SHSMSX101.ccr.corp.intel.com> <19C65A6E92EA384D809B1772130CD7F862183A40 at ALA-MBD.corp.ad.wrs.com> <7c5185d8-daaa-41de-597c-ae566b76918a at windriver.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D7D9 at ALA-MBD.corp.ad.wrs.com> Message-ID: <9700A18779F35F49AF027300A49E7C765FE84894 at SHSMSX101.ccr.corp.intel.com> Hi Don, So it seems "%defattr" has a different effect in your build system and in ours. Is my understanding correct? In grub.macros, "%defattr" just defines the default user/group; the default permission is not set. It seems like an issue in the spec; what's your suggestion to fix it? Thanks.
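As a data point, a throwaway package like the one below can show what "%defattr(-,root,root,-)" does with install-time modes on a given rpm version. This is a scratch sketch only, all names are made up:

cat > /tmp/perm-test.spec <<'EOF'
Name: perm-test
Version: 1.0
Release: 1
Summary: Check whether defattr preserves install-time modes
License: Apache-2.0
BuildArch: noarch
%description
Scratch package for checking %%defattr behaviour.
%install
mkdir -p %{buildroot}/opt/perm-test
install -m 755 /dev/null %{buildroot}/opt/perm-test/file755
install -m 700 /dev/null %{buildroot}/opt/perm-test/file700
%files
%defattr(-,root,root,-)
/opt/perm-test
EOF
rpmbuild -bb /tmp/perm-test.spec
rpm -qp --dump ~/rpmbuild/RPMS/noarch/perm-test-1.0-1.noarch.rpm

If the two build systems report different modes for file755/file700 here, the difference is in the rpm/mock tooling itself; if they agree, it must come from a later step in the pipeline.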
Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, February 14, 2019 11:40 PM To: Little, Scott ; Liu, Yang ; Lin, Shuicheng ; Xie, Cindy ; Chen, Haochuan Z ; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Looking at our in-house jenkins build output, I see: $ rpm -qp --dump std/results/jenkins-STX_Feature_centos76_Build-2019-02-12_14-43-05-tis-r6-pike-std/grub2-2.02-0.76.el7.centos.tis.12/grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm /pxeboot/EFI/grubx64.efi 1233016 1550007456 3d0f3ae9293f23e1ebe6f56e1eb04fc6 0100700 root root 0 0 0 X Looking at the grub.macro file, which is providing the %install and %files directives, it certainly seems like this should be 755. The %defattr being set ignores the permissions. And I don’t see anything in the build.log that would indicate another chmod is happening after. From: Little, Scott Sent: Thursday, February 14, 2019 9:56 AM To: Liu, Yang; Lin, Shuicheng; Penney, Don; Xie, Cindy; Chen, Haochuan Z; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: Re: CentOS7.6 testing status - blocked I think it's on our side. The in house jenkins script I cloned for the 76 build does not include an installer rebuild. Scott On 2019-02-14 8:36 a.m., Liu, Yang wrote: Hi Shuicheng, I checked the 3 items as per your instructions, build log does contain the expected step, however the results are different. @ Scott/Don, any thoughts on this? 1. + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi 2. -rwx------ 1 yliu12 users 1233016 Feb 12 16:37 ./pxeboot/EFI/grubx64.efi 3. -rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi BR, Yang From: Lin, Shuicheng [mailto:shuicheng.lin at intel.com] Sent: February-14-19 2:32 AM To: Penney, Don; Xie, Cindy; Liu, Yang; Chen, Haochuan Z; Sun, Austin Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: RE: CentOS7.6 testing status - blocked Hi Yang/Don, We double checked the issue today. Here is our finding: 1. I try to revert the fix [0], then do build-pkgs and build-iso, the “grubx64.efi” in “export/dist/isolinux/pxeboot/EFI/” is with 700 permission mode. Add the fix [0] back, then build-pkgs and build-iso, the “grubx64.efi” is changed to 755 permission mode. I also checked the grubx64.efi file in both ISO image, it has the same mode as upper file. 2. Martin confirmed there is tftp log in the deployment: “ 2019-02-11T00:36:52.000 controller-0 dnsmasq-tftp[8262]: info sent /pxeboot/EFI/grubx64.efi to 169.254.202.76 controller-0:/var/log$ ls /pxeboot/EFI/grubx64.efi -l -rwxr-xr-x. 1 root root 1234192 Feb 3 06:52 /pxeboot/EFI/grubx64.efi “ 3. Austin confirmed “install -D -m 755” will set the grubx64.efi with 755 permission mode. “ -m, --mode=MODE set permission mode (as in chmod), instead of rwxr-xr-x “ 4. I try to go through the build log. 
Here is the log from grub2’s build.log “ + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi.unsigned + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi.unsigned + install -D -m 755 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/pxeboot/EFI/grubx64.efi + install -m 700 grubx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/grubx64.efi + install -m 700 gcdx64.efi /builddir/build/BUILDROOT/grub2-2.02-0.76.el7.centos.tis.12.x86_64/boot/efi/EFI/centos/gcdx64.efi “ And in build-iso script, the file will be extracted and copied to the EFI folder: “ extract_pkg_from_local_repo ${MY_YUM_CONF} ${STD_REPO_ID} grub2-efi-x64-pxeboot … \cp --preserve=all pxeboot/EFI/grubx64.efi $OUTPUT_DIST_DIR/isolinux/pxeboot/EFI/ “ Due to we cannot reproduce the issue, we are not sure which step cause the issue yet. So could you help me have a check with below step to narrow down the issue? Thanks. 1. Please help check whether there is “install -D -m 755 grubx64.efi” in the “loadbuild/std/results/slin14-starlingx-tis-r5-pike-std/grub2-2.02-0.76.el7.centos.tis.12/build.log” or not. 2. Please help extract “grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm” in “loadbuild/std/rpmbuild/RPMS”, and check whether the grubx64.efi file is with 755 mode or not. Extract cmd: rpm2cpio grub2-efi-x64-pxeboot-2.02-0.76.el7.centos.tis.12.x86_64.rpm | cpio -idmv 3. Please help check the “grubx64.efi” in “export/dist/isolinux/pxeboot/EFI/” folder is with 755 mode or not. [0]: https://review.openstack.org/634559 Best Regards Shuicheng From: Penney, Don [mailto:Don.Penney at windriver.com] Sent: Thursday, February 14, 2019 6:01 AM To: Xie, Cindy ; Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada ; Waheed, Numan ; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi Cindy, In a successful case, you should see a TFTP log in daemon.log on the active controller indicating the file was transferred, such as: 2019-02-12T13:20:45.000 controller-0 dnsmasq-tftp[200877]: info sent /pxeboot/EFI/grubx64.efi to 192.168.204.4 I would suggest doing something like “tail -f /var/log/daemon.log | grep -i tftp” while doing the installation of nodes from the active controller, to verify the expected file is getting transferred. If the host installs and you don’t see this file transferred, I’d recommend reconfirming that the node is installing via UEFI. From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: Wednesday, February 13, 2019 4:46 PM To: Liu, Yang Cc: starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan; Little, Scott Subject: Re: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi, Yang Sorry about the issue! It’s interesting as I did have my engineer tested the scenarios. There must be something missing from my side. We will redo the patch and test. In the same time, can you manually change the file permissions as temporarily workaround and unblock the test cycle? Thanks! Cindy Sent from my iPhone On Feb 14, 2019, at 3:02 AM, Liu, Yang > wrote: Hi Cindy, We are still seeing the same file permission issue for grubx64.efi under pxeboot/EFI, causing UEFI pxeboot to fail. We need the grubx64.efi to be readable by others as well. ../pxeboot/EFI/ total 1220 drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 . 
drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 ..
drwxrwsr-x 3 jenkins mock 4096 Feb 12 16:48 centos
-rwx------ 1 jenkins mock 1233016 Feb 12 16:37 grubx64.efi
The patch seems to have changed the dir permission for centos from 700 to 755, but not grubx64.efi. For the dir permission for centos, I believe the original 700 should be sufficient (@Scott, please correct me if it's wrong). BR, Yang From: Liu, Yang Sent: February-12-19 9:00 AM To: 'Xie, Cindy'; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Thanks Cindy. Will do. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-11-19 8:10 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io; Cabrales, Ada; Waheed, Numan Subject: RE: CentOS7.6 testing status - blocked Hi, Numan/Yang, The last pending patch (https://review.openstack.org/#/c/634559/) which was blocking your testing (#1814360) was just merged. Please get a new build ISO from Jason so you can continue the testing. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 10:18 AM To: Xie, Cindy; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Correct. BR, Yang From: Xie, Cindy [mailto:cindy.xie at intel.com] Sent: February-08-19 8:28 PM To: Liu, Yang; starlingx-discuss at lists.starlingx.io Subject: RE: CentOS7.6 testing status - blocked Hi, Yang, Thanks for the report. Are the "two node system" entries below referring to Duplex? Just want to confirm, because for #1814360 we have a patch pending and we do want to ensure it works on Duplex as well. Thx. - cindy From: Liu, Yang [mailto:yang.liu at windriver.com] Sent: Saturday, February 9, 2019 2:09 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked Hi folks, Here's an update for CentOS7.6 testing. We are currently blocked because pxeboot from controller-0 does not work for EFI (#1814360). We will continue after that issue is resolved.
System | NICs (mgmt; infra; data) | Special configs | Test coverage after install and config | Status/Issues
Dedicated storage | X540-AT2; X540-AT2; fortville | IPv6 | Sanity, nova | Completed. New issues logged: #1814336 CentOS7.6: Unable to launch vm directly from virsh; #1814335 CentOS7.6: Unable to launch vm with UEFI boot
One node system | none; none; X522/X577-AT | - | Sanity, basic regression | Completed. Passed.
Two node system | fortville; fortville; fortville | tboot, tpm, https, extended security profile | Sanity, security | Blocked by #1814360
Multi-node system | BCM5720; Niantic; Niantic | Sriov (niantic), pcipt (niantic) | Sanity, networking | Completed. Passed.
Two node system | Fortville; none; Fortville | Low latency, UEFI | Sanity, basic regression, cyclictest | Blocked by #1814360
Two node system | Fortville; none; Fortville | Secure boot | Sanity, security | Blocked by #1814360
Multi-node system | I350; Niantic/cx3; cx3 | Pxeboot script | Sanity | Completed. Passed. Only compute-0 was used, since compute-1 has a CX3 data NIC.
?? | CX4 on infra or mgmt, but NOT data | - | - | Won't test. We don't have a system with the required NICs.
BR, yang -------------- next part -------------- An HTML attachment was scrubbed...
URL: From ildiko at openstack.org Fri Feb 15 09:23:24 2019 From: ildiko at openstack.org (Ildiko Vancsa) Date: Fri, 15 Feb 2019 10:23:24 +0100 Subject: [Starlingx-discuss] Unable to Modify StarlingX Wiki Page In-Reply-To: <9A85D2917C58154C960D95352B22818BC3AD100C at fmsmsx121.amr.corp.intel.com> References: <3CAA827B7A79BA46B15B280EC82088FE4827423B at ALA-MBD.corp.ad.wrs.com> <9A85D2917C58154C960D95352B22818BC3AD100C at fmsmsx121.amr.corp.intel.com> Message-ID: <50986D35-FDCE-4471-92BB-C5365329DDF4 at openstack.org> Hi Numan, As far as I know the StarlingX team has moved all their documentation into git and they render it from there to the Documentation web page: * http://git.starlingx.io/cgit/stx-docs/ * https://docs.starlingx.io If you would like to add modifications to the documentation, you need to propose a patch to the documentation repository. If this is not what you tried to do, could you please link the exact wiki page and describe the operation you tried to perform, so I can look into what the issue might be? Thanks and Best Regards, Ildikó > On 2019. Feb 14., at 22:39, Jones, Bruce E wrote: > > Ildiko, can you help please? > > From: Waheed, Numan [mailto:Numan.Waheed at windriver.com] > Sent: Thursday, February 14, 2019 1:25 PM > To: starlingx-discuss at lists.starlingx.io; Jones, Bruce E; Arce Moreno, Abraham > Cc: Liu, Yang > Subject: Unable to Modify StarlingX Wiki Page > > Hi Bruce and Abraham, > > There are a few pages on the StarlingX wiki that are locked and cannot be modified. We were looking to add some instructions on the wiki, and the pages where we want to make the changes are locked. Specifically, I would like to add some instructions under the Documentation page, and the pages under it are locked. > > Can you please give me (Numan Waheed, email: numan.waheed at windriver.com) and Yang Liu (email: yang.liu at windriver.com) privileges to update these pages. > > If you are not the right person to provide this privilege, please let me know who can do it. > > Thanks, > > Numan. From xiongzhiwei at baicells.com Fri Feb 15 10:32:07 2019 From: xiongzhiwei at baicells.com (xiongzhiwei at baicells.com) Date: Fri, 15 Feb 2019 18:32:07 +0800 Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server References: , <20190202153337711807156 at baicells.com> Message-ID: <20190215183206776070302 at baicells.com> Hi Yong and Cindy, I have successfully deployed Simplex All-in-one on bare metal. The root cause was an erroneous RAID configuration, which resulted in incompatible hard disks.
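For reference, checks along these lines can expose that kind of disk/RAID problem before running config_controller. This is only a sketch — the device names are examples and smartctl requires the smartmontools package:

lsblk -o NAME,SIZE,TYPE,ROTA,MODEL    # confirm which disks the RAID controller actually exposes
dmesg | grep -iE 'ext4|drbd|sd[a-z]'  # look for I/O or journal errors behind the read-only remounts
smartctl -H /dev/sda                  # overall health check; repeat for each member disk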
Thanks Tim From: xiongzhiwei at baicells.com Date: 2019-02-02 14:34 To: Hu, Yong; Xie, Cindy; starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Thanks Hu Yong and Cindy, I am trying to again after remove these two SATA HD. Will tell you once successed. Regards Tim From: Hu, Yong Date: 2019-02-02 14:27 To: xiongzhiwei at baicells.com; Xie, Cindy; starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server If using 240G SATA HD as the boot disk, the storage might not be enough. At least, in our virtual environment, the boot disk has to be larger than 250 GB. From: "xiongzhiwei at baicells.com" Date: Saturday, 2 February 2019 at 2:17 PM To: "Xie, Cindy" , starlingx-discuss Subject: Re: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi Cindy, This image was build by myself, fetched on 24th Jan. It is working normally in my VM enviroment but failed on the bear metal server. My server is Huawei RH2288v3: E5-2630 v3 at 2.4GHz, 2*8cores, 16*8G DDR4 RAM, 2*900G SAS+2*240G SATA HD. Thanks Tim Xiong From: Xie, Cindy Date: 2019-02-02 13:00 To: xiongzhiwei at baicells.com; starlingx-discuss Subject: RE: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server HI, Tim, Can you please provide the following info: - Exact version of the StarlingX: if you downloaded it from Cengen, please provide the link; if you built it by yourself, please provide the date on master. - Your HW config for your bare metal server. Our recommended HW config can be found here: https://docs.starlingx.io/installation_guide/index.html Thanks. - cindy From: xiongzhiwei at baicells.com [mailto:xiongzhiwei at baicells.com] Sent: Saturday, February 2, 2019 12:38 PM To: starlingx-discuss Subject: [Starlingx-discuss] EXT-fs error when deploying staringx on bearmetal server Hi, I am trying to deploy starlingx on a bearmetal server, but failed, After execute "sudo config_controller" and some default configuration confirmed(all-in-one, simplex), exception printed as below: 01/08: Creating bootstrap configuration ... DONE 02/08: Applying bootstrap manifest ... [ 452.567312] EXT4-fs error (device drbd1): ext4_journal_check_start:56: Detected aborted journal [ 452.576753] EXT4-fs (drbd1): Remounting filesystem read-only [ 466.880792] EXT4-fs error (device drbd3): ext4_journal_check_start:56: Detected aborted journal [ 466.886032] EXT4-fs (drbd3): Remounting filesystem read-only [ 479.755269] EXT4-fs error (device drbd0): ext4_journal_check_start:56: Detected aborted journal [ 479.760818] EXT4-fs (drbd0): Remounting filesystem read-only [ 479.766425] EXT4-fs (drbd0): ext4_writepages: jbd2_start: 13312 pages, ino 28; err -30 Failed to execute bootstrap manifest Configuration failed: failed to apply bootstrap manifest. See /var/log/puppet/latest/puppet.log for details. I had deployed successfully on a qemu VM with same image, also all-in-one and simplex. Is there any configurations missed for the bear metal server? I had recovered all BIOS configurations to default for it. Could anyone help me to fix it? Thanks BR Tim Xiong -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From serverascode at gmail.com Fri Feb 15 13:23:47 2019 From: serverascode at gmail.com (Curtis) Date: Fri, 15 Feb 2019 08:23:47 -0500 Subject: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 In-Reply-To: References: Message-ID: On Thu, Feb 14, 2019 at 12:51 PM Saul Wold wrote: > > Folks, > > I was doing some experimentation with an un-patched CentOS and running > config_controller. One of the main issues I found is that doing the > initial installation and execution discovered many un-resolved runtime > requirements. > Thanks for looking into this Saul, I think this is a good thing to do to work towards getting an understanding of dependencies. > > I will start sending some pull requests to fault, metal, and config with > more detailed "Requires:" statements. > > Another item is that we are rebuilding openstack-keystone, > among other openstack-related packages, with additional configuration and > scripts, which are needed for controller-0. In the stx-integ (base OS) > case, we re-factored many of the packages to move configuration and > additional scripts to a separate package; I would like to see something > similar here for packages that are needed for controller-0 (ie the things > we are not installing from PyPi directly). > Do we install things directly from PyPi? When does that happen? > What I saw is that we include the CentOS-Openstack RPM repo along with, > of course, our StarlingX RPM repo. Why can't we use the CentOS-Openstack > packages directly, along with some StarlingX-specific additions in a > separate package, rather than creating a new package with both upstream > and StarlingX content? > > I don't know what the extra things are that we are packaging, but if they are only helper scripts and the like and don't affect the actual keystone code then I'd hope we would use the upstream RPMs. My two cents. :) Thanks, Curtis > Thoughts, > > Sau! > > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Don.Penney at windriver.com Fri Feb 15 15:12:02 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 15 Feb 2019 15:12:02 +0000 Subject: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 In-Reply-To: References: Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DE6D at ALA-MBD.corp.ad.wrs.com> Comments inline. From: Curtis [mailto:serverascode at gmail.com] Sent: Friday, February 15, 2019 8:24 AM To: Saul Wold Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 On Thu, Feb 14, 2019 at 12:51 PM Saul Wold wrote: Folks, I was doing some experimentation with an un-patched CentOS and running config_controller. One of the main issues I found is that doing the initial installation and execution discovered many un-resolved runtime requirements. Thanks for looking into this Saul, I think this is a good thing to do to work towards getting an understanding of dependencies. I will start sending some pull requests to fault, metal, and config with more detailed "Requires:" statements.
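(For illustration, the kind of spec change meant here — the dependency names below are hypothetical, not the actual ones being proposed:

Requires: util-linux
Requires: systemd
Requires(post): /usr/bin/systemctl

i.e., naming every runtime dependency explicitly in each package rather than relying on whatever the installer image happens to pull in.)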
Another item is that we are rebuilding openstack-keystone, among other openstack-related packages, with additional configuration and scripts, which are needed for controller-0. In the stx-integ (base OS) case, we re-factored many of the packages to move configuration and additional scripts to a separate package; I would like to see something similar here for packages that are needed for controller-0 (ie the things we are not installing from PyPi directly). Do we install things directly from PyPi? When does that happen? [Don] No, we don't install anything from PyPi. What I saw is that we include the CentOS-Openstack RPM repo along with, of course, our StarlingX RPM repo. Why can't we use the CentOS-Openstack packages directly, along with some StarlingX-specific additions in a separate package, rather than creating a new package with both upstream and StarlingX content? I don't know what the extra things are that we are packaging, but if they are only helper scripts and the like and don't affect the actual keystone code then I'd hope we would use the upstream RPMs. [Don] As much as possible, we look to use unmodified upstream RPMs. My two cents. :) Thanks, Curtis Thoughts, Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Fri Feb 15 15:18:46 2019 From: serverascode at gmail.com (Curtis) Date: Fri, 15 Feb 2019 10:18:46 -0500 Subject: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DE6D at ALA-MBD.corp.ad.wrs.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DE6D at ALA-MBD.corp.ad.wrs.com> Message-ID: On Fri, Feb 15, 2019 at 10:12 AM Penney, Don wrote: Comments inline. From: Curtis [mailto:serverascode at gmail.com] Sent: Friday, February 15, 2019 8:24 AM To: Saul Wold Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 On Thu, Feb 14, 2019 at 12:51 PM Saul Wold wrote: Folks, I was doing some experimentation with an un-patched CentOS and running config_controller. One of the main issues I found is that doing the initial installation and execution discovered many un-resolved runtime requirements. Thanks for looking into this Saul, I think this is a good thing to do to work towards getting an understanding of dependencies. I will start sending some pull requests to fault, metal, and config with more detailed "Requires:" statements. Another item is that we are rebuilding openstack-keystone, among other openstack-related packages, with additional configuration and scripts, which are needed for controller-0. In the stx-integ (base OS) case, we re-factored many of the packages to move configuration and additional scripts to a separate package; I would like to see something similar here for packages that are needed for controller-0 (ie the things we are not installing from PyPi directly). Do we install things directly from PyPi? When does that happen? [Don] No, we don't install anything from PyPi. Thanks. Good to know.
:) What I saw is that we include the CentOS-Openstack RPM repo along with, of course, our StarlingX RPM repo. Why can't we use the CentOS-Openstack packages directly, along with some StarlingX-specific additions in a separate package, rather than creating a new package with both upstream and StarlingX content? I don't know what the extra things are that we are packaging, but if they are only helper scripts and the like and don't affect the actual keystone code then I'd hope we would use the upstream RPMs. [Don] As much as possible, we look to use unmodified upstream RPMs. Can you expand on that statement in the context of this particular RPM? (Sorry I'm not familiar with what we are doing with Keystone.) Thanks, Curtis My two cents. :) Thanks, Curtis Thoughts, Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Blog: serverascode.com -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From Brent.Rowsell at windriver.com Fri Feb 15 15:42:06 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Fri, 15 Feb 2019 15:42:06 +0000 Subject: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 In-Reply-To: References: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DE6D at ALA-MBD.corp.ad.wrs.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB3D6792 at ALA-MBD.corp.ad.wrs.com> A few points to keep in mind here: 1) Config_controller is being removed and replaced with ansible. 2) Openstack deployment will not be part of the initial controller bootstrapping. Openstack will be deployed in containers. 3) We are in the process of moving to vanilla openstack. Brent From: Curtis [mailto:serverascode at gmail.com] Sent: Friday, February 15, 2019 10:19 AM To: Penney, Don Cc: Saul Wold; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 On Fri, Feb 15, 2019 at 10:12 AM Penney, Don wrote: Comments inline. From: Curtis [mailto:serverascode at gmail.com] Sent: Friday, February 15, 2019 8:24 AM To: Saul Wold Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 On Thu, Feb 14, 2019 at 12:51 PM Saul Wold wrote: Folks, I was doing some experimentation with an un-patched CentOS and running config_controller. One of the main issues I found is that doing the initial installation and execution discovered many un-resolved runtime requirements. Thanks for looking into this Saul, I think this is a good thing to do to work towards getting an understanding of dependencies. [BR] Keep in mind config_controller is being removed and being replaced with ansible. The bootsta I will start sending some pull requests to fault, metal, and config with more detailed "Requires:" statements. Another item is that we are rebuilding openstack-keystone, among other openstack-related packages, with additional configuration and scripts, which are needed for controller-0.
In the stx-integ (base OS) case, we re-factored many of the packages to move configuration and additional scripts to a separate package; I would like to see something similar here for packages that are needed for controller-0 (ie the things we are not installing from PyPi directly). Do we install things directly from PyPi? When does that happen? [Don] No, we don't install anything from PyPi. Thanks. Good to know. :) What I saw is that we include the CentOS-Openstack RPM repo along with, of course, our StarlingX RPM repo. Why can't we use the CentOS-Openstack packages directly, along with some StarlingX-specific additions in a separate package, rather than creating a new package with both upstream and StarlingX content? I don't know what the extra things are that we are packaging, but if they are only helper scripts and the like and don't affect the actual keystone code then I'd hope we would use the upstream RPMs. [Don] As much as possible, we look to use unmodified upstream RPMs. Can you expand on that statement in the context of this particular RPM? (Sorry I'm not familiar with what we are doing with Keystone.) Thanks, Curtis My two cents. :) Thanks, Curtis Thoughts, Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Blog: serverascode.com -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From serverascode at gmail.com Fri Feb 15 16:30:11 2019 From: serverascode at gmail.com (Curtis) Date: Fri, 15 Feb 2019 11:30:11 -0500 Subject: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB3D6792 at ALA-MBD.corp.ad.wrs.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DE6D at ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB3D6792 at ALA-MBD.corp.ad.wrs.com> Message-ID: On Fri, Feb 15, 2019 at 10:42 AM Rowsell, Brent wrote: > A few points to keep in mind here: > 1) Config_controller is being removed and replaced with ansible. > 2) Openstack deployment will not be part of the initial controller > bootstrapping. Openstack will be deployed in containers. > 3) We are in the process of moving to vanilla openstack. > With those points in mind, does that mean after moving to vanilla openstack the keystone code will come from an upstream RPM? Thanks, Curtis > > Brent > > From: Curtis [mailto:serverascode at gmail.com] > Sent: Friday, February 15, 2019 10:19 AM > To: Penney, Don > Cc: Saul Wold; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 > > On Fri, Feb 15, 2019 at 10:12 AM Penney, Don wrote: > Comments inline. > > From: Curtis [mailto:serverascode at gmail.com] > Sent: Friday, February 15, 2019 8:24 AM > To: Saul Wold > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 > > On Thu, Feb 14, 2019 at 12:51 PM Saul Wold wrote: > > Folks, > > I was doing some experimentation with an un-patched CentOS and running > config_controller. One of the main issues I found is that doing the > initial installation and execution discovered many un-resolved runtime > requirements.
> > > > Thanks for looking into this Saul, I think this is a good thing to do to > work towards getting a understanding of dependencies. > > [BR] Keep in mind config_controller is being removed and being replaced > with ansible. The bootsta > > > I will start sending some pull requests to fault, metal, and config with > more detailed "Requires:" statements. > > Another item is that since that we are rebuilding openstack-keystone > among other openstack related packages with additional configuration and > scripts, which are needed for controller-0. In the stx-integ (base OS) > case, we re-factored many of the packages to remove configuration and > additional scripts to a separate package, I would like to see something > similar here for packages are are needed for controller-0 (ie the things > we are not installing from PyPi directly). > > > > Do we install things directly from PyPi? When does that happen? > > *[Don] No, we don’t install anything from PyPi.* > > > > Thanks. Good to know. :) > > > > What I saw is that we include the CentOS-Openstack RPM repo along with, > of course, our StarlingX RPM repo. Why can't we use the CentOS-Openstack > packages directly along with some StarlingX specific additions in a > seperate package, rather than creating a new package with both upstream > and StarlingX content. > > > > I don't know what the extra things are that we are packaging, but if they > are only helper scripts and the like and don't affect the actual keystone > code then I'd hope we would use the upstream RPMs. > > *[Don] As much as possible, we look to use unmodified upstream RPMs.* > > > > Can you expand on that statement in the context of this particular RPM? > (Sorry I'm not familiar with what we are doing with Keystone.) > > > > Thanks, > > Curtis > > My two cents. :) > > Thanks, > > Curtis > > Thoughts, > > Sau! > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > -- > Blog: serverascode.com

--
Blog: serverascode.com

From erich.cordoba.malibran at intel.com Fri Feb 15 16:46:10 2019
From: erich.cordoba.malibran at intel.com (Cordoba Malibran, Erich)
Date: Fri, 15 Feb 2019 16:46:10 +0000
Subject: [Starlingx-discuss] Proposal: Build and Multi-OS strategy
In-Reply-To: <9A85D2917C58154C960D95352B22818BC3AD11C1@fmsmsx121.amr.corp.intel.com>
References: <9A85D2917C58154C960D95352B22818BC3AD11C1@fmsmsx121.amr.corp.intel.com>
Message-ID: 

Hi all.

I've been gathering thoughts on how to achieve multi OS since the beginning of the project, and this is a good opportunity to drop all of them here :)

~~~~~~~~~

I think multi OS is a challenge with many faces that can be summarized in two high-level areas: building and running.

Building
========

Currently, developers play a mixed role: they are developing and packaging software. They have defined a workflow where, in addition to creating new features, fixing bugs and other tasks, they need to be aware of the build system that consumes their work. This model is similar to Yocto's (which I think was used in the past), where developers need to create/modify recipes to get their changes checked in. This model isn't bad, but I think it could be incompatible with a multi OS approach.
It could force developers to be aware of additional build layers and add extra effort every time they want to make changes.

If you take any distribution out there, all of them have the same entry point: software released by upstream. Let's say there is an `abc` project and they release the `abc-1.2.3` version packaged in an `abc-1.2.3.tar.gz` file. Any distribution that wants to include `abc` takes that packaged file and creates a distribution package. The `abc` team doesn't care how distros package their software; they just care about creating a good piece of software that is flexible enough to run well on any distribution. Any time `abc` releases a new version, they increment the version number accordingly and the distribution packagers include the new version.

In my opinion, having a separation between the build system and our software components will help the project in many ways. On one side, building for multiple OSes is easier because the multi OS team only has to care about treating our software as upstream: building, packaging and eventually filing bugs with upstream (as all distros do). On the other side, the software component maintainers only have to care about the robustness of their project, so that it can be tested, built and installed.

The downside of this separation is that we need things we currently don't have, like proper versioning and release tools (autotools and friends), but also the mindset to start treating our software components like any software project out there. I believe some of these intentions have been captured in the ongoing multi OS specs, but I want to highlight the importance of the role of software maintainers in this model. Some workflows might need to change (if we take this path), but we can find equivalents in the new scenario. Once we have this separation, the multi OS building effort relies mostly on packaging and filing bugs with upstream. The next set of challenges is in the running part.

Running
=======

In my opinion, running on multiple OSes is an even bigger challenge than creating the RPM or DEB packages. At this point we don't have enough information on how the system will behave on Ubuntu; I've seen hard-coded paths in the source code, some for executables and some for configuration files. Extending the example, in distros like Clear Linux the configuration files need to be in a different path, and this is also true for distros under the Atomic Project[0] (this is just an example; I know they are out of scope right now).

The point here is that there's a high level of uncertainty about how the entire system will behave. I think there are a lot of questions. For example: will the configuration work on Ubuntu? (there's a plan to use Ansible that could help) Can the installation process work the same way? Does anaconda work the same way on Ubuntu? What refactors are needed in stx-update to support DEBs? And other questions.

The multi OS team doesn't have (and I think it shouldn't need) deep visibility into these topics. The software component maintainers are the best people to offer insights into how their projects will work in a multi OS environment. If we take the build system and source code separation again, then the software projects should only need to care that the software is flexible and can run well on different operating systems (as upstream source projects do). I believe we might need more involvement from the community to solve these kinds of questions or, at least, to raise red flags that could help us plan and act accordingly.
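Going back to the `abc` example in the Building section above, here is a rough sketch of what that upstream/packager split looks like in practice. The project name, version, and the presence of a spec file or debian/ directory are all hypothetical:

    # Upstream side: the "abc" maintainers cut a versioned release
    git tag v1.2.3
    ./configure && make distcheck      # produces abc-1.2.3.tar.gz

    # Distro side: each packager consumes the same tarball independently
    rpmbuild -ta abc-1.2.3.tar.gz      # RPM flow, assuming the tarball ships a spec file
    # ...or, on the Debian/Ubuntu side:
    tar xf abc-1.2.3.tar.gz && cd abc-1.2.3
    dpkg-buildpackage -us -uc          # assuming a debian/ directory with rules, control, etc.

The property worth noticing is that neither packaging flow requires changes from the upstream developers; they only have to guarantee a well-versioned, buildable release.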
[0] https://www.projectatomic.io/

On Fri, 2019-02-15 at 00:05 +0000, Jones, Bruce E wrote:
> I would like to start a thread that I hope will result in more focus > and direction for the Build and Multi-OS teams. It’s right before I > disappear for vacation, so I’ll ask Saul and Cesar to address any > follow-ups. > > What I would like to propose is that we as a community take on the > task of delivering support for Ubuntu as a StarlingX host OS for the > November 2019 release. This would allow us to support the ~35% of > the cloud ecosystem that doesn’t run on RHEL or CentOS. > > It will require a lot of work and therefore we should start as soon > as possible. What I would propose we do is:
> 1) The Build team to create a new and separate build system for an Ubuntu LTS hosted ISO [0]
> 2) The MultiOS team to review the outstanding carried patches and apply those needed to the Ubuntu packages
> 3) The MultiOS team to update the system as needed to use an Ubuntu installer to get controller-0 fully installed
> 4) Which would then lead to work in the MultiOS team to bring up the Ubuntu hosted StarlingX in Simplex mode [1].
> > We would then have a (kind of) working StarlingX image that will > enable the broader community to contribute to all of the other work > needed to deliver a fully supported Ubuntu host for the November > release. That work would include bringing up the other > configurations beyond Simplex, changes to the StarlingX software > management and update services, the additional testing needed, and > other tasks which can be parallelized. > > Meanwhile, on the Intel side we have received new guidance from our > new management on the requirement for Clear Linux support. We will > continue but slow down that work for now and focus on Ubuntu. So for > November, the goal is to support 2 Host OS’s, not 3. > > We will need support and contributions from the community to achieve > this goal in time for November. The MultiOS team in particular will > need help and additional contributors. > > Brucej > > [0] Making the build system common between Ubuntu and CentOS is hard > and probably should not be attempted. We should leverage what we can, of course, from our own code and the broader ecosystem. > [1] Or in whichever configuration is easier…. > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From marcel at schaible-consulting.de Fri Feb 15 17:16:37 2019
From: marcel at schaible-consulting.de (Marcel Schaible)
Date: Fri, 15 Feb 2019 18:16:37 +0100 (CET)
Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout
In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DAC9@ALA-MBD.corp.ad.wrs.com>
References: <701275276.504630.1550169575394@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D9CA@ALA-MBD.corp.ad.wrs.com> <543FEB76-F032-4865-A12F-B45C56A3B0B7@schaible-consulting.de> <6703202FD9FDFF4A8DA9ACF104AE129FBA43DAC9@ALA-MBD.corp.ad.wrs.com>
Message-ID: <194188777.555385.1550250997219@communicator.strato.com>

Hi Don,

your analysis was correct. Our interface is not supported by the initrd.

Do you know by chance how to unpack and pack the initrd correctly?

Thanks

Marcel

> "Penney, Don" wrote on 14 February 2019 at 20:48:
> > > Your load shouldn't have the http port change, which was merged the next day.
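(On the unpack/repack question: nothing StarlingX-specific here, just a generic sketch of how a gzip-compressed cpio initrd is commonly opened and rebuilt. The paths, the kernel-version placeholder and the compression are assumptions; check the image with "file initrd.img" first, and note that some initrds carry a prepended microcode cpio that must be preserved.)

    mkdir /tmp/initrd-work && cd /tmp/initrd-work
    # Unpack, assuming a plain gzip-compressed cpio archive:
    zcat /path/to/initrd.img | cpio -idmv
    # Drop in the missing driver (destination path is an assumption;
    # mirror wherever the other .ko files live in the unpacked tree):
    cp /path/to/ixgbevf.ko lib/modules/<kver>/kernel/drivers/net/
    # Regenerate the module dependency files inside the unpacked tree
    # (depmod -b points it at an alternate root; <kver> is a placeholder):
    depmod -b . <kver>
    # Repack:
    find . | cpio -o -H newc | gzip -9 > /path/to/initrd.img.new

On a dracut-based image, "lsinitrd" can confirm whether a module is present, and rebuilding with "dracut --add-drivers ixgbevf ..." on a matching host may be cleaner than hand-packing.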
So I would suggest checking that the lighttpd server is running fine on the active controller as the first step. If it is, then if you have some shell access from the failed installation, maybe you can confirm that the boot interface is supported by the initrd and rule out comms issues. > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Thursday, February 14, 2019 2:41 PM > To: Penney, Don > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout > > Hi Don, > > which build do you recommend? > > Is there a workaround so I can keep my installation for now? > > Thanks > > Marcel > > Von meinem iPhone gesendet > > > Am 14.02.2019 um 19:50 schrieb Penney, Don : > > > > This means the initrd was unable to download the squashfs.img from the active controller. This could be a couple of things: > > * problems with the lighttpd server on the active controller > > * NICs that are unsupported by the initrd kernel modules > > * some other comms issue > > > > What load are you using? There was a recent update around http port config that moved lighttpd to listen to port 8080 instead of 80, but your boot cmdline is referencing http://pxecontroller/ > > > > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Thursday, February 14, 2019 1:40 PM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout > > > > Hi, > > > > I am trying to install controller-1 in a duplex configuration (bare metal) and getting the following error: > > > > Console log: > > ============= > > > > [ 201.379136] dracut-initqueue[730]: Warning: Could not boot. > > [ OK ] Started Show Plymouth Boot Screen. > > [ OK ] Started Device-Mapper Multipath Device Controller. > > Starting Open-iSCSI... > > [ OK ] Reached target Paths. > > [ OK ] Reached target Basic System. > > [ OK ] Started Open-iSCSI. > > Starting dracut initqueue hook... > > [ 140.683067] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > [ 141.198294] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > ... > > [ 195.770491] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > [ 196.280235] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > [Warning: /dev/root does not exist > > > > Generating "/run/initramfs/rdsosreport.txt" > > > > Entering emergency mode. Exit the shell to continue. > > Type "journalctl" to view system logs. > > You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot > > after mounting them and attach it to a bug report. 
> > > > dracut:/# > > > > Kernel parameter: > > ================== > > > > dracut:/# journalctl | grep -i boot > > Jan 09 00:16:06 localhost kernel: Command line: BOOT_IMAGE=rel-19.01/installer-bzImage bootifonly=1 devfs=nomount inst.repo=http://pxecontroller/feed/rel-19.01/ inst.ks=http://pxecontroller/feed/rel-19.01/net_smallsystem_ks.cfg usbcore.autosuspend=-1 biosdevname=0 rd.net.timeout.dhcp=120 ksdevice=02:01:00:10:02:06 BOOTIF=02:01:00:10:02:06 boot_device=nvme0n1 rootfs_device=nvme0n1 inst.text console=ttyS0,115200 tisnotify=http://pxecontroller:6385/v1/ihosts/00273dcb-25fa-4204-98de-64fed0bfabfe/install_progress inst.gpt user_namespace.enable=1 security_profile=standard nopti nospectre_v2 > > ======= > > > > The message "[Warning: /dev/root does not exist" make me nervous. What does that mean? > > > > Any idea is welcome! > > > > Thanks > > > > Marcel > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From sgw at linux.intel.com Fri Feb 15 17:47:16 2019 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 15 Feb 2019 09:47:16 -0800 Subject: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB3D6792@ALA-MBD.corp.ad.wrs.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DE6D@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB3D6792@ALA-MBD.corp.ad.wrs.com> Message-ID: <6ff71695-020f-020b-a655-3eeccab02eba@linux.intel.com> On 2/15/19 7:42 AM, Rowsell, Brent wrote: > A few points to keep in mind here: > > 1)Config_controller is being removed and replaced with ansible. > I know there is a specification for this work, is there some preliminary work that I can look at or work with to test? > 2)Openstack deployment will not be part of the initial controller > bootstrapping. Openstack will be deployed in containers. > I understood there were still some openstack requirements on controller-0 such as keystone and horizon. I am looking at the stx_container_update PDF from the Chandler meeting. > 3)We are in the process of moving to vanilla openstack. > Yes, I know this. Sau! > Brent > > *From:*Curtis [mailto:serverascode at gmail.com] > *Sent:* Friday, February 15, 2019 10:19 AM > *To:* Penney, Don > *Cc:* Saul Wold ; starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] [Containers] Package Requirements on > the bare-metal controller-0 > > On Fri, Feb 15, 2019 at 10:12 AM Penney, Don > wrote: > > Comments inline. > > *From:*Curtis [mailto:serverascode at gmail.com > ] > *Sent:* Friday, February 15, 2019 8:24 AM > *To:* Saul Wold > *Cc:* starlingx-discuss at lists.starlingx.io > > *Subject:* Re: [Starlingx-discuss] [Containers] Package Requirements > on the bare-metal controller-0 > > On Thu, Feb 14, 2019 at 12:51 PM Saul Wold > wrote: > > > Folks, > > I was doing some experimentation with an un-patched CentOS and > running > config_controller. One of the main issues I found is that doing the > initial installation and execution discovered many un-resolved > runtime > requirements. > > Thanks for looking into this Saul, I think this is a good thing to > do to work towards getting a understanding of dependencies. > > [BR] Keep in mind config_controller is being removed and being > replaced with ansible. 
The bootsta > > > I will start sending some pull requests to fault, metal, and > config with > more detailed "Requires:" statements. > > Another item is that since that we are rebuilding > openstack-keystone > among other openstack related packages with additional > configuration and > scripts, which are needed for controller-0. In the stx-integ > (base OS) > case, we re-factored many of the packages to remove > configuration and > additional scripts to a separate package, I would like to see > something > similar here for packages are are needed for controller-0 (ie > the things > we are not installing from PyPi directly). > > Do we install things directly from PyPi? When does that happen? > > */[Don] No, we don’t install anything from PyPi./* > > Thanks. Good to know. :) > > > What I saw is that we include the CentOS-Openstack RPM repo > along with, > of course, our StarlingX RPM repo. Why can't we use the > CentOS-Openstack > packages directly along with some StarlingX specific additions in a > seperate package, rather than creating a new package with both > upstream > and StarlingX content. > > I don't know what the extra things are that we are packaging, but if > they are only helper scripts and the like and don't affect the > actual keystone code then I'd hope we would use the upstream RPMs. > > */[Don] As much as possible, we look to use unmodified upstream RPMs./* > > Can you expand on that statement in the context of this particular RPM? > (Sorry I'm not familiar with what we are doing with Keystone.) > > Thanks, > > Curtis > > My two cents. :) > > Thanks, > > Curtis > > Thoughts, > > Sau! > > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > -- > > Blog: serverascode.com > > > > -- > > Blog: serverascode.com > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Brent.Rowsell at windriver.com Fri Feb 15 17:59:52 2019 From: Brent.Rowsell at windriver.com (Rowsell, Brent) Date: Fri, 15 Feb 2019 17:59:52 +0000 Subject: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 In-Reply-To: References: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DE6D@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB3D6792@ALA-MBD.corp.ad.wrs.com> Message-ID: <2588653EBDFFA34B982FAF00F1B4844EBB3D706E@ALA-MBD.corp.ad.wrs.com> See inline From: Curtis [mailto:serverascode at gmail.com] Sent: Friday, February 15, 2019 11:30 AM To: Rowsell, Brent Cc: Penney, Don ; Saul Wold ; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 On Fri, Feb 15, 2019 at 10:42 AM Rowsell, Brent > wrote: A few points to keep in mind here: 1) Config_controller is being removed and replaced with ansible. 2) Openstack deployment will not be part of the initial controller bootstrapping. Openstack will be deployed in containers. 3) We are in the process of moving to vanilla openstack. With those points in mind, does that mean after moving to vanilla openstack the keystone code will come from an upstream RPM? [BR] Since we will be doing CI with openstack master, we will be building our own rpm’s. The upstream centos distro would only have release rpm’s (i.e. rocky). 
Thanks, Curtis Brent From: Curtis [mailto:serverascode at gmail.com] Sent: Friday, February 15, 2019 10:19 AM To: Penney, Don > Cc: Saul Wold >; starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 On Fri, Feb 15, 2019 at 10:12 AM Penney, Don > wrote: Comments inline. From: Curtis [mailto:serverascode at gmail.com] Sent: Friday, February 15, 2019 8:24 AM To: Saul Wold Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 On Thu, Feb 14, 2019 at 12:51 PM Saul Wold > wrote: Folks, I was doing some experimentation with an un-patched CentOS and running config_controller. One of the main issues I found is that doing the initial installation and execution discovered many un-resolved runtime requirements. Thanks for looking into this Saul, I think this is a good thing to do to work towards getting a understanding of dependencies. [BR] Keep in mind config_controller is being removed and being replaced with ansible. The bootsta I will start sending some pull requests to fault, metal, and config with more detailed "Requires:" statements. Another item is that since that we are rebuilding openstack-keystone among other openstack related packages with additional configuration and scripts, which are needed for controller-0. In the stx-integ (base OS) case, we re-factored many of the packages to remove configuration and additional scripts to a separate package, I would like to see something similar here for packages are are needed for controller-0 (ie the things we are not installing from PyPi directly). Do we install things directly from PyPi? When does that happen? [Don] No, we don’t install anything from PyPi. Thanks. Good to know. :) What I saw is that we include the CentOS-Openstack RPM repo along with, of course, our StarlingX RPM repo. Why can't we use the CentOS-Openstack packages directly along with some StarlingX specific additions in a seperate package, rather than creating a new package with both upstream and StarlingX content. I don't know what the extra things are that we are packaging, but if they are only helper scripts and the like and don't affect the actual keystone code then I'd hope we would use the upstream RPMs. [Don] As much as possible, we look to use unmodified upstream RPMs. Can you expand on that statement in the context of this particular RPM? (Sorry I'm not familiar with what we are doing with Keystone.) Thanks, Curtis My two cents. :) Thanks, Curtis Thoughts, Sau! _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -- Blog: serverascode.com -- Blog: serverascode.com -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From serverascode at gmail.com Fri Feb 15 18:13:56 2019 From: serverascode at gmail.com (Curtis) Date: Fri, 15 Feb 2019 13:13:56 -0500 Subject: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB3D706E@ALA-MBD.corp.ad.wrs.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DE6D@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB3D6792@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB3D706E@ALA-MBD.corp.ad.wrs.com> Message-ID: On Fri, Feb 15, 2019 at 1:00 PM Rowsell, Brent wrote: > See inline > > > > > > *From:* Curtis [mailto:serverascode at gmail.com] > *Sent:* Friday, February 15, 2019 11:30 AM > *To:* Rowsell, Brent > *Cc:* Penney, Don ; Saul Wold < > sgw at linux.intel.com>; starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] [Containers] Package Requirements on > the bare-metal controller-0 > > > > > > > > On Fri, Feb 15, 2019 at 10:42 AM Rowsell, Brent < > Brent.Rowsell at windriver.com> wrote: > > A few points to keep in mind here: > > 1) Config_controller is being removed and replaced with ansible. > > 2) Openstack deployment will not be part of the initial controller > bootstrapping. Openstack will be deployed in containers. > > 3) We are in the process of moving to vanilla openstack. > > > > With those points in mind, does that mean after moving to vanilla > openstack the keystone code will come from an upstream RPM? > > [BR] Since we will be doing CI with openstack master, we will be building > our own rpm’s. The upstream centos distro would only have release rpm’s > (i.e. rocky). > > > OK thanks for the answer. Interesting. Thanks, Curtis > Thanks, > > Curtis > > > > > > > > Brent > > > > *From:* Curtis [mailto:serverascode at gmail.com] > *Sent:* Friday, February 15, 2019 10:19 AM > *To:* Penney, Don > *Cc:* Saul Wold ; > starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] [Containers] Package Requirements on > the bare-metal controller-0 > > > > > > > > On Fri, Feb 15, 2019 at 10:12 AM Penney, Don > wrote: > > Comments inline. > > > > *From:* Curtis [mailto:serverascode at gmail.com] > *Sent:* Friday, February 15, 2019 8:24 AM > *To:* Saul Wold > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] [Containers] Package Requirements on > the bare-metal controller-0 > > > > On Thu, Feb 14, 2019 at 12:51 PM Saul Wold wrote: > > > Folks, > > I was doing some experimentation with an un-patched CentOS and running > config_controller. One of the main issues I found is that doing the > initial installation and execution discovered many un-resolved runtime > requirements. > > > > Thanks for looking into this Saul, I think this is a good thing to do to > work towards getting a understanding of dependencies. > > [BR] Keep in mind config_controller is being removed and being replaced > with ansible. The bootsta > > > I will start sending some pull requests to fault, metal, and config with > more detailed "Requires:" statements. > > Another item is that since that we are rebuilding openstack-keystone > among other openstack related packages with additional configuration and > scripts, which are needed for controller-0. 
In the stx-integ (base OS) > case, we re-factored many of the packages to remove configuration and > additional scripts to a separate package, I would like to see something > similar here for packages are are needed for controller-0 (ie the things > we are not installing from PyPi directly). > > > > Do we install things directly from PyPi? When does that happen? > > *[Don] No, we don’t install anything from PyPi.* > > > > Thanks. Good to know. :) > > > > > > > > > What I saw is that we include the CentOS-Openstack RPM repo along with, > of course, our StarlingX RPM repo. Why can't we use the CentOS-Openstack > packages directly along with some StarlingX specific additions in a > seperate package, rather than creating a new package with both upstream > and StarlingX content. > > > > I don't know what the extra things are that we are packaging, but if they > are only helper scripts and the like and don't affect the actual keystone > code then I'd hope we would use the upstream RPMs. > > *[Don] As much as possible, we look to use unmodified upstream RPMs.* > > > > Can you expand on that statement in the context of this particular RPM? > (Sorry I'm not familiar with what we are doing with Keystone.) > > > > Thanks, > > Curtis > > > > > > > > My two cents. :) > > > > Thanks, > > Curtis > > > > > > Thoughts, > > Sau! > > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > -- > > Blog: serverascode.com > > > > -- > > Blog: serverascode.com > > > > -- > > Blog: serverascode.com > -- Blog: serverascode.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From sgw at linux.intel.com Fri Feb 15 18:21:18 2019 From: sgw at linux.intel.com (Saul Wold) Date: Fri, 15 Feb 2019 10:21:18 -0800 Subject: [Starlingx-discuss] [Containers] Package Requirements on the bare-metal controller-0 In-Reply-To: <2588653EBDFFA34B982FAF00F1B4844EBB3D706E@ALA-MBD.corp.ad.wrs.com> References: <6703202FD9FDFF4A8DA9ACF104AE129FBA43DE6D@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB3D6792@ALA-MBD.corp.ad.wrs.com> <2588653EBDFFA34B982FAF00F1B4844EBB3D706E@ALA-MBD.corp.ad.wrs.com> Message-ID: <05f9a850-1723-e27b-1717-04828866dbc7@linux.intel.com> On 2/15/19 9:59 AM, Rowsell, Brent wrote: > See inline > > *From:*Curtis [mailto:serverascode at gmail.com] > *Sent:* Friday, February 15, 2019 11:30 AM > *To:* Rowsell, Brent > *Cc:* Penney, Don ; Saul Wold > ; starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] [Containers] Package Requirements on > the bare-metal controller-0 > > On Fri, Feb 15, 2019 at 10:42 AM Rowsell, Brent > > wrote: > > A few points to keep in mind here: > > 1)Config_controller is being removed and replaced with ansible. > > 2)Openstack deployment will not be part of the initial controller > bootstrapping. Openstack will be deployed in containers. > > 3)We are in the process of moving to vanilla openstack. > > With those points in mind, does that mean after moving to vanilla > openstack the keystone code will come from an upstream RPM? > > [BR] Since we will be doing CI with openstack master, we will be > building our own rpm’s.  The upstream centos distro would only have > release rpm’s (i.e. rocky). > This is great news, thanks for this update. Sau! 
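For anyone following this thread, a quick way to tell whether an installed package is a stock distro build or a StarlingX rebuild is to read the rpm metadata. The release strings mentioned below are illustrative, not real values:

    rpm -qi openstack-keystone | egrep 'Version|Release|Vendor|Build Host'
    # a stock CentOS/RDO build and a StarlingX rebuild would typically differ
    # in the Release/Vendor fields (e.g. a plain ".el7" vs. a tis/stx-style suffix)
    yum list installed openstack-keystone    # also shows which repo it came from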
> Thanks, > > Curtis > > Brent > > *From:*Curtis [mailto:serverascode at gmail.com > ] > *Sent:* Friday, February 15, 2019 10:19 AM > *To:* Penney, Don > > *Cc:* Saul Wold >; > starlingx-discuss at lists.starlingx.io > > *Subject:* Re: [Starlingx-discuss] [Containers] Package Requirements > on the bare-metal controller-0 > > On Fri, Feb 15, 2019 at 10:12 AM Penney, Don > > wrote: > > Comments inline. > > *From:*Curtis [mailto:serverascode at gmail.com > ] > *Sent:* Friday, February 15, 2019 8:24 AM > *To:* Saul Wold > *Cc:* starlingx-discuss at lists.starlingx.io > > *Subject:* Re: [Starlingx-discuss] [Containers] Package > Requirements on the bare-metal controller-0 > > On Thu, Feb 14, 2019 at 12:51 PM Saul Wold > wrote: > > > Folks, > > I was doing some experimentation with an un-patched CentOS > and running > config_controller. One of the main issues I found is that > doing the > initial installation and execution discovered many > un-resolved runtime > requirements. > > Thanks for looking into this Saul, I think this is a good thing > to do to work towards getting a understanding of dependencies. > > [BR] Keep in mind config_controller is being removed and being > replaced with ansible. The bootsta > > > I will start sending some pull requests to fault, metal, and > config with > more detailed "Requires:" statements. > > Another item is that since that we are rebuilding > openstack-keystone > among other openstack related packages with additional > configuration and > scripts, which are needed for controller-0. In the stx-integ > (base OS) > case, we re-factored many of the packages to remove > configuration and > additional scripts to a separate package, I would like to > see something > similar here for packages are are needed for controller-0 > (ie the things > we are not installing from PyPi directly). > > Do we install things directly from PyPi? When does that happen? > > */[Don] No, we don’t install anything from PyPi./* > > Thanks. Good to know. :) > > > What I saw is that we include the CentOS-Openstack RPM repo > along with, > of course, our StarlingX RPM repo. Why can't we use the > CentOS-Openstack > packages directly along with some StarlingX specific > additions in a > seperate package, rather than creating a new package with > both upstream > and StarlingX content. > > I don't know what the extra things are that we are packaging, > but if they are only helper scripts and the like and don't > affect the actual keystone code then I'd hope we would use the > upstream RPMs. > > */[Don] As much as possible, we look to use unmodified upstream > RPMs./* > > Can you expand on that statement in the context of this particular > RPM? (Sorry I'm not familiar with what we are doing with Keystone.) > > Thanks, > > Curtis > > My two cents. :) > > Thanks, > > Curtis > > Thoughts, > > Sau! 
> > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > -- > > Blog: serverascode.com > > > > -- > > Blog: serverascode.com > > > > -- > > Blog: serverascode.com > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From Don.Penney at windriver.com Fri Feb 15 18:58:08 2019 From: Don.Penney at windriver.com (Penney, Don) Date: Fri, 15 Feb 2019 18:58:08 +0000 Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout In-Reply-To: <194188777.555385.1550250997219@communicator.strato.com> References: <701275276.504630.1550169575394@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D9CA@ALA-MBD.corp.ad.wrs.com> <543FEB76-F032-4865-A12F-B45C56A3B0B7@schaible-consulting.de> <6703202FD9FDFF4A8DA9ACF104AE129FBA43DAC9@ALA-MBD.corp.ad.wrs.com> <194188777.555385.1550250997219@communicator.strato.com> Message-ID: <6703202FD9FDFF4A8DA9ACF104AE129FBA43E077@ALA-MBD.corp.ad.wrs.com> If it's not supported by the initrd, it's not supported by the runtime load. The installer images are updated with the runtime kernel and drivers, to align with runtime. What type of NIC are you using? -----Original Message----- From: Marcel Schaible [mailto:marcel at schaible-consulting.de] Sent: Friday, February 15, 2019 12:17 PM To: Penney, Don Cc: starlingx-discuss at lists.starlingx.io Subject: RE: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout Hi Don, your analysis was correct. Our interface ist not supported by the initrd. Do you know by chance how to unpack and pack the initrd correctly? Thanks Marcel > "Penney, Don" hat am 14. Februar 2019 um 20:48 geschrieben: > > > Your load shouldn't have the http port change, which was merged the next day. So I would suggest checking that the lighttpd server is running fine on the active controller as the first step. If it is, then if you have some shell access from the failed installation, maybe you can confirm that the boot interface is supported by the initrd and rule out comms issues. > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Thursday, February 14, 2019 2:41 PM > To: Penney, Don > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout > > Hi Don, > > which build do you recommend? > > Is there a workaround so I can keep my installation for now? > > Thanks > > Marcel > > Von meinem iPhone gesendet > > > Am 14.02.2019 um 19:50 schrieb Penney, Don : > > > > This means the initrd was unable to download the squashfs.img from the active controller. This could be a couple of things: > > * problems with the lighttpd server on the active controller > > * NICs that are unsupported by the initrd kernel modules > > * some other comms issue > > > > What load are you using? 
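(As a rough illustration of the checks Don describes, i.e. confirming the boot interface is supported by the initrd and ruling out comms issues, something like the following can be tried from the dracut emergency shell. Tool availability inside the initrd and the driver name are assumptions; the URL comes from the inst.repo= value on the boot cmdline quoted below:)

    ip link                        # is the boot NIC visible with a driver bound?
    lsmod | grep -i ixgb           # is the expected module loaded at all?
    modprobe ixgbevf && echo ok    # "module not found" means it is not in the initrd
    # rule out comms to the active controller:
    curl -sI http://pxecontroller/feed/rel-19.01/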
There was a recent update around http port config that moved lighttpd to listen to port 8080 instead of 80, but your boot cmdline is referencing http://pxecontroller/ > > > > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Thursday, February 14, 2019 1:40 PM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout > > > > Hi, > > > > I am trying to install controller-1 in a duplex configuration (bare metal) and getting the following error: > > > > Console log: > > ============= > > > > [ 201.379136] dracut-initqueue[730]: Warning: Could not boot. > > [ OK ] Started Show Plymouth Boot Screen. > > [ OK ] Started Device-Mapper Multipath Device Controller. > > Starting Open-iSCSI... > > [ OK ] Reached target Paths. > > [ OK ] Reached target Basic System. > > [ OK ] Started Open-iSCSI. > > Starting dracut initqueue hook... > > [ 140.683067] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > [ 141.198294] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > ... > > [ 195.770491] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > [ 196.280235] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > [Warning: /dev/root does not exist > > > > Generating "/run/initramfs/rdsosreport.txt" > > > > Entering emergency mode. Exit the shell to continue. > > Type "journalctl" to view system logs. > > You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot > > after mounting them and attach it to a bug report. > > > > dracut:/# > > > > Kernel parameter: > > ================== > > > > dracut:/# journalctl | grep -i boot > > Jan 09 00:16:06 localhost kernel: Command line: BOOT_IMAGE=rel-19.01/installer-bzImage bootifonly=1 devfs=nomount inst.repo=http://pxecontroller/feed/rel-19.01/ inst.ks=http://pxecontroller/feed/rel-19.01/net_smallsystem_ks.cfg usbcore.autosuspend=-1 biosdevname=0 rd.net.timeout.dhcp=120 ksdevice=02:01:00:10:02:06 BOOTIF=02:01:00:10:02:06 boot_device=nvme0n1 rootfs_device=nvme0n1 inst.text console=ttyS0,115200 tisnotify=http://pxecontroller:6385/v1/ihosts/00273dcb-25fa-4204-98de-64fed0bfabfe/install_progress inst.gpt user_namespace.enable=1 security_profile=standard nopti nospectre_v2 > > ======= > > > > The message "[Warning: /dev/root does not exist" make me nervous. What does that mean? > > > > Any idea is welcome! 
> > > > Thanks > > > > Marcel > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From marcel at schaible-consulting.de Fri Feb 15 19:12:47 2019 From: marcel at schaible-consulting.de (Marcel Schaible) Date: Fri, 15 Feb 2019 20:12:47 +0100 (CET) Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout In-Reply-To: <6703202FD9FDFF4A8DA9ACF104AE129FBA43E077@ALA-MBD.corp.ad.wrs.com> References: <701275276.504630.1550169575394@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43D9CA@ALA-MBD.corp.ad.wrs.com> <543FEB76-F032-4865-A12F-B45C56A3B0B7@schaible-consulting.de> <6703202FD9FDFF4A8DA9ACF104AE129FBA43DAC9@ALA-MBD.corp.ad.wrs.com> <194188777.555385.1550250997219@communicator.strato.com> <6703202FD9FDFF4A8DA9ACF104AE129FBA43E077@ALA-MBD.corp.ad.wrs.com> Message-ID: <33387997.558340.1550257967471@communicator.strato.com> It is the ixgbevf driver. I have packed into the pxeboot image and it works now. Thanks again for your support! Marcel > "Penney, Don" hat am 15. Februar 2019 um 19:58 geschrieben: > > > If it's not supported by the initrd, it's not supported by the runtime load. The installer images are updated with the runtime kernel and drivers, to align with runtime. > > What type of NIC are you using? > > > -----Original Message----- > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > Sent: Friday, February 15, 2019 12:17 PM > To: Penney, Don > Cc: starlingx-discuss at lists.starlingx.io > Subject: RE: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout > > Hi Don, > > your analysis was correct. Our interface ist not supported by the initrd. > > Do you know by chance how to unpack and pack the initrd correctly? > > Thanks > > Marcel > > > "Penney, Don" hat am 14. Februar 2019 um 20:48 geschrieben: > > > > > > Your load shouldn't have the http port change, which was merged the next day. So I would suggest checking that the lighttpd server is running fine on the active controller as the first step. If it is, then if you have some shell access from the failed installation, maybe you can confirm that the boot interface is supported by the initrd and rule out comms issues. > > > > -----Original Message----- > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > Sent: Thursday, February 14, 2019 2:41 PM > > To: Penney, Don > > Cc: starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout > > > > Hi Don, > > > > which build do you recommend? > > > > Is there a workaround so I can keep my installation for now? > > > > Thanks > > > > Marcel > > > > Von meinem iPhone gesendet > > > > > Am 14.02.2019 um 19:50 schrieb Penney, Don : > > > > > > This means the initrd was unable to download the squashfs.img from the active controller. This could be a couple of things: > > > * problems with the lighttpd server on the active controller > > > * NICs that are unsupported by the initrd kernel modules > > > * some other comms issue > > > > > > What load are you using? 
There was a recent update around http port config that moved lighttpd to listen to port 8080 instead of 80, but your boot cmdline is referencing http://pxecontroller/ > > > > > > > > > -----Original Message----- > > > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] > > > Sent: Thursday, February 14, 2019 1:40 PM > > > To: starlingx-discuss at lists.starlingx.io > > > Subject: [Starlingx-discuss] Duplex Configuration: Installation of controller-1 fails with dracut timeout > > > > > > Hi, > > > > > > I am trying to install controller-1 in a duplex configuration (bare metal) and getting the following error: > > > > > > Console log: > > > ============= > > > > > > [ 201.379136] dracut-initqueue[730]: Warning: Could not boot. > > > [ OK ] Started Show Plymouth Boot Screen. > > > [ OK ] Started Device-Mapper Multipath Device Controller. > > > Starting Open-iSCSI... > > > [ OK ] Reached target Paths. > > > [ OK ] Reached target Basic System. > > > [ OK ] Started Open-iSCSI. > > > Starting dracut initqueue hook... > > > [ 140.683067] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > > [ 141.198294] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > > ... > > > [ 195.770491] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > > [ 196.280235] dracut-initqueue[730]: Warning: dracut-initqueue timeout - starting timeout scripts > > > [Warning: /dev/root does not exist > > > > > > Generating "/run/initramfs/rdsosreport.txt" > > > > > > Entering emergency mode. Exit the shell to continue. > > > Type "journalctl" to view system logs. > > > You might want to save "/run/initramfs/rdsosreport.txt" to a USB stick or /boot > > > after mounting them and attach it to a bug report. > > > > > > dracut:/# > > > > > > Kernel parameter: > > > ================== > > > > > > dracut:/# journalctl | grep -i boot > > > Jan 09 00:16:06 localhost kernel: Command line: BOOT_IMAGE=rel-19.01/installer-bzImage bootifonly=1 devfs=nomount inst.repo=http://pxecontroller/feed/rel-19.01/ inst.ks=http://pxecontroller/feed/rel-19.01/net_smallsystem_ks.cfg usbcore.autosuspend=-1 biosdevname=0 rd.net.timeout.dhcp=120 ksdevice=02:01:00:10:02:06 BOOTIF=02:01:00:10:02:06 boot_device=nvme0n1 rootfs_device=nvme0n1 inst.text console=ttyS0,115200 tisnotify=http://pxecontroller:6385/v1/ihosts/00273dcb-25fa-4204-98de-64fed0bfabfe/install_progress inst.gpt user_namespace.enable=1 security_profile=standard nopti nospectre_v2 > > > ======= > > > > > > The message "[Warning: /dev/root does not exist" make me nervous. What does that mean? > > > > > > Any idea is welcome! > > > > > > Thanks > > > > > > Marcel > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Frank.Miller at windriver.com Fri Feb 15 21:00:25 2019 From: Frank.Miller at windriver.com (Miller, Frank) Date: Fri, 15 Feb 2019 21:00:25 +0000 Subject: [Starlingx-discuss] StarlingX Infrastructure Containerization Message-ID: Due to a holiday for many on the containers subproject on Monday, moving this meeting to Tuesday Feb 19th for this week only. 
----------------------------------->
For those contributing to or interested in the Containerization subproject a weekly meeting has been set up:

Timeslot: 11am EST / 8am PDT / 1600 UTC

Call details
* Zoom link: https://zoom.us/j/342730236
* Dialing in from phone:
  o Dial (for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923
  o Meeting ID: 342 730 236
  o International numbers available: https://zoom.us/u/ed95sU7aQ

Agenda and meeting minutes
Project notes are at https://etherpad.openstack.org/p/stx-containerization
Containerization subproject wiki: https://wiki.openstack.org/wiki/StarlingX/Containers

From cesar.lara at intel.com Fri Feb 15 21:45:10 2019
From: cesar.lara at intel.com (Lara, Cesar)
Date: Fri, 15 Feb 2019 21:45:10 +0000
Subject: [Starlingx-discuss] [multios][meetings] MultiOS team meeting agenda for 2/18/2019
Message-ID: <0B566C62EC792145B40E29EFEBF1AB47105EF3D6@fmsmsx104.amr.corp.intel.com>

MultiOS team meeting Agenda for 2/18/2019

- Discuss multiOS team activities for May release
- Opens

Regards

Cesar Lara
Software Engineering Manager
OpenSource Technology Center

From juan.carlos.alonso at intel.com Fri Feb 15 22:42:00 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Fri, 15 Feb 2019 22:42:00 +0000
Subject: [Starlingx-discuss] Sanity Test - ISO 20190215
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C9BCB0@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Feb-15 (link)

Sanity Test is executed in a Bare Metal Environment
Status: GREEN

Simplex
Setup             Manual   [PASS]
Provisioning      01 TCs   [PASS]
Sanity            42 TCs   [PASS]
TOTAL: [ 43 TCs PASS ]

===========================================

Sanity Test is executed in a Virtual Environment
Status: GREEN

Simplex
Setup             04 TCs   [PASS]
Provisioning      01 TCs   [PASS]
Sanity            42 TCs   [PASS]
TOTAL: [ 47 TCs PASS ]

Duplex
Setup             04 TCs   [PASS]
Provisioning      01 TCs   [PASS]
Sanity            45 TCs   [PASS]
TOTAL: [ 50 TCs PASS ]

Multinode Controller Storage
Setup             04 TCs   [PASS]
Provisioning      01 TCs   [PASS]
Sanity            45 TCs   [PASS]
TOTAL: [ 50 TCs PASS ]

Multinode Dedicated Storage
Setup             04 TCs   [PASS]
Provisioning      01 TCs   [PASS]
Sanity            45 TCs   [PASS]
TOTAL: [ 50 TCs PASS ]

------------------------------------------------------------------

Regards.
Juan Carlos Alonso

From vm.rod25 at gmail.com Fri Feb 15 23:07:08 2019
From: vm.rod25 at gmail.com (Victor Rodriguez)
Date: Fri, 15 Feb 2019 17:07:08 -0600
Subject: [Starlingx-discuss] contact for OPNFV
In-Reply-To: <38CB89DDEF80A04D89F30124F87D7A559D22186F@ALA-MBD.corp.ad.wrs.com>
References: <0EECF16D-278B-46E9-848E-E138060D906C@intel.com> <0F0AD23D-45CF-4F11-A22D-2FC63DCDF549@intel.com> <3CAA827B7A79BA46B15B280EC82088FE48273E9B@ALA-MBD.corp.ad.wrs.com> <38CB89DDEF80A04D89F30124F87D7A559D22186F@ALA-MBD.corp.ad.wrs.com>
Message-ID: 

Thanks a lot Peng

I will follow the steps and get back with questions

Regards

Victor

On Fri, Feb 15, 2019 at 12:59 PM Peng, Peng wrote:
> > Hi Victor, > > I summarized the Refstack setup steps. Please check attachment for detail.
> > Thanks, > Peng > > -----Original Message----- > From: Victor Rodriguez [mailto:vm.rod25 at gmail.com] > Sent: Thursday, February 14, 2019 1:05 PM > To: Waheed, Numan > Cc: Rodriguez Bahena, Victor; Peng, Peng; starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] contact for OPNFV > > Thanks a lot, Numan > > Peng, Numan mentioned at the last testing meeting a wiki with the steps > to run the OPNFV performance framework on top of STX, could you please > share those steps? > > Regards > > On Thu, Feb 14, 2019 at 10:27 AM Waheed, Numan wrote: > > > > Hi Victor, > > > > Peng from my team can help you with any question regarding OPNFV and the Refstack test suite. > > > > Thanks, > > > > Numan. > > > > -----Original Message----- > > From: Rodriguez Bahena, Victor > > Sent: February-14-19 10:57 AM > > To: Waheed, Numan > > Subject: Re: contact for OPNFV > > > > Hi > > > > Friendly reminder > > > > -----Original Message----- > > From: "Rodriguez Bahena, Victor" > > Date: Tuesday, February 12, 2019 at 2:33 PM > > To: "Numan.Waheed at windriver.com" > > Subject: contact for OPNFV > > > > Hi Numan > > > > I was wondering if you have the information about the steps to run the performance tests from OPNFV? > > > > Regards > > > > Victor Rodriguez > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From juan.carlos.alonso at intel.com Fri Feb 15 23:23:59 2019
From: juan.carlos.alonso at intel.com (Alonso, Juan Carlos)
Date: Fri, 15 Feb 2019 23:23:59 +0000
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190215
Message-ID: <8557B550001AFB46A43A0CCC314BF85153C9BCE4@FMSMSX108.amr.corp.intel.com>

Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Feb-15 (link)

Sanity Test is executed in a Containers - Bare Metal Environment
Status: GREEN

Simplex
Setup             Manual   [PASS]
Provisioning      Manual   [PASS]
Sanity OpenStack  35 TCs   [PASS]
Sanity Platform   In Development
TOTAL: [ 35 TCs PASS ]

===========================================

Sanity Test is executed in a Containers - Virtual Environment
Status: RED

Simplex
Setup             Manual   [FAIL]
Provisioning      Manual   [FAIL]
Sanity OpenStack  35 TCs   [FAIL]
Sanity Platform   In Development
TOTAL: [ 35 TCs FAIL ]

------------------------------------------------------------------

'Config_controller -kubernetes' failed during step '06/08 Applying controller manifest'.
Launchpad opened: https://bugs.launchpad.net/starlingx/+bug/1816199

The test to check ceilometer services was removed. Containerized telemetry services have not been enabled on the master branch.

Regards.
Juan Carlos Alonso

From volker.von.hoesslin at gmx.de Sat Feb 16 19:21:01 2019
From: volker.von.hoesslin at gmx.de (Volker von Hoesslin)
Date: Sat, 16 Feb 2019 20:21:01 +0100
Subject: [Starlingx-discuss] Nova instance - Change system product name
In-Reply-To: References: Message-ID: <45461628-2af1-7e17-47a8-97e12010aa5b@gmx.de>

Just check out /etc/nova/release.

On 14.02.2019 at 17:23, von Hoesslin, Volker wrote:
> hi, > is there any given glance image metadata to change the instance > product name? > > i want to change the xml entry for file libvirt.xml from instance: > > # by default it looks like this: > > |OpenStack Nova| > thx, >   volker...
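To expand a little on Volker's pointer above: as far as I can tell, nova's libvirt driver fills the guest's SMBIOS sysinfo from its version strings, which can be overridden through that release file (see nova/version.py). The values below are made up, so treat this as a sketch and verify against your nova version:

    $ cat /etc/nova/release
    [Nova]
    vendor = MyCompany
    product = MyProduct
    package = 1.0

After restarting nova-compute, a newly spawned instance's libvirt.xml should then carry roughly:

    <sysinfo type='smbios'>
      <system>
        <entry name='manufacturer'>MyCompany</entry>
        <entry name='product'>MyProduct</entry>
      </system>
    </sysinfo>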
> > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From build.starlingx at gmail.com Sun Feb 17 06:24:31 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Sun, 17 Feb 2019 01:24:31 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_download_mirror - Build # 169 - Failure!
Message-ID: <715991703.40.1550384673227.JavaMail.javamailuser@localhost>

Project: STX_download_mirror
Build #: 169
Status: Failure
Timestamp: 20190217T061731Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190217T060000Z/logs
--------------------------------------------------------------------------------
Parameters

DOCKER_DL_ID: jenkins-master-20190217T060000Z-downloader
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190217T060000Z/logs
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/20190217T060000Z/logs
MY_REPO_ROOT: /localdisk/designer/jenkins/master

From build.starlingx at gmail.com Sun Feb 17 06:24:35 2019
From: build.starlingx at gmail.com (build.starlingx at gmail.com)
Date: Sun, 17 Feb 2019 01:24:35 -0500 (EST)
Subject: [Starlingx-discuss] [build-report] STX_build_master_pike - Build # 142 - Failure!
Message-ID: <197547165.43.1550384676965.JavaMail.javamailuser@localhost>

Project: STX_build_master_pike
Build #: 142
Status: Failure
Timestamp: 20190217T060000Z
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190217T060000Z/logs
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS: false

From haochuan.z.chen at intel.com Mon Feb 18 02:07:04 2019
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Mon, 18 Feb 2019 02:07:04 +0000
Subject: [Starlingx-discuss] plan to remove IMA redundant kernel patch
Message-ID: <56829C2A36C2E542B0CCB9854828E4D8561F1A34@CDSMSX102.ccr.corp.intel.com>

Hi,

I have been studying this patch:
cgcs-root/stx/stx-integ/kernel/kernel-std/centos/patches/US101216-IMA-support-in-Titanium-kernel.patch

My assumption is that the integrity code in the 3.10 kernel is too old, which is why it introduces the out-of-tree kernel module:
cgcs-root/stx/stx-integ/kernel/kernel-modules/integrity

So for the CentOS 8 upgrade, since the integrity code in later kernels has already been updated, what about removing the package
cgcs-root/stx/stx-integ/kernel/kernel-modules/integrity
and enabling the integrity config in
cgcs-root/stx/stx-integ/kernel/kernel-std/centos/patches/kernel-3.10.0-x86_64.config.tis_extra?

For ima.conf, we could move it to cgcs-root/stx/stx-integ/config-files.

This way, we could remove several redundant patches for IMA, which would make maintenance easier.

Waiting for your opinion.

Thanks!

Martin, Chen
SSP, Software Engineer
021-61164330

From serverascode at gmail.com Mon Feb 18 13:11:02 2019
From: serverascode at gmail.com (Curtis)
Date: Mon, 18 Feb 2019 08:11:02 -0500
Subject: [Starlingx-discuss] Packet.com baremetal cloud opportunity
In-Reply-To: References: Message-ID: 

FYI - Based on the results of the doodle poll I've updated the etherpad with the information for the next meeting, which is Tues Feb 19, 2-3PM EST (tomorrow).
https://etherpad.openstack.org/p/starlingx-packet-edge-pilot

John from Packet should be in attendance, as well as a gentleman from another open source project who is utilizing Packet infrastructure in much the same way we potentially would, and who can therefore give us some idea of what it's like. :)

Thanks All!
Curtis

On Thu, Jan 31, 2019 at 10:59 AM Curtis wrote:
> Hi All, > > There is an opportunity to work with the Packet.com cloud in terms of them > providing cloud resources to the STX community in a couple of different > ways, but you can read about all that in the etherpad [1] and add any > comments/questions/ideas, etc. :) > > Obviously there is some due diligence and information gathering to be > completed, but overall, from my own perspective, having been on a few > related calls, it seems like the STX TSC and other community members that > have had input are thus far positive towards this opportunity. > > Do let us know what you think! > > Thanks kindly, > Curtis > > [1]: https://etherpad.openstack.org/p/starlingx-packet-edge-pilot

--
Blog: serverascode.com

From Bill.Zvonar at windriver.com Mon Feb 18 13:38:21 2019
From: Bill.Zvonar at windriver.com (Zvonar, Bill)
Date: Mon, 18 Feb 2019 13:38:21 +0000
Subject: [Starlingx-discuss] Agenda for Distro OpenStack Call (Feb 19, 2019)
Message-ID: <586E8B730EA0DA4A9D6A80A10E486BC0A06752@ALA-MBD.corp.ad.wrs.com>

Agenda...

- updates on work items in OpenStack tracking sheet [0]
- please plan to attend, or provide your update(s) beforehand on [0]
  - Nova: Frank, Yong, Yongli
  - Neutron: Ghada, Kailun, Chenjie, Chen, Huifeng, Matt Welch, Enyinna
  - Horizon: Yan Chen
  - Glance/Cinder: Liang Fang

[0] tracking sheet: https://docs.google.com/spreadsheets/d/1udAtEpQljV2JZVs-525UhWyx-5ePOaSSkKD1CS27ohU/
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#6am_PDT_.2F_1400_UTC_-_Distro_OpenStack_Team_Call

From fungi at yuggoth.org Mon Feb 18 16:09:03 2019
From: fungi at yuggoth.org (Jeremy Stanley)
Date: Mon, 18 Feb 2019 16:09:03 +0000
Subject: [Starlingx-discuss] CVE Support and Scanning
In-Reply-To: Message-ID: <20190218160903.iix6egvwmie36avd@yuggoth.org>

On 2019-02-06 15:28:24 +0000 (+0000), Victor Rodriguez wrote:
[...]
> This is the link to the tool that we could use to catch the CVEs > > https://github.com/clearlinux/cve-check-tool > > Hope it helps as a starting point

That seems to be unmaintained for the past couple of years (last commit was from April 2017 and it has accumulated a number of untriaged issues and pull requests since then). If you're interested in using it, you may want to consider helping the previous maintainers resurrect it.

Maybe also consider (actively-maintained) https://github.com/intel/cve-bin-tool which uses a different approach: looking for signs of vulnerable libraries linked into compiled files. It could help you identify when packages you're maintaining need to be rebuilt due to vulnerabilities in their dependencies.

--
Jeremy Stanley
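For anyone who wants to try Jeremy's suggestion, the basic usage is roughly as below. Treat it as a sketch rather than a reference: the tool installs from PyPI, and flags beyond a bare directory scan vary by version:

    pip install cve-bin-tool
    # point it at a directory of compiled files, e.g. an extracted image rootfs:
    cve-bin-tool /path/to/extracted/rootfs

It reports binaries whose embedded version signatures match libraries with known CVEs, which maps well onto the "produces a report for further analysis" requirement discussed in the build meeting minutes below.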
From cesar.lara at intel.com Mon Feb 18 18:28:51 2019
From: cesar.lara at intel.com (Lara, Cesar)
Date: Mon, 18 Feb 2019 18:28:51 +0000
Subject: [Starlingx-discuss] [build][meetings] Build team meeting minutes 02/14/2019
Message-ID: <0B566C62EC792145B40E29EFEBF1AB47105F1123@fmsmsx104.amr.corp.intel.com>

Build team meeting Agenda for 02/14/2019

- Cengn update
- CVE scan integration
- Opens

Notes

Cengn Update
The build environment is stable; there have been some minor issues due to upstream changes, but they do not have a huge impact and have been solved in a timely manner. We need to establish a strategy to lock down bits and minimize the risk of upstream changes to file versions. For this, we discussed the possibility of having the latest versions on the master branch and locking down versions on the release branch. We will write a script to enforce this strategy.

AR - Cesar to identify a resource to help Scott review the scripts on Cengn that update packages on the mirror.

CVE scan integration
CVE scanning tool proposals were presented. A sub-team will evaluate the tools and make a proposal based on the following requirements:
- is fully automated
- is up to date with vulnerabilities
- can handle multiple OSes
- covers scanning of containers (maybe take a look at the Clair tool)
- produces a report for further analysis
- links to patches or the merged patch
- patch updates - fix in the OS first

AR - Ken to send a communication to the mailing list about the process we have been following on this, with updates.

Regards

Cesar Lara
Software Engineering Manager
OpenSource Technology Center

From cesar.lara at intel.com Mon Feb 18 18:46:45 2019
From: cesar.lara at intel.com (Lara, Cesar)
Date: Mon, 18 Feb 2019 18:46:45 +0000
Subject: [Starlingx-discuss] [multios][meetings] MultiOS team meeting minutes 2/18/2019
Message-ID: <0B566C62EC792145B40E29EFEBF1AB47105F11CD@fmsmsx104.amr.corp.intel.com>

MultiOS team meeting Agenda for 2/18/2019

- Discuss multiOS team activities for May release
- Opens

Notes

Discuss multiOS team activities for May release
The intention of these activities is to minimize the risk to the current CentOS-based StarlingX build, as we do not intend to change anything in the current environment. With that in mind, we separated out the effort required to spin up an Ubuntu-based PoC and requested that the following activities be reviewed by the community and the TSC:

- Create a new directory to host Ubuntu files: https://review.openstack.org/#/c/634074/
- Create a new repo to store rules for deb file creation for flock services: https://review.openstack.org/#/c/631288/
- Work on the Ubuntu PoC and its tools: https://review.openstack.org/#/c/621033/

AR - Victor to update the spec files accordingly to reflect the scope of these changes.

These activities are targeted for code freeze on April 1st, but are not to be categorized as blockers for the May release.

Regards

Cesar Lara
Software Engineering Manager
OpenSource Technology Center

From Don.Penney at windriver.com Fri Feb 15 15:10:05 2019
From: Don.Penney at windriver.com (Penney, Don)
Date: Fri, 15 Feb 2019 15:10:05 +0000
Subject: [Starlingx-discuss] CentOS7.6 testing status - blocked
In-Reply-To: <9700A18779F35F49AF027300A49E7C765FE84894@SHSMSX101.ccr.corp.intel.com>
References: <19C65A6E92EA384D809B1772