From build.starlingx at gmail.com Sun May 1 01:03:05 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sat, 30 Apr 2022 21:03:05 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_deb_build_containers - Build # 219 - Still Failing! In-Reply-To: <1997401345.52.1651281521071.JavaMail.javamailuser@localhost> References: <1997401345.52.1651281521071.JavaMail.javamailuser@localhost> Message-ID: <1550864839.57.1651366986975.JavaMail.javamailuser@localhost> Project: STX_build_deb_build_containers Build #: 219 Status: Still Failing Timestamp: 20220501T005624Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/build-containers/20220501T005601Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/debian-master-build-containers/20220501T005601Z OS: debian MY_REPO: /localdisk/designer/jenkins/debian-master-build-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/build-containers/20220501T005601Z/logs REGISTRY_USERID: slittlewrs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/debian/build-containers/20220501T005601Z/logs MASTER_JOB_NAME: STX_build_debian_build_containers_master MY_REPO_ROOT: /localdisk/designer/jenkins/debian-master-build-containers FULL_BUILD: false REGISTRY_ORG: starlingx DOCKER_BUILD_TAG: master-debian-20220501T005601Z LAYER: build-containers REGISTRY: docker.io From build.starlingx at gmail.com Mon May 2 01:03:21 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 1 May 2022 21:03:21 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_deb_build_containers - Build # 220 - Still Failing! 
In-Reply-To: <239839458.55.1651366982433.JavaMail.javamailuser@localhost> References: <239839458.55.1651366982433.JavaMail.javamailuser@localhost> Message-ID: <1346884435.60.1651453402854.JavaMail.javamailuser@localhost> Project: STX_build_deb_build_containers Build #: 220 Status: Still Failing Timestamp: 20220502T005621Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/build-containers/20220502T005601Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/debian-master-build-containers/20220502T005601Z OS: debian MY_REPO: /localdisk/designer/jenkins/debian-master-build-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/build-containers/20220502T005601Z/logs REGISTRY_USERID: slittlewrs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/debian/build-containers/20220502T005601Z/logs MASTER_JOB_NAME: STX_build_debian_build_containers_master MY_REPO_ROOT: /localdisk/designer/jenkins/debian-master-build-containers FULL_BUILD: false REGISTRY_ORG: starlingx DOCKER_BUILD_TAG: master-debian-20220502T005601Z LAYER: build-containers REGISTRY: docker.io From bogdan-iulian.andrei at intel.com Mon May 2 06:10:36 2022 From: bogdan-iulian.andrei at intel.com (Andrei, Bogdan-Iulian) Date: Mon, 2 May 2022 06:10:36 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220430T025608Z Message-ID: Sanity Test from 2022-April-30 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220430T025608Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220430T025608Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by this LP bug: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready Kind regards, Validation team [Logo Description automatically generated] Andrei Bogdan-Iulian Software Engineer PMCE TEAM Personal Mobile: +40 754905864 bogdan-iulian.andrei at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From sebastian-valentin.bran at intel.com Mon May 2 07:16:22 2022 From: sebastian-valentin.bran at intel.com (Bran, Sebastian-Valentin) Date: Mon, 2 May 2022 07:16:22 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220501T013740Z Message-ID: Sanity Test from 2022-May-01(http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220501T013740Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220501T013740Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by this LP bug: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready Kind regards, Validation team [Logo Description automatically generated] Bran Sebastian-Valentin Software Engineer PMCE TEAM Personal Mobile: +40 57487760 sebastian-valentin.bran at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. 
Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL:
From Jerry.Sun at windriver.com Tue May 3 14:21:28 2022 From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry)) Date: Tue, 3 May 2022 14:21:28 +0000 Subject: [Starlingx-discuss] Cert Manager and Nginx upversioning Message-ID: Hi All, We are looking to merge changes to upversion Cert Manager and Nginx today. The new versions of Cert Manager and Nginx use new images. If you bootstrap from your own private registry, please ensure your private registry is updated with the new Cert Manager and Nginx images in order to minimize disruptions. Thanks, Jerry -------------- next part -------------- An HTML attachment was scrubbed... URL:
From Jerry.Sun at windriver.com Tue May 3 21:00:17 2022 From: Jerry.Sun at windriver.com (Sun, Yicheng (Jerry)) Date: Tue, 3 May 2022 21:00:17 +0000 Subject: [Starlingx-discuss] Cert Manager and Nginx upversioning In-Reply-To: References: Message-ID: As a followup, the image list: quay.io/jetstack/cert-manager-cainjector:v1.7.1 quay.io/jetstack/cert-manager-controller:v1.7.1 quay.io/jetstack/cert-manager-webhook:v1.7.1 quay.io/jetstack/cert-manager-ctl:v1.7.1 quay.io/jetstack/cert-manager-acmesolver:v1.7.1 k8s.gcr.io/defaultbackend:1.4 k8s.gcr.io/ingress-nginx/controller:v1.1.1 k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1 From: Sun, Yicheng (Jerry) Sent: May 3, 2022 10:21 AM To: starlingx-discuss at lists.starlingx.io Subject: Cert Manager and Nginx upversioning Hi All, We are looking to merge changes to upversion Cert Manager and Nginx today. The new versions of Cert Manager and Nginx use new images. If you bootstrap from your own private registry, please ensure your private registry is updated with the new Cert Manager and Nginx images in order to minimize disruptions. Thanks, Jerry -------------- next part -------------- An HTML attachment was scrubbed... URL:
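For anyone mirroring these into a private registry ahead of the merge, a minimal sketch of the pull/tag/push loop; the registry name registry.local:9001 is an illustrative assumption, not from Jerry's email, and a standard docker CLI on the mirroring host is assumed:

    #!/bin/bash
    # Mirror the new cert-manager / ingress-nginx images into a private registry.
    # PRIVATE_REG is an example value -- substitute your own registry here.
    PRIVATE_REG=registry.local:9001
    IMAGES="quay.io/jetstack/cert-manager-cainjector:v1.7.1
    quay.io/jetstack/cert-manager-controller:v1.7.1
    quay.io/jetstack/cert-manager-webhook:v1.7.1
    quay.io/jetstack/cert-manager-ctl:v1.7.1
    quay.io/jetstack/cert-manager-acmesolver:v1.7.1
    k8s.gcr.io/defaultbackend:1.4
    k8s.gcr.io/ingress-nginx/controller:v1.1.1
    k8s.gcr.io/ingress-nginx/kube-webhook-certgen:v1.1.1"
    for img in $IMAGES; do
        docker pull "$img"
        # keep the upstream path under the private registry so overrides stay predictable
        docker tag "$img" "$PRIVATE_REG/$img"
        docker push "$PRIVATE_REG/$img"
    done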
From build.starlingx at gmail.com Wed May 4 01:46:15 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 3 May 2022 21:46:15 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_compiler_master_master - Build # 928 - Failure! Message-ID: <991832645.65.1651628776516.JavaMail.javamailuser@localhost> Project: STX_build_layer_compiler_master_master Build #: 928 Status: Failure Timestamp: 20220504T013001Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20220504T013001Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false
From sebastian-valentin.bran at intel.com Wed May 4 08:42:05 2022 From: sebastian-valentin.bran at intel.com (Bran, Sebastian-Valentin) Date: Wed, 4 May 2022 08:42:05 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220503T001927Z Message-ID: Sanity Test from 2022-April-30 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220503T001927Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220503T001927Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by this LP bug: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready Kind regards, Validation team [Logo Description automatically generated] Bran Sebastian-Valentin Software Engineer PMCE TEAM Personal Mobile: +40 57487760 sebastian-valentin.bran at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL:
From sebastian-valentin.bran at intel.com Wed May 4 10:02:35 2022 From: sebastian-valentin.bran at intel.com (Bran, Sebastian-Valentin) Date: Wed, 4 May 2022 10:02:35 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220503T001927Z Message-ID: Sanity Test from 2022-May-03 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220503T001927Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220503T001927Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by this LP bug: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready Kind regards, Validation team [Logo Description automatically generated] Bran Sebastian-Valentin Software Engineer PMCE TEAM Personal Mobile: +40 57487760 sebastian-valentin.bran at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL:
From Ghada.Khalil at windriver.com Wed May 4 14:26:42 2022 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 4 May 2022 14:26:42 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - May 4/2022 Message-ID: Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases stx.7.0 - Release/Feature Planning: https://docs.google.com/spreadsheets/d/171PJAu9SykXm9h9Ny2IsZ8YMbhvEwOzuTbISWUMQhiE/edit#gid=1107209846 - Release Verification: - Folder: https://drive.google.com/drive/folders/1szAP-xVZq7ebSyGJ-EHTVebMmsupSY7w - Feature Testing: https://docs.google.com/spreadsheets/d/1hXJJ4LvxhWwLIF_PHpCyxlGpfKdXtcPjlyclfKFVxKo/edit#gid=968103774 - Regression Testing: https://docs.google.com/spreadsheets/d/19OQmmo5OfD1eHS8rp5uBgnVaS7i8J-NhQuiKVqcyZ0Y/edit#gid=1717644237 - Release Updates - Feature Stats - 13 features are Done - 25 features are In Progress - 3 features are Out/Deferred - Debian Update - CENGN builds to generate Debian ISOs: Still working through issues, but should be close - Debian Sanity: WR test team is working on env setup to enable this - Feature Update -- Features with Code Merge Date in April / early May - Support for Intel Logan Beach >> Code Merged. Test in progress. - Support for Mellanox CX6 >> Code Merged. Ready for test - Support for Broadcom 57504 >> Code Merged. Ready for test. - Kubernetes custom configuration support (partial) >> Re-forecasted to May 10 - Armada Deprecation / Replacement - FluxCD >> Re-forecasted to May 20 - K8S & Container Components Refresh - k8s 1.22/1.23 >> Re-forecasted to May 10 - Container CNI Component Refresh >> Re-forecasted to May 17 - Platform Application Refresh - metric-server >> Code Merged. Still needs to be converted to FluxCD (part of the Armada Replacement feature) before feature testing starts. - Debian Builds on CENGN >> Re-forecasted to May 6 - Container Base Image Based on Debian >> Re-forecasted to May 13 - FEC Device Configurability (fec-operator Integration) >> Re-forecasted to May 27 as just starting to post code for review - Test Update - Feature Testing - Testing in progress on features which are ready. Spreadsheet kept up-to-date on a bi-weekly basis. - Need to re-forecast Feature Test Complete dates for the following features as the Code Merge date moved. Action: Rob to update the fcst column (Column S) for: - Support for Broadcom 57504 - K8S & Container Components Refresh - k8s 1.22/1.23 - Container Storage Component Refresh - NetApp Trident - Armada Deprecation / Replacement - FluxCD - Need Plan dates for the following features. Action: Rob to update the plan column (Column R) for: - Scalability Enhancements to 1000 AIO-SX subclouds - Enhanced Parallel Operations on Subclouds - Subcloud Local Installation Support/Enhancements - Sanity - Intel is ceasing their sanity contribution to stx starting May 13 - WR test team is looking to take that over. Currently working on env setup. This may affect when Debian sanity can start. - TBD whether the same sanity cadence can be maintained.
From build.starlingx at gmail.com Thu May 5 04:39:06 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 5 May 2022 00:39:06 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 1278 - Failure!
Message-ID: <1140538423.74.1651725547166.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 1278 Status: Failure Timestamp: 20220505T043000Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220505T043000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true
From Ghada.Khalil at windriver.com Thu May 5 12:23:19 2022 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 5 May 2022 12:23:19 +0000 Subject: [Starlingx-discuss] Minutes: Community Call (May 4 2022) Message-ID: Etherpad: https://etherpad.opendev.org/p/stx-status Minutes from the community call May 4 2022 Standing Topics - Build - CentOS builds are stable - Had 6 good builds, but hit one download error last night. - Intermittent build failure related to LP: https://bugs.launchpad.net/starlingx/+bug/1968583 not seen in the last two weeks - Debian builds are still being set up. - Build team is working through publication issues. Expect to be ready by EOW - Sanity - Sanity is Red. LP: https://bugs.launchpad.net/starlingx/+bug/1970645 - Need to follow up on assignment as Douglas Pereira is out of office. Contacted Thales Elero Cervi to assign to someone. - Gerrit Reviews in Need of Attention - Two spec reviews need the final +2 from Mingyuan Qi - https://review.opendev.org/q/project:starlingx/specs+status:open - There is currently a holiday in China from May 1-5. - Would like these to merge by early next week if possible. - Reference Links: - Active Branch (open): https://review.opendev.org/q/projects:starlingx+is:open+branch:+master - Active Branch (merged): https://review.opendev.org/q/projects:starlingx+is:merged+branch:master Topics for This Week - Sanity - Intel is ceasing their CentOS sanity contribution to stx starting May 15 - WR test team is looking to take that over. Currently working on env setup. This may affect when Debian sanity can start. - TBD whether the same sanity cadence can be maintained. ARs from Previous Meetings - None Open Requests for Help - Nothing new from the mailing list
From outbackdingo at gmail.com Fri May 6 04:41:44 2022 From: outbackdingo at gmail.com (Outback Dingo) Date: Fri, 6 May 2022 11:41:44 +0700 Subject: [Starlingx-discuss] multi stx AIO / more than 2 Message-ID: Can I do more than 2 (duplex) AIO, as in 3 AIO with control/storage/compute, and add 3 workers with compute / storage? As I have 6 nodes with 512GB Memory / 4TB storage each.
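For context on what growing a system beyond the first controller looks like on the CLI, a minimal sketch per the standard StarlingX install flow; the host id 3 and the hostname worker-0 are illustrative values, not from this thread:

    # On the active controller, a new node that PXE-boots from controller-0
    # appears in the inventory with no personality set:
    system host-list
    # Assign the personality and a name (id 3 is illustrative):
    system host-update 3 personality=worker hostname=worker-0
    # After configuring its interfaces and disks, bring it into service:
    system host-unlock worker-0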
From sebastian-valentin.bran at intel.com Fri May 6 14:18:32 2022 From: sebastian-valentin.bran at intel.com (Bran, Sebastian-Valentin) Date: Fri, 6 May 2022 14:18:32 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220506T032159Z Message-ID: Sanity Test from 2022-May-06 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220506T032159Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220506T032159Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by these LP bugs: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready, https://bugs.launchpad.net/starlingx/+bug/1971981 - nginx-ingress-controller apply-failed during setup of the StarlingX Kind regards, Validation team [Logo Description automatically generated] Bran Sebastian-Valentin Software Engineer PMCE TEAM Personal Mobile: +40 57487760 sebastian-valentin.bran at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From Juanita.Balaraj at windriver.com Sun May 8 00:53:21 2022 From: Juanita.Balaraj at windriver.com (Balaraj, Juanita) Date: Sun, 8 May 2022 00:53:21 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 04-May-22 Message-ID: Hello all, Here are last week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST. [1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings [2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation Thanks, Juanita Balaraj ============ 04-May-22 * Bug status -- 6 total o Query for LP that are tagged for docs: https://bugs.launchpad.net/starlingx/+bugs?field.tag=stx.docs o Use caution when working on LP, especially if seems related to WR (like wr-openstack issue). * Stx 7.0 Release; https://docs.google.com/spreadsheets/d/171PJAu9SykXm9h9Ny2IsZ8YMbhvEwOzuTbISWUMQhiE/edit#gid=1107209846 - Juanita to complete doc tasks associated with Stories * Message from Ildiko: IMPORTANT REMINDER: If you would like to run for any open seat you need to register as an Open Infrastructure Foundation individual member prior to when the nomination period starts. The nomination period officially begins on __May 10, 2022, 2000 UTC__. For all the details on the elections please visit: https://docs.starlingx.io/election/Elections - https://docs.starlingx.io/election/ (https://etherpad.opendev.org/p/stx-cores) * A few Cherry picks were completed; https://review.opendev.org/q/%2509starlingx/docs * Status of Open Gerrit reviews: https://review.opendev.org/q/starlingx/docs+status:open _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From sebastian-valentin.bran at intel.com Mon May 9 06:55:28 2022 From: sebastian-valentin.bran at intel.com (Bran, Sebastian-Valentin) Date: Mon, 9 May 2022 06:55:28 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220507T013750Z Message-ID: Sanity Test from 2022-May-07 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220507T013750Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220507T013750Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by these LP bugs: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready, https://bugs.launchpad.net/starlingx/+bug/1971981 - nginx-ingress-controller apply-failed during setup of the StarlingX Kind regards, Validation team [Logo Description automatically generated] Bran Sebastian-Valentin Software Engineer PMCE TEAM Personal Mobile: +40 57487760 sebastian-valentin.bran at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From build.starlingx at gmail.com Mon May 9 07:04:18 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 9 May 2022 03:04:18 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 1429 - Failure! Message-ID: <509310726.93.1652079860841.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 1429 Status: Failure Timestamp: 20220509T044112Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220509T043001Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20220509T043001Z DOCKER_BUILD_ID: jenkins-master-20220509T043001Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220509T043001Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20220509T043001Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Mon May 9 07:04:22 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 9 May 2022 03:04:22 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 1282 - Failure! 
Message-ID: <659870788.96.1652079862891.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 1282 Status: Failure Timestamp: 20220509T043001Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220509T043001Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true
From outbackdingo at gmail.com Mon May 9 10:24:53 2022 From: outbackdingo at gmail.com (Outback Dingo) Date: Mon, 9 May 2022 17:24:53 +0700 Subject: [Starlingx-discuss] stx tools 6.0 / iso build Message-ID: I must be missing something trying to get an ISO built; it seems way too hard per these docs for such a simple process. Reading https://opendev.org/starlingx/tools it states: To generate centos-repo The centos-repo is a set of symbolic links to the packages in the mirror and the mock configuration file. It is needed to create these links if this is the first build or the mirror has been updated. generate-centos-repo.sh /import/mirrors/CentOS Where the argument to the script is the path of the mirror. To build all packages: $ cd $MY_REPO $ build-pkgs or build-pkgs --clean ; build-pkgs To generate local-repo: The local-repo has the dependency information that sequences the build order; To generate or update the information the following command needs to be executed after building modified or new packages. $ generate-local-repo.sh However, inside the container: [dingo at 25d9abcf4450 starlingx]$ generate-local-repo.sh ERROR: directory not found '/import/mirrors/CentOS/stx/CentOS' [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS mirror_dir=/import/mirrors/CentOS config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config distro=centos layer=all layer_pkg_urls= layer_image_inc_urls= layer_wheels_inc_urls= The mirror /import/mirrors/CentOS doesn't has the Binary and Source folders. Please provide a valid mirror [dingo at 25d9abcf4450 starlingx]$ $ build-iso bash: $: command not found [dingo at 25d9abcf4450 starlingx]$ build-iso 05:56:09 05:56:09 ************************* 05:56:09 Create StarlingX/CentOS Boot CD 05:56:09 ************************* 05:56:09 05:56:09 ERROR: create-yum-conf failed [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS mirror_dir=/import/mirrors/CentOS config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config distro=centos layer=all layer_pkg_urls= layer_image_inc_urls= layer_wheels_inc_urls= The mirror /import/mirrors/CentOS doesn't has the Binary and Source folders. Please provide a valid mirror
From outbackdingo at gmail.com Mon May 9 10:25:44 2022 From: outbackdingo at gmail.com (Outback Dingo) Date: Mon, 9 May 2022 17:25:44 +0700 Subject: [Starlingx-discuss] pxe boot network / 6.0 Message-ID: I'm trying to deploy 6 nodes with 512G RAM and 24TB disk each. I'm envisioning 2-3 AIO DUPLEX Controller and 4 WORKER/STORAGE. So I deployed a single AIO duplex node, which works fine, then had issues with pxe booting other nodes from it, due to switch configuration. Without having to reinvent the switch topology to accommodate the pxe 169.254.202.1 pxe network, I'm reading below where it states: PXE Boot Network VERSION You can set up a PXE boot network for booting all nodes to allow a non-standard management network configuration.
The internal management network is used for PXE booting of new hosts and the PXE boot network is not required. However there are scenarios where the internal management network cannot be used for PXE booting of new hosts. For example, if the internal management network needs to be on a VLAN-tagged network for deployment reasons, or if it must support IPv6, you must configure the optional untagged PXE boot network for PXE booting of new hosts using IPv4. According to: https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.html#configuring-a-pxe-boot-server-r6 Okay, so it seems I need to go this route and pxe boot all nodes. However, reading the configure a pxe boot server doc, it states "You can optionally set up a PXE Boot Server to support controller-0 initialization." So this tells me that it can only be used for pxe booting controller-0? https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.html#configuring-a-pxe-boot-server-r6 so questions: 1) can I boot multiple aio duplex nodes from a pxe boot server? 2) once controller-0 is up, how does controller-1 connect to it? 3) which then I have to inquire: how does discovery work for controller-1 on controller-0? 4) in your opinions, which are welcome, how should I be deploying 6 nodes with this much memory and storage per node, to use 3 as controllers, and all 6 as compute/storage nodes?
From outbackdingo at gmail.com Mon May 9 10:26:57 2022 From: outbackdingo at gmail.com (Outback Dingo) Date: Mon, 9 May 2022 17:26:57 +0700 Subject: [Starlingx-discuss] multi-node AIO / PXE booting Message-ID: Not sure why my emails are not going through, so again.... I have 6 nodes, 512Gb Memory and 24TB disk (8x3TB) per node. Can I deploy a duplex aio on 2-3 nodes, and worker/storage on 4 nodes? Second to that, since we cannot configure our network environment to support the pxe boot on 169.254 for the other controller-1 / worker nodes, it states here https://docs.starlingx.io/planning/kubernetes/network-planning-the-pxe-boot-network.html that we can create a pxe boot environment to boot all nodes... okay: PXE Boot Network VERSION You can set up a PXE boot network for booting all nodes to allow a non-standard management network configuration. However, this doc states https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.html Configure a PXE Boot Server VERSION You can optionally set up a PXE Boot Server to support controller-0 initialization. so questions: 1) can I boot all nodes from a pxe server? 2) after booting/installing the controller-0 node, can I then pxe boot/install the controller-1 node? 3) in this scenario, how does controller-0 discover controller-1 so I can set the host with "system host-update 2 personality=controller"? 4) does the same apply to workers? 5) if you have 6 fully configured servers each with 24TB and 512Gb memory, how would you deploy to get a full 6 nodes with storage and compute?
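To make the external PXE server option above concrete, a minimal dnsmasq sketch for serving an installer on an untagged boot network; the interface name, address range, and tftp paths are illustrative assumptions, not taken from this thread or the StarlingX docs:

    # /etc/dnsmasq.conf -- minimal PXE boot server sketch (illustrative values)
    interface=eno1                 # NIC facing the untagged pxeboot network
    dhcp-range=192.168.100.100,192.168.100.150,12h
    enable-tftp
    tftp-root=/var/lib/tftpboot    # holds pxelinux.0 plus the installer kernel/initrd
    dhcp-boot=pxelinux.0           # BIOS clients; UEFI clients need a grub/shim payload instead

Note that in the standard flow this server only has to carry controller-0; as discussed in the replies further down, the remaining hosts normally boot from controller-0's own DHCP/TFTP on the internal management network.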
From outbackdingo at gmail.com Mon May 9 11:46:45 2022 From: outbackdingo at gmail.com (Outback Dingo) Date: Mon, 9 May 2022 18:46:45 +0700 Subject: [Starlingx-discuss] stx 6.0 in a VM / Saving config in sysinv database fails timeout Message-ID: curiously, while trying to do some testing on pxe, i noticed starlingx bootstrap times out in a VM, i found this in sysinv.log TASK [bootstrap/persist-config : debug] *********************************************************************************************************************** Monday 09 May 2022 11:27:51 +0000 (0:29:57.887) 0:39:28.205 ************ ok: [localhost] => populate_result: changed: true failed: false failed_when_result: false msg: non-zero return code rc: 1 stderr: |- Traceback (most recent call last): File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1652093873.31-134751541330122/populate_initial_config.py", line 1119, in inventory_config_complete_wait(client, controller) File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1652093873.31-134751541330122/populate_initial_config.py", line 1073, in inventory_config_complete_wait wait_initial_inventory_complete(client, controller) File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1652093873.31-134751541330122/populate_initial_config.py", line 1064, in wait_initial_inventory_complete raise ConfigFail('Timeout waiting for controller inventory ' __main__.ConfigFail: Timeout waiting for controller inventory completion stderr_lines: - 'Traceback (most recent call last):' - ' File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1652093873.31-134751541330122/populate_initial_config.py", line 1119, in ' - ' inventory_config_complete_wait(client, controller)' - ' File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1652093873.31-134751541330122/populate_initial_config.py", line 1073, in inventory_config_complete_wait' - ' wait_initial_inventory_complete(client, controller)' - ' File "/tmp/.ansible-sysadmin/tmp/ansible-tmp-1652093873.31-134751541330122/populate_initial_config.py", line 1064, in wait_initial_inventory_complete' - ' raise ConfigFail(''Timeout waiting for controller inventory ''' - '__main__.ConfigFail: Timeout waiting for controller inventory completion' stdout: |- Populating system config... System config completed. Populating load config... Load config completed. Populating management network... Populating pxeboot network... Populating oam network... Populating multicast network... Populating cluster host network... Populating cluster pod network... Populating cluster service network... Network config completed. Populating/Updating DNS config... DNS config completed. Populating/Updating docker registry config... Docker registry config completed. Populating/Updating kubernetes san list... Populating/Updating kubernetes config... Kubernetes config completed. Management mac = 00:00:00:00:00:00 Root fs device = /dev/disk/by-path/pci-0000:02:02.0-ata-2.0 Boot device = /dev/disk/by-path/pci-0000:02:02.0-ata-2.0 Console = tty0 Tboot = false Install output = text Host values = {'tboot': 'false', 'install_output': 'text', 'rootfs_device': '/dev/disk/by-path/pci-0000:02:02.0-ata-2.0', 'boot_device': '/dev/disk/by-path/pci-0000:02:02.0-ata-2.0', 'availability': 'offline', 'mgmt_mac': '00:00:00:00:00:00', 'console': 'tty0', 'mgmt_ip': '192.168.204.2', 'hostname': 'controller-0', 'operational': 'disabled', 'invprovision': 'provisioning', 'administrative': 'locked', 'personality': 'controller'} Host controller-0 created. Failed to update the initial system config. stdout_lines: - Populating system config... 
- System config completed. - Populating load config... - Load config completed. - Populating management network... - Populating pxeboot network... - Populating oam network... - Populating multicast network... - Populating cluster host network... - Populating cluster pod network... - Populating cluster service network... - Network config completed. - Populating/Updating DNS config... - DNS config completed. - Populating/Updating docker registry config... - Docker registry config completed. - Populating/Updating kubernetes san list... - Populating/Updating kubernetes config... - Kubernetes config completed. - Management mac = 00:00:00:00:00:00 - Root fs device = /dev/disk/by-path/pci-0000:02:02.0-ata-2.0 - Boot device = /dev/disk/by-path/pci-0000:02:02.0-ata-2.0 - Console = tty0 - Tboot = false - Install output = text - 'Host values = {''tboot'': ''false'', ''install_output'': ''text'', ''rootfs_device'': ''/dev/disk/by-path/pci-0000:02:02.0-ata-2.0'', ''boot_device'': ''/dev/disk/by-path/pci-0000:02:02.0-ata-2.0'', ''availability'': ''offline'', ''mgmt_mac'': ''00:00:00:00:00:00'', ''console'': ''tty0'', ''mgmt_ip'': ''192.168.204.2'', ''hostname'': ''controller-0'', ''operational'': ''disabled'', ''invprovision'': ''provisioning'', ''administrative'': ''locked'', ''personality'': ''controller''}' - Host controller-0 created. - Failed to update the initial system config. TASK [bootstrap/persist-config : Fail if populate config script throws an exception] ************************************************************************** Monday 09 May 2022 11:27:51 +0000 (0:00:00.069) 0:39:28.275 ************ fatal: [localhost]: FAILED! => changed=false msg: Failed to provision initial system configuration. will fail the timeout sysinv 2022-05-09 11:24:21.703 98797 ERROR sysinv.conductor.kube_app [-] Kubernetes is not configured. API operations will not be available.: KubeNotConfigured: Kubernetes is not configured. API operations will not be available. sysinv 2022-05-09 11:25:03.484 91940 INFO sysinv.agent.manager [-] _report_to_conductor initial_reports_required=set(['disk', 'lvg', 'pv', 'memory']) sysinv 2022-05-09 11:25:03.485 91940 INFO sysinv.agent.manager [-] Sysinv Agent audit running inv_get_and_report. 
sysinv 2022-05-09 11:25:04.395 91940 WARNING sysinv.agent.pci [-] Enabling device ens5 to query link speed: CalledProcessError: Command '['query_pci_id', '-v 0x8086', '-d 0x100e']' returned non-zero exit status 1 sysinv 2022-05-09 11:25:04.403 91940 WARNING sysinv.agent.pci [-] ATTR speed unknown for: ens5 (flags: 0x1002): IOError: [Errno 22] Invalid argument sysinv 2022-05-09 11:25:04.404 91940 WARNING sysinv.agent.pci [-] Disabling device ens5 after querying link speed: IOError: [Errno 22] Invalid argument sysinv 2022-05-09 11:25:04.527 98797 INFO sysinv.conductor.manager [-] Updating platform data for host: 7be69b7e-5660-48d8-ad95-28af6222da2e with: {u'first_report': True} sysinv 2022-05-09 11:25:04.549 91940 INFO sysinv.agent.manager [-] Sysinv Agent platform update by host: {'first_report': True} sysinv 2022-05-09 11:25:04.549 91940 INFO sysinv.agent.manager [-] Agent found matching ihost: 7be69b7e-5660-48d8-ad95-28af6222da2e sysinv 2022-05-09 11:25:04.550 91940 INFO sysinv.agent.manager [-] _report_to_conductor initial_reports_required=set(['disk', 'lvg', 'pv', 'memory']) sysinv 2022-05-09 11:25:05.106 98797 INFO sysinv.conductor.manager [-] port 852027ed-3a24-4658-aa62-6f8273ce39bb update attr: {'speed': u'1000', 'sriov_vfs_pci_address': u'', 'sriov_totalvfs': None, 'sriov_vf_pdevice_id': None, 'dpdksupport': False, 'sriov_numvfs': 0, 'driver': u'e1000', 'sriov_vf_driver': None} sysinv 2022-05-09 11:25:05.289 98797 INFO sysinv.conductor.manager [-] port 8a1db02d-ad08-4eb2-b4a7-a42ca5bfcfef update attr: {'speed': None, 'sriov_vfs_pci_address': u'', 'sriov_totalvfs': None, 'sriov_vf_pdevice_id': None, 'dpdksupport': False, 'sriov_numvfs': 0, 'driver': u'e1000', 'sriov_vf_driver': None} sysinv 2022-05-09 11:25:05.376 98797 INFO sysinv.conductor.manager [-] update 0000:00:01.1 attr: {'sriov_numvfs': 0, 'driver': u'ata_piix', 'sriov_vf_driver': None, 'sriov_vf_pdevice_id': None, 'psvendor': u'XenSource, Inc.', 'extra_info': None, 'pdevice_id': u'7010', 'pclass': u'IDE interface', 'psdevice': u'Device 0001', 'sriov_vfs_pci_address': u'', 'pvendor': u'Intel Corporation', 'pvendor_id': u'8086', 'pclass_id': u'010180', 'sriov_totalvfs': None} sysinv 2022-05-09 11:25:05.404 98797 INFO sysinv.conductor.manager [-] update 0000:00:01.2 attr: {'sriov_numvfs': 0, 'driver': u'uhci_hcd', 'sriov_vf_driver': None, 'sriov_vf_pdevice_id': None, 'psvendor': u'XenSource, Inc.', 'extra_info': None, 'pdevice_id': u'7020', 'pclass': u'USB controller', 'psdevice': u'Device 0001', 'sriov_vfs_pci_address': u'', 'pvendor': u'Intel Corporation', 'pvendor_id': u'8086', 'pclass_id': u'0c0300', 'sriov_totalvfs': None} sysinv 2022-05-09 11:25:05.432 98797 INFO sysinv.conductor.manager [-] update 0000:00:02.0 attr: {'sriov_numvfs': 0, 'driver': None, 'sriov_vf_driver': None, 'sriov_vf_pdevice_id': None, 'psvendor': u'Device 0001', 'extra_info': None, 'pdevice_id': u'1111', 'pclass': u'VGA compatible controller', 'psdevice': u'Device 0001', 'sriov_vfs_pci_address': u'', 'pvendor': u'Vendor 1234', 'pvendor_id': u'1234', 'pclass_id': u'030000', 'sriov_totalvfs': None} sysinv 2022-05-09 11:25:05.460 98797 INFO sysinv.conductor.manager [-] update 0000:00:03.0 attr: {'sriov_numvfs': 0, 'driver': None, 'sriov_vf_driver': None, 'sriov_vf_pdevice_id': None, 'psvendor': u'XenSource, Inc.', 'extra_info': None, 'pdevice_id': u'0001', 'pclass': u'SCSI storage controller', 'psdevice': u'Xen Platform Device', 'sriov_vfs_pci_address': u'', 'pvendor': u'XenSource, Inc.', 'pvendor_id': u'5853', 'pclass_id': u'010000', 'sriov_totalvfs': 
None} sysinv 2022-05-09 11:25:05.545 98797 INFO sysinv.conductor.manager [-] Already in db numa_node=0 mynuma_nodes=[0, 2, 4, 6, 8, 10, 12, 14] sysinv 2022-05-09 11:25:05.545 98797 INFO sysinv.conductor.manager [-] Already in db numa_node=2 mynuma_nodes=[0, 2, 4, 6, 8, 10, 12, 14] sysinv 2022-05-09 11:25:05.545 98797 INFO sysinv.conductor.manager [-] Already in db numa_node=4 mynuma_nodes=[0, 2, 4, 6, 8, 10, 12, 14] sysinv 2022-05-09 11:25:05.546 98797 INFO sysinv.conductor.manager [-] Already in db numa_node=6 mynuma_nodes=[0, 2, 4, 6, 8, 10, 12, 14] sysinv 2022-05-09 11:25:05.548 98797 INFO sysinv.conductor.manager [-] Already in db numa_node=8 mynuma_nodes=[0, 2, 4, 6, 8, 10, 12, 14] sysinv 2022-05-09 11:25:05.548 98797 INFO sysinv.conductor.manager [-] Already in db numa_node=10 mynuma_nodes=[0, 2, 4, 6, 8, 10, 12, 14] sysinv 2022-05-09 11:25:05.549 98797 INFO sysinv.conductor.manager [-] Already in db numa_node=12 mynuma_nodes=[0, 2, 4, 6, 8, 10, 12, 14] sysinv 2022-05-09 11:25:05.549 98797 INFO sysinv.conductor.manager [-] Already in db numa_node=14 mynuma_nodes=[0, 2, 4, 6, 8, 10, 12, 14] sysinv 2022-05-09 11:25:05.598 98797 INFO sysinv.conductor.manager [-] Logical CPU topology: host:controller-0 (controller,worker,lowlatency), sockets:8, cores/socket=1, threads/core=1, reference:current (unchanged) sysinv 2022-05-09 11:25:05.598 98797 INFO sysinv.conductor.manager [-] cpu_id : 0 1 2 3 4 5 6 7 sysinv 2022-05-09 11:25:05.599 98797 INFO sysinv.conductor.manager [-] socket_id : 0 2 4 6 8 10 12 14 sysinv 2022-05-09 11:25:05.600 98797 INFO sysinv.conductor.manager [-] core_id : 0 0 0 0 0 0 0 0 sysinv 2022-05-09 11:25:05.601 98797 INFO sysinv.conductor.manager [-] thread_id : 0 0 0 0 0 0 0 0 sysinv 2022-05-09 11:25:05.615 98797 INFO sysinv.conductor.manager [-] update_grub_config, host uuid: (7be69b7e-5660-48d8-ad95-28af6222da2e), force: (False) sysinv 2022-05-09 11:25:05.616 98797 INFO sysinv.conductor.manager [-] _config_update_hosts personalities=['controller', 'worker'] host_uuids=[u'7be69b7e-5660-48d8-ad95-28af6222da2e'] reboot=False config_uuid=0fd14450-6057-43bc-a429-17de31115a16 tb= File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 9644, in update_grub_config host_uuids=[host_uuid]) sysinv 2022-05-09 11:25:05.630 98797 INFO sysinv.conductor.manager [-] Setting config target of host 'controller-0' to '8fd14450-6057-43bc-a429-17de31115a16'. sysinv 2022-05-09 11:25:05.662 98797 INFO sysinv.conductor.manager [-] _config_update_hosts config_uuid=0fd14450-6057-43bc-a429-17de31115a16 sysinv 2022-05-09 11:25:05.668 98797 INFO sysinv.conductor.manager [-] applying runtime manifest config_uuid=0fd14450-6057-43bc-a429-17de31115a16, classes: ['platform::compute::grub::runtime', 'platform::compute::config::runtime'] sysinv 2022-05-09 11:25:05.685 98797 INFO sysinv.conductor.manager [-] Cannot generate the configuration for controller-0, the host is not inventoried yet. 
sysinv 2022-05-09 11:25:05.685 98797 INFO sysinv.conductor.manager [-] Evaluating apps reapply {'type': 'runtime-apply-puppet'} sysinv 2022-05-09 11:25:05.685 98797 INFO sysinv.conductor.manager [-] Apps reapply order: [] sysinv 2022-05-09 11:25:05.687 98797 INFO sysinv.agent.rpcapi [-] config_apply_runtime_manifest: fanout_cast: sending config 0fd14450-6057-43bc-a429-17de31115a16 {'classes': ['platform::compute::grub::runtime', 'platform::compute::config::runtime'], 'force': False, 'personalities': ['controller', 'worker'], 'host_uuids': [u'7be69b7e-5660-48d8-ad95-28af6222da2e']} to agent sysinv 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task [-] Error during AgentManager._agent_audit: [Errno 2] No such file or directory: '/sys/devices/system/node/node1/hugepages': OSError: [Errno 2] No such file or directory: '/sys/devices/system/node/node1/hugepages' 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task Traceback (most recent call last): 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/periodic_task.py", line 180, in run_periodic_tasks 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task task(self, context) 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/agent/manager.py", line 1253, in _agent_audit 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task force_updates=None) 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task File "/usr/lib/python2.7/site-packages/oslo_concurrency/lockutils.py", line 328, in inner 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task return f(*args, **kwargs) 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/agent/manager.py", line 1272, in agent_audit 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task self.ihost_inv_get_and_report(icontext) 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/agent/manager.py", line 878, in ihost_inv_get_and_report 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task imemory = self._inode_operator.inodes_get_imemory() 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/agent/node.py", line 566, in inodes_get_imemory 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task imemory = self._inode_get_memory_hugepages() 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/agent/node.py", line 347, in _inode_get_memory_hugepages 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task subdirs = self._get_immediate_subdirs(hugepages) 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task File "/usr/lib64/python2.7/site-packages/sysinv/agent/node.py", line 279, in _get_immediate_subdirs 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task return [name for name in listdir(dir) 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task OSError: [Errno 2] No such file or directory: '/sys/devices/system/node/node1/hugepages' 2022-05-09 11:25:05.699 91940 ERROR sysinv.openstack.common.periodic_task sysinv 2022-05-09 11:25:21.715 98797 ERROR 
sysinv.conductor.kube_app [-] Kubernetes is not configured. API operations will not be available.: KubeNotConfigured: Kubernetes is not configured. API operations will not be available.
From ildiko.vancsa at gmail.com Mon May 9 19:23:49 2022 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 9 May 2022 12:23:49 -0700 Subject: [Starlingx-discuss] Call for a virtual project update Message-ID: <5E678539-D7BB-40C0-B5AC-84833EB3EA2D@gmail.com> Hi everyone, As the Berlin Summit is approaching in June, we are collecting __video recordings to capture a project update__ from each OpenInfra community to showcase what each project has accomplished in the past year/release. We will post all recordings on the OpenInfra Foundation YouTube channel and promote them at the Summit, if they are available before the event. __We recommend the recordings to be less than 10 minutes long.__ Slides are not mandatory, but it could be good to have some to support the information the presenter is sharing in the video. You can use the overview slide deck[1] as a slide template. If you can submit your project recording to me __by Friday, May 27__, we'd love to promote them at the upcoming Berlin Summit. If you prefer to submit it after the Summit, I'll send out another reminder after the event to collect any remaining recordings. Please let me know if you have any questions. Thanks, Ildikó [1] https://opendev.org/starlingx/docs/src/branch/master/resources/StarlingX_Onboarding_Deck_for_Web_October_2020.pptx
From ildiko.vancsa at gmail.com Mon May 9 19:42:01 2022 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 9 May 2022 12:42:01 -0700 Subject: [Starlingx-discuss] Affiliation data on the Bitergia dashboard Message-ID: <353F1751-3A07-4FCB-AB31-57E97E4CCD06@gmail.com> Hi, I'm reaching out to you about the Bitergia dashboard that the project has to capture community metrics. The Bitergia team will be on site for the OpenInfra Summit in Berlin and as part of their activities they will showcase the metrics dashboard for some of the OpenInfra projects, including StarlingX. To ensure that the data that they show is correct, I would like to remind everyone to please check your affiliation information on the dashboard[1]. If your affiliation is incorrect please reach out to me ASAP and I will get it fixed for you! Please let me know if you have any questions. Thanks, Ildikó
[1] https://starlingx.biterg.io/app/kibana#/dashboard/cce903f0-2892-11e9-9a7c-254518135e42?_g=(refreshInterval:(pause:!t,value:0),time:(from:now-1y,mode:quick,to:now))&_a=(description:'Custom%20Affiliations%20panel%20for%20OpenStack',filters:!(('$state':(store:appState),meta:(alias:Bots,disabled:!f,index:git,key:author_bot,negate:!t,params:(query:!t,type:phrase),type:phrase,value:true),query:(match:(author_bot:(query:!t,type:phrase))))),fullScreenMode:!f,options:(darkTheme:!f,useMargins:!t),panels:!((embeddableConfig:(title:'Authors%20by%20Organization',vis:(legendOpen:!t)),gridData:(h:20,i:'6',w:28,x:20,y:0),id:affiliations_authors_organizations,panelIndex:'6',title:'Authors%20by%20Organization',type:visualization,version:'6.8.6'),(embeddableConfig:(title:Authors,vis:(params:(config:(searchKeyword:''),sort:(columnIndex:!n,direction:!n)))),gridData:(h:36,i:'8',w:28,x:0,y:20),id:affiliations_authors,panelIndex:'8',title:Authors,type:visualization,version:'6.8.6'),(embeddableConfig:(title:'Data%20Sources'),gridData:(h:12,i:'9',w:20,x:0,y:8),id:affiliations_data_sources,panelIndex:'9',title:'Data%20Sources',type:visualization,version:'6.8.6'),(gridData:(h:8,i:'10',w:20,x:0,y:0),id:'99230770-e0f1-11e8-8aac-ef7fd4d8cbad',panelIndex:'10',title:'Data%20Source',type:visualization,version:'6.8.6'),(gridData:(h:8,i:'11',w:48,x:0,y:56),id:'8d619890-136c-11e9-8aac-ef7fd4d8cbad',panelIndex:'11',title:Affiliations,type:visualization,version:'6.8.6'),(embeddableConfig:(vis:(params:(config:(searchKeyword:''),sort:(columnIndex:!n,direction:!n)))),gridData:(h:36,i:'12',w:20,x:28,y:20),id:c0327c20-2a38-11e9-9a7c-254518135e42,panelIndex:'12',title:Organizations,type:visualization,version:'6.8.6')),query:(language:lucene,query:(query_string:(analyze_wildcard:!t,default_field:'*',query:'*'))),timeRestore:!f,title:Affiliations,viewMode:view) From ildiko.vancsa at gmail.com Mon May 9 20:02:07 2022 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 9 May 2022 13:02:07 -0700 Subject: [Starlingx-discuss] StarlingX electorate Message-ID: <4489FE7E-0B66-4D49-BF20-412F77B4D912@gmail.com> Hi, In preparation to the TSC election that kicks off tomorrow[2] I generated the electorate for this election. 
Please check if all active project contributors are listed on the etherpad and reach out to the election officials[1] if you see any issues: https://etherpad.opendev.org/p/starlingx-electorate-may-2022 Thanks, [1] https://docs.starlingx.io/election/#election-officials [2] https://docs.starlingx.io/election/#starlingx-pl-tl-and-tsc-elections-timeline From sebastian-valentin.bran at intel.com Tue May 10 07:57:15 2022 From: sebastian-valentin.bran at intel.com (Bran, Sebastian-Valentin) Date: Tue, 10 May 2022 07:57:15 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220508T230632Z Message-ID: Sanity Test from 2022-May-08 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220508T230632Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220508T230632Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by these LP bugs: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready, https://bugs.launchpad.net/starlingx/+bug/1971981 - nginx-ingress-controller apply-failed during setup of the StarlingX Kind regards, Validation team [Logo Description automatically generated] Bran Sebastian-Valentin Software Engineer PMCE TEAM Personal Mobile: +40 57487760 sebastian-valentin.bran at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From bogdan-iulian.andrei at intel.com Tue May 10 08:00:35 2022 From: bogdan-iulian.andrei at intel.com (Andrei, Bogdan-Iulian) Date: Tue, 10 May 2022 08:00:35 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220509T230607Z Message-ID: Sanity Test from 2022-May-10 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220509T230607Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220509T230607Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by these LP bugs: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready, https://bugs.launchpad.net/starlingx/+bug/1971981 - nginx-ingress-controller apply-failed during setup of the StarlingX Kind regards, Validation team [Logo Description automatically generated] Andrei Bogdan-Iulian Software Engineer PMCE TEAM Personal Mobile: +40 754905864 bogdan-iulian.andrei at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From pvmpublic at gmail.com Tue May 10 11:48:41 2022 From: pvmpublic at gmail.com (Pratik M.) Date: Tue, 10 May 2022 17:18:41 +0530 Subject: [Starlingx-discuss] multi-node AIO / PXE booting In-Reply-To: References: Message-ID: On Mon, May 9, 2022 at 4:01 PM Outback Dingo wrote: > can i deploy a duplex aio on 2-3 nodes? and worker/storage on 4 nodes Duplex AIO = Controller (and storage and workers) on 2 nodes. 
There is no 3-controller option in StarlingX. So typically you would get 2 controllers and 6 workers. Pl. see: https://docs.starlingx.io/deploy/index-deploy-da06a98b83b1.html > so questions: > 1) can i boot all nodes from pxe server? In the standard/typical deployment, only controller-0 is PXE booted from an "external" PXE server. Other hosts PXE boot off controller-0 on the cluster-internal mgmt/cluster network. The second link you refer to is about PXE booting controller-0. But the first link you refer to allows PXE booting ALL nodes from an external network. I haven't tried this but it seems supported. For e.g. https://docs.starlingx.io/developer_resources/stx_ipv6_deployment.html > 2) after booting installing the controller-0 node i can then pxe boot > install controller-1 node > 3) in this scenerio how does controller-0 discover controller-1 so i > can set the host "system host-update 2 personality=controller" In the typical deployment, once controller-0 is up, it runs a DHCP and TFTP server on the internal mgmt/cluster network. Controller-1 and the other worker nodes are supposed to be configured to network boot off the NIC connected to this internal network. When they boot, controller-0 will "discover" them and add them to the host inventory. https://docs.starlingx.io/planning/kubernetes/starlingx-boot-sequence-considerations.html https://docs.starlingx.io/planning/kubernetes/internal-management-network-planning.html#internal-management-network-planning > 4) does the same apply to workers Yes > > 5) if you have 6 fully configured server each with 24TB and 512Gb > memory, how would you deploy to get a full 6 nodes with storage and > compute? You want more storage than the controllers can provide? Pl see if this helps: https://docs.starlingx.io/planning/kubernetes/storage-planning-storage-resources.html Thanks -------------- next part -------------- An HTML attachment was scrubbed... URL:
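Following on from the storage-planning pointer, attaching extra disks as Ceph OSDs is done per host with the sysinv CLI; a minimal sketch, assuming the storage-on-controllers model from that page (the hostname and the uuid placeholder are illustrative, not from this thread):

    # List the host's disks and note the uuid of a spare one:
    system host-disk-list controller-0
    # Assign that disk as a Ceph OSD (uuid below is a placeholder):
    system host-stor-add controller-0 osd <disk-uuid>
    # Confirm the OSD was created:
    system host-stor-list controller-0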
From outbackdingo at gmail.com Tue May 10 12:06:49 2022 From: outbackdingo at gmail.com (Outback Dingo) Date: Tue, 10 May 2022 19:06:49 +0700 Subject: [Starlingx-discuss] multi-node AIO / PXE booting In-Reply-To: References: Message-ID: On Tue, May 10, 2022 at 6:48 PM Pratik M. wrote: > > On Mon, May 9, 2022 at 4:01 PM Outback Dingo wrote: > > > can i deploy a duplex aio on 2-3 nodes? and worker/storage on 4 nodes > > Duplex AIO = Controller (and storage and workers) on 2 nodes. There is no 3-controller option in StarlingX. So typically you would get 2 controllers and 6 workers. Pl. see: > https://docs.starlingx.io/deploy/index-deploy-da06a98b83b1.html > So never more than 2 controllers.... I can pxe boot and manually add all hosts from a central pxe server, since we can't use the stx 169.254 network, due to bonds and vlans. That leaves me with: what do I do for storage? Can a compute node be a storage node, as in all 6 nodes being compute/storage (ceph)? What about controller-0 storage... meaning none except for boot drives, i.e.: control-1 2x128Gb boot ssd, 1 x 1TB, document states I need 2 500GB+ drives control-2 2x128Gb boot ssd, 1 x 1TB, document states I need 2 500GB+ drives compute-1 1x1TB boot drive, and 8x3.8TB drives compute-2 1x1TB boot drive, and 8x3.8TB drives compute-3 1x1TB boot drive, and 8x3.8TB drives compute-4 1x1TB boot drive, and 8x3.8TB drives compute-5 1x1TB boot drive, and 8x3.8TB drives compute-6 1x1TB boot drive, and 8x3.8TB drives Not sure this scenario is even viable without controller-0/controller-1 having 0 ceph storage, as it would then all be on the nodes > > > so questions: > > 1) can i boot all nodes from pxe server? > > In the standard/typical deployment, only controller-0 is PXE booted from an "external" PXE server. Other hosts PXE boot off the controller-0 on the cluster-internal mgmt/cluster network. The second link you refer to, is about PXE booting the controller-0. > > But, the first link you refer to, allows PXE booting ALL nodes from an external network. I haven't tried this but it seems supported. For e.g. > https://docs.starlingx.io/developer_resources/stx_ipv6_deployment.html > > > 2) after booting installing the controller-0 node i can then pxe boot > > install controller-1 node > > 3) in this scenerio how does controller-0 discover controller-1 so i > > can set the host "system host-update 2 personality=controller" > > In the typical deployment, once controller-0 is up, it runs a DHCP and TFTP server on the internal mgmt/cluster network. The controller-1 and other worker nodes are supposed to be configured to network boot off the NIC connected to this internal network. When they boot, controller-0 will "discover" them and add them to the host inventory. > > https://docs.starlingx.io/planning/kubernetes/starlingx-boot-sequence-considerations.html > https://docs.starlingx.io/planning/kubernetes/internal-management-network-planning.html#internal-management-network-planning > > > 4) does the same apply to workers > > Yes > > > > > 5) if you have 6 fully configured server each with 24TB and 512Gb > > memory, how would you deploy to get a full 6 nodes with storage and > > compute? > > You want more storage than controllers can provide? Pl see if this helps: > https://docs.starlingx.io/planning/kubernetes/storage-planning-storage-resources.html > > Thanks
From outbackdingo at gmail.com Tue May 10 12:20:42 2022 From: outbackdingo at gmail.com (Outback Dingo) Date: Tue, 10 May 2022 19:20:42 +0700 Subject: [Starlingx-discuss] multi-node AIO / PXE booting In-Reply-To: References: Message-ID: On Tue, May 10, 2022 at 6:48 PM Pratik M. wrote: > > > On Mon, May 9, 2022 at 4:01 PM Outback Dingo wrote: > > > can i deploy a duplex aio on 2-3 nodes? and worker/storage on 4 nodes > > Duplex AIO = Controller (and storage and workers) on 2 nodes. There is no 3-controller option in StarlingX. So typically you would get 2 controllers and 6 workers. Pl. see: > https://docs.starlingx.io/deploy/index-deploy-da06a98b83b1.html According to this document: Standard with Storage Cluster on Controller Nodes A two node HA controller + storage node cluster, managing up to 200 worker nodes. Standard with Storage Cluster on dedicated Storage Nodes A two node HA controller node cluster with a 2-9 node Ceph storage cluster, managing up to 200 worker nodes. which basically says I can't even put storage on compute/worker nodes > > so questions: > > 1) can i boot all nodes from pxe server? > > In the standard/typical deployment, only controller-0 is PXE booted from an "external" PXE server.
Other hosts PXE boot off the controller-0 on the cluster-internal mgmt/cluster network. The second link you refer to, is about PXE booting the controller-0. > > But, the first link you refer to, allows PXE booting ALL nodes from an external network. I haven't tried this but it seems supported. For e.g. > https://docs.starlingx.io/developer_resources/stx_ipv6_deployment.html > > > 2) after booting installing the controller-0 node i can then pxe boot > > install controller-1 node > > 3) in this scenerio how does controller-0 discover controller-1 so i > > can set the host "system host-update 2 personality=controller" > > In the typical deployment, once controller-0 is up, it runs a DHCP and TFTP server on the internal mgmt/cluster network. The controller-1 and other worker nodes are supposed to be configured to network boot off the NIC connected to this internal network. When they boot, controller-0 will "discover" them and add them to the host inventory. > > https://docs.starlingx.io/planning/kubernetes/starlingx-boot-sequence-considerations.html > https://docs.starlingx.io/planning/kubernetes/internal-management-network-planning.html#internal-management-network-planning > > > 4) does the same apply to workers > > Yes > > > > > 5) if you have 6 fully configured server each with 24TB and 512Gb > > memory, how would you deploy to get a full 6 nodes with storage and > > compute? > > You want more storage than controllers can provide? Pl see if this helps: > https://docs.starlingx.io/planning/kubernetes/storage-planning-storage-resources.html > > Thanks From Ghada.Khalil at windriver.com Tue May 10 13:56:01 2022 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Tue, 10 May 2022 13:56:01 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220509T230607Z In-Reply-To: References: Message-ID: Hi Bogdan, Regarding the nginx-ingress-controller failure, I believe the sanity systems use a private registry. Did you pull the new images as per this stx email thread: http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/012955.html ? Thanks, Ghada From: Andrei, Bogdan-Iulian Sent: Tuesday, May 10, 2022 4:01 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220509T230607Z [Please note: This e-mail is from an EXTERNAL e-mail address] Sanity Test from 2022-May-10 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220509T230607Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220509T230607Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by these LP bugs: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready, https://bugs.launchpad.net/starlingx/+bug/1971981 - nginx-ingress-controller apply-failed during setup of the StarlingX Kind regards, Validation team [Logo Description automatically generated] Andrei Bogdan-Iulian Software Engineer PMCE TEAM Personal Mobile: +40 754905864 bogdan-iulian.andrei at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png
Type: image/png
Size: 3903 bytes
Desc: image001.png
URL: 

From scott.little at windriver.com Tue May 10 19:07:59 2022
From: scott.little at windriver.com (Scott Little)
Date: Tue, 10 May 2022 15:07:59 -0400
Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 1282 - Failure!
In-Reply-To: <659870788.96.1652079862891.JavaMail.javamailuser@localhost>
References: <659870788.96.1652079862891.JavaMail.javamailuser@localhost>
Message-ID: 

This is another case of the intermittent golang build error ...

https://bugs.launchpad.net/starlingx/+bug/1968583

Scott

On 2022-05-09 03:04, build.starlingx at gmail.com wrote:
> [Please note: This e-mail is from an EXTERNAL e-mail address]
>
> Project: STX_build_master_master
> Build #: 1282
> Status: Failure
> Timestamp: 20220509T043001Z
> Branch: master
>
> Check logs at:
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220509T043001Z/logs
> --------------------------------------------------------------------------------
> Parameters
>
> BUILD_CONTAINERS_DEV: false
> BUILD_CONTAINERS_STABLE: false
> FORCE_BUILD: true
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From scott.little at windriver.com Tue May 10 19:19:47 2022
From: scott.little at windriver.com (Scott Little)
Date: Tue, 10 May 2022 15:19:47 -0400
Subject: [Starlingx-discuss] stx tools 6.0 / iso build
In-Reply-To: 
References: 
Message-ID: <0a1185fc-ec25-e3ff-5210-67fbbf93571c@windriver.com>

The official build instructions have moved here:

https://docs.starlingx.io/developer_resources/build_guide.html

You appear to be reading stx-tools/README.rst which is likely very out of date. I'll create a launchpad to correct that.

Scott

On 2022-05-09 06:24, Outback Dingo wrote:
> [Please note: This e-mail is from an EXTERNAL e-mail address]
>
> i must be missing something trying to get an iso built, seems way to
> hard by these docks, for such a simple procless
>
> reading https://opendev.org/starlingx/tools
>
> it states
>
> To generate centos-repo
> The centos-repo is a set of symbolic links to the packages in the
> mirror and the mock configuration file. It is needed to create these
> links if this is the first build or the mirror has been updated.
>
> generate-centos-repo.sh /import/mirrors/CentOS
> Where the argument to the script is the path of the mirror.
>
> To build all packages:
> $ cd $MY_REPO
> $ build-pkgs or build-pkgs --clean ; build-pkgs
> To generate local-repo:
> The local-repo has the dependency information that sequences the build
> order; To generate or update the information the following command
> needs to be executed after building modified or new packages.
>
> $ generate-local-repo.sh
>
>
> however inside the container,
>
> [dingo at 25d9abcf4450 starlingx]$ generate-local-repo.sh
> ERROR: directory not found '/import/mirrors/CentOS/stx/CentOS'
> [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS
> mirror_dir=/import/mirrors/CentOS
> config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config
> distro=centos
> layer=all
>
> layer_pkg_urls=
>
> layer_image_inc_urls=
>
> layer_wheels_inc_urls=
>
> The mirror /import/mirrors/CentOS doesn't has the Binary and Source
> folders.
Please provide a valid mirror > [dingo at 25d9abcf4450 starlingx]$ $ build-iso > bash: $: command not found > [dingo at 25d9abcf4450 starlingx]$ build-iso > 05:56:09 > 05:56:09 ************************* > 05:56:09 Create StarlingX/CentOS Boot CD > 05:56:09 ************************* > 05:56:09 > 05:56:09 ERROR: create-yum-conf failed > [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS > mirror_dir=/import/mirrors/CentOS > config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config > distro=centos > layer=all > > layer_pkg_urls= > > layer_image_inc_urls= > > layer_wheels_inc_urls= > > The mirror /import/mirrors/CentOS doesn't has the Binary and Source > folders. Please provide a valid mirror > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Tue May 10 20:42:00 2022 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 10 May 2022 13:42:00 -0700 Subject: [Starlingx-discuss] StarlingX TSC election - Nomination period started Message-ID: Hi StarlingX Community, Nominations for the 2 Technical Steering Committee positions are now open and will remain open until __May 17, 2022, 20:00 UTC__. All nominations must be submitted as a text file to the starlingx/election repository as explained on the election website[1]. Please note that the name of the file should match the email address in your Gerrit configuration. Candidates for the Technical Steering Committee Positions: Any contributing community member can propose their candidacy for an available, directly-elected TSC seat. The election will be held from May 24, 2022, 20:00 UTC through to May 31, 2022, 20:00 UTC. The electorate are the community members that are also contributors for one of the official teams[2] or served in a leadership role (TSC, PL, TL) over the 12-month timeframe May 10, 2021 to May 10, 2022, as well as the contributors who are acknowledged by the TSC. Please see the website[3] for additional details about this election. Please find below the timeline: TC nomination starts @ May 10, 2022, 20:00 UTC TC nomination ends @ May 17, 2022, 20:00 UTC TC campaigning starts @ May 17, 2022, 20:00 UTC TC campaigning ends @ May 24, 2022, 20:00 UTC TC elections starts @ May 24, 2022, 20:00 UTC TC elections ends @ May 31, 2022, 20:00 UTC If you have any questions please be sure to either ask them on the mailing list or to the elections officials[4]. Thank you, [1] https://docs.starlingx.io/election/#how-to-submit-a-candidacy [2] https://docs.starlingx.io/governance/reference/tsc/projects/index.html [3] https://docs.starlingx.io/election/ [4] https://docs.starlingx.io/election/#election-officials From outbackdingo at gmail.com Wed May 11 03:07:09 2022 From: outbackdingo at gmail.com (Outback Dingo) Date: Wed, 11 May 2022 10:07:09 +0700 Subject: [Starlingx-discuss] stx tools 6.0 / iso build In-Reply-To: <0a1185fc-ec25-e3ff-5210-67fbbf93571c@windriver.com> References: <0a1185fc-ec25-e3ff-5210-67fbbf93571c@windriver.com> Message-ID: Thanks, ill give this a go yet during the cd $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools && bash download_mirror.sh normal or broken somehow ?? 
b2: ovl: Error while doing RPMdb copy-up: b2: [Errno 13] Permission denied: '/var/lib/rpm/Sigmd5' b2: ovl: Error while doing RPMdb copy-up: b2: [Errno 13] Permission denied: '/var/lib/rpm/Sigmd5' On Wed, May 11, 2022 at 2:24 AM Scott Little wrote: > > The official build instructions have moved here: > > https://docs.starlingx.io/developer_resources/build_guide.html > > You appear to be reading stx-tools/README.rst which is likely very out > off date. I'll create a launchpad to correct that. > > Scott > > > On 2022-05-09 06:24, Outback Dingo wrote: > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > > > i must be missing something trying to get an iso built, seems way to > > hard by these docks, for such a simple procless > > > > reading https://opendev.org/starlingx/tools > > > > it states > > > > To generate centos-repo > > The centos-repo is a set of symbolic links to the packages in the > > mirror and the mock configuration file. It is needed to create these > > links if this is the first build or the mirror has been updated. > > > > generate-centos-repo.sh /import/mirrors/CentOS > > Where the argument to the script is the path of the mirror. > > > > To build all packages: > > $ cd $MY_REPO > > $ build-pkgs or build-pkgs --clean ; build-pkgs > > To generate local-repo: > > The local-repo has the dependency information that sequences the build > > order; To generate or update the information the following command > > needs to be executed after building modified or new packages. > > > > $ generate-local-repo.sh > > > > > > however inside the container, > > > > [dingo at 25d9abcf4450 starlingx]$ generate-local-repo.sh > > ERROR: directory not found '/import/mirrors/CentOS/stx/CentOS' > > [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS > > mirror_dir=/import/mirrors/CentOS > > config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config > > distro=centos > > layer=all > > > > layer_pkg_urls= > > > > layer_image_inc_urls= > > > > layer_wheels_inc_urls= > > > > The mirror /import/mirrors/CentOS doesn't has the Binary and Source > > folders. Please provide a valid mirror > > [dingo at 25d9abcf4450 starlingx]$ $ build-iso > > bash: $: command not found > > [dingo at 25d9abcf4450 starlingx]$ build-iso > > 05:56:09 > > 05:56:09 ************************* > > 05:56:09 Create StarlingX/CentOS Boot CD > > 05:56:09 ************************* > > 05:56:09 > > 05:56:09 ERROR: create-yum-conf failed > > [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS > > mirror_dir=/import/mirrors/CentOS > > config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config > > distro=centos > > layer=all > > > > layer_pkg_urls= > > > > layer_image_inc_urls= > > > > layer_wheels_inc_urls= > > > > The mirror /import/mirrors/CentOS doesn't has the Binary and Source > > folders. 
Please provide a valid mirror
> >
> > _______________________________________________
> > Starlingx-discuss mailing list
> > Starlingx-discuss at lists.starlingx.io
> > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> >
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From outbackdingo at gmail.com Wed May 11 03:59:59 2022
From: outbackdingo at gmail.com (Outback Dingo)
Date: Wed, 11 May 2022 10:59:59 +0700
Subject: [Starlingx-discuss] STX networking
Message-ID: 

scenario... i have a host, say controller-0. prior to any ansible run i need to create a bond, and bridges and vlans. sure....

Add a bond device as root:
ip link add bond0 type bond
ip link set bond0 type bond miimon 100 mode 802.3ad
ip link set enp33s0 down
ip link set enp33s0 master bond0
ip link set enp44s0 down
ip link set enp44s0 master bond0
ip link set bond0 up

Set VLAN on the bond device:
ip link add link bond0 name bond0.1648 type vlan id 1648
ip link set bond0.1648 up
ip link add link bond0 name bond0.1664 type vlan id 1664
ip link set bond0.1664 up
ip link add link bond0 name bond0.1680 type vlan id 1680
ip link set bond0.1680 up

Add the bridge device and attach VLAN to it:
ip link add br0 type bridge
ip link set bond0.1648 master br0
ip link set bond0.1664 master br0
ip link set bond0.1680 master br0
ip link set br0 up

so i see where in starlingx

system host-if-add -c platform -a 802.3ad -x layer2 controller-0 bond0 ae enp33s0 enp49s0
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0

does allow for me to create the bond0 and the vlans? but I don't see any documentation for bridges anywhere? Do I even need the bridge.
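
fwiw, a quick way to sanity-check the bond/vlan/bridge state after the commands above, before handing the interfaces over to starlingx (just a sketch, assuming the same interface names as in the example and that the bonding driver is loaded):

cat /proc/net/bonding/bond0   # mode should read "IEEE 802.3ad Dynamic link aggregation" and both slaves should be listed
ip -d link show bond0.1648    # the "vlan ... id 1648" detail confirms the tag and the parent bond
ip link show master br0       # lists the interfaces currently enslaved to br0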
where i want to set, for example, since each network needs its own interface: can i set OAM_IF=bond0, and say MGMT_IF=bond0.1664?

OAM_IF=bond0
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
system host-if-add controller-0 -V 1672 -c platform bond0.1672 vlan bond0
MGMT_IF=bond0.1664
system host-if-modify controller-0 lo -c none
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
for UUID in $IFNET_UUIDS; do
    system interface-network-remove ${UUID}
done
system host-if-modify controller-0 $MGMT_IF -c platform
system interface-network-assign controller-0 $MGMT_IF mgmt
system interface-network-assign controller-0 $MGMT_IF cluster-host

the reason for this being our switches are

# MGMT
interface vlan1648
    address 10.16.48.2/24
    address-virtual 44:38:39:FF:00:02 10.16.48.1
    vlan-id 1648
    vlan-raw-device bridge

interface vlan1672
    address 10.16.72.2/24
    address-virtual 44:38:39:FF:00:03 10.16.72.1
    vlan-id 1672
    vlan-raw-device bridge

interface vlan1680
    address 10.16.80.2/24
    address-virtual 44:38:39:FF:00:03 10.16.80.1
    vlan-id 1680
    vlan-raw-device bridge

interface vlan1696
    address 10.16.96.2/24
    address-virtual 44:38:39:FF:00:03 10.16.96.1
    vlan-id 1696
    vlan-raw-device bridge

interface vlan1664
    address 10.16.64.2/24
    address-virtual 44:38:39:FF:00:07 10.16.64.1
    vlan-id 1664
    vlan-raw-device bridge

and further down

DATAIF_0=bond0.1680

the reason being we are trying to have starlingx conform to our network's topology.

I also noted in https://docs.starlingx.io/deploy_install_guides/r6_release/ansible_bootstrap_configs.html#install-time-only-params-r6 the Network Properties I listed at the bottom. can i modify these addresses to conform to our networks, as our switches won't pass the traffic you set as defaults, as seen in my first attempt at the bottom. Though i don't believe dhcp/pxe will still work on a vlan interface.
Network Properties pxeboot_subnet pxeboot_start_address pxeboot_end_address management_subnet management_start_address management_end_address cluster_host_subnet cluster_host_start_address cluster_host_end_address cluster_pod_subnet cluster_pod_start_addres cluster_pod_end_address cluster_service_subnet cluster_service_start_address cluster_service_end_address management_multicast_subnet management_multicast_start_address management_multicast_end_address 7: bond0: mtu 1500 qdisc htb state UP group default qlen 1000 link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff inet 10.16.48.112/24 brd 10.16.48.255 scope global bond0 valid_lft forever preferred_lft forever inet 10.16.48.114/24 scope global secondary bond0 valid_lft forever preferred_lft forever inet6 fe80::ba59:9fff:fe12:34f0/64 scope link valid_lft forever preferred_lft forever 8: vlan1664 at bond0: mtu 1500 qdisc htb state UP group default qlen 1000 link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1664 valid_lft forever preferred_lft forever inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1664:12 valid_lft forever preferred_lft forever inet 169.254.202.1/24 scope global vlan1664 valid_lft forever preferred_lft forever inet 192.168.206.1/24 scope global secondary vlan1664 valid_lft forever preferred_lft forever inet 192.168.204.1/24 scope global secondary vlan1664 valid_lft forever preferred_lft forever inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1664 valid_lft forever preferred_lft forever inet6 fe80::ba59:9fff:fe12:34f0/64 scope link valid_lft forever preferred_lft forever 9: vlan1672 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff inet6 fe80::ba59:9fff:fe12:34f0/64 scope link valid_lft forever preferred_lft forever 10: vlan1648 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff inet6 fe80::ba59:9fff:fe12:34f0/64 scope link valid_lft forever preferred_lft forever 11: vlan1680 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff inet6 fe80::ba59:9fff:fe12:34f0/64 scope link valid_lft forever preferred_lft forever From bogdan-iulian.andrei at intel.com Wed May 11 08:02:50 2022 From: bogdan-iulian.andrei at intel.com (Andrei, Bogdan-Iulian) Date: Wed, 11 May 2022 08:02:50 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220511T024512Z Message-ID: Sanity Test from 2022-May-11 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220511T024512Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220511T024512Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by these LP bugs: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready, https://bugs.launchpad.net/starlingx/+bug/1971981 - nginx-ingress-controller apply-failed during setup of the StarlingX Kind regards, Validation team [Logo Description automatically generated] Andrei Bogdan-Iulian Software Engineer PMCE TEAM Personal Mobile: +40 754905864 bogdan-iulian.andrei at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... 
URL: 
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image001.png
Type: image/png
Size: 3903 bytes
Desc: image001.png
URL: 

From ibrahim.eryalcin at ulakhaberlesme.com.tr Wed May 11 11:59:58 2022
From: ibrahim.eryalcin at ulakhaberlesme.com.tr (Halil Ibrahim ERYALCIN)
Date: Wed, 11 May 2022 11:59:58 +0000
Subject: [Starlingx-discuss] OpenStack - OVS 2.15.2 - Intel E810-C - SR-IOV issue
Message-ID: 

Hello,

We have an issue running the SR-IOV feature on OVS with an Intel E810-C NIC. The OVS configuration is shared below. When we try to attach an interface to a VM, it gives the error below. I wonder, can OVS work with Intel SR-IOV? Has anyone ever succeeded with it?

Best Regards,

OS: Ubuntu 20.04.4 LTS (GNU/Linux 5.13.0-40-lowlatency x86_64)
****************
ovs_version: "2.15.2"
****************
root at cmp15:/home/ulak# lspci |grep Eth
b1:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
b3:00.0 Ethernet controller: Intel Corporation Ethernet Controller E810-C for QSFP (rev 02)
b3:01.0 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:01.1 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:01.2 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:01.3 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:01.4 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:01.5 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:01.6 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:01.7 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:02.0 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:02.1 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:02.2 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:02.3 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:02.4 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:02.5 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:02.6 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
b3:02.7 Ethernet controller: Intel Corporation Ethernet Adaptive Virtual Function (rev 02)
****************
root at cmp15:/home/ulak# ethtool -i enp179s0
driver: ice
version: 1.8.3
firmware-version: 3.00 0x80008271 1.2992.0
expansion-rom-version:
bus-info: 0000:b3:00.0
supports-statistics: yes
supports-test: yes
supports-eeprom-access: yes
supports-register-dump: yes
supports-priv-flags: yes
****************
nova.conf

[pci]
passthrough_whitelist = { "address": "*:b1:00.*", "physical_network": null }
passthrough_whitelist = { "address": "*:b3:00.*", "physical_network": null }
passthrough_whitelist = { "address": "*:b3:01.*", "physical_network": null }
passthrough_whitelist = { "address": "*:b3:02.*", "physical_network": null }
passthrough_whitelist = { "devname": "enp177s0", "physical_network": null }
passthrough_whitelist = { "devname": "enp179s0", "physical_network": null }
passthrough_whitelist = { "vendor_id":"8086", "product_id":"1592", "physical_network": null }
passthrough_whitelist = { "vendor_id":"8086", "product_id":"1889", "physical_network": null }
passthrough_whitelist = { "address": "0000:b3:02.7",
"physical_network": null } **************** root at cmp15:/home/ulak# dpdk-devbind.py -s Network devices using kernel driver =================================== 0000:4b:00.0 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp75s0f0 drv=mlx5_core unused=vfio-pci 0000:4b:00.1 'MT27710 Family [ConnectX-4 Lx] 1015' if=enp75s0f1 drv=mlx5_core unused=vfio-pci 0000:4c:00.0 'Ethernet Controller 10G X550T 1563' if=enp76s0f0 drv=ixgbe unused=vfio-pci 0000:4c:00.1 'Ethernet Controller 10G X550T 1563' if=enp76s0f1 drv=ixgbe unused=vfio-pci 0000:98:00.0 'MT2894 Family [ConnectX-6 Lx] 101f' if=enp152s0f0 drv=mlx5_core unused=vfio-pci 0000:98:00.1 'MT2894 Family [ConnectX-6 Lx] 101f' if=enp152s0f1 drv=mlx5_core unused=vfio-pci 0000:b1:00.0 'Ethernet Controller E810-C for QSFP 1592' if=enp177s0 drv=ice unused=vfio-pci 0000:b3:00.0 'Ethernet Controller E810-C for QSFP 1592' if=enp179s0 drv=ice unused=vfio-pci *Active* 0000:b3:01.0 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v0 drv=iavf unused=vfio-pci 0000:b3:01.1 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v1 drv=iavf unused=vfio-pci 0000:b3:01.2 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v2 drv=iavf unused=vfio-pci 0000:b3:01.3 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v3 drv=iavf unused=vfio-pci 0000:b3:01.4 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v4 drv=iavf unused=vfio-pci 0000:b3:01.5 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v5 drv=iavf unused=vfio-pci 0000:b3:01.6 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v6 drv=iavf unused=vfio-pci 0000:b3:01.7 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v7 drv=iavf unused=vfio-pci 0000:b3:02.0 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v8 drv=iavf unused=vfio-pci 0000:b3:02.1 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v9 drv=iavf unused=vfio-pci 0000:b3:02.2 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v10 drv=iavf unused=vfio-pci 0000:b3:02.3 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v11 drv=iavf unused=vfio-pci 0000:b3:02.4 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v12 drv=iavf unused=vfio-pci 0000:b3:02.5 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v13 drv=iavf unused=vfio-pci 0000:b3:02.6 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v14 drv=iavf unused=vfio-pci 0000:b3:02.7 'Ethernet Adaptive Virtual Function 1889' if=enp179s0v15 drv=iavf unused=vfio-pci /var/log/nova/nova-compute.log 2022-05-10 17:00:43.684 706104 INFO nova.virt.libvirt.driver [req-222a65f6-2db1-4dc1-b4c6-a08b016bfd8f 2896d201c5ed44d8813273c48bed5ba3 fe8aaa7d14f44459b6c46e230d538765 - default default] [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Deletion of /var/lib/nova/instances/e8adc95a-2d7c-40f9-a5ab-bbfa640418b7_del complete 2022-05-10 17:00:43.759 706104 INFO nova.compute.manager [req-222a65f6-2db1-4dc1-b4c6-a08b016bfd8f 2896d201c5ed44d8813273c48bed5ba3 fe8aaa7d14f44459b6c46e230d538765 - default default] [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Took 3.12 seconds to destroy the instance on the hypervisor. 
2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [req-222a65f6-2db1-4dc1-b4c6-a08b016bfd8f 2896d201c5ed44d8813273c48bed5ba3 fe8aaa7d14f44459b6c46e230d538765 - default default] [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Failed to build and run instance: nova.exception.InternalError: Failure running os_vif plugin plug method: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Traceback (most recent call last): 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/os_vif/__init__.py", line 77, in plug 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] plugin.plug(vif, instance_info) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 305, in plug 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self._plug_vf(vif, instance_info) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 269, in _plug_vf 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] pf_ifname = linux_net.get_ifname_by_pci_address( 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/vif_plug_ovs/linux_net.py", line 357, in get_ifname_by_pci_address 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] raise exception.PciDeviceNotFoundById(id=pci_addr) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] vif_plug_ovs.exception.PciDeviceNotFoundById: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Traceback (most recent call last): 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 696, in _plug_os_vif 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] os_vif.plug(vif, instance_info) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/os_vif/__init__.py", line 82, in plug 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] raise 
os_vif.exception.PlugException(vif=vif, err=err) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] os_vif.exception.PlugException: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] Traceback (most recent call last): 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2409, in _build_and_run_instance 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self.driver.spawn(context, instance, image_meta, 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 4172, in spawn 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self._create_guest_with_network( 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7240, in _create_guest_with_network 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self._cleanup( 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__ 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self.force_reraise() 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] raise self.value 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7205, in _create_guest_with_network 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self.plug_vifs(instance, network_info) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1277, in plug_vifs 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self.vif_driver.plug(instance, vif) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: 
e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 720, in plug 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] self._plug_os_vif(instance, vif_obj) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 700, in _plug_os_vif 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] raise exception.InternalError(msg) 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] nova.exception.InternalError: Failure running os_vif plugin plug method: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.304 706104 ERROR nova.compute.manager [instance: e8adc95a-2d7c-40f9-a5ab-bbfa640418b7] 2022-05-10 17:00:47.310 706104 ERROR os_vif [req-222a65f6-2db1-4dc1-b4c6-a08b016bfd8f 2896d201c5ed44d8813273c48bed5ba3 fe8aaa7d14f44459b6c46e230d538765 - default default] Failed to unplug vif VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True): vif_plug_ovs.exception.PciDeviceNotFoundById: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif Traceback (most recent call last): 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/os_vif/__init__.py", line 77, in plug 2022-05-10 17:00:47.310 706104 ERROR os_vif plugin.plug(vif, instance_info) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 305, in plug 2022-05-10 17:00:47.310 706104 ERROR os_vif self._plug_vf(vif, instance_info) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 269, in _plug_vf 2022-05-10 17:00:47.310 706104 ERROR os_vif pf_ifname = linux_net.get_ifname_by_pci_address( 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/linux_net.py", line 357, in get_ifname_by_pci_address 2022-05-10 17:00:47.310 706104 ERROR os_vif raise exception.PciDeviceNotFoundById(id=pci_addr) 2022-05-10 17:00:47.310 706104 ERROR os_vif vif_plug_ovs.exception.PciDeviceNotFoundById: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif Traceback (most recent call last): 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 696, in _plug_os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif os_vif.plug(vif, instance_info) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/os_vif/__init__.py", line 82, in plug 2022-05-10 17:00:47.310 706104 ERROR os_vif raise 
os_vif.exception.PlugException(vif=vif, err=err) 2022-05-10 17:00:47.310 706104 ERROR os_vif os_vif.exception.PlugException: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif Traceback (most recent call last): 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2409, in _build_and_run_instance 2022-05-10 17:00:47.310 706104 ERROR os_vif self.driver.spawn(context, instance, image_meta, 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 4172, in spawn 2022-05-10 17:00:47.310 706104 ERROR os_vif self._create_guest_with_network( 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7240, in _create_guest_with_network 2022-05-10 17:00:47.310 706104 ERROR os_vif self._cleanup( 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 227, in __exit__ 2022-05-10 17:00:47.310 706104 ERROR os_vif self.force_reraise() 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/oslo_utils/excutils.py", line 200, in force_reraise 2022-05-10 17:00:47.310 706104 ERROR os_vif raise self.value 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 7205, in _create_guest_with_network 2022-05-10 17:00:47.310 706104 ERROR os_vif self.plug_vifs(instance, network_info) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/driver.py", line 1277, in plug_vifs 2022-05-10 17:00:47.310 706104 ERROR os_vif self.vif_driver.plug(instance, vif) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 720, in plug 2022-05-10 17:00:47.310 706104 ERROR os_vif self._plug_os_vif(instance, vif_obj) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/virt/libvirt/vif.py", line 700, in _plug_os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif raise exception.InternalError(msg) 2022-05-10 17:00:47.310 706104 ERROR os_vif nova.exception.InternalError: Failure running os_vif plugin plug method: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). 
Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif Traceback (most recent call last): 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2232, in _do_build_and_run_instance 2022-05-10 17:00:47.310 706104 ERROR os_vif self._build_and_run_instance(context, instance, image, 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/nova/compute/manager.py", line 2505, in _build_and_run_instance 2022-05-10 17:00:47.310 706104 ERROR os_vif raise exception.RescheduledException( 2022-05-10 17:00:47.310 706104 ERROR os_vif nova.exception.RescheduledException: Build of instance e8adc95a-2d7c-40f9-a5ab-bbfa640418b7 was re-scheduled: Failure running os_vif plugin plug method: Failed to plug VIF VIFHostDevice(active=False,address=fa:16:3e:53:e0:10,dev_address=0000:b3:02.7,dev_type='ethernet',has_traffic_filtering=True,id=7ae16610-d6d9-461a-948f-9922cac62aae,network=Network(c260906e-04c6-4a49-a120-3389c7380247),plugin='ovs',port_profile=VIFPortProfileOVSRepresentor,preserve_on_delete=True). Got error: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif During handling of the above exception, another exception occurred: 2022-05-10 17:00:47.310 706104 ERROR os_vif 2022-05-10 17:00:47.310 706104 ERROR os_vif Traceback (most recent call last): 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/os_vif/__init__.py", line 110, in unplug 2022-05-10 17:00:47.310 706104 ERROR os_vif plugin.unplug(vif, instance_info) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 380, in unplug 2022-05-10 17:00:47.310 706104 ERROR os_vif self._unplug_vf(vif) 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/ovs.py", line 343, in _unplug_vf 2022-05-10 17:00:47.310 706104 ERROR os_vif pf_ifname = linux_net.get_ifname_by_pci_address( 2022-05-10 17:00:47.310 706104 ERROR os_vif File "/usr/lib/python3/dist-packages/vif_plug_ovs/linux_net.py", line 357, in get_ifname_by_pci_address 2022-05-10 17:00:47.310 706104 ERROR os_vif raise exception.PciDeviceNotFoundById(id=pci_addr) 2022-05-10 17:00:47.310 706104 ERROR os_vif vif_plug_ovs.exception.PciDeviceNotFoundById: PCI device 0000:b3:02.7 not found 2022-05-10 17:00:47.310 706104 ERROR os_vif -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Wed May 11 15:07:35 2022 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 11 May 2022 15:07:35 +0000 Subject: [Starlingx-discuss] REMINDER of on-going discussion on UPDATING the STARLINGX MISSION Statement Message-ID: ... just a reminder, that there are on-going discussions around updating the StarlingX Mission Statement in the weekly TSC/Community meetings. 
See the following etherpad for details on the discussions so far:
https://etherpad.opendev.org/p/starlingx-mission-statement

Currently some of the proposed new mission statements are:

* StarlingX is a complete, open source, highly-scalable, performant, distributed container-based cloud infrastructure platform with Day 1 and Day 2 management for the most demanding workloads as found in Industrial IoT, Telecom, Video Delivery, and more.
* or
* StarlingX is a ready-to-deploy open-source distributed kubernetes-based cloud infrastructure platform with full lifecycle management of the Platform infrastructure and capable of supporting the most demanding workloads as found in Industrial IoT, Telecom and more.

Feel free to join the discussion by adding to discussions in the etherpad, attending the TSC/Community meeting, or replying to this email.

Greg.
-------------- next part --------------
An HTML attachment was scrubbed...
URL: 

From scott.little at windriver.com Wed May 11 15:10:47 2022
From: scott.little at windriver.com (Scott Little)
Date: Wed, 11 May 2022 11:10:47 -0400
Subject: [Starlingx-discuss] stx tools 6.0 / iso build
In-Reply-To: 
References: <0a1185fc-ec25-e3ff-5210-67fbbf93571c@windriver.com>
Message-ID: <2063c646-4303-a11d-c503-efd92da1864e@windriver.com>

I can't recall ever seeing that error message during the download step. A quick scan of past build logs doesn't show it either.

Did the task complete, or exit abnormally?

If it completed, what do these commands show?

cat logs/*_missing_*.log
cat logs/*_failmoved_*.log

Scott L

On 2022-05-10 11:07 p.m., Outback Dingo wrote:
> [Please note: This e-mail is from an EXTERNAL e-mail address]
>
> Thanks, ill give this a go yet during the cd
> $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools && bash
> download_mirror.sh
>
> normal or broken somehow ??
>
> b2: ovl: Error while doing RPMdb copy-up:
> b2: [Errno 13] Permission denied: '/var/lib/rpm/Sigmd5'
> b2: ovl: Error while doing RPMdb copy-up:
> b2: [Errno 13] Permission denied: '/var/lib/rpm/Sigmd5'
>
> On Wed, May 11, 2022 at 2:24 AM Scott Little wrote:
>> The official build instructions have moved here:
>>
>> https://docs.starlingx.io/developer_resources/build_guide.html
>>
>> You appear to be reading stx-tools/README.rst which is likely very out
>> off date. I'll create a launchpad to correct that.
>>
>> Scott
>>
>>
>> On 2022-05-09 06:24, Outback Dingo wrote:
>>> [Please note: This e-mail is from an EXTERNAL e-mail address]
>>>
>>> i must be missing something trying to get an iso built, seems way to
>>> hard by these docks, for such a simple procless
>>>
>>> reading https://opendev.org/starlingx/tools
>>>
>>> it states
>>>
>>> To generate centos-repo
>>> The centos-repo is a set of symbolic links to the packages in the
>>> mirror and the mock configuration file. It is needed to create these
>>> links if this is the first build or the mirror has been updated.
>>>
>>> generate-centos-repo.sh /import/mirrors/CentOS
>>> Where the argument to the script is the path of the mirror.
>>>
>>> To build all packages:
>>> $ cd $MY_REPO
>>> $ build-pkgs or build-pkgs --clean ; build-pkgs
>>> To generate local-repo:
>>> The local-repo has the dependency information that sequences the build
>>> order; To generate or update the information the following command
>>> needs to be executed after building modified or new packages.
>>> >>> $ generate-local-repo.sh >>> >>> >>> however inside the container, >>> >>> [dingo at 25d9abcf4450 starlingx]$ generate-local-repo.sh >>> ERROR: directory not found '/import/mirrors/CentOS/stx/CentOS' >>> [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS >>> mirror_dir=/import/mirrors/CentOS >>> config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config >>> distro=centos >>> layer=all >>> >>> layer_pkg_urls= >>> >>> layer_image_inc_urls= >>> >>> layer_wheels_inc_urls= >>> >>> The mirror /import/mirrors/CentOS doesn't has the Binary and Source >>> folders. Please provide a valid mirror >>> [dingo at 25d9abcf4450 starlingx]$ $ build-iso >>> bash: $: command not found >>> [dingo at 25d9abcf4450 starlingx]$ build-iso >>> 05:56:09 >>> 05:56:09 ************************* >>> 05:56:09 Create StarlingX/CentOS Boot CD >>> 05:56:09 ************************* >>> 05:56:09 >>> 05:56:09 ERROR: create-yum-conf failed >>> [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS >>> mirror_dir=/import/mirrors/CentOS >>> config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config >>> distro=centos >>> layer=all >>> >>> layer_pkg_urls= >>> >>> layer_image_inc_urls= >>> >>> layer_wheels_inc_urls= >>> >>> The mirror /import/mirrors/CentOS doesn't has the Binary and Source >>> folders. Please provide a valid mirror >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From outbackdingo at gmail.com Wed May 11 15:16:25 2022 From: outbackdingo at gmail.com (Outback Dingo) Date: Wed, 11 May 2022 22:16:25 +0700 Subject: [Starlingx-discuss] stx tools 6.0 / iso build In-Reply-To: <2063c646-4303-a11d-c503-efd92da1864e@windriver.com> References: <0a1185fc-ec25-e3ff-5210-67fbbf93571c@windriver.com> <2063c646-4303-a11d-c503-efd92da1864e@windriver.com> Message-ID: seems to work fine if i add sudo, though might be more of an issue later on. all this to see if i can rollback the mellonox driver update to v.5 as the latest driver in 6.x doesnt support my cards in these two boxes, or i toss the cards and buy new ones. cd $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools && sudo bash download_mirror.sh On Wed, May 11, 2022 at 10:10 PM Scott Little wrote: > > I can't recall ever seeing that error massage during the download step. A quick scan of past build logs doesn't show it either. > > Did the task complete, or exit abnormally? > > If it completed, what do these commands show ? > > cat logs/*_missing_*.log > cat logs/*_failmoved_*.log > > Scott L > > > On 2022-05-10 11:07 p.m., Outback Dingo wrote: > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Thanks, ill give this a go yet during the cd > $MY_REPO_ROOT_DIR/stx-tools/centos-mirror-tools && bash > download_mirror.sh > > normal or broken somehow ?? 
> > b2: ovl: Error while doing RPMdb copy-up: > b2: [Errno 13] Permission denied: '/var/lib/rpm/Sigmd5' > b2: ovl: Error while doing RPMdb copy-up: > b2: [Errno 13] Permission denied: '/var/lib/rpm/Sigmd5' > > On Wed, May 11, 2022 at 2:24 AM Scott Little wrote: > > The official build instructions have moved here: > > https://docs.starlingx.io/developer_resources/build_guide.html > > You appear to be reading stx-tools/README.rst which is likely very out > off date. I'll create a launchpad to correct that. > > Scott > > > On 2022-05-09 06:24, Outback Dingo wrote: > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > i must be missing something trying to get an iso built, seems way to > hard by these docks, for such a simple procless > > reading https://opendev.org/starlingx/tools > > it states > > To generate centos-repo > The centos-repo is a set of symbolic links to the packages in the > mirror and the mock configuration file. It is needed to create these > links if this is the first build or the mirror has been updated. > > generate-centos-repo.sh /import/mirrors/CentOS > Where the argument to the script is the path of the mirror. > > To build all packages: > $ cd $MY_REPO > $ build-pkgs or build-pkgs --clean ; build-pkgs > To generate local-repo: > The local-repo has the dependency information that sequences the build > order; To generate or update the information the following command > needs to be executed after building modified or new packages. > > $ generate-local-repo.sh > > > however inside the container, > > [dingo at 25d9abcf4450 starlingx]$ generate-local-repo.sh > ERROR: directory not found '/import/mirrors/CentOS/stx/CentOS' > [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS > mirror_dir=/import/mirrors/CentOS > config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config > distro=centos > layer=all > > layer_pkg_urls= > > layer_image_inc_urls= > > layer_wheels_inc_urls= > > The mirror /import/mirrors/CentOS doesn't has the Binary and Source > folders. Please provide a valid mirror > [dingo at 25d9abcf4450 starlingx]$ $ build-iso > bash: $: command not found > [dingo at 25d9abcf4450 starlingx]$ build-iso > 05:56:09 > 05:56:09 ************************* > 05:56:09 Create StarlingX/CentOS Boot CD > 05:56:09 ************************* > 05:56:09 > 05:56:09 ERROR: create-yum-conf failed > [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS > mirror_dir=/import/mirrors/CentOS > config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config > distro=centos > layer=all > > layer_pkg_urls= > > layer_image_inc_urls= > > layer_wheels_inc_urls= > > The mirror /import/mirrors/CentOS doesn't has the Binary and Source > folders. Please provide a valid mirror > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Peng.Peng at windriver.com Wed May 11 18:02:37 2022 From: Peng.Peng at windriver.com (Peng, Peng) Date: Wed, 11 May 2022 18:02:37 +0000 Subject: [Starlingx-discuss] FW: Sanity Master Test LAYERED build ISO 20220416T013705Z In-Reply-To: References: Message-ID: Hi, This is Peng from windriver. 
I got this email from Rob. Can you add me in the sanity result mailing list? Thanks, Peng From: Cooke, Rob Sent: Wednesday, May 11, 2022 11:19 AM To: Peng, Peng Subject: FW: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220416T013705Z From: Andrei, Bogdan-Iulian > Sent: Monday, April 18, 2022 10:16 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220416T013705Z [Please note: This e-mail is from an EXTERNAL e-mail address] Sanity Test from 2022-April-16 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220416T013705Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220416T013705Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 71 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 83 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 88 TCs ] Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 89 TCs ] Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 90 TCs ] =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Kind regards, Validation team [Logo Description automatically generated] Andrei Bogdan-Iulian Software Engineer PMCE TEAM Personal Mobile: +40 754905864 bogdan-iulian.andrei at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: -------------- next part -------------- An embedded and charset-unspecified text was scrubbed... Name: ATT00001.txt URL: From Rob.Cooke at windriver.com Wed May 11 18:21:39 2022 From: Rob.Cooke at windriver.com (Cooke, Rob) Date: Wed, 11 May 2022 18:21:39 +0000 Subject: [Starlingx-discuss] FW: Sanity Master Test LAYERED build ISO 20220416T013705Z In-Reply-To: References: Message-ID: Hi Peng, You should be able to subscribe to this list via this link http://lists.starlingx.io/cgi-bin/mailman/listinfo Thanks, Rob From: Peng, Peng Sent: Wednesday, May 11, 2022 2:03 PM To: starlingx-discuss at lists.starlingx.io; bogdan-iulian.andrei at intel.com Subject: [Starlingx-discuss] FW: Sanity Master Test LAYERED build ISO 20220416T013705Z Hi, This is Peng from windriver. I got this email from Rob. Can you add me in the sanity result mailing list? 
Thanks,
Peng

[...]

From Greg.Waines at windriver.com Wed May 11 19:11:07 2022
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Wed, 11 May 2022 19:11:07 +0000
Subject: [Starlingx-discuss] STX networking
In-Reply-To:
References:
Message-ID:

Hey Scott,

Wrt what we were talking about in the community meeting:

- ROOK Deployment for say an AIO-DX + Workers system
  > ROOK deployment on Controllers
    * see https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/aio_duplex_install_kubernetes.html
      - notice that there are multiple steps to set up rook in this guide
        > adding of the high-level ceph-rook backend for the system, while configuring controller-0
        > adding of ceph-mon-placement and ceph-mgr-placement labels on each controller (controller-0 and controller-1)
        > adding of OSDs to each controller (controller-0 and controller-1)
  > ROOK OSD deployment on Workers
    * DOH ... we are missing that ( I'll get the starlingx DOC TEAM to look into this )
    * in https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/aio_duplex_extend.html
      - we forgot to add ... at the very end ... AFTER the unlocking of the worker nodes,
        the section to optionally add OSDs to the worker nodes
      - you can just use the section at the bottom of
        https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/aio_duplex_install_kubernetes.html
        (i.e. If using Rook container-based Ceph, finish configuring the ceph-rook Persistent Storage Backend)
        but substitute in the worker hostnames.

- PXE BOOT details
  ( NOTE ... I can't remember how we decided you needed a pxeboot network. What was the original problem that you were trying to solve ?
    e.g. if it was just to change the IP Address Subnet for mgmt, you can do that simply with a bootstrap override, see below )

  > As discussed, only the first time the very first controller is installed (controller-0) can an external PXEBOOT server be used.
    * The docs on how to do this are here:
      - https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.html
      - https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/accessing-pxe-boot-server-files-for-a-custom-configuration.html

  > Also as discussed,
    * For all subsequent pxeboot installs of starlingx hosts, it is done by the active controller (controller-0 or controller-1)
    * By default, StarlingX uses the MGMT network for PXEBOOTING
    * But for scenarios where the MGMT network cannot be used
      ( i.e. the MGMT network needs to be vlan-tagged for some reason, or the MGMT network needs to be IPv6 )
      then one can configure a PXEBOOT network whose sole purpose is pxebooting.
    * This is discussed in many of the 'Network Planning' PLANNING sections of StarlingX DOCS
      e.g.
      https://docs.starlingx.io/planning/kubernetes/network-requirements.html
      ... and every page in-between ...
      https://docs.starlingx.io/planning/kubernetes/network-planning-the-pxe-boot-network.html
    * DOH ... we don't have this well documented wrt how to configure this
      - again, I'll get the DOC team to look at this
      - for now here's the info on how to use it.

Using 'pxeboot' network
---------------------------------
- the 'pxeboot' network (and all networks, I think) are always present in the system
  e.g. you can see them with
    system network-list
    system addrpool-list
- the easiest way to configure the pxeboot IP Address Subnet is at bootstrap time in the localhost.yml (bootstrap override file)
  e.g.
    system_mode: duplex

    dns_servers:
      - 8.8.8.8
      - 8.8.4.4

    pxeboot_subnet: 169.254.202.0/24
    # pxeboot_start_address:
    # pxeboot_end_address:

    management_subnet: 192.168.204.0/24
    # management_start_address:
    # management_end_address:

    external_oam_subnet: /
    external_oam_gateway_address:
    external_oam_floating_address:
    external_oam_node_0_address:
    external_oam_node_1_address:

    admin_username: admin
    admin_password:
    ansible_become_pass:

    # OPTIONALLY provide a ROOT CA certificate and key for k8s root ca,
    # if not specified, one will be auto-generated,
    # see 'Kubernetes Root CA Certificate' in Security Guide for details.
    k8s_root_ca_cert: < your_root_ca_cert.pem >
    k8s_root_ca_key: < your_root_ca_key.pem >
    apiserver_cert_sans:
      - < your_hostname_for_oam_floating.your_domain >

  ( see all parameters and their defaults in /usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml on target )

- you then need to configure a host interface on controller-0 and attach it to the 'pxeboot' network
  IMPORTANT: a host's pxeboot interface MUST be a port-based/untagged vlan AND MUST be on the same physical interface as the mgmt interface (which would be vlan-tagged).
  e.g. // the config of the mgmt interface and pxeboot interface for controller-0
    MGMT_PORT=
    system host-if-modify controller-0 lo -c none
    IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
    for UUID in $IFNET_UUIDS; do
        system interface-network-remove ${UUID}
    done
    system host-if-modify controller-0 $MGMT_PORT -c platform
    system interface-network-assign controller-0 $MGMT_PORT pxeboot

    system host-if-add controller-0 -V 22 -c platform mgmt0 vlan $MGMT_PORT
    system interface-network-assign controller-0 mgmt0 mgmt
    system interface-network-assign controller-0 mgmt0 cluster-host

- dnsmasq (the boot server on the controllers) is configured to answer dhcp on the pxeboot and mgmt interface IPs

- when adding new hosts (e.g. the second controller or workers),
  you need to ensure in BIOS that these nodes are trying to network boot off of the interface physically attached to the port-based/vlan-untagged pxeboot network.
  > and then after the new host boots,
    you need to configure its host interfaces correctly ... e.g. with the pxeboot interface on the port-based/vlan-untagged physical interface, and the mgmt interface as a vlan interface on that same physical interface.
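  To make that last point concrete, the equivalent commands for the new host would be roughly:

  e.g. // the same pattern applied to a subsequently added host, say controller-1
       // ( a sketch only, not from the docs: MGMT_PORT is left for the real port name,
       //   and -V 22 assumes the same mgmt vlan id as used for controller-0 above )
    MGMT_PORT=
    system host-if-modify controller-1 $MGMT_PORT -c platform
    system interface-network-assign controller-1 $MGMT_PORT pxeboot

    system host-if-add controller-1 -V 22 -c platform mgmt0 vlan $MGMT_PORT
    system interface-network-assign controller-1 mgmt0 mgmt
    system interface-network-assign controller-1 mgmt0 cluster-host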
Greg.
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From Greg.Waines at windriver.com Wed May 11 19:25:21 2022
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Wed, 11 May 2022 19:25:21 +0000
Subject: [Starlingx-discuss] STX networking
In-Reply-To:
References:
Message-ID:

Replying to answer your questions from the email below; see in-line below.
Greg.

-----Original Message-----
From: Outback Dingo
Sent: Wednesday, May 11, 2022 12:00 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] STX networking

[Please note: This e-mail is from an EXTERNAL e-mail address]

scenario...

I have a host, say controller-0.

prior to any ansible run
[Greg] I assume you mean the bootstrap ansible playbook

I need to create a bond, and bridges and vlans
[Greg] You do need an interface to the outside world ... e.g. in order to download container images from docker hub.
[Greg] Why can you not simply create a single interface (one link of the bond) with a vlan ?
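[Greg] For example, just for bootstrap connectivity, something like this would do ( a sketch only: the port name, vlan id and addresses are placeholders loosely based on your switch config below, so substitute whatever is real on your network ):
    ip link set enp33s0 up
    ip link add link enp33s0 name enp33s0.1648 type vlan id 1648
    ip link set enp33s0.1648 up
    ip addr add 10.16.48.112/24 dev enp33s0.1648
    ip route add default via 10.16.48.1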
sure....

Add a bond device as root:

ip link add bond0 type bond
ip link set bond0 type bond miimon 100 mode 802.3ad
ip link set enp33s0 down
ip link set enp33s0 master bond0
ip link set enp44s0 down
ip link set enp44s0 master bond0
ip link set bond0 up

Set VLAN on the bond device:

ip link add link bond0 name bond0.1648 type vlan id 1648
ip link set bond0.1648 up
ip link add link bond0 name bond0.1664 type vlan id 1664
ip link set bond0.1664 up
ip link add link bond0 name bond0.1680 type vlan id 1680
ip link set bond0.1680 up

Add the bridge device and attach VLAN to it:

ip link add br0 type bridge
ip link set bond0.1648 master br0
ip link set bond0.1664 master br0
ip link set bond0.1680 master br0
ip link set br0 up

so I see where in starlingx
[Greg] the following commands are only possible AFTER bootstrap

system host-if-add -c platform -a 802.3ad -x layer2 controller-0 bond0 ae enp33s0 enp49s0
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0

does allow for me to create the bond0 and the vlans? but I don't see any documentation for bridges anywhere. Do I even need the bridge?
[Greg] No ... there is no requirement for a bridge with StarlingX.

where I want to set, for example (since each needs its own interface), can I set OAM_IF=bond0 and say MGMT_IF=bond0.1664?
[Greg] Yes ... i.e. OAM on the port-based/untagged vlan of the bond and MGMT on vlan-tag=1664 on the bond
( BUT here is where you need the pxeboot network, because your MGMT network is vlan-tagged ... and you can't pxe boot over that )

OAM_IF=bond0
system host-if-modify controller-0 $OAM_IF -c platform
system interface-network-assign controller-0 $OAM_IF oam
system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
system host-if-add controller-0 -V 1672 -c platform bond0.1672 vlan bond0
MGMT_IF=bond0.1664
system host-if-modify controller-0 lo -c none
IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
for UUID in $IFNET_UUIDS; do
    system interface-network-remove ${UUID}
done
system host-if-modify controller-0 $MGMT_IF -c platform
[Greg] don't think you actually need this command
system interface-network-assign controller-0 $MGMT_IF mgmt
system interface-network-assign controller-0 $MGMT_IF cluster-host

the reason for this being our switches are:

# MGMT
interface vlan1648
    address 10.16.48.2/24
    address-virtual 44:38:39:FF:00:02 10.16.48.1
    vlan-id 1648
    vlan-raw-device bridge

interface vlan1672
    address 10.16.72.2/24
    address-virtual 44:38:39:FF:00:03 10.16.72.1
    vlan-id 1672
    vlan-raw-device bridge

interface vlan1680
    address 10.16.80.2/24
    address-virtual 44:38:39:FF:00:03 10.16.80.1
    vlan-id 1680
    vlan-raw-device bridge

interface vlan1696
    address 10.16.96.2/24
    address-virtual 44:38:39:FF:00:03 10.16.96.1
    vlan-id 1696
    vlan-raw-device bridge

interface vlan1664
    address 10.16.64.2/24
    address-virtual 44:38:39:FF:00:07 10.16.64.1
    vlan-id 1664
    vlan-raw-device bridge

and further down DATAIF_0=bond0.1680

the reason being we are trying to have starlingx conform to our network's topology. I also noted, in
https://docs.starlingx.io/deploy_install_guides/r6_release/ansible_bootstrap_configs.html#install-time-only-params-r6 ...

the Network Properties I listed at the bottom.

Can I modify these addresses to conform to our networks, as our switches won't pass the traffic you set as defaults (as seen in my first attempt at the bottom)? Though I still don't believe dhcp/pxe will work on a vlan interface.

Network Properties
pxeboot_subnet
pxeboot_start_address
pxeboot_end_address
management_subnet
management_start_address
management_end_address
cluster_host_subnet
cluster_host_start_address
cluster_host_end_address
cluster_pod_subnet
cluster_pod_start_address
cluster_pod_end_address
cluster_service_subnet
cluster_service_start_address
cluster_service_end_address
management_multicast_subnet
management_multicast_start_address
management_multicast_end_address

7: bond0: mtu 1500 qdisc htb state UP group default qlen 1000
    link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
    inet 10.16.48.112/24 brd 10.16.48.255 scope global bond0
       valid_lft forever preferred_lft forever
    inet 10.16.48.114/24 scope global secondary bond0
       valid_lft forever preferred_lft forever
    inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
       valid_lft forever preferred_lft forever
8: vlan1664@bond0: mtu 1500 qdisc htb state UP group default qlen 1000
    link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
    inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1664
       valid_lft forever preferred_lft forever
    inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1664:12
       valid_lft forever preferred_lft forever
    inet 169.254.202.1/24 scope global vlan1664
       valid_lft forever preferred_lft forever
    inet 192.168.206.1/24 scope global secondary vlan1664
       valid_lft forever preferred_lft forever
    inet 192.168.204.1/24 scope global secondary vlan1664
       valid_lft forever preferred_lft forever
    inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1664
       valid_lft forever preferred_lft forever
    inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
       valid_lft forever preferred_lft forever
9: vlan1672@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
       valid_lft forever preferred_lft forever
10: vlan1648@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
       valid_lft forever preferred_lft forever
11: vlan1680@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
       valid_lft forever preferred_lft forever

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From scott.little at windriver.com Wed May 11 19:30:06 2022
From: scott.little at windriver.com (Scott Little)
Date: Wed, 11 May 2022 15:30:06 -0400
Subject: [Starlingx-discuss] Debian builds now at CENGN
Message-ID: <301466a7-569c-73f9-116b-a57e088fab25@windriver.com>

We have started building Debian-based loads of the master branch at CENGN. Builds will probably start at 10 am UTC and are expected to complete around 10 pm UTC. Two ISOs are currently being published, one each for the standard and real-time kernels. Containers are not currently being built.
Due to the long build times and large build sizes, I anticipate we won't be able to build every day. Initially I aim to build 3-4 times a week ... Monday, Wednesday, Friday, and possibly Sunday.

The builds will be published at ...
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/

The latest standard ISO is at ...
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/latest_build/outputs/iso/std/starlingx-intel-x86-64-cd.iso

and real-time ...
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/latest_build/outputs/iso/rt/starlingx-rt-intel-x86-64-cd.iso

From outbackdingo at gmail.com Wed May 11 23:05:20 2022
From: outbackdingo at gmail.com (Outback Dingo)
Date: Thu, 12 May 2022 06:05:20 +0700
Subject: Re: [Starlingx-discuss] STX networking
In-Reply-To:
References:
Message-ID:

https://bugs.launchpad.net/starlingx/+bug/1913582

I also found this, which should let me manage where PXE booting occurs; in our network config you can only PXE from the 10.16.48 network, so some config changes are needed, which I'll also be trying today.

On Thu, May 12, 2022 at 2:11 AM Waines, Greg wrote:
> [...]

From outbackdingo at gmail.com Thu May 12 00:21:26 2022
From: outbackdingo at gmail.com (Outback Dingo)
Date: Thu, 12 May 2022 07:21:26 +0700
Subject: Re: [Starlingx-discuss] STX networking
In-Reply-To:
References:
Message-ID:

I think, even allowing for special conditions of bond0 so that OAM_IF, MGMT_IF, CLUSTER_IF and PXE_IF can all be set on the same bond0, and even dropping the vlans for corner cases, I would then only need to set the install-time-only parameters:

Network Properties
pxeboot_subnet: 10.16.48.1
pxeboot_start_address 10.16.48.100
pxeboot_end_address 10.16.48.125
management_subnet
management_start_address
management_end_address
cluster_host_subnet
cluster_host_start_address
cluster_host_end_address
cluster_pod_subnet
cluster_pod_start_address
cluster_pod_end_address
cluster_service_subnet
cluster_service_start_address
cluster_service_end_address
management_multicast_subnet
management_multicast_start_address
management_multicast_end_address

ip link add bond0 type bond
ip link set bond0 type bond miimon 100 mode 802.3ad
ip link set enp33s0 down
ip link set enp33s0 master bond0
ip link set enp44s0 down
ip link set enp44s0 master bond0
ip link set bond0 up

Set VLAN on the bond device:

ip link add link bond0 name bond0.1648 type vlan id 1648
ip link set bond0.1648 up
ip link add link bond0 name bond0.1664 type vlan id 1664
ip link set bond0.1664 up

and modify the host details as per below:

OAM_IF=bond0.1648
MGMT_IF=bond0.1664
CLUSTER_IF=bond0.1680
PXE_IF=bond0 <- this puts pxe on bond0
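Expressed as a bootstrap override, that would presumably be a localhost.yml fragment along these lines ( a sketch only; I am assuming the subnet proper is 10.16.48.0/24, since pxeboot_subnet takes a CIDR rather than a single address ):

pxeboot_subnet: 10.16.48.0/24
pxeboot_start_address: 10.16.48.100
pxeboot_end_address: 10.16.48.125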
On Thu, May 12, 2022 at 2:25 AM Waines, Greg wrote:
> [...]

From Greg.Waines at windriver.com Thu May 12 00:32:50 2022
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Thu, 12 May 2022 00:32:50 +0000
Subject: [Starlingx-discuss] STX networking
In-Reply-To:
References:
Message-ID:

Were you successful ?

( One question ... you are only having to do the 'ip link ...' commands BEFORE bootstrap, in order to have IP connectivity to the outside world for bootstrapping ... correct ? )

Greg.
-----Original Message-----
From: Outback Dingo
Sent: Wednesday, May 11, 2022 8:21 PM
To: Waines, Greg
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] STX networking

[...]

From outbackdingo at gmail.com Thu May 12 00:44:21 2022
From: outbackdingo at gmail.com (Outback Dingo)
Date: Thu, 12 May 2022 07:44:21 +0700
Subject: Re: [Starlingx-discuss] STX networking
In-Reply-To:
References:
Message-ID:

Working through the configuration now based on these findings, and yes, I only have to do the ip link commands once, prior to bootstrap. I did get bond0 and the vlans configured on a previous try, after system host-unlock controller-0; they were just in the wrong order. So I am rebuilding the primary node; if it works and puts the interfaces and networks on the proper interfaces, and I can bootstrap controller-1 and get past unlocking that also ... I think it will be a win!
) > > Greg. > > -----Original Message----- > From: Outback Dingo > Sent: Wednesday, May 11, 2022 8:21 PM > To: Waines, Greg > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] STX networking > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > I think even allowing for special conditions of bond0 to allow OAM_IF, MGMT_IF, CLUSTER_IF, PXE_IF to all be set on the same bond0, and even dropping the vlans for corner cases then i would only need to set the Install-time-only parameters: > Network Properties > > pxeboot_subnet: 10.16.48.1 > pxeboot_start_address 10.16.48.100 > pxeboot_end_address 10.16.48.125 > management_subnet > management_start_address > management_end_address > cluster_host_subnet > cluster_host_start_address > cluster_host_end_address > cluster_pod_subnet > cluster_pod_start_address > cluster_pod_end_address > cluster_service_subnet > cluster_service_start_address > cluster_service_end_address > management_multicast_subnet > management_multicast_start_address > management_multicast_end_address > > ip link add bond0 type bond > ip link set bond0 type bond miimon 100 mode 80211.ad ip link set > enp33s0 down ip link set snp33s0 master bond0 ip link set enp44s0 down ip link set enp44s0 master bond0 ip link set bond0 up > > Set VLAN on the bond device: > > ip link add link bond0 name bond0.1648 type vlan id 1648 ip link set bond0.1648 up ip link add link bond0 name bond0.1664 type vlan id 1664 ip link set bond0.1664 up > > > and modify the host details as per below: > > OAM_IF=bond0.1648 > MGMT_IF=bond0.1664 > CLUSTER_IF=bond0.1680 > PXE_IF=bond0 <- this puts pxe on bond0 > > On Thu, May 12, 2022 at 2:25 AM Waines, Greg wrote: > > > > replying to answer your questions from email below, see in-lined > > below, Greg. > > > > -----Original Message----- > > From: Outback Dingo > > Sent: Wednesday, May 11, 2022 12:00 AM > > To: starlingx-discuss at lists.starlingx.io > > Subject: [Starlingx-discuss] STX networking > > > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > > > scenario... > > > > i have a host say controller-0 > > > > prior to any ansible run > > [Greg] I assume you mean the bootstrap ansible playbook > > > > i need to create a bond, and bridges and vlans [Greg] You do need an > > interface to the outside world ? e.g. in order to download container images from docker hub. > > [Greg] Why can you not simply create a single interface (one link of bond) with a vlan ? > > > > sure.... 
> > Add a bond device as root:
> >
> > ip link add bond0 type bond
> > ip link set bond0 type bond miimon 100 mode 802.3ad
> > ip link set enp33s0 down
> > ip link set enp33s0 master bond0
> > ip link set enp44s0 down
> > ip link set enp44s0 master bond0
> > ip link set bond0 up
> >
> > Set VLAN on the bond device:
> >
> > ip link add link bond0 name bond0.1648 type vlan id 1648
> > ip link set bond0.1648 up
> > ip link add link bond0 name bond0.1664 type vlan id 1664
> > ip link set bond0.1664 up
> > ip link add link bond0 name bond0.1680 type vlan id 1680
> > ip link set bond0.1680 up
> >
> > Add the bridge device and attach VLAN to it:
> > ip link add br0 type bridge
> > ip link set bond0.1648 master br0
> > ip link set bond0.1664 master br0
> > ip link set bond0.1680 master br0
> > ip link set br0 up
> >
> > so i see where in starlingx
> > [Greg] the following commands are only possible AFTER bootstrap
> >
> > system host-if-add -c platform -a 802.3ad -x layer2 controller-0 bond0 ae enp33s0 enp49s0
> > system host-if-modify controller-0 $OAM_IF -c platform
> > system interface-network-assign controller-0 $OAM_IF oam
> > system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
> > system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
> > system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
> >
> > does allow for me to create the bond0 and the vlans? but I don't see any documentation for bridges anywhere — do I even need the bridge?
> > [Greg] No — there is no requirement for a bridge with StarlingX.
> >
> > where i want to set example, since each needs its own interface can i set OAM_IF=bond0, and say MGMT_IF=bond0.1664
> > [Greg] Yes — i.e. OAM on port-based/untagged-vlan of bond and MGMT on vlan-tag=1664 on bond ( BUT here is where you need the pxeboot network because your MGMT network is vlan-tagged — and you can't pxe boot over that )
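> > [Greg] e.g. a rough sketch of that combination — pxeboot on the untagged bond, mgmt on a vlan on top of it; these reuse the command forms from this thread, so treat them as illustrative rather than verified:
> >
> > # pxeboot rides the untagged bond0, mgmt rides vlan 1664 on top of it
> > system interface-network-assign controller-0 bond0 pxeboot
> > system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
> > system interface-network-assign controller-0 bond0.1664 mgmt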
> >
> > OAM_IF=bond0
> > system host-if-modify controller-0 $OAM_IF -c platform
> > system interface-network-assign controller-0 $OAM_IF oam
> > system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
> > system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
> > system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
> > system host-if-add controller-0 -V 1672 -c platform bond0.1672 vlan bond0
> > MGMT_IF=bond0.1664
> > system host-if-modify controller-0 lo -c none
> > IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
> > for UUID in $IFNET_UUIDS; do
> >     system interface-network-remove ${UUID}
> > done
> > system host-if-modify controller-0 $MGMT_IF -c platform [Greg] don't think you actually need this command
> > system interface-network-assign controller-0 $MGMT_IF mgmt
> > system interface-network-assign controller-0 $MGMT_IF cluster-host
> >
> > the reason for this being our switches are
> >
> > # MGMT
> > interface vlan1648
> >     address 10.16.48.2/24
> >     address-virtual 44:38:39:FF:00:02 10.16.48.1
> >     vlan-id 1648
> >     vlan-raw-device bridge
> >
> > interface vlan1672
> >     address 10.16.72.2/24
> >     address-virtual 44:38:39:FF:00:03 10.16.72.1
> >     vlan-id 1672
> >     vlan-raw-device bridge
> >
> > interface vlan1680
> >     address 10.16.80.2/24
> >     address-virtual 44:38:39:FF:00:03 10.16.80.1
> >     vlan-id 1680
> >     vlan-raw-device bridge
> >
> > interface vlan1696
> >     address 10.16.96.2/24
> >     address-virtual 44:38:39:FF:00:03 10.16.96.1
> >     vlan-id 1696
> >     vlan-raw-device bridge
> >
> > interface vlan1664
> >     address 10.16.64.2/24
> >     address-virtual 44:38:39:FF:00:07 10.16.64.1
> >     vlan-id 1664
> >     vlan-raw-device bridge
> >
> > and further down DATAIF_0=bond0.1680
> >
> > the reason being we are trying to have starlingx conform to our networks topology. I also noted, in https://docs.starlingx.io/deploy_install_guides/r6_release/ansible_bootstrap_configs.html#install-time-only-params-r6, the Network Properties I listed at the bottom.
> >
> > can i modify these addresses to conform to our networks? our switches won't pass the traffic you set as defaults, as seen in my first attempt at the bottom. Though i don't believe dhcp/pxe will still work on a vlan interface.
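> > for example, something like this in localhost.yml is what i'm thinking — the subnets below are just my guesses lined up with our switch vlans, not verified against the installer:
> >
> > cat >> localhost.yml <<'EOF'
> > # pxe stays on the untagged 10.16.48.0/24 backplane
> > pxeboot_subnet: 10.16.48.0/24
> > # mgmt on vlan1664
> > management_subnet: 10.16.64.0/24
> > # cluster-host on vlan1696
> > cluster_host_subnet: 10.16.96.0/24
> > EOF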
> > > > Network Properties > > pxeboot_subnet > > pxeboot_start_address > > pxeboot_end_address > > management_subnet > > management_start_address > > management_end_address > > cluster_host_subnet > > cluster_host_start_address > > cluster_host_end_address > > cluster_pod_subnet > > cluster_pod_start_addres > > cluster_pod_end_address > > cluster_service_subnet > > cluster_service_start_address > > cluster_service_end_address > > management_multicast_subnet > > management_multicast_start_address > > management_multicast_end_address > > > > 7: bond0: mtu 1500 qdisc htb state UP group default qlen 1000 > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > inet 10.16.48.112/24 brd 10.16.48.255 scope global bond0 > > valid_lft forever preferred_lft forever > > inet 10.16.48.114/24 scope global secondary bond0 > > valid_lft forever preferred_lft forever > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > valid_lft forever preferred_lft forever > > 8: vlan1664 at bond0: mtu 1500 qdisc htb state UP group default qlen 1000 > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1664 > > valid_lft forever preferred_lft forever > > inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1664:12 > > valid_lft forever preferred_lft forever > > inet 169.254.202.1/24 scope global vlan1664 > > valid_lft forever preferred_lft forever > > inet 192.168.206.1/24 scope global secondary vlan1664 > > valid_lft forever preferred_lft forever > > inet 192.168.204.1/24 scope global secondary vlan1664 > > valid_lft forever preferred_lft forever > > inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1664 > > valid_lft forever preferred_lft forever > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > valid_lft forever preferred_lft forever > > 9: vlan1672 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > valid_lft forever preferred_lft forever > > 10: vlan1648 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > valid_lft forever preferred_lft forever > > 11: vlan1680 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > valid_lft forever preferred_lft forever > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > From outbackdingo at gmail.com Thu May 12 00:59:49 2022 From: outbackdingo at gmail.com (Outback Dingo) Date: Thu, 12 May 2022 07:59:49 +0700 Subject: [Starlingx-discuss] Debian builds now at CENGN In-Reply-To: <301466a7-569c-73f9-116b-a57e088fab25@windriver.com> References: <301466a7-569c-73f9-116b-a57e088fab25@windriver.com> Message-ID: On Thu, May 12, 2022 at 2:33 AM Scott Little wrote: > > We have started building Debian based loads of the master branch at CENGN. > > The build will probably start 10 am UTC, and are expected to complete > around 10 pm UTC. Two ISOs are currently being published, one each for > standard and real-time kernels. Containers are not currently being built. > > Due to the long build times and large build sizes, I anticipate we won't > be able to build every day. 
Initially I aim to build 3-4 times a week... Monday, Wednesday, Friday, and possibly Sunday.
>
> The builds will be published at ...
>
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/
>
> The latest standard ISO is at ...
>
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/latest_build/outputs/iso/std/starlingx-intel-x86-64-cd.iso
>
> and real-time ...
>
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/debian/monolithic/latest_build/outputs/iso/rt/starlingx-rt-intel-x86-64-cd.iso

sounds great, are these actually debian based, or yocto based ???

>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From ildiko.vancsa at gmail.com Thu May 12 03:35:21 2022
From: ildiko.vancsa at gmail.com (Ildiko Vancsa)
Date: Wed, 11 May 2022 20:35:21 -0700
Subject: [Starlingx-discuss] REMINDER of on-going discussion on UPDATING the STARLINGX MISSION Statement
In-Reply-To:
References:
Message-ID:

Hi Greg,

Thank you for working on this and sending out the reminder along with the current variations!

My main comment to the below statements is that they are not really mission statements. They are describing what the StarlingX platform currently is, but they don't talk about what the project's and community's mission is.

I looked up a few large companies' statements for reference:
* Google: "Our mission is to organize the world's information and make it universally accessible and useful." - https://about.google
* Tesla: "Tesla's mission is to accelerate the world's transition to sustainable energy." - https://www.tesla.com/about

To formulate one for StarlingX, I think it would be a good exercise to talk a bit more about the topic by trying to answer questions like: Why is the community working on this platform? What are the contributors trying to achieve, and what would they like to help the project's users achieve?

So when we say that "The StarlingX project's mission is to ..." we can finish the sentence with an active statement that describes why the project exists, what its purpose is and where it is going.

Best Regards,
Ildikó

> On May 11, 2022, at 08:07, Waines, Greg wrote:
>
> ... just a reminder,
> that there are on-going discussions around updating the StarlingX Mission Statement in the weekly TSC/Community meetings.
>
> See the following etherpad for details on the discussions so far: https://etherpad.opendev.org/p/starlingx-mission-statement
> Currently some of the proposed new mission statements are:
> - StarlingX is a complete, open source, highly-scalable, performant, distributed container-based cloud infrastructure platform with Day 1 and Day 2 management for the most demanding workloads as found in Industrial IoT, Telecom, Video Delivery, and more.
> - or
> - StarlingX is a ready-to-deploy open-source distributed kubernetes-based cloud infrastructure platform with full lifecycle management of the Platform infrastructure and capable of supporting the most demanding workloads as found in Industrial IoT, Telecom and more.
>
> Feel free to join the discussion by adding to discussions in etherpad, attending TSC/Community meeting or replying to this email.
>
> Greg.
> _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From outbackdingo at gmail.com Thu May 12 08:30:01 2022 From: outbackdingo at gmail.com (Outback Dingo) Date: Thu, 12 May 2022 15:30:01 +0700 Subject: [Starlingx-discuss] STX networking In-Reply-To: References: Message-ID: ok, so getting closer its about the ip space and preset variables now and ... after installing software on controller-0 last run i used note i set pxeboot_ vars to put the right network and interfaces on bond0 [sysadmin at controller-0 ~(keystone_admin)]$ cat localhost.yml system_mode: duplex dns_servers: - 8.8.8.8 - 8.8.4.4 external_oam_subnet: 10.16.48.1/24 external_oam_gateway_address: 10.16.48.1 external_oam_floating_address: 10.16.48.110 external_oam_node_0_address: 10.16.48.114 external_oam_node_1_address: 10.16.48.115 external_oam_node_2_address: 10.16.48.116 external_oam_node_3_address: 10.16.48.117 external_oam_node_4_address: 10.16.48.118 admin_username: admin admin_password: somepass ansible_become_pass: somepass # Add these lines to configure Docker to use a proxy server # # docker_http_proxy: http://my.proxy.com:1080 # # docker_https_proxy: https://my.proxy.com:1443 # # docker_no_proxy: # # - 1.2.3.4 # kubernetes_version: 1.21.3 pxeboot_subnet: 10.16.48.1/24 pxeboot_start_address: 10.16.48.100 pxeboot_end_address: 10.16.48.151 then ran ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml and ... it was successful... onto configuring source /etc/platform/openrc system host-if-add -c platform -a 802.3ad -x layer2 controller-0 bond0 ae enp33s0 enp49s0 system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0 system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0 system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0 OAM_IF=bond0.1648 PXE_IF=bond0 MGMT_IF=bond0.1680 CLUSTER_IF=bond0.1664 ping 8.8.8.8 system host-if-modify controller-0 lo -c none IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}') for UUID in $IFNET_UUIDS; do system interface-network-remove ${UUID}; done system host-if-modify controller-0 $OAM_IF -c platform system interface-network-assign controller-0 $OAM_IF oam system host-if-modify controller-0 $MGMT_IF -c platform system interface-network-assign controller-0 $MGMT_IF mgmt system interface-network-assign controller-0 $MGMT_IF cluster-host system host-if-modify controller-0 $PXE_IF -c platform system interface-network-assign controller-0 $PXE_IF pxeboot system host-if-modify controller-0 $CLUSTER_IF -c platform system interface-network-assign controller-0 $CLUSTER_IF cluster-host system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org system host-label-assign controller-0 openstack-control-plane=enabled system host-label-assign controller-0 ceph-mon-placement=enabled system host-label-assign controller-0 ceph-mgr-placement=enabled system storage-backend-add ceph-rook --confirmed system host-unlock controller-0 where controller-0 does reboot and do its boot sequence, then comes up on the correct OAM_IF IP, and DOES have also the correct floating address assigned to bond0.1648 i can actually login, i them waited some minutes and source /etc/platform/openrc [sysadmin at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | 
administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ functionally working... with the following network config [sysadmin at controller-0 ~(keystone_admin)]$ ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eno1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:60:97:52 brd ff:ff:ff:ff:ff:ff 3: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:60:97:53 brd ff:ff:ff:ff:ff:ff 4: enp33s0: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff 5: enp49s0: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff permaddr b8:59:9f:12:2c:fc 6: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:d5:7a:2c:4c brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever 7: bond0: mtu 1500 qdisc htb state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff inet 10.16.48.101/24 brd 10.16.48.255 scope global bond0 valid_lft forever preferred_lft forever inet 10.16.48.100/24 scope global secondary bond0 valid_lft forever preferred_lft forever inet6 fe80::ba59:9fff:fe12:3278/64 scope link valid_lft forever preferred_lft forever 8: vlan1648 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff inet 10.16.48.114/24 brd 10.16.48.255 scope global vlan1648 valid_lft forever preferred_lft forever inet 10.16.48.110/24 scope global secondary vlan1648 valid_lft forever preferred_lft forever inet6 fe80::ba59:9fff:fe12:3278/64 scope link valid_lft forever preferred_lft forever 9: vlan1664 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff inet6 fe80::ba59:9fff:fe12:3278/64 scope link valid_lft forever preferred_lft forever 10: vlan1680 at bond0: mtu 1500 qdisc htb state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1680 valid_lft forever preferred_lft forever inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1680:12 valid_lft forever preferred_lft forever inet 192.168.206.1/24 scope global secondary vlan1680 valid_lft forever preferred_lft forever inet 192.168.204.1/24 scope global secondary vlan1680 valid_lft forever preferred_lft forever inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1680 valid_lft forever preferred_lft forever inet6 fe80::ba59:9fff:fe12:3278/64 scope link valid_lft forever preferred_lft forever ------------------snip---------------------- i pxe booted 2 more nodes, they did pxe fine from controller bond0 with 10.16.48.x as specified in localhost.yml they did show in system host-list... where i set their personalities. 
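(for the record, i set the personalities with roughly the following — ids as reported by system host-list:)

system host-update 2 personality=controller
system host-update 3 personality=worker hostname=worker-0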
[sysadmin at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | | 2 | controller-1 | controller | locked | disabled | offline | | 3 | worker-0 | worker | locked | disabled | offline | +----+--------------+-------------+----------------+-------------+--------------+ they both then proceeded to boot.... but now appear hung...... 1+ hours and i think the problem just might be... 192.168.204 and 192.168.206 addresses on bond0.1680 so.... they need to be 10.16.64.x which is what all our pods talk across for bond0.1664 or 10.16.80.x for bond0.1680 so now they question is which is what in the variables:, i believe i have the proper pxeboot_ and oam with 10.16.48.x on bond0 and bond0.1648 though you show oam and managment_ as different but i dont think i completely grasp whats mgmt_, cluster_host, cluster_pod, cluster_service and management multicast, so whats what ? in your ip space compare to what mine should be. pxeboot_subnet pxeboot_start_address pxeboot_end_address management_subnet management_start_address management_end_address cluster_host_subnet cluster_host_start_address cluster_host_end_address cluster_pod_subnet cluster_pod_start_address cluster_pod_end_address cluster_service_subnet cluster_service_start_address cluster_service_end_address management_multicast_subnet management_multicast_start_address management_multicast_end_address On Thu, May 12, 2022 at 7:44 AM Outback Dingo wrote: > > working through the configuration now based on findings, and yes i > only have to do the ip link commands once prior to bootstrap > > i did get bond0 and vlans on a previous try to be configured after > system host-unlock controller-0 > they were just in the wrong order, so rebuilding the primary node, if > it works and put the interfaces > and networks on proper interfaces and i can bootstrap controller-1 and > get past unlocking that also... i think it will be a win! > > On Thu, May 12, 2022 at 7:32 AM Waines, Greg wrote: > > > > Were you successful ? > > > > ( One question ... you are only having to do the 'ip link ...' commands BEFORE bootstrap in order to have IP Connectivity to the outside world for bootstrapping .. correct ? ) > > > > Greg. 
> > ------------------snip----------------------
> > > > > > Network Properties > > > pxeboot_subnet > > > pxeboot_start_address > > > pxeboot_end_address > > > management_subnet > > > management_start_address > > > management_end_address > > > cluster_host_subnet > > > cluster_host_start_address > > > cluster_host_end_address > > > cluster_pod_subnet > > > cluster_pod_start_addres > > > cluster_pod_end_address > > > cluster_service_subnet > > > cluster_service_start_address > > > cluster_service_end_address > > > management_multicast_subnet > > > management_multicast_start_address > > > management_multicast_end_address > > > > > > 7: bond0: mtu 1500 qdisc htb state UP group default qlen 1000 > > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > > inet 10.16.48.112/24 brd 10.16.48.255 scope global bond0 > > > valid_lft forever preferred_lft forever > > > inet 10.16.48.114/24 scope global secondary bond0 > > > valid_lft forever preferred_lft forever > > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > > valid_lft forever preferred_lft forever > > > 8: vlan1664 at bond0: mtu 1500 qdisc htb state UP group default qlen 1000 > > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > > inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1664 > > > valid_lft forever preferred_lft forever > > > inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1664:12 > > > valid_lft forever preferred_lft forever > > > inet 169.254.202.1/24 scope global vlan1664 > > > valid_lft forever preferred_lft forever > > > inet 192.168.206.1/24 scope global secondary vlan1664 > > > valid_lft forever preferred_lft forever > > > inet 192.168.204.1/24 scope global secondary vlan1664 > > > valid_lft forever preferred_lft forever > > > inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1664 > > > valid_lft forever preferred_lft forever > > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > > valid_lft forever preferred_lft forever > > > 9: vlan1672 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 > > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > > valid_lft forever preferred_lft forever > > > 10: vlan1648 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 > > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > > valid_lft forever preferred_lft forever > > > 11: vlan1680 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 > > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > > valid_lft forever preferred_lft forever > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > From ascaccia96 at hotmail.com Thu May 12 10:28:25 2022 From: ascaccia96 at hotmail.com (Amato Scaccia) Date: Thu, 12 May 2022 10:28:25 +0000 Subject: [Starlingx-discuss] deploy-prep failed when adding subcloud Message-ID: Hi all, We?re trying to add a subcloud through dcmanager(ref: https://docs.starlingx.io/deploy_install_guides/r6_release/distributed_cloud/index-install-r6-distcloud-46f4880ec78b.html). On the subcloud, after booting stx-6.0, we configure the OAM network with config_management. At this point the central cloud is able to reach the subcloud on the configured IP. 
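For reference, our bootstrap_values.yml follows the pattern from the install guide linked above; the addresses below are placeholders rather than our real values:

cat > bootstrap_values.yml <<'EOF'
system_mode: simplex
name: "edge-cs-02"
description: "edge subcloud"
location: "lab"
management_subnet: 192.168.101.0/24
management_start_address: 192.168.101.2
management_end_address: 192.168.101.50
management_gateway_address: 192.168.101.1
external_oam_subnet: 10.10.10.0/24
external_oam_gateway_address: 10.10.10.1
external_oam_floating_address: 10.10.10.12
systemcontroller_gateway_address: 192.168.204.101
EOF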
Then we run this command on the central dcmanager subcloud add --bootstrap-address subcloud-FIP bootstrap-values bootstrap_values.yml. If we check the log in /var/log/dcmanager/dcmanager.log we retrieve the following error(from the GUI we observe pre-deploy-failed): 2022-05-12 10:23:14.155 206351 ERROR dcmanager.manager.subcloud_manager [req-cce9960a-0b5c-4467-b298-a2c061a375f1 4c7165924da94cc2ad2b6d9f0b3cdb16 - - default default] Failed to create subcloud edge-cs-02: Reason: Internal Server Error HTTP response headers: HTTPHeaderDict({'Content-Length': '499', 'X-Kubernetes-Pf-Prioritylevel-Uid': 'afae7f88-86e0-4f51-80e7-dc3b645b0dad', 'Cache-Control': 'no-cache, private', 'Date': 'Thu, 12 May 2022 0s-Pf-Flowschema-Uid': '781db709-8502-491a-8b4d-d011ffe56ec5', 'Content-Type': 'application/json'}) HTTP response body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"Internal error occurred: failed calling webhook \"webhook.cert-manager.io\": Post \"https://cm-cert-manager443/mutate?timeout=30s\": context deadline exceeded","reason":"InternalError","details":{"causes":[{"message":"failed calling webhook \"webhook.cert-manager.io\": Post \"https://cm-cert-manager-webhook.certmeout=30s\": context deadline exceeded"}]},"code":500} Before this procedure we reconfigure the OAM interface. It can be something related to expired certificates? Thanks for your help Sent from Mail for Windows -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Thu May 12 11:49:32 2022 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 12 May 2022 11:49:32 +0000 Subject: [Starlingx-discuss] STX networking In-Reply-To: References: Message-ID: I doubt if this is causing issues but shouldn't this: pxeboot_subnet: 10.16.48.1/24 be pxeboot_subnet: 10.16.48.0/24 ? ( and similar issue for oam_subnet ) ACTUALLY ... you have the pxeboot_subnet and the oam_subnet being the same ? external_oam_subnet: 10.16.48.1/24 pxeboot_subnet: 10.16.48.1/24 That is wrong ... they have to be separate IP subnets. You should also remove: external_oam_node_2_address: 10.16.48.116 external_oam_node_3_address: 10.16.48.117 external_oam_node_4_address: 10.16.48.118 Other comment on your config commands ... system host-if-modify controller-0 $MGMT_IF -c platform system interface-network-assign controller-0 $MGMT_IF mgmt system interface-network-assign controller-0 $MGMT_IF cluster-host // you should remove this, as you assign 'cluster-host' network below to a separate interface ... system host-if-modify controller-0 $CLUSTER_IF -c platform system interface-network-assign controller-0 $CLUSTER_IF cluster-host ... I would redo install using a unique IP Subnet for pxeboot and oam networks, Greg. -----Original Message----- From: Outback Dingo Sent: Thursday, May 12, 2022 4:30 AM To: Waines, Greg Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX networking [Please note: This e-mail is from an EXTERNAL e-mail address] ok, so getting closer its about the ip space and preset variables now and ... 
after installing software on controller-0 last run i used note i set pxeboot_ vars to put the right network and interfaces on bond0 [sysadmin at controller-0 ~(keystone_admin)]$ cat localhost.yml system_mode: duplex dns_servers: - 8.8.8.8 - 8.8.4.4 external_oam_subnet: 10.16.48.1/24 external_oam_gateway_address: 10.16.48.1 external_oam_floating_address: 10.16.48.110 external_oam_node_0_address: 10.16.48.114 external_oam_node_1_address: 10.16.48.115 external_oam_node_2_address: 10.16.48.116 external_oam_node_3_address: 10.16.48.117 external_oam_node_4_address: 10.16.48.118 admin_username: admin admin_password: somepass ansible_become_pass: somepass # Add these lines to configure Docker to use a proxy server # # docker_http_proxy: http://my.proxy.com:1080 # # docker_https_proxy: https://my.proxy.com:1443 # # docker_no_proxy: # # - 1.2.3.4 # kubernetes_version: 1.21.3 pxeboot_subnet: 10.16.48.1/24 pxeboot_start_address: 10.16.48.100 pxeboot_end_address: 10.16.48.151 then ran ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml and ... it was successful... onto configuring source /etc/platform/openrc system host-if-add -c platform -a 802.3ad -x layer2 controller-0 bond0 ae enp33s0 enp49s0 system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0 system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0 system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0 OAM_IF=bond0.1648 PXE_IF=bond0 MGMT_IF=bond0.1680 CLUSTER_IF=bond0.1664 ping 8.8.8.8 system host-if-modify controller-0 lo -c none IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}') for UUID in $IFNET_UUIDS; do system interface-network-remove ${UUID}; done system host-if-modify controller-0 $OAM_IF -c platform system interface-network-assign controller-0 $OAM_IF oam system host-if-modify controller-0 $MGMT_IF -c platform system interface-network-assign controller-0 $MGMT_IF mgmt system interface-network-assign controller-0 $MGMT_IF cluster-host system host-if-modify controller-0 $PXE_IF -c platform system interface-network-assign controller-0 $PXE_IF pxeboot system host-if-modify controller-0 $CLUSTER_IF -c platform system interface-network-assign controller-0 $CLUSTER_IF cluster-host system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org system host-label-assign controller-0 openstack-control-plane=enabled system host-label-assign controller-0 ceph-mon-placement=enabled system host-label-assign controller-0 ceph-mgr-placement=enabled system storage-backend-add ceph-rook --confirmed system host-unlock controller-0 where controller-0 does reboot and do its boot sequence, then comes up on the correct OAM_IF IP, and DOES have also the correct floating address assigned to bond0.1648 i can actually login, i them waited some minutes and source /etc/platform/openrc [sysadmin at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | +----+--------------+-------------+----------------+-------------+--------------+ functionally working... 
with the following network config [sysadmin at controller-0 ~(keystone_admin)]$ ip a 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000 link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 inet 127.0.0.1/8 scope host lo valid_lft forever preferred_lft forever inet6 ::1/128 scope host valid_lft forever preferred_lft forever 2: eno1: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:60:97:52 brd ff:ff:ff:ff:ff:ff 3: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000 link/ether ac:1f:6b:60:97:53 brd ff:ff:ff:ff:ff:ff 4: enp33s0: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff 5: enp49s0: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff permaddr b8:59:9f:12:2c:fc 6: docker0: mtu 1500 qdisc noqueue state DOWN group default link/ether 02:42:d5:7a:2c:4c brd ff:ff:ff:ff:ff:ff inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 valid_lft forever preferred_lft forever 7: bond0: mtu 1500 qdisc htb state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff inet 10.16.48.101/24 brd 10.16.48.255 scope global bond0 valid_lft forever preferred_lft forever inet 10.16.48.100/24 scope global secondary bond0 valid_lft forever preferred_lft forever inet6 fe80::ba59:9fff:fe12:3278/64 scope link valid_lft forever preferred_lft forever 8: vlan1648 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff inet 10.16.48.114/24 brd 10.16.48.255 scope global vlan1648 valid_lft forever preferred_lft forever inet 10.16.48.110/24 scope global secondary vlan1648 valid_lft forever preferred_lft forever inet6 fe80::ba59:9fff:fe12:3278/64 scope link valid_lft forever preferred_lft forever 9: vlan1664 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff inet6 fe80::ba59:9fff:fe12:3278/64 scope link valid_lft forever preferred_lft forever 10: vlan1680 at bond0: mtu 1500 qdisc htb state UP group default qlen 1000 link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1680 valid_lft forever preferred_lft forever inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1680:12 valid_lft forever preferred_lft forever inet 192.168.206.1/24 scope global secondary vlan1680 valid_lft forever preferred_lft forever inet 192.168.204.1/24 scope global secondary vlan1680 valid_lft forever preferred_lft forever inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1680 valid_lft forever preferred_lft forever inet6 fe80::ba59:9fff:fe12:3278/64 scope link valid_lft forever preferred_lft forever ------------------snip---------------------- i pxe booted 2 more nodes, they did pxe fine from controller bond0 with 10.16.48.x as specified in localhost.yml they did show in system host-list... where i set their personalities. 
[sysadmin at controller-0 ~(keystone_admin)]$ system host-list +----+--------------+-------------+----------------+-------------+--------------+ | id | hostname | personality | administrative | operational | availability | +----+--------------+-------------+----------------+-------------+--------------+ | 1 | controller-0 | controller | unlocked | enabled | available | | 2 | controller-1 | controller | locked | disabled | offline | | 3 | worker-0 | worker | locked | disabled | offline | +----+--------------+-------------+----------------+-------------+--------------+ they both then proceeded to boot.... but now appear hung...... 1+ hours and i think the problem just might be... 192.168.204 and 192.168.206 addresses on bond0.1680 so.... they need to be 10.16.64.x which is what all our pods talk across for bond0.1664 or 10.16.80.x for bond0.1680 so now they question is which is what in the variables:, i believe i have the proper pxeboot_ and oam with 10.16.48.x on bond0 and bond0.1648 though you show oam and managment_ as different but i dont think i completely grasp whats mgmt_, cluster_host, cluster_pod, cluster_service and management multicast, so whats what ? in your ip space compare to what mine should be. pxeboot_subnet pxeboot_start_address pxeboot_end_address management_subnet management_start_address management_end_address cluster_host_subnet cluster_host_start_address cluster_host_end_address cluster_pod_subnet cluster_pod_start_address cluster_pod_end_address cluster_service_subnet cluster_service_start_address cluster_service_end_address management_multicast_subnet management_multicast_start_address management_multicast_end_address On Thu, May 12, 2022 at 7:44 AM Outback Dingo wrote: > > working through the configuration now based on findings, and yes i > only have to do the ip link commands once prior to bootstrap > > i did get bond0 and vlans on a previous try to be configured after > system host-unlock controller-0 they were just in the wrong order, so > rebuilding the primary node, if it works and put the interfaces and > networks on proper interfaces and i can bootstrap controller-1 and get > past unlocking that also... i think it will be a win! > > On Thu, May 12, 2022 at 7:32 AM Waines, Greg wrote: > > > > Were you successful ? > > > > ( One question ... you are only having to do the 'ip link ...' > > commands BEFORE bootstrap in order to have IP Connectivity to the > > outside world for bootstrapping .. correct ? ) > > > > Greg. 
> > ------------------snip----------------------
> > > > > > Network Properties > > > pxeboot_subnet > > > pxeboot_start_address > > > pxeboot_end_address > > > management_subnet > > > management_start_address > > > management_end_address > > > cluster_host_subnet > > > cluster_host_start_address > > > cluster_host_end_address > > > cluster_pod_subnet > > > cluster_pod_start_addres > > > cluster_pod_end_address > > > cluster_service_subnet > > > cluster_service_start_address > > > cluster_service_end_address > > > management_multicast_subnet > > > management_multicast_start_address > > > management_multicast_end_address > > > > > > 7: bond0: mtu 1500 qdisc htb state UP group default qlen 1000 > > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > > inet 10.16.48.112/24 brd 10.16.48.255 scope global bond0 > > > valid_lft forever preferred_lft forever > > > inet 10.16.48.114/24 scope global secondary bond0 > > > valid_lft forever preferred_lft forever > > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > > valid_lft forever preferred_lft forever > > > 8: vlan1664 at bond0: mtu 1500 qdisc htb state UP group default qlen 1000 > > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > > inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1664 > > > valid_lft forever preferred_lft forever > > > inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1664:12 > > > valid_lft forever preferred_lft forever > > > inet 169.254.202.1/24 scope global vlan1664 > > > valid_lft forever preferred_lft forever > > > inet 192.168.206.1/24 scope global secondary vlan1664 > > > valid_lft forever preferred_lft forever > > > inet 192.168.204.1/24 scope global secondary vlan1664 > > > valid_lft forever preferred_lft forever > > > inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1664 > > > valid_lft forever preferred_lft forever > > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > > valid_lft forever preferred_lft forever > > > 9: vlan1672 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 > > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > > valid_lft forever preferred_lft forever > > > 10: vlan1648 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 > > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > > valid_lft forever preferred_lft forever > > > 11: vlan1680 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000 > > > link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff > > > inet6 fe80::ba59:9fff:fe12:34f0/64 scope link > > > valid_lft forever preferred_lft forever > > > > > > _______________________________________________ > > > Starlingx-discuss mailing list > > > Starlingx-discuss at lists.starlingx.io > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discu > > > ss > > > From bogdan-iulian.andrei at intel.com Thu May 12 11:57:01 2022 From: bogdan-iulian.andrei at intel.com (Andrei, Bogdan-Iulian) Date: Thu, 12 May 2022 11:57:01 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220512T035413Z Message-ID: Sanity Test from 2022-May-12 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220512T035413Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220512T035413Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by these LP bugs: https://bugs.launchpad.net/starlingx/+bug/1970645 
- Stx-openstack apply timeout because some pods are not ready, https://bugs.launchpad.net/starlingx/+bug/1971981 - nginx-ingress-controller apply-failed during setup of the StarlingX Kind regards, Validation team [Logo Description automatically generated] Andrei Bogdan-Iulian Software Engineer PMCE TEAM Personal Mobile: +40 754905864 bogdan-iulian.andrei at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From Ghada.Khalil at windriver.com Thu May 12 16:32:22 2022 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 12 May 2022 16:32:22 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220512T035413Z In-Reply-To: References: Message-ID: Regarding https://bugs.launchpad.net/starlingx/+bug/1971981, we believe the main issue is the lack of access in the sanity lab to the public registries; it's an air-gapped system. Even though the images are in the private registry, there is a bug in the nginx setup that's not overriding the registry properly and is still attempting to pull images from the public registries. For those with systems that have public access, you should not be facing this issue. Investigation is continuing to determine a fix for this issue. From: Andrei, Bogdan-Iulian Sent: Thursday, May 12, 2022 7:57 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220512T035413Z [Please note: This e-mail is from an EXTERNAL e-mail address] Sanity Test from 2022-May-12 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220512T035413Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220512T035413Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by these LP bugs: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready, https://bugs.launchpad.net/starlingx/+bug/1971981 - nginx-ingress-controller apply-failed during setup of the StarlingX Kind regards, Validation team [Logo Description automatically generated] Andrei Bogdan-Iulian Software Engineer PMCE TEAM Personal Mobile: +40 754905864 bogdan-iulian.andrei at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From Ghada.Khalil at windriver.com Fri May 13 00:57:43 2022 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 13 May 2022 00:57:43 +0000 Subject: [Starlingx-discuss] Minutes: Community Call (May 11 2022) Message-ID: Etherpad: https://etherpad.opendev.org/p/stx-status Minutes from the community call May 11 2022 Standing Topics - Build - CentOS builds - 6 out of 7 builds were successful. - The one failure was related to a known intermittent build issue. LP: https://bugs.launchpad.net/starlingx/+bug/1968583 not seen in the last two weeks - Debian builds - Had the first successful build on Saturday. Working on getting regular builds going. 
-----Original Message-----
From: Andrei, Bogdan-Iulian
Sent: Thursday, May 12, 2022 7:57 AM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220512T035413Z
[------------------snip: quoted original------------------]

From Ghada.Khalil at windriver.com  Fri May 13 00:57:43 2022
From: Ghada.Khalil at windriver.com (Khalil, Ghada)
Date: Fri, 13 May 2022 00:57:43 +0000
Subject: [Starlingx-discuss] Minutes: Community Call (May 11 2022)
Message-ID:

Etherpad: https://etherpad.opendev.org/p/stx-status

Minutes from the community call May 11 2022

Standing Topics
- Build
  - CentOS builds
    - 6 out of 7 builds were successful.
    - The one failure was related to a known intermittent build issue. LP: https://bugs.launchpad.net/starlingx/+bug/1968583 (not seen in the last two weeks)
  - Debian builds
    - Had the first successful build on Saturday. Working on getting regular builds going.
    - Hitting some resource issues, specifically related to disk space.
      - May need to clean up older releases. Agreed to remove build artifacts for stx.2.0, stx.3.0, stx.4.0, ussuri, f/centos
      - May need to only build Debian every other day instead of daily. TBD
- Sanity
  - Sanity is Red due to two LPs:
    - https://bugs.launchpad.net/starlingx/+bug/1970645 - Issue under investigation, but the dev prime is not able to reproduce
    - https://bugs.launchpad.net/starlingx/+bug/1971981 - Issue seems related to the nginx/cert-manager upversion on May 3.
      - The new images are populated in the sanity private registry, but for some reason ansible is still trying to reference the old images.
      - nginx & cert-manager deploy correctly in multiple dev envs in WR.
- Gerrit Reviews in Need of Attention
  - Nothing raised in this meeting
  - Reference Links:
    - Active Branch (open): https://review.opendev.org/q/projects:starlingx+is:open+branch:+master
    - Active Branch (merged): https://review.opendev.org/q/projects:starlingx+is:merged+branch:master

Topics for This Week
- Sanity
  - WR team is still trying to get the stx sanity going as Intel will stop running sanity on May 13
  - WR team is negotiating with Intel extending their testing for a few days to ensure there is no gap in coverage
- TSC Elections
  - Nominations for 2 TSC positions are open now until May 16
  - See https://etherpad.opendev.org/p/stx-cores for more details

ARs from Previous Meetings
- None

Open Requests for Help
- A number of questions from OutbackDingo at gmail.com
- Intro
  - Scott joined the community call
  - Located in Asia
  - Hosting a virtualized cloud in the Netherlands; was able to deploy starlingx successfully one year ago in the Netherlands
  - Now trying to deploy another cloud instance, but having trouble deploying stx.6.0
- Key Issues
  - Setting up persistent storage on worker nodes
    - Want to deploy 6 large nodes w/ TBs of storage on each. Don't want to set up dedicated storage nodes
    - Suggestion is to look at using rook (see the command sketch at the end of these minutes)
  - Limit of only 2 controllers
    - StarlingX doesn't currently support more than 2 controllers
    - 6 nodes can be deployed as: AIO-DX + 4 worker nodes
  - pxeboot env
    - All nodes (other than controller-0) pxeboot from controller-0
    - A pxeboot server can be set up for controller-0 if desired. Otherwise, USB boot is also a supported option.
- Question related to stx.6.0 Build:
  - http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/012968.html
  - Instructions used are outdated.
  - Scott provided a link to the most recent instructions: http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/012982.html
- Question related to pxeboot:
  - http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/012969.html
  - http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/012970.html
  - Looking for documentation reference. Greg will provide a reference.
- Question about network setup:
  - http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/012985.html
  - Greg will have a look at this email and will respond
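Command sketch for the rook suggestion above (these are the same commands that appear later in this thread; a sketch only, to be adapted per deployment):

    system host-label-assign controller-0 ceph-mon-placement=enabled
    system host-label-assign controller-0 ceph-mgr-placement=enabled
    system storage-backend-add ceph-rook --confirmed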
From outbackdingo at gmail.com  Fri May 13 02:03:14 2022
From: outbackdingo at gmail.com (Outback Dingo)
Date: Fri, 13 May 2022 09:03:14 +0700
Subject: [Starlingx-discuss] STX networking
In-Reply-To:
References:
Message-ID:

On Thu, May 12, 2022 at 6:49 PM Waines, Greg wrote:
>
> I doubt if this is causing issues but shouldn't this:
> pxeboot_subnet: 10.16.48.1/24
> be
> pxeboot_subnet: 10.16.48.0/24
> ?

ok, changed to pxeboot_subnet: 10.16.48.0/24

> ( and similar issue for oam_subnet )
>
> ACTUALLY ... you have the pxeboot_subnet and the oam_subnet being the same ?
> external_oam_subnet: 10.16.48.1/24
> pxeboot_subnet: 10.16.48.1/24
> That is wrong ... they have to be separate IP subnets.

okay, here's the curiosity: why do they require separate subnets if they're on separate interfaces (bond0 / bond0.1648), even though it's really all the same, as everything goes over the bond at some point? we really use 10.16.48.x for our management backplane, and it is the only network we allow pxe on, as shown in previous netplans:

# /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following:
# network: {config: disabled}
network:
  bonds:
    bond0:
      interfaces:
        - enp33s0
        - enp101s0
      macaddress: 98:03:9b:54:9c:f4
      mtu: 9000
      parameters:
        down-delay: 0
        lacp-rate: fast
        mii-monitor-interval: 100
        mode: 802.3ad
        transmit-hash-policy: layer3+4
        up-delay: 0
  bridges:
    br-mgmt:
      addresses:
        - 10.16.48.10/24
      gateway4: 10.16.48.1
      interfaces:
        - bond0
      macaddress: 98:03:9b:54:9c:f4
      mtu: 9000
      nameservers:
        addresses:
          - 10.16.48.10
          - 10.16.48.11
          - 1.1.1.1
        search:
          - maas
      parameters:
        forward-delay: 15
        stp: false
    br-storage:
      addresses:
        - 10.16.72.21/24
      interfaces:
        - bond0.1672
      macaddress: 98:03:9b:54:9c:f4
      mtu: 9000
      nameservers:
        addresses:
          - 10.16.48.10
          - 1.1.1.1
        search:
          - maas
      parameters:
        forward-delay: 15
        stp: false
    br-vxlan:
      addresses:
        - 10.16.80.21/24
      interfaces:
        - bond0.1680
      macaddress: 98:03:9b:54:9c:f4
      mtu: 9000
      nameservers:
        addresses:
          - 10.16.48.10
          - 1.1.1.1
        search:
          - maas
      parameters:
        forward-delay: 15
        stp: false
  ethernets:
    eno1np0:
      dhcp4: true
      match:
        macaddress: 00:25:90:b9:71:8c
      mtu: 9000
      set-name: enp17s0f0
    eno2np1:
      dhcp4: true
      match:
        macaddress: 00:25:90:b9:71:8d
      mtu: 9000
      set-name: enp17s0f1
    enp33s0:
      match:
        macaddress: 98:03:9b:54:9c:f4
      mtu: 9000
      set-name: enp33s0
    enp101s0:
      match:
        macaddress: 98:03:9b:54:9c:e4
      mtu: 9000
      set-name: enp101s0
  vlans:
    bond0.1672:
      id: 1672
      link: bond0
      mtu: 9000
    bond0.1680:
      id: 1680
      link: bond0
      mtu: 9000
  version: 2

> You should also remove:
> external_oam_node_2_address: 10.16.48.116
> external_oam_node_3_address: 10.16.48.117
> external_oam_node_4_address: 10.16.48.118

done....

> Other comment on your config commands
> ...
> system host-if-modify controller-0 $MGMT_IF -c platform
> system interface-network-assign controller-0 $MGMT_IF mgmt
> system interface-network-assign controller-0 $MGMT_IF cluster-host   // you should remove this, as you assign the 'cluster-host' network below to a separate interface
> ...
> system host-if-modify controller-0 $CLUSTER_IF -c platform
> system interface-network-assign controller-0 $CLUSTER_IF cluster-host
> ...

also fixed ...
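i.e., if i applied that correctly, the assignment block now reads (one platform network per interface, with $MGMT_IF and $CLUSTER_IF as set below):

    system host-if-modify controller-0 $MGMT_IF -c platform
    system interface-network-assign controller-0 $MGMT_IF mgmt

    system host-if-modify controller-0 $CLUSTER_IF -c platform
    system interface-network-assign controller-0 $CLUSTER_IF cluster-host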
> I would redo install using a unique IP Subnet for pxeboot and oam networks,
> Greg.

redeploying now. can i ask, what's the logical difference between

    cluster_host_subnet: 10.16.96.0/24
and
    cluster_pod_subnet: 10.16.64.0/16

all our pods should exist / be accessible from 10.16.64; do they really require a completely separate interface on the hosts?? feels kinda like vxlan-type magic

running new install again...

> -----Original Message-----
> From: Outback Dingo
> Sent: Thursday, May 12, 2022 4:30 AM
> To: Waines, Greg
> Cc: starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] STX networking
>
> ok, so getting closer, its about the ip space and preset variables now
>
> and ... after installing software on controller-0, last run i used the following.
> note i set the pxeboot_ vars to put the right network and interfaces on bond0
>
> [sysadmin at controller-0 ~(keystone_admin)]$ cat localhost.yml
> system_mode: duplex
>
> dns_servers:
>   - 8.8.8.8
>   - 8.8.4.4
>
> external_oam_subnet: 10.16.48.1/24
> external_oam_gateway_address: 10.16.48.1
> external_oam_floating_address: 10.16.48.110
> external_oam_node_0_address: 10.16.48.114
> external_oam_node_1_address: 10.16.48.115
> external_oam_node_2_address: 10.16.48.116
> external_oam_node_3_address: 10.16.48.117
> external_oam_node_4_address: 10.16.48.118
>
> admin_username: admin
> admin_password: somepass
> ansible_become_pass: somepass
>
> # Add these lines to configure Docker to use a proxy server
> # docker_http_proxy: http://my.proxy.com:1080
> # docker_https_proxy: https://my.proxy.com:1443
> # docker_no_proxy:
> #   - 1.2.3.4
>
> kubernetes_version: 1.21.3
> pxeboot_subnet: 10.16.48.1/24
> pxeboot_start_address: 10.16.48.100
> pxeboot_end_address: 10.16.48.151
>
> then ran ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
> and ... it was successful... onto configuring
>
> source /etc/platform/openrc
> system host-if-add -c platform -a 802.3ad -x layer2 controller-0 bond0 ae enp33s0 enp49s0
> system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
> system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
> system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
> OAM_IF=bond0.1648
> PXE_IF=bond0
> MGMT_IF=bond0.1680
> CLUSTER_IF=bond0.1664
> ping 8.8.8.8
>
> system host-if-modify controller-0 lo -c none
> IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
> for UUID in $IFNET_UUIDS; do system interface-network-remove ${UUID}; done
>
> system host-if-modify controller-0 $OAM_IF -c platform
> system interface-network-assign controller-0 $OAM_IF oam
>
> system host-if-modify controller-0 $MGMT_IF -c platform
> system interface-network-assign controller-0 $MGMT_IF mgmt
> system interface-network-assign controller-0 $MGMT_IF cluster-host
>
> system host-if-modify controller-0 $PXE_IF -c platform
> system interface-network-assign controller-0 $PXE_IF pxeboot
>
> system host-if-modify controller-0 $CLUSTER_IF -c platform
> system interface-network-assign controller-0 $CLUSTER_IF cluster-host
>
> system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
> system host-label-assign controller-0 openstack-control-plane=enabled
> system host-label-assign controller-0 ceph-mon-placement=enabled
> system host-label-assign controller-0 ceph-mgr-placement=enabled
> system storage-backend-add ceph-rook --confirmed
> system host-unlock controller-0
>
> where controller-0 does reboot and does its boot sequence, then comes up on the correct OAM_IF IP, and DOES also have the correct floating address assigned to bond0.1648
>
> i can actually login, i then waited some minutes and
>
> source /etc/platform/openrc
>
> [sysadmin at controller-0 ~(keystone_admin)]$ system host-list
> +----+--------------+-------------+----------------+-------------+--------------+
> | id | hostname     | personality | administrative | operational | availability |
> +----+--------------+-------------+----------------+-------------+--------------+
> | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
> +----+--------------+-------------+----------------+-------------+--------------+
>
> functionally working...
> with the following network config
>
> [sysadmin at controller-0 ~(keystone_admin)]$ ip a
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: eno1: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether ac:1f:6b:60:97:52 brd ff:ff:ff:ff:ff:ff
> 3: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether ac:1f:6b:60:97:53 brd ff:ff:ff:ff:ff:ff
> 4: enp33s0: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
> 5: enp49s0: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff permaddr b8:59:9f:12:2c:fc
> 6: docker0: mtu 1500 qdisc noqueue state DOWN group default
>     link/ether 02:42:d5:7a:2c:4c brd ff:ff:ff:ff:ff:ff
>     inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
>        valid_lft forever preferred_lft forever
> 7: bond0: mtu 1500 qdisc htb state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
>     inet 10.16.48.101/24 brd 10.16.48.255 scope global bond0
>        valid_lft forever preferred_lft forever
>     inet 10.16.48.100/24 scope global secondary bond0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::ba59:9fff:fe12:3278/64 scope link
>        valid_lft forever preferred_lft forever
> 8: vlan1648@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
>     inet 10.16.48.114/24 brd 10.16.48.255 scope global vlan1648
>        valid_lft forever preferred_lft forever
>     inet 10.16.48.110/24 scope global secondary vlan1648
>        valid_lft forever preferred_lft forever
>     inet6 fe80::ba59:9fff:fe12:3278/64 scope link
>        valid_lft forever preferred_lft forever
> 9: vlan1664@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::ba59:9fff:fe12:3278/64 scope link
>        valid_lft forever preferred_lft forever
> 10: vlan1680@bond0: mtu 1500 qdisc htb state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1680
>        valid_lft forever preferred_lft forever
>     inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1680:12
>        valid_lft forever preferred_lft forever
>     inet 192.168.206.1/24 scope global secondary vlan1680
>        valid_lft forever preferred_lft forever
>     inet 192.168.204.1/24 scope global secondary vlan1680
>        valid_lft forever preferred_lft forever
>     inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1680
>        valid_lft forever preferred_lft forever
>     inet6 fe80::ba59:9fff:fe12:3278/64 scope link
>        valid_lft forever preferred_lft forever
> ------------------snip----------------------
>
> i pxe booted 2 more nodes, they did pxe fine from controller bond0 with 10.16.48.x as specified in localhost.yml
>
> they did show in system host-list... where i set their personalities.
> [sysadmin at controller-0 ~(keystone_admin)]$ system host-list
> +----+--------------+-------------+----------------+-------------+--------------+
> | id | hostname     | personality | administrative | operational | availability |
> +----+--------------+-------------+----------------+-------------+--------------+
> | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
> | 2  | controller-1 | controller  | locked         | disabled    | offline      |
> | 3  | worker-0     | worker      | locked         | disabled    | offline      |
> +----+--------------+-------------+----------------+-------------+--------------+
>
> they both then proceeded to boot.... but now appear hung...... 1+ hours
>
> and i think the problem just might be... the 192.168.204 and 192.168.206 addresses on bond0.1680. so.... they need to be 10.16.64.x, which is what all our pods talk across for bond0.1664, or 10.16.80.x for bond0.1680
>
> so now the question is which is what in the variables: i believe i have the proper pxeboot_ and oam with 10.16.48.x on bond0 and bond0.1648, though you show oam and management_ as different. but i dont think i completely grasp whats mgmt_, cluster_host, cluster_pod, cluster_service and management multicast, so whats what in your ip space compared to what mine should be?
>
> pxeboot_subnet
> pxeboot_start_address
> pxeboot_end_address
>
> management_subnet
> management_start_address
> management_end_address
> cluster_host_subnet
> cluster_host_start_address
> cluster_host_end_address
> cluster_pod_subnet
> cluster_pod_start_address
> cluster_pod_end_address
> cluster_service_subnet
> cluster_service_start_address
> cluster_service_end_address
> management_multicast_subnet
> management_multicast_start_address
> management_multicast_end_address
>
> > [------------------snip: older quoted history------------------]
From Greg.Waines at windriver.com  Fri May 13 11:22:51 2022
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Fri, 13 May 2022 11:22:51 +0000
Subject: [Starlingx-discuss] STX networking
In-Reply-To:
References:
Message-ID:

> okay, here's the curiosity: why do they require separate subnets if they're on separate interfaces (bond0 / bond0.1648),
> even though it's really all the same, as everything goes over the bond at some point? we really use 10.16.48.x for our
> management backplane, and it is the only network we allow pxe on,
> as shown in previous netplans

pxeboot and mgmt are on two separate layer 2 networks:
pxeboot on the untagged vlan of bond0, and
mgmt on vlan=1648 of bond0.

If those two networks are connected to the same routing instance (which they are in the case of StarlingX Platform Networking), then they must have unique IP Subnets, based on basic rules for IP Routing.

So you say "10.16.48.x" is the only network that you support pxebooting on.

Questions:
- Does your management network for your StarlingX hosts need to be vlan tagged ?
  ( if yes, not a problem, just curious why ? )
- If YES
  * use pxeboot (bond0) = 10.16.48.0/24
    mgmt (bond0.1648) = 10.16.49.0/24  ( I just picked 49 as an arbitrary other network )
- if NO
  * DON'T use pxeboot
  * use mgmt (bond0) = 10.16.48.0/24
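( In bootstrap-override terms, the YES option would look roughly like the following sketch in localhost.yml; 10.16.49.0/24 is just the arbitrary example subnet above, and the start/end ranges are placeholders:

    pxeboot_subnet: 10.16.48.0/24
    pxeboot_start_address: 10.16.48.100
    pxeboot_end_address: 10.16.48.151
    management_subnet: 10.16.49.0/24
    management_start_address: 10.16.49.100
    management_end_address: 10.16.49.151

and then after bootstrap, pxeboot is assigned to the untagged bond and mgmt to the vlan interface:

    system interface-network-assign controller-0 bond0 pxeboot
    system interface-network-assign controller-0 bond0.1648 mgmt )

On your earlier cluster_host vs cluster_pod question: my understanding is that cluster_host_subnet is a host-level network (the hosts' cluster interfaces get addresses from it), while cluster_pod_subnet is the virtual network that Kubernetes assigns pod IPs from; pod traffic is routed over the cluster-host interfaces, so the pod subnet does not map to a physical interface of its own.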
Greg.

-----Original Message-----
From: Outback Dingo
Sent: Thursday, May 12, 2022 10:03 PM
To: Waines, Greg
Cc: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] STX networking
[------------------snip: quoted history------------------]
From outbackdingo at gmail.com  Fri May 13 11:46:19 2022
From: outbackdingo at gmail.com (Outback Dingo)
Date: Fri, 13 May 2022 18:46:19 +0700
Subject: Re: [Starlingx-discuss] STX networking
In-Reply-To:
References:
Message-ID:

On Fri, May 13, 2022, 6:22 PM Waines, Greg wrote:

> > okay, here's the curiosity: why do they require separate subnets if they're on separate interfaces (bond0 / bond0.1648),
> > even though it's really all the same, as everything goes over the bond at some point?
> > we really use 10.16.48.x for our management backplane, and it is the only network we allow pxe on,
> > as shown in previous netplans
>
> pxeboot and mgmt are on two separate layer 2 networks:
> pxeboot on the untagged vlan of bond0, and
> mgmt on vlan=1648 of bond0.
>
> If those two networks are connected to the same routing instance (which they are in the case of StarlingX Platform Networking), then they must have unique IP Subnets, based on basic rules for IP Routing.
>
> So you say "10.16.48.x" is the only network that you support pxebooting on.
>
> Questions:
> - Does your management network for your StarlingX hosts need to be vlan tagged ?
>   ( if yes, not a problem, just curious why ? )
> - If YES
>   * use pxeboot (bond0) = 10.16.48.0/24
>     mgmt (bond0.1648) = 10.16.49.0/24  ( I just picked 49 as an arbitrary other network )
> - if NO
>   * DON'T use pxeboot
>   * use mgmt (bond0) = 10.16.48.0/24
>
> Greg

So wait.... Are you saying that I can put both pxeboot and mgmt on untagged 10.16.48.x, with no need to separate them?

> [------------------snip: quoted history------------------]
> as shown in previous netplans > # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following: > # network: {config: disabled} > network: > bonds: > bond0: > interfaces: > - enp33s0 > - enp101s0 > macaddress: 98:03:9b:54:9c:f4 > mtu: 9000 > parameters: > down-delay: 0 > lacp-rate: fast > mii-monitor-interval: 100 > mode: 802.3ad > transmit-hash-policy: layer3+4 > up-delay: 0 > bridges: > br-mgmt: > addresses: > - 10.16.48.10/24 > gateway4: 10.16.48.1 > interfaces: > - bond0 > macaddress: 98:03:9b:54:9c:f4 > mtu: 9000 > nameservers: > addresses: > - 10.16.48.10 > - 10.16.48.11 > - 1.1.1.1 > search: > - maas > parameters: > forward-delay: 15 > stp: false > br-storage: > addresses: > - 10.16.72.21/24 > interfaces: > - bond0.1672 > macaddress: 98:03:9b:54:9c:f4 > mtu: 9000 > nameservers: > addresses: > - 10.16.48.10 > - 1.1.1.1 > search: > - maas > parameters: > forward-delay: 15 > stp: false > br-vxlan: > addresses: > - 10.16.80.21/24 > interfaces: > - bond0.1680 > macaddress: 98:03:9b:54:9c:f4 > mtu: 9000 > nameservers: > addresses: > - 10.16.48.10 > - 1.1.1.1 > search: > - maas > parameters: > forward-delay: 15 > stp: false > ethernets: > eno1np0: > dhcp4: true > match: > macaddress: 00:25:90:b9:71:8c > mtu: 9000 > set-name: enp17s0f0 > eno2np1: > dhcp4: true > match: > macaddress: 00:25:90:b9:71:8d > mtu: 9000 > set-name: enp17s0f1 > enp33s0: > match: > macaddress: 98:03:9b:54:9c:f4 > mtu: 9000 > set-name: enp33s0 > enp101s0: > match: > macaddress: 98:03:9b:54:9c:e4 > mtu: 9000 > set-name: enp101s0 > vlans: > bond0.1672: > id: 1672 > link: bond0 > mtu: 9000 > bond0.1680: > id: 1680 > link: bond0 > mtu: 9000 > version: 2 > > > > > You should also remove: > > external_oam_node_2_address: 10.16.48.116 > > external_oam_node_3_address: 10.16.48.117 > > external_oam_node_4_address: 10.16.48.118 > > > done.... > > > > > Other comment on your config commands > > ... > > system host-if-modify controller-0 $MGMT_IF -c platform system > > interface-network-assign controller-0 $MGMT_IF mgmt > > system interface-network-assign controller-0 $MGMT_IF cluster-host // > you should remove this, as you assign 'cluster-host' network below to a > separate interface > > ... > > system host-if-modify controller-0 $CLUSTER_IF -c platform system > > interface-network-assign controller-0 $CLUSTER_IF cluster-host ... > > also fixed ... > > > > > > > > > I would redo install using a unique IP Subnet for pxeboot and oam > > networks, Greg. > > > > redeploying now, can i ask, whats the logical difference between > > cluster_host_subnet: 10.16.96.0/24 > and > cluster_pod_subnet: 10.16.64.0/16 > > all our pods should exist / be accessible from 10.16.64, do they really > require a completely separated interface then hosts?? feels kinda vxlan > type magic > > running new install again... > > > -----Original Message----- > > From: Outback Dingo > > Sent: Thursday, May 12, 2022 4:30 AM > > To: Waines, Greg > > Cc: starlingx-discuss at lists.starlingx.io > > Subject: Re: [Starlingx-discuss] STX networking > > > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > > > ok, so getting closer its about the ip space and preset variables now > > > > and ... 
after installing software on controller-0 last run i used
> > note i set pxeboot_ vars to put the right network and interfaces on
> > bond0
> > [sysadmin at controller-0 ~(keystone_admin)]$ cat localhost.yml
> > system_mode: duplex
> >
> > dns_servers:
> >   - 8.8.8.8
> >   - 8.8.4.4
> >
> > external_oam_subnet: 10.16.48.1/24
> > external_oam_gateway_address: 10.16.48.1
> > external_oam_floating_address: 10.16.48.110
> > external_oam_node_0_address: 10.16.48.114
> > external_oam_node_1_address: 10.16.48.115
> > external_oam_node_2_address: 10.16.48.116
> > external_oam_node_3_address: 10.16.48.117
> > external_oam_node_4_address: 10.16.48.118
> >
> > admin_username: admin
> > admin_password: somepass
> > ansible_become_pass: somepass
> >
> > # Add these lines to configure Docker to use a proxy server
> > # docker_http_proxy: http://my.proxy.com:1080
> > # docker_https_proxy: https://my.proxy.com:1443
> > # docker_no_proxy:
> > #   - 1.2.3.4
> >
> > kubernetes_version: 1.21.3
> > pxeboot_subnet: 10.16.48.1/24
> > pxeboot_start_address: 10.16.48.100
> > pxeboot_end_address: 10.16.48.151
> >
> > then ran ansible-playbook
> > /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml
> > and ... it was successful... onto configuring
> > source /etc/platform/openrc
> > system host-if-add -c platform -a 802.3ad -x layer2 controller-0 bond0 ae enp33s0 enp49s0
> > system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
> > system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
> > system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
> > OAM_IF=bond0.1648
> > PXE_IF=bond0
> > MGMT_IF=bond0.1680
> > CLUSTER_IF=bond0.1664
> > ping 8.8.8.8
> >
> > system host-if-modify controller-0 lo -c none
> > IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
> > for UUID in $IFNET_UUIDS; do system interface-network-remove ${UUID}; done
> >
> > system host-if-modify controller-0 $OAM_IF -c platform
> > system interface-network-assign controller-0 $OAM_IF oam
> >
> > system host-if-modify controller-0 $MGMT_IF -c platform
> > system interface-network-assign controller-0 $MGMT_IF mgmt
> > system interface-network-assign controller-0 $MGMT_IF cluster-host
> >
> > system host-if-modify controller-0 $PXE_IF -c platform
> > system interface-network-assign controller-0 $PXE_IF pxeboot
> >
> > system host-if-modify controller-0 $CLUSTER_IF -c platform
> > system interface-network-assign controller-0 $CLUSTER_IF cluster-host
> >
> > system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org
> > system host-label-assign controller-0 openstack-control-plane=enabled
> > system host-label-assign controller-0 ceph-mon-placement=enabled
> > system host-label-assign controller-0 ceph-mgr-placement=enabled
> > system storage-backend-add ceph-rook --confirmed
> > system host-unlock controller-0
> >
> > where controller-0 does reboot and does its boot sequence, then comes up
> > on the correct OAM_IF IP, and DOES have also the correct floating
> > address assigned to bond0.1648
> >
> > i can actually login, i then waited some minutes and
> >
> > source /etc/platform/openrc
> >
> > [sysadmin at controller-0 ~(keystone_admin)]$ system host-list
> > +----+--------------+-------------+----------------+-------------+--------------+
> > | id | hostname     | personality | administrative | operational | availability |
> > +----+--------------+-------------+----------------+-------------+--------------+
> > | 1  | controller-0 | controller  |
unlocked | enabled | > > available | > > > +----+--------------+-------------+----------------+-------------+--------------+ > > > > functionally working... with the following network config > > > > [sysadmin at controller-0 ~(keystone_admin)]$ ip a > > 1: lo: mtu 65536 qdisc noqueue state UNKNOWN > group default qlen 1000 > > link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00 > > inet 127.0.0.1/8 scope host lo > > valid_lft forever preferred_lft forever > > inet6 ::1/128 scope host > > valid_lft forever preferred_lft forever > > 2: eno1: mtu 1500 qdisc mq state DOWN group > default qlen 1000 > > link/ether ac:1f:6b:60:97:52 brd ff:ff:ff:ff:ff:ff > > 3: eno2: mtu 1500 qdisc mq state DOWN group > default qlen 1000 > > link/ether ac:1f:6b:60:97:53 brd ff:ff:ff:ff:ff:ff > > 4: enp33s0: mtu 1500 > qdisc mq master bond0 state UP group default qlen 1000 > > link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff > > 5: enp49s0: mtu 1500 > qdisc mq master bond0 state UP group default qlen 1000 > > link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff permaddr > > b8:59:9f:12:2c:fc > > 6: docker0: mtu 1500 qdisc noqueue > state DOWN group default > > link/ether 02:42:d5:7a:2c:4c brd ff:ff:ff:ff:ff:ff > > inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0 > > valid_lft forever preferred_lft forever > > 7: bond0: mtu 1500 qdisc htb > state UP group default qlen 1000 > > link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff > > inet 10.16.48.101/24 brd 10.16.48.255 scope global bond0 > > valid_lft forever preferred_lft forever > > inet 10.16.48.100/24 scope global secondary bond0 > > valid_lft forever preferred_lft forever > > inet6 fe80::ba59:9fff:fe12:3278/64 scope link > > valid_lft forever preferred_lft forever > > 8: vlan1648 at bond0: mtu 1500 qdisc > noqueue state UP group default qlen 1000 > > link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff > > inet 10.16.48.114/24 brd 10.16.48.255 scope global vlan1648 > > valid_lft forever preferred_lft forever > > inet 10.16.48.110/24 scope global secondary vlan1648 > > valid_lft forever preferred_lft forever > > inet6 fe80::ba59:9fff:fe12:3278/64 scope link > > valid_lft forever preferred_lft forever > > 9: vlan1664 at bond0: mtu 1500 qdisc > noqueue state UP group default qlen 1000 > > link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff > > inet6 fe80::ba59:9fff:fe12:3278/64 scope link > > valid_lft forever preferred_lft forever > > 10: vlan1680 at bond0: mtu 1500 qdisc > htb state UP group default qlen 1000 > > link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff > > inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1680 > > valid_lft forever preferred_lft forever > > inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1680:12 > > valid_lft forever preferred_lft forever > > inet 192.168.206.1/24 scope global secondary vlan1680 > > valid_lft forever preferred_lft forever > > inet 192.168.204.1/24 scope global secondary vlan1680 > > valid_lft forever preferred_lft forever > > inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary > vlan1680 > > valid_lft forever preferred_lft forever > > inet6 fe80::ba59:9fff:fe12:3278/64 scope link > > valid_lft forever preferred_lft forever > > ------------------snip---------------------- > > > > i pxe booted 2 more nodes, they did pxe fine from controller bond0 > > with 10.16.48.x as specified in localhost.yml > > > > they did show in system host-list... where i set their personalities. 
> > [sysadmin at controller-0 ~(keystone_admin)]$ system host-list
> > +----+--------------+-------------+----------------+-------------+--------------+
> > | id | hostname     | personality | administrative | operational | availability |
> > +----+--------------+-------------+----------------+-------------+--------------+
> > | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
> > | 2  | controller-1 | controller  | locked         | disabled    | offline      |
> > | 3  | worker-0     | worker      | locked         | disabled    | offline      |
> > +----+--------------+-------------+----------------+-------------+--------------+
> >
> > they both then proceeded to boot.... but now appear hung...... 1+
> > hours
> >
> > and i think the problem just might be... 192.168.204 and 192.168.206
> > addresses on bond0.1680 so.... they need to be 10.16.64.x which is
> > what all our pods talk across for bond0.1664 or 10.16.80.x for
> > bond0.1680
> >
> > so now the question is which is what in the variables: i believe i
> > have the proper pxeboot_ and oam with 10.16.48.x on bond0 and
> > bond0.1648
> > though you show oam and management_ as different, but i don't think i
> > completely grasp what's mgmt_, cluster_host, cluster_pod, cluster_service
> > and management multicast, so what's what?
> > in your ip space compared to what mine should be.
> >
> > pxeboot_subnet
> > pxeboot_start_address
> > pxeboot_end_address
> >
> > management_subnet
> > management_start_address
> > management_end_address
> > cluster_host_subnet
> > cluster_host_start_address
> > cluster_host_end_address
> > cluster_pod_subnet
> > cluster_pod_start_address
> > cluster_pod_end_address
> > cluster_service_subnet
> > cluster_service_start_address
> > cluster_service_end_address
> > management_multicast_subnet
> > management_multicast_start_address
> > management_multicast_end_address
> >
> > On Thu, May 12, 2022 at 7:44 AM Outback Dingo
> > wrote:
> > >
> > > working through the configuration now based on findings, and yes i
> > > only have to do the ip link commands once prior to bootstrap
> > >
> > > i did get bond0 and vlans on a previous try to be configured after
> > > system host-unlock controller-0 they were just in the wrong order,
> > > so rebuilding the primary node, if it works and puts the interfaces
> > > and networks on proper interfaces and i can bootstrap controller-1
> > > and get past unlocking that also... i think it will be a win!
> > >
> > > On Thu, May 12, 2022 at 7:32 AM Waines, Greg
> > > wrote:
> > > >
> > > > Were you successful ?
> > > >
> > > > ( One question ... you are only having to do the 'ip link ...'
> > > > commands BEFORE bootstrap in order to have IP Connectivity to the
> > > > outside world for bootstrapping .. correct ? )
> > > >
> > > > Greg.
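For what it's worth, one way to read that variable list is alongside the defaults the bootstrap playbook falls back to. The defaults below are an assumption based on the R6 bootstrap docs and on the 192.168.204.x / 192.168.206.x addresses visible in the ip a output above; verify them for your release:

    # sketch of what each bootstrap network override covers; values shown are
    # the assumed playbook defaults
    pxeboot_subnet: 169.254.202.0/24           # untagged network used only to netboot hosts from the active controller
    management_subnet: 192.168.204.0/24        # internal platform traffic between StarlingX hosts
    management_multicast_subnet: 239.1.1.0/28  # multicast groups platform services use on the mgmt network
    cluster_host_subnet: 192.168.206.0/24      # node-to-node Kubernetes traffic; lands on a host interface
    cluster_pod_subnet: 172.16.0.0/16          # pod IPs handed out by the CNI; never configured on a host NIC
    cluster_service_subnet: 10.96.0.0/12       # virtual ClusterIP range for Kubernetes services; never on any NIC

The *_start_address / *_end_address pairs simply bound the range the platform may allocate from within each subnet.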
> > > > -----Original Message-----
> > > > From: Outback Dingo
> > > > Sent: Wednesday, May 11, 2022 8:21 PM
> > > > To: Waines, Greg
> > > > Cc: starlingx-discuss at lists.starlingx.io
> > > > Subject: Re: [Starlingx-discuss] STX networking
> > > >
> > > > [Please note: This e-mail is from an EXTERNAL e-mail address]
> > > >
> > > > I think even allowing for special conditions of bond0 to allow OAM_IF, MGMT_IF, CLUSTER_IF, PXE_IF to all be set on the same bond0, and even dropping the vlans for corner cases, then i would only need to set the Install-time-only parameters:
> > > > Network Properties
> > > >
> > > > pxeboot_subnet: 10.16.48.1
> > > > pxeboot_start_address 10.16.48.100
> > > > pxeboot_end_address 10.16.48.125
> > > > management_subnet
> > > > management_start_address
> > > > management_end_address
> > > > cluster_host_subnet
> > > > cluster_host_start_address
> > > > cluster_host_end_address
> > > > cluster_pod_subnet
> > > > cluster_pod_start_address
> > > > cluster_pod_end_address
> > > > cluster_service_subnet
> > > > cluster_service_start_address
> > > > cluster_service_end_address
> > > > management_multicast_subnet
> > > > management_multicast_start_address
> > > > management_multicast_end_address
> > > >
> > > > ip link add bond0 type bond
> > > > ip link set bond0 type bond miimon 100 mode 802.3ad
> > > > ip link set enp33s0 down
> > > > ip link set enp33s0 master bond0
> > > > ip link set enp44s0 down
> > > > ip link set enp44s0 master bond0
> > > > ip link set bond0 up
> > > >
> > > > Set VLAN on the bond device:
> > > >
> > > > ip link add link bond0 name bond0.1648 type vlan id 1648
> > > > ip link set bond0.1648 up
> > > > ip link add link bond0 name bond0.1664 type vlan id 1664
> > > > ip link set bond0.1664 up
> > > >
> > > > and modify the host details as per below:
> > > >
> > > > OAM_IF=bond0.1648
> > > > MGMT_IF=bond0.1664
> > > > CLUSTER_IF=bond0.1680
> > > > PXE_IF=bond0 <- this puts pxe on bond0
> > > >
> > > > On Thu, May 12, 2022 at 2:25 AM Waines, Greg < Greg.Waines at windriver.com> wrote:
> > > > >
> > > > > replying to answer your questions from email below, see in-lined
> > > > > below, Greg.
> > > > >
> > > > > -----Original Message-----
> > > > > From: Outback Dingo
> > > > > Sent: Wednesday, May 11, 2022 12:00 AM
> > > > > To: starlingx-discuss at lists.starlingx.io
> > > > > Subject: [Starlingx-discuss] STX networking
> > > > >
> > > > > [Please note: This e-mail is from an EXTERNAL e-mail address]
> > > > >
> > > > > scenario...
> > > > >
> > > > > i have a host, say controller-0
> > > > >
> > > > > prior to any ansible run
> > > > > [Greg] I assume you mean the bootstrap ansible playbook
> > > > >
> > > > > i need to create a bond, and bridges and vlans
> > > > > [Greg] You do need an interface to the outside world - e.g. in order to download container images from docker hub.
> > > > > [Greg] Why can you not simply create a single interface (one link of bond) with a vlan ?
> > > > >
> > > > > sure....
> > > > > Add a bond device as root:
> > > > >
> > > > > ip link add bond0 type bond
> > > > > ip link set bond0 type bond miimon 100 mode 802.3ad
> > > > > ip link set enp33s0 down
> > > > > ip link set enp33s0 master bond0
> > > > > ip link set enp44s0 down
> > > > > ip link set enp44s0 master bond0
> > > > > ip link set bond0 up
> > > > >
> > > > > Set VLAN on the bond device:
> > > > >
> > > > > ip link add link bond0 name bond0.1648 type vlan id 1648
> > > > > ip link set bond0.1648 up
> > > > > ip link add link bond0 name bond0.1664 type vlan id 1664
> > > > > ip link set bond0.1664 up
> > > > > ip link add link bond0 name bond0.1680 type vlan id 1680
> > > > > ip link set bond0.1680 up
> > > > >
> > > > > Add the bridge device and attach VLAN to it:
> > > > > ip link add br0 type bridge
> > > > > ip link set bond0.1648 master br0
> > > > > ip link set bond0.1664 master br0
> > > > > ip link set bond0.1680 master br0
> > > > > ip link set br0 up
> > > > >
> > > > > so i see where in starlingx
> > > > > [Greg] the following commands are only possible AFTER bootstrap
> > > > >
> > > > > system host-if-add -c platform -a 802.3ad -x layer2 controller-0 bond0 ae enp33s0 enp49s0
> > > > > system host-if-modify controller-0 $OAM_IF -c platform
> > > > > system interface-network-assign controller-0 $OAM_IF oam
> > > > > system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
> > > > > system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
> > > > > system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
> > > > >
> > > > > does allow for me to create the bond0 and the vlans? but I don't see any documentation for bridges anywhere ? Do I even need the bridge?
> > > > > [Greg] No - there is no requirement for a bridge with StarlingX.
> > > > >
> > > > > where i want to set, for example, since each needs its own interface,
> > > > > can i set OAM_IF=bond0, and say MGMT_IF=bond0.1664
> > > > > [Greg] Yes - i.e. OAM on port-based/untagged-vlan of bond and MGMT on
> > > > > vlan-tag=1664 on bond ( BUT here is where you need the pxeboot
> > > > > network because your MGMT network is vlan-tagged -
and you can't
> > > > > pxe boot over that )
> > > > >
> > > > > OAM_IF=bond0
> > > > > system host-if-modify controller-0 $OAM_IF -c platform
> > > > > system interface-network-assign controller-0 $OAM_IF oam
> > > > > system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
> > > > > system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
> > > > > system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
> > > > > system host-if-add controller-0 -V 1672 -c platform bond0.1672 vlan bond0
> > > > > MGMT_IF=bond0.1664
> > > > > system host-if-modify controller-0 lo -c none
> > > > > IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
> > > > > for UUID in $IFNET_UUIDS; do
> > > > >   system interface-network-remove ${UUID}
> > > > > done
> > > > > system host-if-modify controller-0 $MGMT_IF -c platform   [Greg] don't think you actually need this command
> > > > > system interface-network-assign controller-0 $MGMT_IF mgmt
> > > > > system interface-network-assign controller-0 $MGMT_IF cluster-host
> > > > >
> > > > > the reason for this being our switches are
> > > > >
> > > > > # MGMT
> > > > > interface vlan1648
> > > > >   address 10.16.48.2/24
> > > > >   address-virtual 44:38:39:FF:00:02 10.16.48.1
> > > > >   vlan-id 1648
> > > > >   vlan-raw-device bridge
> > > > >
> > > > > interface vlan1672
> > > > >   address 10.16.72.2/24
> > > > >   address-virtual 44:38:39:FF:00:03 10.16.72.1
> > > > >   vlan-id 1672
> > > > >   vlan-raw-device bridge
> > > > >
> > > > > interface vlan1680
> > > > >   address 10.16.80.2/24
> > > > >   address-virtual 44:38:39:FF:00:03 10.16.80.1
> > > > >   vlan-id 1680
> > > > >   vlan-raw-device bridge
> > > > >
> > > > > interface vlan1696
> > > > >   address 10.16.96.2/24
> > > > >   address-virtual 44:38:39:FF:00:03 10.16.96.1
> > > > >   vlan-id 1696
> > > > >   vlan-raw-device bridge
> > > > >
> > > > > interface vlan1664
> > > > >   address 10.16.64.2/24
> > > > >   address-virtual 44:38:39:FF:00:07 10.16.64.1
> > > > >   vlan-id 1664
> > > > >   vlan-raw-device bridge
> > > > >
> > > > > and further down DATAIF_0=bond0.1680
> > > > >
> > > > > the reason being we are trying to have starlingx conform to our
> > > > > network topology. I also noted: in
> > > > > https://docs.starlingx.io/deploy_install_guides/r6_release/ansible_bootstrap_configs.html#install-time-only-params-r6
> > > > > ...
> > > > >
> > > > > the Network Properties I listed at the bottom
> > > > >
> > > > > can i modify these addresses to conform to our networks, as our
> > > > > switches won't pass the traffic you set as defaults, as seen in my
> > > > > first attempt at the bottom. Though i still don't believe dhcp/pxe
> > > > > will work on a vlan interface.
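That last point is the crux: a NIC's PXE client boots on the untagged vlan, which is exactly why the optional pxeboot network exists. A minimal sketch of the assignments that follow from it, reusing the interface names and commands already shown in this thread (bond0 untagged for pxeboot, vlan 1664 for mgmt):

    # pxeboot rides the untagged bond; mgmt rides a tagged vlan on the same bond
    system host-if-modify controller-0 bond0 -c platform
    system interface-network-assign controller-0 bond0 pxeboot
    system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
    system interface-network-assign controller-0 bond0.1664 mgmt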
> > > > > Network Properties
> > > > > pxeboot_subnet
> > > > > pxeboot_start_address
> > > > > pxeboot_end_address
> > > > > management_subnet
> > > > > management_start_address
> > > > > management_end_address
> > > > > cluster_host_subnet
> > > > > cluster_host_start_address
> > > > > cluster_host_end_address
> > > > > cluster_pod_subnet
> > > > > cluster_pod_start_address
> > > > > cluster_pod_end_address
> > > > > cluster_service_subnet
> > > > > cluster_service_start_address
> > > > > cluster_service_end_address
> > > > > management_multicast_subnet
> > > > > management_multicast_start_address
> > > > > management_multicast_end_address
> > > > >
> > > > > 7: bond0: mtu 1500 qdisc htb state UP group default qlen 1000
> > > > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > > > >     inet 10.16.48.112/24 brd 10.16.48.255 scope global bond0
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet 10.16.48.114/24 scope global secondary bond0
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > 8: vlan1664 at bond0: mtu 1500 qdisc htb state UP group default qlen 1000
> > > > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > > > >     inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1664
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1664:12
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet 169.254.202.1/24 scope global vlan1664
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet 192.168.206.1/24 scope global secondary vlan1664
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet 192.168.204.1/24 scope global secondary vlan1664
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1664
> > > > >        valid_lft forever preferred_lft forever
> > > > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > 9: vlan1672 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
> > > > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > > > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > 10: vlan1648 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
> > > > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > > > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > > 11: vlan1680 at bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
> > > > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > > > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > > > >        valid_lft forever preferred_lft forever
> > > > >
> > > > > _______________________________________________
> > > > > Starlingx-discuss mailing list
> > > > > Starlingx-discuss at lists.starlingx.io
> > > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
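On the question earlier in this message about cluster_host_subnet versus cluster_pod_subnet: cluster-host is a real network that lands on a host interface, while cluster-pod is virtual - the CNI (Calico, in StarlingX) allocates pod addresses from it and routes them node-to-node over the cluster-host network, so pods do not need their own physical interface. A sketch with the values from this thread (the bond0.1696 mapping is an assumption that follows from the switch config above):

    cluster_host_subnet: 10.16.96.0/24   # lands on a host interface; e.g. a bond0.1696 vlan would match the switch's vlan1696
    cluster_pod_subnet: 10.16.64.0/16    # never appears on an interface; the CNI owns it
                                         # (note: 10.16.64.0/16 has host bits set - 10.16.64.0/18 or 10.16.0.0/16 would be a valid CIDR)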
From lists at optimcloud.com  Sat May 14 02:14:40 2022
From: lists at optimcloud.com (Embedded Devel)
Date: Sat, 14 May 2022 02:14:40 +0000
Subject: [Starlingx-discuss] stx 6.0 tools / image build
Message-ID: <1651368964471.3775959830.3105671480@optimcloud.com>

i must be missing something trying to get an iso built, it seems way too
hard by these docs, for such a simple process
reading https://opendev.org/starlingx/tools
it states

To generate centos-repo
The centos-repo is a set of symbolic links to the packages in the mirror
and the mock configuration file. It is needed to create these links if this
is the first build or the mirror has been updated.

generate-centos-repo.sh /import/mirrors/CentOS
Where the argument to the script is the path of the mirror.

To build all packages:
$ cd $MY_REPO
$ build-pkgs or build-pkgs --clean ; build-pkgs

To generate local-repo:
The local-repo has the dependency information that sequences the build
order; To generate or update the information the following command needs to
be executed after building modified or new packages.

$ generate-local-repo.sh

however inside the container,

[dingo at 25d9abcf4450 starlingx]$ generate-local-repo.sh
ERROR: directory not found '/import/mirrors/CentOS/stx/CentOS'
[dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS
mirror_dir=/import/mirrors/CentOS
config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config
distro=centos
layer=all
layer_pkg_urls=
layer_image_inc_urls=
layer_wheels_inc_urls=
The mirror /import/mirrors/CentOS doesn't has the Binary and Source folders. Please provide a valid mirror
[dingo at 25d9abcf4450 starlingx]$ $ build-iso
bash: $: command not found
[dingo at 25d9abcf4450 starlingx]$ build-iso
05:56:09
05:56:09 *************************
05:56:09 Create StarlingX/CentOS Boot CD
05:56:09 *************************
05:56:09
05:56:09 ERROR: create-yum-conf failed
[dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS
mirror_dir=/import/mirrors/CentOS
config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config
distro=centos
layer=all
layer_pkg_urls=
layer_image_inc_urls=
layer_wheels_inc_urls=
The mirror /import/mirrors/CentOS doesn't has the Binary and Source folders. Please provide a valid mirror

--
Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com

From lists at optimcloud.com  Sat May 14 02:14:46 2022
From: lists at optimcloud.com (Embedded Devel)
Date: Sat, 14 May 2022 02:14:46 +0000
Subject: [Starlingx-discuss] pxe boot network
Message-ID: <1651983500351.1065673003.2927018964@optimcloud.com>

I'm trying to deploy 6 nodes with 512G ram and 24TB disk each
I'm envisioning 2-3 AIO DUPLEX Controller and 4 WORKER/STORAGE

So i deployed a single AIO duplex node, works fine, then had issues with
pxe booting other nodes from it, due to switch configuration. without
having to reinvent the switch topology to accommodate the pxe
169.254.202.1 pxe network, i'm reading below where it states

PXE Boot Network
VERSION
You can set up a PXE boot network for booting all nodes to allow a
non-standard management network configuration.
The internal management network is used for PXE booting of new hosts and
the PXE boot network is not required. However there are scenarios where the
internal management network cannot be used for PXE booting of new hosts.
For example, if the internal management network needs to be on a
VLAN-tagged network for deployment reasons, or if it must support IPv6,
you must configure the optional untagged PXE boot network for PXE booting
of new hosts using IPv4.

According to:
https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.html#configuring-a-pxe-boot-server-r6

Okay so it seems i need to go this route, and pxe boot all nodes, however..
reading the configure a pxe boot server doc, it states "You can optionally
set up a PXE Boot Server to support controller-0 initialization." so this
tells me that it can only be used for pxe booting controller-0 ????
https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.html#configuring-a-pxe-boot-server-r6

so questions:
1) can i boot multiple aio duplex from a pxe boot server
2) once controller-0 is up, how does controller-1 connect to it?
3) which then i have to inquire: how does discovery work for controller-1 on controller-0
4) in your opinions, which are welcome, how should i be deploying 6 nodes
with this much memory and storage per node, to use 3 as controllers, and
all 6 as compute/storage nodes?

--
Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com

From lists at optimcloud.com  Sat May 14 02:14:51 2022
From: lists at optimcloud.com (Embedded Devel)
Date: Sat, 14 May 2022 02:14:51 +0000
Subject: [Starlingx-discuss] multi-node AIO
Message-ID: <1652091065332.3855897794.3972467120@optimcloud.com>

not sure why my emails are not going through, so again....

I have 6 nodes, 512Gb Memory and 24TB disk 8x3TB per node
can i deploy a duplex aio on 2-3 nodes? and worker/storage on 4 nodes

second to that, since we cannot conform our network environment to the
169.254 pxe boot network for the other controller-1 / worker nodes,
it states here
https://docs.starlingx.io/planning/kubernetes/network-planning-the-pxe-boot-network.html
that we can create a pxe boot environment to boot all nodes... okay

PXE Boot Network
VERSION
You can set up a PXE boot network for booting all nodes to allow a
non-standard management network configuration.

however, this doc states
https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.html
Configure a PXE Boot Server
VERSION
You can optionally set up a PXE Boot Server to support controller-0 initialization.

so questions:
1) can i boot all nodes from pxe server?
2) after booting installing the controller-0 node i can then pxe boot install controller-1 node
3) in this scenario how does controller-0 discover controller-1 so i can
set the host "system host-update 2 personality=controller"
4) does the same apply to workers
5) if you have 6 fully configured servers each with 24TB and 512Gb memory,
how would you deploy to get a full 6 nodes with storage and compute?

--
Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com

From scott.little at windriver.com  Mon May 16 15:03:57 2022
From: scott.little at windriver.com (Scott Little)
Date: Mon, 16 May 2022 11:03:57 -0400
Subject: [Starlingx-discuss] stx 6.0 tools / image build
In-Reply-To: <1651368964471.3775959830.3105671480@optimcloud.com>
References: <1651368964471.3775959830.3105671480@optimcloud.com>
Message-ID:

The official build instructions have moved here:
https://docs.starlingx.io/developer_resources/build_guide.html

You appear to be reading stx-tools/README.rst, which is likely very out of
date. I'll create a launchpad to correct that.
Scott

On 2022-05-13 22:14, Embedded Devel wrote:
> [Please note: This e-mail is from an EXTERNAL e-mail address]
>
> i must be missing something trying to get an iso built, it seems way too
> hard by these docs, for such a simple process
> reading https://opendev.org/starlingx/tools
> it states
>
> To generate centos-repo
> The centos-repo is a set of symbolic links to the packages in the mirror
> and the mock configuration file. It is needed to create these links if this
> is the first build or the mirror has been updated.
>
> generate-centos-repo.sh /import/mirrors/CentOS
> Where the argument to the script is the path of the mirror.
>
> To build all packages:
> $ cd $MY_REPO
> $ build-pkgs or build-pkgs --clean ; build-pkgs
> To generate local-repo:
> The local-repo has the dependency information that sequences the build
> order; To generate or update the information the following command needs to
> be executed after building modified or new packages.
>
> $ generate-local-repo.sh
>
> however inside the container,
>
> [dingo at 25d9abcf4450 starlingx]$ generate-local-repo.sh
> ERROR: directory not found '/import/mirrors/CentOS/stx/CentOS'
> [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS
> mirror_dir=/import/mirrors/CentOS
> config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config
> distro=centos
> layer=all
> layer_pkg_urls=
> layer_image_inc_urls=
> layer_wheels_inc_urls=
> The mirror /import/mirrors/CentOS doesn't has the Binary and Source
> folders. Please provide a valid mirror
> [dingo at 25d9abcf4450 starlingx]$ $ build-iso
> bash: $: command not found
> [dingo at 25d9abcf4450 starlingx]$ build-iso
> 05:56:09
> 05:56:09 *************************
> 05:56:09 Create StarlingX/CentOS Boot CD
> 05:56:09 *************************
> 05:56:09
> 05:56:09 ERROR: create-yum-conf failed
> [dingo at 25d9abcf4450 starlingx]$ generate-centos-repo.sh /import/mirrors/CentOS
> mirror_dir=/import/mirrors/CentOS
> config_dir=/localdisk/designer/dingo/starlingx/cgcs-root/../stx-tools/centos-mirror-tools/config
> distro=centos
> layer=all
> layer_pkg_urls=
> layer_image_inc_urls=
> layer_wheels_inc_urls=
> The mirror /import/mirrors/CentOS doesn't has the Binary and Source
> folders. Please provide a valid mirror
>
> --
> Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com
>
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From balendu.burla at intel.com  Mon May 16 03:51:44 2022
From: balendu.burla at intel.com (Burla, Balendu)
Date: Mon, 16 May 2022 03:51:44 +0000
Subject: [Starlingx-discuss] StarlingX build environment errors
Message-ID:

Hi,
I was trying to prepare a StarlingX build environment by following the
steps captured in the below link:
https://docs.starlingx.io/developer_resources/build_guide.html#build-the-centos-mirror-repository

while building the packages, I see the below errors (similar errors are
observed for each package build). It seems I am missing some basic
configuration, but I am not sure what it is. I spent a decent amount of
time trying to resolve the issue, but no luck. Looking for your help.

cd $MY_REPO_ROOT_DIR/stx-tools/toCOPY
bash generate-centos-repo.sh /import/mirrors/CentOS/stx/CentOS/
build-pkgs

20:16:43 INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
[899/1984] 20:16:43 Start: init plugins 20:16:43 INFO: selinux disabled 20:16:43 Finish: init plugins 20:16:43 Start: run 20:16:43 INFO: Start(/localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/workerconfig-1.0-14.tis.src.rpm) Config(mock/b0) 20:16:43 Start: chroot init 20:16:43 INFO: calling preinit hooks 20:16:43 INFO: enabled root cache 20:16:43 INFO: enabled yum cache 20:16:43 Start: cleaning yum metadata 20:16:43 Finish: cleaning yum metadata 20:16:43 INFO: enabled HW Info plugin 20:16:43 Mock Version: 1.4.16 20:16:43 INFO: Mock Version: 1.4.16 20:16:43 Start: yum install 20:16:43 ERROR: Exception(/localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/workerconfig-1.0-14.tis.src.rpm) Config(mock/b0) 0 minutes 0 seconds 20:16:43 INFO: Results and/or logs in: /localdisk/loadbuild/stx_builder/fec-operator/std/results/stx_builder-fec-operator-4.0-std/workerconfig-1.0-14.tis 20:16:43 ERROR: Command failed: 20:16:43 # /usr/bin/yum --installroot /localdisk/loadbuild/stx_builder/fec-operator/std/mock/b0/root/ --releasever 7 install @buildsys-build pigz lbzip2 bash yum python3 20:16:43 Failed to set locale, defaulting to C 20:16:43 http://127.0.0.1:8088/localdisk/loadbuild/stx_builder/fec-operator/std/results/stx_builder-fec-operator-4.0-std/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden 20:16:43 Trying other mirror. 20:16:43 To address this issue please refer to the below wiki article 20:16:43 20:16:43 https://wiki.centos.org/yum-errors 20:16:43 20:16:43 If above article doesn't help to resolve this issue please use https://bugs.centos.org/. 20:16:43 20:16:43 http://127.0.0.1:8088/localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden 20:16:43 Trying other mirror. 20:16:43 http://127.0.0.1:8088/localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden 20:16:43 Trying other mirror. 20:16:43 20:16:43 20:16:43 One of the configured repositories failed (Stx-Centos-7-Distro), 20:16:43 and yum doesn't have enough cached data to continue. At this point the only 20:16:43 safe thing yum can do is fail. There are a few ways to work "fix" this: 20:16:43 20:16:43 1. Contact the upstream for the repository and get them to fix the problem. 20:16:43 [859/1984] 20:16:43 2. Reconfigure the baseurl/etc. for the repository, to point to a working 20:16:43 upstream. This is most often useful if you are using a newer 20:16:43 distribution release than is supported by the repository (and the 20:16:43 packages for the previous distribution release still work). 20:16:43 20:16:43 3. Run the command with the repository temporarily disabled 20:16:43 yum --disablerepo=StxCentos7Distro ... 20:16:43 20:16:43 4. Disable the repository permanently, so yum won't use it by default. Yum 20:16:43 will then just ignore the repository until you permanently enable it 20:16:43 again or use --enablerepo for temporary usage: 20:16:43 20:16:43 yum-config-manager --disable StxCentos7Distro 20:16:43 or 20:16:43 subscription-manager repos --disable=StxCentos7Distro 20:16:43 20:16:43 5. Configure the failing repository to be skipped, if it is unavailable. 20:16:43 Note that yum will try to contact the repo. when it runs most commands, 20:16:43 so will have to try and fail each time (and thus. yum will be be much 20:16:43 slower). 
If it is a very temporary problem though, this is often a nice 20:16:43 compromise: 20:16:43 20:16:43 yum-config-manager --save --setopt=StxCentos7Distro.skip_if_unavailable=true 20:16:43 20:16:43 failure: repodata/repomd.xml from StxCentos7Distro: [Errno 256] No more mirrors to try. 20:16:43 http://127.0.0.1:8088/localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden 20:16:43 20:16:43 End build on 'b0': /localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/workerconfig-1.0-14.tis.src.rpm 20:16:43 Error building workerconfig-1.0-14.tis.src.rpm on 'b0'. 20:16:43 Will try to build again (if some other package will succeed). 20:16:43 schedule2: no unbuilt deps for 'worker-utils', searching at depth 3 20:16:43 Start build on 'b0': /localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/worker-utils-1.0-27.tis.src.rpm 20:16:46 building worker-utils-1.0-27.tis.src.rpm 20:16:46 INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 20:16:46 Start: init plugins 20:16:46 INFO: selinux disabled 20:16:46 Finish: init plugins 20:16:46 Start: run 20:16:46 Start: chroot init 20:16:46 INFO: calling preinit hooks .... 20:16:47 20:16:47 Results out to: /localdisk/loadbuild/stx_builder/fec-operator/std/results/stx_builder-fec-operator-4.0-std 20:16:47 20:16:47 Pkgs built: 0 20:16:47 dirname: missing operand 20:16:47 Try 'dirname --help' for more information. 20:16:47 20:16:47 Auditing for obsolete srpms 20:16:47 waiting for srpm audit to complete 20:16:47 Auditing for obsolete rpms 20:16:47 waiting for rpm audit to complete 20:16:47 Audit complete 20:16:47 20:16:47 Recreate repodata 20:16:49 20:16:49 Failed to build packages: worker-utils-1.0-27.tis.src.rpm workerconfig-1.0-14.tis.src.rpm watchdog-5.13-12.el7.tis.8.src.rpm vm-topology-1.0-18.tis.src.rpm tuned-config-1.0-4.tis.src.rpm vault-helm-1.0-27.tis.src.rpm util-linux-config-1.0-5.tis.src.rpm update-motd-1.0-7.tis.src.rpm tzdata-2021e-1.el7.tis.1.src.rpm tss2-930-1.tis.2.src.rpm tsconfig-1.0-60.tis.src.rpm trident-installer-22.01.0-0.tis.8.src.rpm systemd-config-1.0-12.tis.src.rpm tpm2-tools-3.0.4-2.el7.tis.6.src.rpm tpm2-openssl-engine-1.0-3.tis.src.rpm tboot-1.9.6-3.el7.tis.5.src.rpm syslog-ng-config-1.0-34.tis.src.rpm sysinv-fpga-agent-1.0-13.tis.src.rpm sysinv-agent-1.0-15.tis.src.rpm stx-ssl-1.0.0-15.tis.src.rpm sysinv-1.0-2684.tis.src.rpm sudo-config-1.0-5.tis.src.rpm stx-vault-helm-1.0-27.tis.src.rpm stx-snmp-helm-1.0-32.tis.src.rpm stx-sdo-helm-1.0-6.tis.src.rpm stx-rook-ceph-1.0-17.tis.src.rpm stx-ptp-notification-helm-1.0-57.tis.src.rpm stx-ocf-scripts-1.0-11.tis.src.rpm stx-portieris-helm-1.0-37.tis.src.rpm stx-platform-helm-1.0-46.tis.src.rpm stx-openstack-helm-1.0-199.tis.src.rpm stx-oidc-auth-helm-1.0-64.tis.src.rpm stx-nginx-ingress-controller-helm-1.1-25.tis.src.rpm stx-monitor-helm-1.0-37.tis.src.rpm stx-metrics-server-helm-1.0-11.tis.src.rpm storageconfig-1.0-12.tis.src.rpm stx-istio-helm-1.0-4.tis.src.rpm stx-extensions-1.0-7.tis.src.rpm stx-cert-manager-helm-1.0-33.tis.src.rpm stx-audit-helm-1.0-22.tis.src.rpm starlingx-dashboard-1.0-307.tis.src.rpm spectre-meltdown-checker-0.37+-3.tis.src.rpm sm-tools-1.0-22.tis.src.rpm sm-api-1.0-49.tis.src.rpm sm-db-1.0.0-57.tis.src.rpm sm-common-1.0.0-32.tis.src.rpm sm-client-1.0-34.tis.src.rpm sm-1.0.0-55.tis.src.rpm shim-signed-15-1.tis.5.src.rpm shim-15-1.el7.tis.7.src.rpm rpm-4.14.0-1.tis.6.src.rpm shadow-utils-config-1.0-6.tis.src.rpm setup-config-1.0-4.tis.src.rpm 
rsync-config-1.0-5.tis.src.rpm resource-agents-4.1.1-12.el7_6.7.tis.21.src.rpm requests-toolbelt-0.9.1-0.tis.4.src.rpm registry-token-server-1.0.0-1.tis.15.src.rpm python-webencodings-0.5.1-1.el7.tis.4.src.rpm Redfishtool-1.1.0-.tis.3.src.rpm rdma-core-55mlnx37-1.55103.tis.21.src.rpm rabbitmq-server-config-1.0-6.tis.src.rpm python-wsme-0.9.2-1.el7.tis.5.src.rpm python-voluptuous-0.8.9-1.el7.tis.2.src.rpm python-siteconfig-1.0-1.tis.src.rpm python-ryu-4.19-0.tis.5.src.rpm python-setuptools-38.5.1-1.el7.tis.2.src.rpm python-openstacksdk-0.36.0-1.tis.33.src.rpm python-psycopg2-2.5.1-3.el7.tis.2.src.rpm python-pankoclient-0.7.0-1.tis.2.src.rpm python-os-vif-1.9.1-1.el7.tis.2.src.rpm python-oslo-messaging-5.30.6-1.el7.tis.6.src.rpm python-openstackdocstheme-1.11.0-1.tis.2.src.rpm python-openstackclient-4.0.0-1.tis.18.src.rpm python-novaclient-15.1.0-1.tis.4.src.rpm python-keystoneclient-3.21.0-2.tis.2.src.rpm python-neutronclient-6.14.0-1.tis.4.src.rpm python-lefthandclient-2.1.0-0.tis.3.src.rpm python-kubernetes-8.0.0-8.el7.tis.1.src.rpm python-keystoneauth1-3.17.1-2.tis.2.src.rpm python-keyring-5.7.1-1.tis.6.src.rpm python-k8sapp-vault-20.06-27.tis.src.rpm python-k8sapp-portieris-1.0-37.tis.src.rpm python-k8sapp-snmp-1.0-9.tis.src.rpm python-k8sapp-rook-1.0-17.tis.src.rpm python-k8sapp-ptp-notification-1.0-57.tis.src.rpm python-k8sapp-platform-1.0-46.tis.src.rpm python-k8sapp-oidc-1.0-64.tis.src.rpm python-k8sapp-openstack-1.0-199.tis.src.rpm python-k8sapp-auditd-1.0-22.tis.src.rpm python-k8sapp-nginx-ingress-controller-1.0-14.tis.src.rpm python-k8sapp-istio-1.0-4.tis.src.rpm python-k8sapp-cert-manager-1.0-33.tis.src.rpm python-ironicclient-3.1.0-1.tis.2.src.rpm python-heatclient-1.18.0-1.tis.4.src.rpm python-gnocchiclient-7.0.4-1.tis.31.src.rpm python-daemon-2.2.3-7.el8.tis.4.src.rpm python-glanceclient-2.17.0-1.tis.4.src.rpm python-fmclient-1.0-35.tis.src.rpm python-docker-3.3.0-1.el7.tis.6.src.rpm python-django-horizon-15.1.0-1.tis.54.src.rpm python-cinderclient-5.0.0-1.tis.6.src.rpm python-cephclient-13.2.2.0-20.tis.src.rpm python-barbicanclient-4.9.0-1.tis.3.src.rpm puppet-sshd-1.0.0-9.tis.src.rpm python-aodhclient-1.3.0-1.tis.1.src.rpm python-3parclient-4.2.3-0.tis.3.src.rpm puppet-sysinv-1.0.0-43.tis.src.rpm puppet-stdlib-4.18.0-2.el7.tis.3.src.rpm puppet-staging-1.0.4-1.b466d93git.el7.tis.4.src.rpm puppet-smapi-1.0.0-7.tis.src.rpm puppet-rabbitmq-5.6.0-4.5ac45degit.el7.tis.2.src.rpm puppet-puppi-2.2.3-0.tis.4.src.rpm puppet-openstacklib-11.5.0-1.el7.tis.8.src.rpm puppet-postgresql-4.8.0-0.tis.5.src.rpm puppet-patching-1.0.0-13.tis.src.rpm puppet-oslo-11.3.0-1.el7.tis.2.src.rpm puppet-nslcd-0.0.1-0.tis.4.src.rpm puppet-nfv-1.0.0-19.tis.src.rpm puppet-network-1.0.2-0.tis.10.src.rpm puppet-ldap-0.2.4-0.tis.4.src.rpm puppet-mtce-1.0.0-14.tis.src.rpm puppet-manifests-1.0.0-1066.tis.src.rpm puppet-lvm-0.5.0-0.tis.4.src.rpm puppet-keystone-11.3.0-1.el7.tis.7.src.rpm puppet-horizon-11.5.0-1.el7.tis.4.src.rpm puppet-haproxy-1.5.0-4.6ffcb07git.el7.tis.5.src.rpm puppet-dnsmasq-1.1.0-0.tis.4.src.rpm puppet-fm-1.0.0-17.tis.src.rpm puppet-filemapper-1.1.3-0.tis.2.src.rpm puppet-drbd-0.3.1-rc0.tis.4.src.rpm puppet-dcorch-1.0.0-29.tis.src.rpm puppet-dcmanager-1.0.0-22.tis.src.rpm puppet-dcdbsync-1.0.0-14.tis.src.rpm portieris-helm-0.7.0-14.tis.src.rpm puppet-create_resources-0.0.1-0.tis.2.src.rpm puppet-ceph-2.4.1-1.el7.tis.9.src.rpm puppet-boolean-1.0.2-1.tis.2.src.rpm puppet-4.8.2-1.el7.tis.3.src.rpm playbookconfig-1.0-784.tis.src.rpm platform-util-1.0-89.tis.src.rpm 
platform-kickstarts-1.0.0-291.tis.src.rpm pam-config-1.0-10.tis.src.rpm pf-bb-config-21.6-0.tis.8.src.rpm pci-irq-affinity-agent-1.0-33.tis.src.rpm patch-alarm-1.0-26.tis.src.rpm openvswitch-config-1.0-5.tis.src.rpm openstack-ras-1.0.0-0.tis.3.src.rpm openstack-keystone-16.0.0-1.el7.tis.23.src.rpm opae-intel-fpga-driver-2.0.1-10.tis.55.src.rpm openstack-helm-infra-1.0-57.tis.src.rpm openstack-helm-1.0-59.tis.src.rpm openssh-config-1.0-11.tis.src.rpm oidcauthtools-1.0-5.tis.src.rpm openldap-config-1.0-17.tis.src.rpm ntp-config-1.0-4.tis.src.rpm nova-api-proxy-1.0-38.tis.src.rpm ntp-4.2.6p5-29.el7.centos.2.tis.9.src.rpm net-tools-2.0-0.24.20131004git.el7.tis.6.src.rpm nfv-1.0-233.tis.src.rpm nfs-utils-config-1.0-5.tis.src.rpm nfscheck-1.0-5.tis.src.rpm namespace-utils-1.0-4.tis.src.rpm multus-config-1.0-1.tis.src.rpm mtce-storage-1.0-11.tis.src.rpm monitor-helm-elastic-1.0-19.tis.src.rpm mtce-control-1.0-15.tis.src.rpm mtce-compute-1.0-17.tis.src.rpm mstflint-4.16.0-1.55103.tis.2.src.rpm monitor-tools-1.0-10.tis.src.rpm monitor-helm-1.0-25.tis.src.rpm mlnx-tools-5.2.0-0.55103.tis.21.src.rpm metrics-server-helm-1.0-1.tis.src.rpm logrotate-3.8.6-17.el7.tis.6.src.rpm memcached-custom-1.0-5.tis.src.rpm mechanize-0.4.5-1.el7.tis.3.src.rpm logrotate-config-1.0-5.tis.src.rpm logmgmt-1.0-18.tis.src.rpm lldpd-0.9.0-0.tis.9.src.rpm linuxptp-3.1.1-1.tis.5.src.rpm libtpms-0.6.0-2.tis.2.src.rpm lighttpd-config-1.0-9.tis.src.rpm lighttpd-1.4.54-1.el7.tis.12.src.rpm libvirt-python-4.7.0-1.tis.6.src.rpm libnftnl-1.1.5-4.tis.1.src.rpm libfdt-1.4.4-0.tis.5.src.rpm libevent-2.0.21-4.el7.tis.3.src.rpm kvm-timer-advance-1.0-3.tis.src.rpm libbpf-0.5.0-1.tis.1.src.rpm libbnxt_re-220.0.5.0-rhel7u9.tis.3.src.rpm ldapscripts-2.0.8-0.tis.8.src.rpm kubernetes-1.23.1-1.23.1-1.tis.4.src.rpm kube-memory-1.0-8.tis.src.rpm kube-cpusets-1.0-6.tis.src.rpm istio-helm-1.13.3-2.tis.src.rpm kmod-bnxt_en-1.10.2-220.0.13.0.tis.19.src.rpm kiali-helm-1.45.0-3.tis.src.rpm kexec-tools-2.0.21-1.tis.2.src.rpm keepalived-2.1.5-6.tis.1.src.rpm k8s-pod-recovery-1.0-0.tis.15.src.rpm k8s-cni-cache-cleanup-1.0-0.tis.1.src.rpm isolcpus-device-plugin-1.0-5.tis.src.rpm iscsi-initiator-utils-config-1.0-4.tis.src.rpm iptables-config-1.0-4.tis.src.rpm initscripts-config-1.0-12.tis.src.rpm iptables-1.8.4-21.tis.6.src.rpm iproute-5.12.0-4.tis.4.src.rpm io-scheduler-1.0-6.tis.src.rpm initscripts-9.49.46-1.el7.tis.15.src.rpm inih-44-0.tis.1.src.rpm igb_uio-kmod-21.02-0.tis.57.src.rpm html5lib-python-1.0.1-1.el7.tis.4.src.rpm ice-kmod-1.8.3-1.tis.25.src.rpm iavf-kmod-4.4.2-1.tis.23.src.rpm i40e-kmod-2.18.9-1.tis.23.src.rpm helm-3.2.1-0.tis.17.src.rpm haproxy-config-1.0-5.tis.src.rpm haproxy-1.5.18-8.el7.tis.12.src.rpm golang-1.17.5-1.17.5-1.tis.1.src.rpm grubby-8.28-25.el7.tis.5.src.rpm gpu-operator-1.8.1-0.tis.4.src.rpm golang-dep-0.5.0-4.tis.src.rpm golang-1.16.12-1.16.12-2.tis.3.src.rpm fm-rest-api-1.0-72.tis.src.rpm fm-mgr-1.0-25.tis.src.rpm EXAMPLE_SYSINV-1.0-2.tis.src.rpm fm-doc-1.0-52.tis.src.rpm fm-common-1.0-69.tis.src.rpm fm-api-1.0-46.tis.src.rpm filesystem-scripts-1.0-4.tis.src.rpm facter-2.4.4-4.el7.tis.7.src.rpm EXAMPLE_VIM-1.0-4.tis.src.rpm EXAMPLE_SERVICE-1.0-2.tis.src.rpm EXAMPLE_RR-1.0-2.tis.src.rpm EXAMPLE_MTCE-1.0-4.tis.src.rpm EXAMPLE_0001-1.0-2.tis.src.rpm EXAMPLE_KUBELET-1.0-1.tis.src.rpm EXAMPLE_DC-1.0-3.tis.src.rpm EXAMPLE_0002-1.0-2.tis.src.rpm etcd-3.3.15-1.tis.7.src.rpm engtools-1.0-37.tis.src.rpm enable-dev-patch-1.0-4.tis.src.rpm docker-distribution-2.7.1-1.tis.13.src.rpm dwarves-1.22-1.tis.1.src.rpm 
drbd-9.15.1-0.tis.11.src.rpm dpkg-1.18.24-0.tis.2.src.rpm docker-config-1.0-5.tis.src.rpm dnsmasq-config-1.0-4.tis.src.rpm dnsmasq-2.76-7.el7.tis.7.src.rpm dhcp-config-1.0-8.tis.src.rpm dmesg-config-1.0-1.tis.src.rpm distributedcloud-client-1.0.0-1.tis.65.src.rpm distributedcloud-1.0.0-1.tis.422.src.rpm dhcp-4.2.5-82.el7.centos.tis.13.src.rpm dex-helm-1.0-10.tis.src.rpm controllerconfig-1.0-327.tis.src.rpm collector-1.0-69.tis.src.rpm containernetworking-plugins-1.0.1-1.tis.9.src.rpm containerd-config-1.0-4.tis.src.rpm containerd-1.4.11-22.tis.src.rpm config-gate-1.0-13.tis.src.rpm collectd-extensions-1.0-0.tis.85.src.rpm cloud-init-0.7.9-24.el7.centos.1.tis.4.src.rpm cgts-client-1.0-295.tis.src.rpm chartmuseum-0.12.0-6.tis.src.rpm ceph-manager-1.0-28.tis.src.rpm cgcs-patch-1.0-105.tis.src.rpm cert-mon-1.0-7.tis.src.rpm cert-manager-helm-1.0-15.tis.src.rpm cert-alarm-1.0-5.tis.src.rpm centos-release-config-1.0-3.tis.src.rpm build-info-1.0-4.tis.src.rpm bond-cni-1.0-bff6422.tis.3.src.rpm audit-config-1.0-4.tis.src.rpm armada-0.2.0-0.tis.14.src.rpm armada-helm-toolkit-1.0-8.tis.src.rpm rabbitmq-server-3.6.5-1.el7.tis.9.src.rpm parted-3.1-29.el7.tis.7.src.rpm qat17-4.14.0-00031.tis.60.src.rpm libvirt-4.7.0-1.tis.31.src.rpm grub2-2.02-0.86.el7.centos.tis.14.src.rpm openvswitch-2.11.0-0.tis.13.src.rpm mtce-guest-1.0-146.tis.src.rpm mtce-common-1.0-142.tis.src.rpm mtce-1.0-217.tis.src.rpm mlnx-ofa_kernel-5.5-OFED.5.5.1.0.3.1.tis.26.src.rpm ceph-14.2.22-0.el7.tis.35.src.rpm sudo-1.8.23-10.el7_9.1.tis.10.src.rpm qemu-kvm-ev-3.0.0-0.tis.20.src.rpm openssh-7.4p1-21.el7_4.tis.9.src.rpm mariadb-10.1.28-1.el7.tis.8.src.rpm kubernetes-unversioned-1.0-1.tis.9.src.rpm kubernetes-1.22.5-1.22.5-1.tis.9.src.rpm kubernetes-1.21.8-1.21.8-1.tis.16.src.rpm openldap-2.4.44-20.el7.tis.11.src.rpm setup-2.8.71-10.el7.tis.11.src.rpm bash-4.2.46-34.el7.tis.10.src.rpm systemd-219-78.el7_9.3.tis.19.src.rpm python-2.7.5-89.el7.tis.8.src.rpm kernel-5.10.99-200.42.tis.el7.src.rpm pxe-network-installer-1.0-35.tis.src.rpm

########

Sun May 15 20:16:49 UTC 2022: build-rpm-parallel --std failed with rc=1
Sun May 15 20:16:49 UTC 2022: build-rpm-parallel --std failed with rc=1
[stx_builder at 4df2aa3dafa0 toCOPY]$

Best regards,
Mouli.

From Greg.Waines at windriver.com  Tue May 17 18:30:20 2022
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Tue, 17 May 2022 18:30:20 +0000
Subject: [Starlingx-discuss] STX networking
In-Reply-To:
References:
Message-ID:

> Greg
> So wait.... Are you saying that I can put both pxeboot and mgmt on untagged 10.16.48.x no need to separate them?

Well the options are:

* Option 1: use only mgmt network
    * mgmt. untagged, with address 10.16.48.0/24
    * where mgmt. would be used for
        * pxebooting between StarlingX Controller(s) and other StarlingX hosts, and
        * internal StarlingX infrastructure mgmt traffic

* Option 2: use both pxeboot and mgmt. network
    * pxeboot untagged, with address 10.16.48.0/24
        * to be used for pxebooting between StarlingX Controller(s) and other StarlingX hosts
    * mgmt. tagged, with address
        * to be used for internal StarlingX infrastructure mgmt. traffic.

Greg.
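As a concrete sketch, Option 2 expressed as bootstrap overrides (the 10.16.49.0/24 subnet is just the arbitrary free network suggested earlier in the thread, and the start/end ranges are assumptions for illustration):

    # Option 2: pxeboot untagged on bond0, mgmt tagged on bond0.1648
    pxeboot_subnet: 10.16.48.0/24
    pxeboot_start_address: 10.16.48.100
    pxeboot_end_address: 10.16.48.151
    management_subnet: 10.16.49.0/24
    management_start_address: 10.16.49.2
    management_end_address: 10.16.49.50

With Option 1, the pxeboot_* keys are simply omitted and management_subnet takes 10.16.48.0/24 itself, on the untagged bond0.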
From: Outback Dingo Sent: Friday, May 13, 2022 7:46 AM To: Waines, Greg Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX networking [Please note: This e-mail is from an EXTERNAL e-mail address] On Fri, May 13, 2022, 6:22 PM Waines, Greg > wrote: > okay heres the curiosity, why do they require to be separate subnets, if their on the seperate interfaces bond0 / bond0.1648 > even though its the same really as everything goes over the bond at some point. we really use 10.16.48.x for our management > backplane, and it is the only network we allow pxe on. > as shown in previous netplans pxeboot and mgmt are on two separate layer 2 networks pxeboot on the untagged vlan of bond0, and mgmt on vlan=1648 of bond0 If those two networks are connected to the same routing instance (which they are in the case of StarlingX Platform Networking), then they must have unique IP Subnets, based on basic rules for IP Routing. So you say "10.16.48.x" is the only network that you support pxebooting on. Questions: - Does your management network for your StarlingX hosts need to be vlan tagged ? ( if yes, not a problem, just curious why ? ) - If YES * use pxeboot (bond0) = 10.16.48.0/24 mgmt. (bond0.1648) = 10.16.49.0/24 ( I just picked 49 as an arbitrary other network ... - if NO * DON"T use pxeboot * use mgmt. (bond0) = 10.16.48.0/24 Greg So wait.... Are you saying that I can put both pxeboot and mgmt on untagged 10.16.48.x no need to separate them? -----Original Message----- From: Outback Dingo > Sent: Thursday, May 12, 2022 10:03 PM To: Waines, Greg > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] STX networking [Please note: This e-mail is from an EXTERNAL e-mail address] On Thu, May 12, 2022 at 6:49 PM Waines, Greg > wrote: > > I doubt if this is causing issues but shouldn't this: > pxeboot_subnet: 10.16.48.1/24 > be > pxeboot_subnet: 10.16.48.0/24 > ? ok changed to > pxeboot_subnet: 10.16.48.0/24 > ( and similar issue for oam_subnet ) > > > ACTUALLY ... you have the pxeboot_subnet and the oam_subnet being the same ? > external_oam_subnet: 10.16.48.1/24 > pxeboot_subnet: 10.16.48.1/24 > That is wrong ... they have to be separate IP subnets. > okay heres the curiosity, why do they require to be separate subnets, if their on the seperate interfaces bond0 / bond0.1648 even though its the same really as everything goes over the bond at some point. we really use 10.16.48.x for our management backplane, and it is the only network we allow pxe on. 
as shown in previous netplans # /etc/cloud/cloud.cfg.d/99-disable-network-config.cfg with the following: # network: {config: disabled} network: bonds: bond0: interfaces: - enp33s0 - enp101s0 macaddress: 98:03:9b:54:9c:f4 mtu: 9000 parameters: down-delay: 0 lacp-rate: fast mii-monitor-interval: 100 mode: 802.3ad transmit-hash-policy: layer3+4 up-delay: 0 bridges: br-mgmt: addresses: - 10.16.48.10/24 gateway4: 10.16.48.1 interfaces: - bond0 macaddress: 98:03:9b:54:9c:f4 mtu: 9000 nameservers: addresses: - 10.16.48.10 - 10.16.48.11 - 1.1.1.1 search: - maas parameters: forward-delay: 15 stp: false br-storage: addresses: - 10.16.72.21/24 interfaces: - bond0.1672 macaddress: 98:03:9b:54:9c:f4 mtu: 9000 nameservers: addresses: - 10.16.48.10 - 1.1.1.1 search: - maas parameters: forward-delay: 15 stp: false br-vxlan: addresses: - 10.16.80.21/24 interfaces: - bond0.1680 macaddress: 98:03:9b:54:9c:f4 mtu: 9000 nameservers: addresses: - 10.16.48.10 - 1.1.1.1 search: - maas parameters: forward-delay: 15 stp: false ethernets: eno1np0: dhcp4: true match: macaddress: 00:25:90:b9:71:8c mtu: 9000 set-name: enp17s0f0 eno2np1: dhcp4: true match: macaddress: 00:25:90:b9:71:8d mtu: 9000 set-name: enp17s0f1 enp33s0: match: macaddress: 98:03:9b:54:9c:f4 mtu: 9000 set-name: enp33s0 enp101s0: match: macaddress: 98:03:9b:54:9c:e4 mtu: 9000 set-name: enp101s0 vlans: bond0.1672: id: 1672 link: bond0 mtu: 9000 bond0.1680: id: 1680 link: bond0 mtu: 9000 version: 2 > > You should also remove: > external_oam_node_2_address: 10.16.48.116 > external_oam_node_3_address: 10.16.48.117 > external_oam_node_4_address: 10.16.48.118 > done.... > > Other comment on your config commands > ... > system host-if-modify controller-0 $MGMT_IF -c platform system > interface-network-assign controller-0 $MGMT_IF mgmt > system interface-network-assign controller-0 $MGMT_IF cluster-host // you should remove this, as you assign 'cluster-host' network below to a separate interface > ... > system host-if-modify controller-0 $CLUSTER_IF -c platform system > interface-network-assign controller-0 $CLUSTER_IF cluster-host ... also fixed ... > > > > I would redo install using a unique IP Subnet for pxeboot and oam > networks, Greg. > redeploying now, can i ask, whats the logical difference between cluster_host_subnet: 10.16.96.0/24 and cluster_pod_subnet: 10.16.64.0/16 all our pods should exist / be accessible from 10.16.64, do they really require a completely separated interface then hosts?? feels kinda vxlan type magic running new install again... > -----Original Message----- > From: Outback Dingo > > Sent: Thursday, May 12, 2022 4:30 AM > To: Waines, Greg > > Cc: starlingx-discuss at lists.starlingx.io > Subject: Re: [Starlingx-discuss] STX networking > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > ok, so getting closer its about the ip space and preset variables now > > and ... 
after installing software on controller-0 last run i used > note i set pxeboot_ vars to put the right network and interfaces on > bond0 > [sysadmin at controller-0 ~(keystone_admin)]$ cat localhost.yml > system_mode: duplex > > dns_servers: > - 8.8.8.8 > - 8.8.4.4 > > external_oam_subnet: 10.16.48.1/24 > external_oam_gateway_address: 10.16.48.1 > external_oam_floating_address: 10.16.48.110 > external_oam_node_0_address: 10.16.48.114 > external_oam_node_1_address: 10.16.48.115 > external_oam_node_2_address: 10.16.48.116 > external_oam_node_3_address: 10.16.48.117 > external_oam_node_4_address: 10.16.48.118 > > admin_username: admin > admin_password: somepass > ansible_become_pass: somepass > > # Add these lines to configure Docker to use a proxy server # # docker_http_proxy: http://my.proxy.com:1080 # # docker_https_proxy: https://my.proxy.com:1443 # # docker_no_proxy: > # # - 1.2.3.4 > # > kubernetes_version: 1.21.3 > pxeboot_subnet: 10.16.48.1/24 > pxeboot_start_address: 10.16.48.100 > pxeboot_end_address: 10.16.48.151 > > then ran ansible-playbook > /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml > and ... it was successful... onto configuring source > /etc/platform/openrc system host-if-add -c platform -a 802.3ad -x > layer2 controller-0 bond0 ae enp33s0 enp49s0 system host-if-add > controller-0 -V 1648 -c platform bond0.1648 vlan bond0 system > host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0 > system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan > bond0 > OAM_IF=bond0.1648 > PXE_IF=bond0 > MGMT_IF=bond0.1680 > CLUSTER_IF=bond0.1664 > ping 8.8.8.8 > > system host-if-modify controller-0 lo -c none IFNET_UUIDS=$(system > interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}') > for UUID in $IFNET_UUIDS; do system interface-network-remove ${UUID}; done > > system host-if-modify controller-0 $OAM_IF -c platform system > interface-network-assign controller-0 $OAM_IF oam > > system host-if-modify controller-0 $MGMT_IF -c platform system > interface-network-assign controller-0 $MGMT_IF mgmt system > interface-network-assign controller-0 $MGMT_IF cluster-host > > system host-if-modify controller-0 $PXE_IF -c platform system > interface-network-assign controller-0 $PXE_IF pxeboot > > system host-if-modify controller-0 $CLUSTER_IF -c platform system > interface-network-assign controller-0 $CLUSTER_IF cluster-host > > system ntp-modify ntpservers=0.pool.ntp.org,1.pool.ntp.org > system host-label-assign controller-0 openstack-control-plane=enabled > system host-label-assign controller-0 ceph-mon-placement=enabled > system host-label-assign controller-0 ceph-mgr-placement=enabled > system storage-backend-add ceph-rook --confirmed system host-unlock > controller-0 > > where controller-0 does reboot and do its boot sequence, then comes up > on the correct OAM_IF IP, and DOES have also the correct floating > address assigned to bond0.1648 > > i can actually login, i them waited some minutes and > > source /etc/platform/openrc > > [sysadmin at controller-0 ~(keystone_admin)]$ system host-list > +----+--------------+-------------+----------------+-------------+--------------+ > | id | hostname | personality | administrative | operational | > availability | > +----+--------------+-------------+----------------+-------------+--------------+ > | 1 | controller-0 | controller | unlocked | enabled | > available | > +----+--------------+-------------+----------------+-------------+--------------+ > > functionally working... 
> with the following network config
>
> [sysadmin@controller-0 ~(keystone_admin)]$ ip a
> 1: lo: mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
>     link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
>     inet 127.0.0.1/8 scope host lo
>        valid_lft forever preferred_lft forever
>     inet6 ::1/128 scope host
>        valid_lft forever preferred_lft forever
> 2: eno1: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether ac:1f:6b:60:97:52 brd ff:ff:ff:ff:ff:ff
> 3: eno2: mtu 1500 qdisc mq state DOWN group default qlen 1000
>     link/ether ac:1f:6b:60:97:53 brd ff:ff:ff:ff:ff:ff
> 4: enp33s0: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
> 5: enp49s0: mtu 1500 qdisc mq master bond0 state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff permaddr b8:59:9f:12:2c:fc
> 6: docker0: mtu 1500 qdisc noqueue state DOWN group default
>     link/ether 02:42:d5:7a:2c:4c brd ff:ff:ff:ff:ff:ff
>     inet 172.17.0.1/16 brd 172.17.255.255 scope global docker0
>        valid_lft forever preferred_lft forever
> 7: bond0: mtu 1500 qdisc htb state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
>     inet 10.16.48.101/24 brd 10.16.48.255 scope global bond0
>        valid_lft forever preferred_lft forever
>     inet 10.16.48.100/24 scope global secondary bond0
>        valid_lft forever preferred_lft forever
>     inet6 fe80::ba59:9fff:fe12:3278/64 scope link
>        valid_lft forever preferred_lft forever
> 8: vlan1648@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
>     inet 10.16.48.114/24 brd 10.16.48.255 scope global vlan1648
>        valid_lft forever preferred_lft forever
>     inet 10.16.48.110/24 scope global secondary vlan1648
>        valid_lft forever preferred_lft forever
>     inet6 fe80::ba59:9fff:fe12:3278/64 scope link
>        valid_lft forever preferred_lft forever
> 9: vlan1664@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
>     inet6 fe80::ba59:9fff:fe12:3278/64 scope link
>        valid_lft forever preferred_lft forever
> 10: vlan1680@bond0: mtu 1500 qdisc htb state UP group default qlen 1000
>     link/ether b8:59:9f:12:32:78 brd ff:ff:ff:ff:ff:ff
>     inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1680
>        valid_lft forever preferred_lft forever
>     inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1680:12
>        valid_lft forever preferred_lft forever
>     inet 192.168.206.1/24 scope global secondary vlan1680
>        valid_lft forever preferred_lft forever
>     inet 192.168.204.1/24 scope global secondary vlan1680
>        valid_lft forever preferred_lft forever
>     inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1680
>        valid_lft forever preferred_lft forever
>     inet6 fe80::ba59:9fff:fe12:3278/64 scope link
>        valid_lft forever preferred_lft forever
> ------------------snip----------------------
>
> i pxe booted 2 more nodes, they did pxe fine from controller bond0
> with 10.16.48.x as specified in localhost.yml
>
> they did show in system host-list... where i set their personalities.
>
> [sysadmin@controller-0 ~(keystone_admin)]$ system host-list
> +----+--------------+-------------+----------------+-------------+--------------+
> | id | hostname     | personality | administrative | operational | availability |
> +----+--------------+-------------+----------------+-------------+--------------+
> | 1  | controller-0 | controller  | unlocked       | enabled     | available    |
> | 2  | controller-1 | controller  | locked         | disabled    | offline      |
> | 3  | worker-0     | worker      | locked         | disabled    | offline      |
> +----+--------------+-------------+----------------+-------------+--------------+
>
> they both then proceeded to boot.... but now appear hung...... 1+ hours
>
> and i think the problem just might be... the 192.168.204 and 192.168.206
> addresses on bond0.1680. so.... they need to be 10.16.64.x, which is
> what all our pods talk across, for bond0.1664, or 10.16.80.x for
> bond0.1680
>
> so now the question is which is what in the variables. i believe i
> have the proper pxeboot_ and oam with 10.16.48.x on bond0 and
> bond0.1648, though you show oam and management_ as different. i dont
> think i completely grasp what mgmt_, cluster_host, cluster_pod,
> cluster_service and management_multicast each are, so whats what in
> your ip space compared to what mine should be? (see the variable map
> sketched just after this quoted exchange)
>
> pxeboot_subnet
> pxeboot_start_address
> pxeboot_end_address
>
> management_subnet
> management_start_address
> management_end_address
> cluster_host_subnet
> cluster_host_start_address
> cluster_host_end_address
> cluster_pod_subnet
> cluster_pod_start_address
> cluster_pod_end_address
> cluster_service_subnet
> cluster_service_start_address
> cluster_service_end_address
> management_multicast_subnet
> management_multicast_start_address
> management_multicast_end_address
>
>
> On Thu, May 12, 2022 at 7:44 AM Outback Dingo wrote:
> >
> > working through the configuration now based on findings, and yes i
> > only have to do the ip link commands once, prior to bootstrap
> >
> > i did get bond0 and vlans on a previous try to be configured after
> > system host-unlock controller-0, they were just in the wrong order.
> > so, rebuilding the primary node. if it works and puts the interfaces
> > and networks on the proper interfaces, and i can bootstrap
> > controller-1 and get past unlocking that also... i think it will be
> > a win!
> >
> > On Thu, May 12, 2022 at 7:32 AM Waines, Greg wrote:
> > >
> > > Were you successful ?
> > >
> > > ( One question ... you are only having to do the 'ip link ...'
> > > commands BEFORE bootstrap in order to have IP Connectivity to the
> > > outside world for bootstrapping .. correct ? )
> > >
> > > Greg.
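A rough map of those install-time variables, as a sketch only: the values below are the shipped defaults from the StarlingX bootstrap docs (a few of them are visible in the ip a outputs in this thread), and the comments are interpretation rather than quoted documentation:

pxeboot_subnet: 169.254.202.0/24           # untagged network used only for pxebooting hosts
management_subnet: 192.168.204.0/24        # internal platform traffic between hosts; also carries pxeboot when untagged
cluster_host_subnet: 192.168.206.0/24      # per-host Kubernetes node network; inter-host pod traffic rides on it
cluster_pod_subnet: 172.16.0.0/16          # virtual pod range handed out by the CNI; never bound to an interface
cluster_service_subnet: 10.96.0.0/12       # virtual ClusterIP range; never bound to an interface
management_multicast_subnet: 239.1.1.0/28  # multicast range used by platform services on the mgmt network

The *_start_address / *_end_address variables just bound the allocation pools inside each subnet. Only pxeboot, management, cluster_host (and oam) ever map onto host interfaces; the pod and service subnets are routed inside Kubernetes, which is why they never appear in a switch config.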
> > >
> > > -----Original Message-----
> > > From: Outback Dingo
> > > Sent: Wednesday, May 11, 2022 8:21 PM
> > > To: Waines, Greg
> > > Cc: starlingx-discuss at lists.starlingx.io
> > > Subject: Re: [Starlingx-discuss] STX networking
> > >
> > > [Please note: This e-mail is from an EXTERNAL e-mail address]
> > >
> > > I think even allowing for special conditions of bond0, to allow OAM_IF, MGMT_IF, CLUSTER_IF, PXE_IF to all be set on the same bond0, and even dropping the vlans for corner cases, then i would only need to set the Install-time-only parameters:
> > >
> > > Network Properties
> > >
> > > pxeboot_subnet: 10.16.48.1
> > > pxeboot_start_address: 10.16.48.100
> > > pxeboot_end_address: 10.16.48.125
> > > management_subnet
> > > management_start_address
> > > management_end_address
> > > cluster_host_subnet
> > > cluster_host_start_address
> > > cluster_host_end_address
> > > cluster_pod_subnet
> > > cluster_pod_start_address
> > > cluster_pod_end_address
> > > cluster_service_subnet
> > > cluster_service_start_address
> > > cluster_service_end_address
> > > management_multicast_subnet
> > > management_multicast_start_address
> > > management_multicast_end_address
> > >
> > > ip link add bond0 type bond
> > > ip link set bond0 type bond miimon 100 mode 802.3ad
> > > ip link set enp33s0 down
> > > ip link set enp33s0 master bond0
> > > ip link set enp44s0 down
> > > ip link set enp44s0 master bond0
> > > ip link set bond0 up
> > >
> > > Set VLAN on the bond device:
> > >
> > > ip link add link bond0 name bond0.1648 type vlan id 1648
> > > ip link set bond0.1648 up
> > > ip link add link bond0 name bond0.1664 type vlan id 1664
> > > ip link set bond0.1664 up
> > >
> > > and modify the host details as per below:
> > >
> > > OAM_IF=bond0.1648
> > > MGMT_IF=bond0.1664
> > > CLUSTER_IF=bond0.1680
> > > PXE_IF=bond0   <- this puts pxe on bond0
> > >
> > > On Thu, May 12, 2022 at 2:25 AM Waines, Greg wrote:
> > > >
> > > > replying to answer your questions from email below, see in-lined below, Greg.
> > > >
> > > > -----Original Message-----
> > > > From: Outback Dingo
> > > > Sent: Wednesday, May 11, 2022 12:00 AM
> > > > To: starlingx-discuss at lists.starlingx.io
> > > > Subject: [Starlingx-discuss] STX networking
> > > >
> > > > [Please note: This e-mail is from an EXTERNAL e-mail address]
> > > >
> > > > scenario...
> > > >
> > > > i have a host say controller-0
> > > >
> > > > prior to any ansible run
> > > > [Greg] I assume you mean the bootstrap ansible playbook
> > > >
> > > > i need to create a bond, and bridges and vlans
> > > > [Greg] You do need an interface to the outside world ... e.g. in order to download container images from docker hub.
> > > > [Greg] Why can you not simply create a single interface (one link of bond) with a vlan ?
> > > >
> > > > sure....
> > > > Add a bond device as root:
> > > >
> > > > ip link add bond0 type bond
> > > > ip link set bond0 type bond miimon 100 mode 802.3ad
> > > > ip link set enp33s0 down
> > > > ip link set enp33s0 master bond0
> > > > ip link set enp44s0 down
> > > > ip link set enp44s0 master bond0
> > > > ip link set bond0 up
> > > >
> > > > Set VLAN on the bond device:
> > > >
> > > > ip link add link bond0 name bond0.1648 type vlan id 1648
> > > > ip link set bond0.1648 up
> > > > ip link add link bond0 name bond0.1664 type vlan id 1664
> > > > ip link set bond0.1664 up
> > > > ip link add link bond0 name bond0.1680 type vlan id 1680
> > > > ip link set bond0.1680 up
> > > >
> > > > Add the bridge device and attach VLAN to it:
> > > > ip link add br0 type bridge
> > > > ip link set bond0.1648 master br0
> > > > ip link set bond0.1664 master br0
> > > > ip link set bond0.1680 master br0
> > > > ip link set br0 up
> > > >
> > > > so i see where in starlingx
> > > > [Greg] the following commands are only possible AFTER bootstrap
> > > >
> > > > system host-if-add -c platform -a 802.3ad -x layer2 controller-0 bond0 ae enp33s0 enp49s0
> > > > system host-if-modify controller-0 $OAM_IF -c platform
> > > > system interface-network-assign controller-0 $OAM_IF oam
> > > > system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
> > > > system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
> > > > system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
> > > >
> > > > does allow for me to create the bond0 and the vlans? but I dont see any documentation for bridges anywhere. Do I even need the bridge?
> > > > [Greg] No ... there is no requirement for a bridge with StarlingX.
> > > >
> > > > where i want to set, for example, since each needs its own interface: can i set OAM_IF=bond0, and say MGMT_IF=bond0.1664
> > > > [Greg] Yes ... i.e. OAM on port-based/untagged-vlan of bond and MGMT on vlan-tag=1664 on bond ( BUT here is where you need the pxeboot network, because your MGMT network is vlan-tagged and you can't pxe boot over that )
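To tie Greg's point to the bootstrap side, a minimal sketch of the localhost.yml overrides a vlan-tagged mgmt network implies. The parameter names are the install-time-only ones listed later in this thread; the subnet values are hypothetical examples only (a unique untagged subnet for pxeboot, per Greg's advice, and the thread's vlan-1680 range for mgmt):

# untagged, rides the bare bond0; the other hosts' BIOS network-boot on this
pxeboot_subnet: 10.16.49.0/24
pxeboot_start_address: 10.16.49.100
pxeboot_end_address: 10.16.49.150

# vlan-tagged internal management network (bond0.1680 in this thread)
management_subnet: 10.16.80.0/24
management_start_address: 10.16.80.10
management_end_address: 10.16.80.50

After bootstrap, 'pxeboot' would then be assigned to the untagged bond0 and 'mgmt' to the vlan interface, as in the system interface-network-assign commands discussed in this thread.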
> > > >
> > > > OAM_IF=bond0
> > > > system host-if-modify controller-0 $OAM_IF -c platform
> > > > system interface-network-assign controller-0 $OAM_IF oam
> > > > system host-if-add controller-0 -V 1648 -c platform bond0.1648 vlan bond0
> > > > system host-if-add controller-0 -V 1664 -c platform bond0.1664 vlan bond0
> > > > system host-if-add controller-0 -V 1680 -c platform bond0.1680 vlan bond0
> > > > system host-if-add controller-0 -V 1672 -c platform bond0.1672 vlan bond0
> > > > MGMT_IF=bond0.1664
> > > > system host-if-modify controller-0 lo -c none
> > > > IFNET_UUIDS=$(system interface-network-list controller-0 | awk '{if ($6=="lo") print $4;}')
> > > > for UUID in $IFNET_UUIDS; do
> > > >     system interface-network-remove ${UUID}
> > > > done
> > > > system host-if-modify controller-0 $MGMT_IF -c platform   [Greg] don't think you actually need this command
> > > > system interface-network-assign controller-0 $MGMT_IF mgmt
> > > > system interface-network-assign controller-0 $MGMT_IF cluster-host
> > > >
> > > > the reason for this being our switches are
> > > >
> > > > # MGMT
> > > > interface vlan1648
> > > >     address 10.16.48.2/24
> > > >     address-virtual 44:38:39:FF:00:02 10.16.48.1
> > > >     vlan-id 1648
> > > >     vlan-raw-device bridge
> > > >
> > > > interface vlan1672
> > > >     address 10.16.72.2/24
> > > >     address-virtual 44:38:39:FF:00:03 10.16.72.1
> > > >     vlan-id 1672
> > > >     vlan-raw-device bridge
> > > >
> > > > interface vlan1680
> > > >     address 10.16.80.2/24
> > > >     address-virtual 44:38:39:FF:00:03 10.16.80.1
> > > >     vlan-id 1680
> > > >     vlan-raw-device bridge
> > > >
> > > > interface vlan1696
> > > >     address 10.16.96.2/24
> > > >     address-virtual 44:38:39:FF:00:03 10.16.96.1
> > > >     vlan-id 1696
> > > >     vlan-raw-device bridge
> > > >
> > > > interface vlan1664
> > > >     address 10.16.64.2/24
> > > >     address-virtual 44:38:39:FF:00:07 10.16.64.1
> > > >     vlan-id 1664
> > > >     vlan-raw-device bridge
> > > >
> > > > and further down DATAIF_0=bond0.1680
> > > >
> > > > the reason being we are trying to have starlingx conform to our network topology. I also noted, in
> > > > https://docs.starlingx.io/deploy_install_guides/r6_release/ansible_bootstrap_configs.html#install-time-only-params-r6
> > > > the Network Properties I listed at the bottom.
> > > >
> > > > can i modify these addresses to conform to our networks, as our switches wont pass the traffic you set as defaults, as seen in my first attempt at the bottom? Though i still dont believe dhcp/pxe will work on a vlan interface.
> > > >
> > > > Network Properties
> > > > pxeboot_subnet
> > > > pxeboot_start_address
> > > > pxeboot_end_address
> > > > management_subnet
> > > > management_start_address
> > > > management_end_address
> > > > cluster_host_subnet
> > > > cluster_host_start_address
> > > > cluster_host_end_address
> > > > cluster_pod_subnet
> > > > cluster_pod_start_address
> > > > cluster_pod_end_address
> > > > cluster_service_subnet
> > > > cluster_service_start_address
> > > > cluster_service_end_address
> > > > management_multicast_subnet
> > > > management_multicast_start_address
> > > > management_multicast_end_address
> > > >
> > > > 7: bond0: mtu 1500 qdisc htb state UP group default qlen 1000
> > > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > > >     inet 10.16.48.112/24 brd 10.16.48.255 scope global bond0
> > > >        valid_lft forever preferred_lft forever
> > > >     inet 10.16.48.114/24 scope global secondary bond0
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 8: vlan1664@bond0: mtu 1500 qdisc htb state UP group default qlen 1000
> > > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > > >     inet 192.168.204.2/24 brd 192.168.204.255 scope global vlan1664
> > > >        valid_lft forever preferred_lft forever
> > > >     inet 192.168.206.2/24 brd 192.168.206.255 scope global vlan1664:12
> > > >        valid_lft forever preferred_lft forever
> > > >     inet 169.254.202.1/24 scope global vlan1664
> > > >        valid_lft forever preferred_lft forever
> > > >     inet 192.168.206.1/24 scope global secondary vlan1664
> > > >        valid_lft forever preferred_lft forever
> > > >     inet 192.168.204.1/24 scope global secondary vlan1664
> > > >        valid_lft forever preferred_lft forever
> > > >     inet 192.168.204.4/24 brd 192.168.204.255 scope global secondary vlan1664
> > > >        valid_lft forever preferred_lft forever
> > > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 9: vlan1672@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
> > > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 10: vlan1648@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
> > > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > > 11: vlan1680@bond0: mtu 1500 qdisc noqueue state UP group default qlen 1000
> > > >     link/ether b8:59:9f:12:34:f0 brd ff:ff:ff:ff:ff:ff
> > > >     inet6 fe80::ba59:9fff:fe12:34f0/64 scope link
> > > >        valid_lft forever preferred_lft forever
> > > >
> > > > _______________________________________________
> > > > Starlingx-discuss mailing list
> > > > Starlingx-discuss at lists.starlingx.io
> > > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
-------------- next part --------------
An HTML attachment was scrubbed...
URL:

From Greg.Waines at windriver.com  Tue May 17 18:40:01 2022
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Tue, 17 May 2022 18:40:01 +0000
Subject: [Starlingx-discuss] pxe boot network
In-Reply-To: <1651983500351.1065673003.2927018964@optimcloud.com>
References: <1651983500351.1065673003.2927018964@optimcloud.com>
Message-ID:

1) can i boot multiple aio duplex from a pxe boot server
- You can ONLY pxeboot the first controller of a system (controller-0) from an external pxe boot server
- after controller-0 is up, controller-0 is used to pxeboot all the remaining starlingx hosts (e.g. controller-1, worker-0, worker-1, etc.)

2) once controller-0 is up, how does controller-1 connect to it.
- controller-0 pxeboots controller-1 over the mgmt. network (by default ... you can use a separate pxeboot network if your mgmt. network needs to be configured in a fashion that does not support pxebooting, e.g. ipv6 or vlan tagged)
- controller-0 then communicates with controller-1 over the mgmt. network and the cluster-host network, for starlingx infrastructure reasons and kubernetes reasons respectively

3) which then i have to inquire how does discovery work for controller-1 on controller-0
- controller-1 bios is configured to network boot on the mgmt. network
- controller-0 dnsmasq pxeboots controller-1 and kicks starlingx system inventory to auto-discover the node

4) in your opinions, which are welcome, how should i be deploying 6 nodes with this much memory and storage per node, to use 3 as controllers, and all 6 as compute/storage nodes?
- starlingx currently supports only up to 2 controllers
- in order to use 6x nodes as both compute/storage in a 6 node cluster, you would have to have:
  * 2x AIO controllers (controller and worker/compute function)
  * 4x workers (worker/compute function)
  * the only way to use storage on all nodes is to use ROOK (and not the host-based ceph) for your storage backend

Greg.
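To make answers 2 and 3 concrete, this is roughly what the discovery flow looks like from controller-0 once the peer nodes are powered on and pxebooting. A sketch based on the standard host provisioning commands in the StarlingX install guides; the host ids and personalities are examples:

# newly pxebooted hosts are auto-discovered and appear with personality 'None'
system host-list

# setting the personality kicks off that host's install from controller-0
system host-update 2 personality=controller
system host-update 3 personality=worker hostname=worker-0

# once installed and rebooted, they show as locked/disabled/online,
# ready for interface/network assignment and host-unlock
system host-list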
-----Original Message-----
From: Embedded Devel
Sent: Friday, May 13, 2022 10:15 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] pxe boot network

[Please note: This e-mail is from an EXTERNAL e-mail address]

Im trying to deploy 6 nodes with 512G ram and 24TB disk each
Im envisioning 2-3 AIO DUPLEX Controller and 4 WORKER/STORAGE

SO i deployed a single AIO duplex node, works fine. then i had issues with pxe booting other nodes from it, due to switch configuration, without having to reinvent the switch topology to accommodate the pxe 169.254.202.1 network. im reading below where it states:

PXE Boot Network

You can set up a PXE boot network for booting all nodes to allow a non-standard management network configuration. The internal management network is used for PXE booting of new hosts and the PXE boot network is not required. However there are scenarios where the internal management network cannot be used for PXE booting of new hosts. For example, if the internal management network needs to be on a VLAN-tagged network for deployment reasons, or if it must support IPv6, you must configure the optional untagged PXE boot network for PXE booting of new hosts using IPv4.

According to:
https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.html#configuring-a-pxe-boot-server-r6

Okay so it seems i need to go this route, and pxe boot all nodes. however.. reading the configure a pxe boot server doc, it states "You can optionally set up a PXE Boot Server to support controller-0 initialization."

so this tells me that it can only be used for pxe booting controller-0 ????
https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/configuring-a-pxe-boot-server.html#configuring-a-pxe-boot-server-r6

so questions:
1) can i boot multiple aio duplex from a pxe boot server
2) once controller-0 is up, how does controller-1 connect to it.
3) which then i have to inquire how does discovery work for controller-1 on controller-0
4) in your opinions, which are welcome, how should i be deploying 6 nodes with this much memory and storage per node, to use 3 as controllers, and all 6 as compute/storage nodes?

--
Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

From build.starlingx at gmail.com Tue May 17 19:40:53 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 17 May 2022 15:40:53 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 2543 - Failure! Message-ID: <1174307064.10.1652816457878.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 2543 Status: Failure Timestamp: 20220517T194051Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220512T054038Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20220512T054038Z OS: centos DOCKER_BUILD_ID: jenkins-master-containers-20220512T054038Z-builder MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220512T054038Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20220512T054038Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers From build.starlingx at gmail.com Tue May 17 19:40:59 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 17 May 2022 15:40:59 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] master STX_build_docker_images_layered - Build # 290 - Failure!
Message-ID: <1335422000.13.1652816460715.JavaMail.javamailuser@localhost> Project: STX_build_docker_images_layered Build #: 290 Status: Failure Timestamp: 20220512T063442Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220512T054038Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20220512T054038Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220512T054038Z/logs MASTER_BUILD_NUMBER: 292 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20220512T054038Z/logs MASTER_JOB_NAME: STX_build_layer_containers_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers PUBLISH_TIMESTAMP: 20220512T054038Z DOCKER_BUILD_ID: jenkins-master-containers-20220512T054038Z-builder TIMESTAMP: 20220512T054038Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20220512T054038Z/inputs LAYER: containers PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20220512T054038Z/outputs From build.starlingx at gmail.com Tue May 17 19:41:02 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 17 May 2022 15:41:02 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 292 - Failure! Message-ID: <1261773806.16.1652816463311.JavaMail.javamailuser@localhost> Project: STX_build_layer_containers_master_master Build #: 292 Status: Failure Timestamp: 20220512T054038Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220512T054038Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: false From ildiko.vancsa at gmail.com Tue May 17 20:18:36 2022 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 17 May 2022 13:18:36 -0700 Subject: [Starlingx-discuss] StarlingX TSC election - Nomination period ended In-Reply-To: References: Message-ID: Hi StarlingX Community, I would like to inform you that the nomination period[1] for the StarlingX TSC election has ended. Thank you to all candidates who submitted their nominations into this round. The election officials[2] are still finalizing some details and will come back with further updates shortly. Thank you, [1] https://docs.starlingx.io/election/ [2] https://docs.starlingx.io/election/#election-officials > On May 10, 2022, at 13:42, Ildiko Vancsa wrote: > > Hi StarlingX Community, > > Nominations for the 2 Technical Steering Committee positions are now open and will remain open until __May 17, 2022, 20:00 UTC__. > > All nominations must be submitted as a text file to the starlingx/election repository as explained on the election website[1]. > > Please note that the name of the file should match the email address in your Gerrit configuration. > > Candidates for the Technical Steering Committee Positions: Any contributing community member can propose their candidacy for an available, directly-elected TSC seat. > > The election will be held from May 24, 2022, 20:00 UTC through to May 31, 2022, 20:00 UTC. 
> > The electorate are the community members that are also contributors for one of the official teams[2] or served in a leadership role (TSC, PL, TL) over the 12-month timeframe May 10, 2021 to May 10, 2022, as well as the contributors who are acknowledged by the TSC.
>
> Please see the website[3] for additional details about this election.
> Please find below the timeline:
>
> TC nomination starts @ May 10, 2022, 20:00 UTC
> TC nomination ends @ May 17, 2022, 20:00 UTC
> TC campaigning starts @ May 17, 2022, 20:00 UTC
> TC campaigning ends @ May 24, 2022, 20:00 UTC
> TC elections starts @ May 24, 2022, 20:00 UTC
> TC elections ends @ May 31, 2022, 20:00 UTC
>
> If you have any questions please be sure to either ask them on the mailing list or to the elections officials[4].
>
> Thank you,
>
> [1] https://docs.starlingx.io/election/#how-to-submit-a-candidacy
> [2] https://docs.starlingx.io/governance/reference/tsc/projects/index.html
> [3] https://docs.starlingx.io/election/
> [4] https://docs.starlingx.io/election/#election-officials

From balendu.burla at intel.com  Tue May 17 22:41:27 2022
From: balendu.burla at intel.com (Burla, Balendu)
Date: Tue, 17 May 2022 22:41:27 +0000
Subject: [Starlingx-discuss] StarlingX build environment errors
In-Reply-To:
References:
Message-ID:

Hi Ghada,

We are blocked by the below error for preparing the build environment. Who can be the right contact to help us in resolving the below errors? Is there another email, or a specific location, where I should post my request?

Thank you,
Best regards,
Mouli.

From: Burla, Balendu
Sent: Sunday, May 15, 2022 10:52 PM
To: starlingx-discuss at lists.starlingx.io; Ho, Teresa
Cc: davlet.panech at windriver.com; Saracin, Mihnea; Khalil, Ghada; Nidhi Shivashankara Belur (nidhi.shivashankara.belur at intel.com); Li, Baoqian; Burla, Balendu
Subject: StarlingX build environment errors

Hi,

I was trying to prepare a StarlingX build environment by following the steps captured in the below link:
https://docs.starlingx.io/developer_resources/build_guide.html#build-the-centos-mirror-repository

while building the packages, I see below errors (similar errors are observed for each package build). It seems I am missing some basic configuration, but am not sure what it is. Spent a decent amount of time trying to resolve the issue.. but no luck. Looking for your help.

cd $MY_REPO_ROOT_DIR/stx-tools/toCOPY
bash generate-centos-repo.sh /import/mirrors/CentOS/stx/CentOS/
build-pkgs

20:16:43 INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
[899/1984] 20:16:43 Start: init plugins 20:16:43 INFO: selinux disabled 20:16:43 Finish: init plugins 20:16:43 Start: run 20:16:43 INFO: Start(/localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/workerconfig-1.0-14.tis.src.rpm) Config(mock/b0) 20:16:43 Start: chroot init 20:16:43 INFO: calling preinit hooks 20:16:43 INFO: enabled root cache 20:16:43 INFO: enabled yum cache 20:16:43 Start: cleaning yum metadata 20:16:43 Finish: cleaning yum metadata 20:16:43 INFO: enabled HW Info plugin 20:16:43 Mock Version: 1.4.16 20:16:43 INFO: Mock Version: 1.4.16 20:16:43 Start: yum install 20:16:43 ERROR: Exception(/localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/workerconfig-1.0-14.tis.src.rpm) Config(mock/b0) 0 minutes 0 seconds 20:16:43 INFO: Results and/or logs in: /localdisk/loadbuild/stx_builder/fec-operator/std/results/stx_builder-fec-operator-4.0-std/workerconfig-1.0-14.tis 20:16:43 ERROR: Command failed: 20:16:43 # /usr/bin/yum --installroot /localdisk/loadbuild/stx_builder/fec-operator/std/mock/b0/root/ --releasever 7 install @buildsys-build pigz lbzip2 bash yum python3 20:16:43 Failed to set locale, defaulting to C 20:16:43 http://127.0.0.1:8088/localdisk/loadbuild/stx_builder/fec-operator/std/results/stx_builder-fec-operator-4.0-std/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden 20:16:43 Trying other mirror. 20:16:43 To address this issue please refer to the below wiki article 20:16:43 20:16:43 https://wiki.centos.org/yum-errors 20:16:43 20:16:43 If above article doesn't help to resolve this issue please use https://bugs.centos.org/. 20:16:43 20:16:43 http://127.0.0.1:8088/localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden 20:16:43 Trying other mirror. 20:16:43 http://127.0.0.1:8088/localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden 20:16:43 Trying other mirror. 20:16:43 20:16:43 20:16:43 One of the configured repositories failed (Stx-Centos-7-Distro), 20:16:43 and yum doesn't have enough cached data to continue. At this point the only 20:16:43 safe thing yum can do is fail. There are a few ways to work "fix" this: 20:16:43 20:16:43 1. Contact the upstream for the repository and get them to fix the problem. 20:16:43 [859/1984] 20:16:43 2. Reconfigure the baseurl/etc. for the repository, to point to a working 20:16:43 upstream. This is most often useful if you are using a newer 20:16:43 distribution release than is supported by the repository (and the 20:16:43 packages for the previous distribution release still work). 20:16:43 20:16:43 3. Run the command with the repository temporarily disabled 20:16:43 yum --disablerepo=StxCentos7Distro ... 20:16:43 20:16:43 4. Disable the repository permanently, so yum won't use it by default. Yum 20:16:43 will then just ignore the repository until you permanently enable it 20:16:43 again or use --enablerepo for temporary usage: 20:16:43 20:16:43 yum-config-manager --disable StxCentos7Distro 20:16:43 or 20:16:43 subscription-manager repos --disable=StxCentos7Distro 20:16:43 20:16:43 5. Configure the failing repository to be skipped, if it is unavailable. 20:16:43 Note that yum will try to contact the repo. when it runs most commands, 20:16:43 so will have to try and fail each time (and thus. yum will be be much 20:16:43 slower). 
If it is a very temporary problem though, this is often a nice 20:16:43 compromise: 20:16:43 20:16:43 yum-config-manager --save --setopt=StxCentos7Distro.skip_if_unavailable=true 20:16:43 20:16:43 failure: repodata/repomd.xml from StxCentos7Distro: [Errno 256] No more mirrors to try. 20:16:43 http://127.0.0.1:8088/localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden 20:16:43 20:16:43 End build on 'b0': /localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/workerconfig-1.0-14.tis.src.rpm 20:16:43 Error building workerconfig-1.0-14.tis.src.rpm on 'b0'. 20:16:43 Will try to build again (if some other package will succeed). 20:16:43 schedule2: no unbuilt deps for 'worker-utils', searching at depth 3 20:16:43 Start build on 'b0': /localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/worker-utils-1.0-27.tis.src.rpm 20:16:46 building worker-utils-1.0-27.tis.src.rpm 20:16:46 INFO: mock.py version 1.4.16 starting (python version = 3.6.8)... 20:16:46 Start: init plugins 20:16:46 INFO: selinux disabled 20:16:46 Finish: init plugins 20:16:46 Start: run 20:16:46 Start: chroot init 20:16:46 INFO: calling preinit hooks .... 20:16:47 20:16:47 Results out to: /localdisk/loadbuild/stx_builder/fec-operator/std/results/stx_builder-fec-operator-4.0-std 20:16:47 20:16:47 Pkgs built: 0 20:16:47 dirname: missing operand 20:16:47 Try 'dirname --help' for more information. 20:16:47 20:16:47 Auditing for obsolete srpms 20:16:47 waiting for srpm audit to complete 20:16:47 Auditing for obsolete rpms 20:16:47 waiting for rpm audit to complete 20:16:47 Audit complete 20:16:47 20:16:47 Recreate repodata 20:16:49 20:16:49 Failed to build packages: worker-utils-1.0-27.tis.src.rpm workerconfig-1.0-14.tis.src.rpm watchdog-5.13-12.el7.tis.8.src.rpm vm-topology-1.0-18.tis.src.rpm tuned-config-1.0-4.tis.src.rpm vault-helm-1.0-27.tis.src.rpm util-linux-config-1.0-5.tis.src.rpm update-motd-1.0-7.tis.src.rpm tzdata-2021e-1.el7.tis.1.src.rpm tss2-930-1.tis.2.src.rpm tsconfig-1.0-60.tis.src.rpm trident-installer-22.01.0-0.tis.8.src.rpm systemd-config-1.0-12.tis.src.rpm tpm2-tools-3.0.4-2.el7.tis.6.src.rpm tpm2-openssl-engine-1.0-3.tis.src.rpm tboot-1.9.6-3.el7.tis.5.src.rpm syslog-ng-config-1.0-34.tis.src.rpm sysinv-fpga-agent-1.0-13.tis.src.rpm sysinv-agent-1.0-15.tis.src.rpm stx-ssl-1.0.0-15.tis.src.rpm sysinv-1.0-2684.tis.src.rpm sudo-config-1.0-5.tis.src.rpm stx-vault-helm-1.0-27.tis.src.rpm stx-snmp-helm-1.0-32.tis.src.rpm stx-sdo-helm-1.0-6.tis.src.rpm stx-rook-ceph-1.0-17.tis.src.rpm stx-ptp-notification-helm-1.0-57.tis.src.rpm stx-ocf-scripts-1.0-11.tis.src.rpm stx-portieris-helm-1.0-37.tis.src.rpm stx-platform-helm-1.0-46.tis.src.rpm stx-openstack-helm-1.0-199.tis.src.rpm stx-oidc-auth-helm-1.0-64.tis.src.rpm stx-nginx-ingress-controller-helm-1.1-25.tis.src.rpm stx-monitor-helm-1.0-37.tis.src.rpm stx-metrics-server-helm-1.0-11.tis.src.rpm storageconfig-1.0-12.tis.src.rpm stx-istio-helm-1.0-4.tis.src.rpm stx-extensions-1.0-7.tis.src.rpm stx-cert-manager-helm-1.0-33.tis.src.rpm stx-audit-helm-1.0-22.tis.src.rpm starlingx-dashboard-1.0-307.tis.src.rpm spectre-meltdown-checker-0.37+-3.tis.src.rpm sm-tools-1.0-22.tis.src.rpm sm-api-1.0-49.tis.src.rpm sm-db-1.0.0-57.tis.src.rpm sm-common-1.0.0-32.tis.src.rpm sm-client-1.0-34.tis.src.rpm sm-1.0.0-55.tis.src.rpm shim-signed-15-1.tis.5.src.rpm shim-15-1.el7.tis.7.src.rpm rpm-4.14.0-1.tis.6.src.rpm shadow-utils-config-1.0-6.tis.src.rpm setup-config-1.0-4.tis.src.rpm 
rsync-config-1.0-5.tis.src.rpm resource-agents-4.1.1-12.el7_6.7.tis.21.src.rpm requests-toolbelt-0.9.1-0.tis.4.src.rpm registry-token-server-1.0.0-1.tis.15.src.rpm python-webencodings-0.5.1-1.el7.tis.4.src.rpm Redfishtool-1.1.0-.tis.3.src.rpm rdma-core-55mlnx37-1.55103.tis.21.src.rpm rabbitmq-server-config-1.0-6.tis.src.rpm python-wsme-0.9.2-1.el7.tis.5.src.rpm python-voluptuous-0.8.9-1.el7.tis.2.src.rpm python-siteconfig-1.0-1.tis.src.rpm python-ryu-4.19-0.tis.5.src.rpm python-setuptools-38.5.1-1.el7.tis.2.src.rpm python-openstacksdk-0.36.0-1.tis.33.src.rpm python-psycopg2-2.5.1-3.el7.tis.2.src.rpm python-pankoclient-0.7.0-1.tis.2.src.rpm python-os-vif-1.9.1-1.el7.tis.2.src.rpm python-oslo-messaging-5.30.6-1.el7.tis.6.src.rpm python-openstackdocstheme-1.11.0-1.tis.2.src.rpm python-openstackclient-4.0.0-1.tis.18.src.rpm python-novaclient-15.1.0-1.tis.4.src.rpm python-keystoneclient-3.21.0-2.tis.2.src.rpm python-neutronclient-6.14.0-1.tis.4.src.rpm python-lefthandclient-2.1.0-0.tis.3.src.rpm python-kubernetes-8.0.0-8.el7.tis.1.src.rpm python-keystoneauth1-3.17.1-2.tis.2.src.rpm python-keyring-5.7.1-1.tis.6.src.rpm python-k8sapp-vault-20.06-27.tis.src.rpm python-k8sapp-portieris-1.0-37.tis.src.rpm python-k8sapp-snmp-1.0-9.tis.src.rpm python-k8sapp-rook-1.0-17.tis.src.rpm python-k8sapp-ptp-notification-1.0-57.tis.src.rpm python-k8sapp-platform-1.0-46.tis.src.rpm python-k8sapp-oidc-1.0-64.tis.src.rpm python-k8sapp-openstack-1.0-199.tis.src.rpm python-k8sapp-auditd-1.0-22.tis.src.rpm python-k8sapp-nginx-ingress-controller-1.0-14.tis.src.rpm python-k8sapp-istio-1.0-4.tis.src.rpm python-k8sapp-cert-manager-1.0-33.tis.src.rpm python-ironicclient-3.1.0-1.tis.2.src.rpm python-heatclient-1.18.0-1.tis.4.src.rpm python-gnocchiclient-7.0.4-1.tis.31.src.rpm python-daemon-2.2.3-7.el8.tis.4.src.rpm python-glanceclient-2.17.0-1.tis.4.src.rpm python-fmclient-1.0-35.tis.src.rpm python-docker-3.3.0-1.el7.tis.6.src.rpm python-django-horizon-15.1.0-1.tis.54.src.rpm python-cinderclient-5.0.0-1.tis.6.src.rpm python-cephclient-13.2.2.0-20.tis.src.rpm python-barbicanclient-4.9.0-1.tis.3.src.rpm puppet-sshd-1.0.0-9.tis.src.rpm python-aodhclient-1.3.0-1.tis.1.src.rpm python-3parclient-4.2.3-0.tis.3.src.rpm puppet-sysinv-1.0.0-43.tis.src.rpm puppet-stdlib-4.18.0-2.el7.tis.3.src.rpm puppet-staging-1.0.4-1.b466d93git.el7.tis.4.src.rpm puppet-smapi-1.0.0-7.tis.src.rpm puppet-rabbitmq-5.6.0-4.5ac45degit.el7.tis.2.src.rpm puppet-puppi-2.2.3-0.tis.4.src.rpm puppet-openstacklib-11.5.0-1.el7.tis.8.src.rpm puppet-postgresql-4.8.0-0.tis.5.src.rpm puppet-patching-1.0.0-13.tis.src.rpm puppet-oslo-11.3.0-1.el7.tis.2.src.rpm puppet-nslcd-0.0.1-0.tis.4.src.rpm puppet-nfv-1.0.0-19.tis.src.rpm puppet-network-1.0.2-0.tis.10.src.rpm puppet-ldap-0.2.4-0.tis.4.src.rpm puppet-mtce-1.0.0-14.tis.src.rpm puppet-manifests-1.0.0-1066.tis.src.rpm puppet-lvm-0.5.0-0.tis.4.src.rpm puppet-keystone-11.3.0-1.el7.tis.7.src.rpm puppet-horizon-11.5.0-1.el7.tis.4.src.rpm puppet-haproxy-1.5.0-4.6ffcb07git.el7.tis.5.src.rpm puppet-dnsmasq-1.1.0-0.tis.4.src.rpm puppet-fm-1.0.0-17.tis.src.rpm puppet-filemapper-1.1.3-0.tis.2.src.rpm puppet-drbd-0.3.1-rc0.tis.4.src.rpm puppet-dcorch-1.0.0-29.tis.src.rpm puppet-dcmanager-1.0.0-22.tis.src.rpm puppet-dcdbsync-1.0.0-14.tis.src.rpm portieris-helm-0.7.0-14.tis.src.rpm puppet-create_resources-0.0.1-0.tis.2.src.rpm puppet-ceph-2.4.1-1.el7.tis.9.src.rpm puppet-boolean-1.0.2-1.tis.2.src.rpm puppet-4.8.2-1.el7.tis.3.src.rpm playbookconfig-1.0-784.tis.src.rpm platform-util-1.0-89.tis.src.rpm 
platform-kickstarts-1.0.0-291.tis.src.rpm pam-config-1.0-10.tis.src.rpm pf-bb-config-21.6-0.tis.8.src.rpm pci-irq-affinity-agent-1.0-33.tis.src.rpm patch-alarm-1.0-26.tis.src.rpm openvswitch-config-1.0-5.tis.src.rpm openstack-ras-1.0.0-0.tis.3.src.rpm openstack-keystone-16.0.0-1.el7.tis.23.src.rpm opae-intel-fpga-driver-2.0.1-10.tis.55.src.rpm openstack-helm-infra-1.0-57.tis.src.rpm openstack-helm-1.0-59.tis.src.rpm openssh-config-1.0-11.tis.src.rpm oidcauthtools-1.0-5.tis.src.rpm openldap-config-1.0-17.tis.src.rpm ntp-config-1.0-4.tis.src.rpm nova-api-proxy-1.0-38.tis.src.rpm ntp-4.2.6p5-29.el7.centos.2.tis.9.src.rpm net-tools-2.0-0.24.20131004git.el7.tis.6.src.rpm nfv-1.0-233.tis.src.rpm nfs-utils-config-1.0-5.tis.src.rpm nfscheck-1.0-5.tis.src.rpm namespace-utils-1.0-4.tis.src.rpm multus-config-1.0-1.tis.src.rpm mtce-storage-1.0-11.tis.src.rpm monitor-helm-elastic-1.0-19.tis.src.rpm mtce-control-1.0-15.tis.src.rpm mtce-compute-1.0-17.tis.src.rpm mstflint-4.16.0-1.55103.tis.2.src.rpm monitor-tools-1.0-10.tis.src.rpm monitor-helm-1.0-25.tis.src.rpm mlnx-tools-5.2.0-0.55103.tis.21.src.rpm metrics-server-helm-1.0-1.tis.src.rpm logrotate-3.8.6-17.el7.tis.6.src.rpm memcached-custom-1.0-5.tis.src.rpm mechanize-0.4.5-1.el7.tis.3.src.rpm logrotate-config-1.0-5.tis.src.rpm logmgmt-1.0-18.tis.src.rpm lldpd-0.9.0-0.tis.9.src.rpm linuxptp-3.1.1-1.tis.5.src.rpm libtpms-0.6.0-2.tis.2.src.rpm lighttpd-config-1.0-9.tis.src.rpm lighttpd-1.4.54-1.el7.tis.12.src.rpm libvirt-python-4.7.0-1.tis.6.src.rpm libnftnl-1.1.5-4.tis.1.src.rpm libfdt-1.4.4-0.tis.5.src.rpm libevent-2.0.21-4.el7.tis.3.src.rpm kvm-timer-advance-1.0-3.tis.src.rpm libbpf-0.5.0-1.tis.1.src.rpm libbnxt_re-220.0.5.0-rhel7u9.tis.3.src.rpm ldapscripts-2.0.8-0.tis.8.src.rpm kubernetes-1.23.1-1.23.1-1.tis.4.src.rpm kube-memory-1.0-8.tis.src.rpm kube-cpusets-1.0-6.tis.src.rpm istio-helm-1.13.3-2.tis.src.rpm kmod-bnxt_en-1.10.2-220.0.13.0.tis.19.src.rpm kiali-helm-1.45.0-3.tis.src.rpm kexec-tools-2.0.21-1.tis.2.src.rpm keepalived-2.1.5-6.tis.1.src.rpm k8s-pod-recovery-1.0-0.tis.15.src.rpm k8s-cni-cache-cleanup-1.0-0.tis.1.src.rpm isolcpus-device-plugin-1.0-5.tis.src.rpm iscsi-initiator-utils-config-1.0-4.tis.src.rpm iptables-config-1.0-4.tis.src.rpm initscripts-config-1.0-12.tis.src.rpm iptables-1.8.4-21.tis.6.src.rpm iproute-5.12.0-4.tis.4.src.rpm io-scheduler-1.0-6.tis.src.rpm initscripts-9.49.46-1.el7.tis.15.src.rpm inih-44-0.tis.1.src.rpm igb_uio-kmod-21.02-0.tis.57.src.rpm html5lib-python-1.0.1-1.el7.tis.4.src.rpm ice-kmod-1.8.3-1.tis.25.src.rpm iavf-kmod-4.4.2-1.tis.23.src.rpm i40e-kmod-2.18.9-1.tis.23.src.rpm helm-3.2.1-0.tis.17.src.rpm haproxy-config-1.0-5.tis.src.rpm haproxy-1.5.18-8.el7.tis.12.src.rpm golang-1.17.5-1.17.5-1.tis.1.src.rpm grubby-8.28-25.el7.tis.5.src.rpm gpu-operator-1.8.1-0.tis.4.src.rpm golang-dep-0.5.0-4.tis.src.rpm golang-1.16.12-1.16.12-2.tis.3.src.rpm fm-rest-api-1.0-72.tis.src.rpm fm-mgr-1.0-25.tis.src.rpm EXAMPLE_SYSINV-1.0-2.tis.src.rpm fm-doc-1.0-52.tis.src.rpm fm-common-1.0-69.tis.src.rpm fm-api-1.0-46.tis.src.rpm filesystem-scripts-1.0-4.tis.src.rpm facter-2.4.4-4.el7.tis.7.src.rpm EXAMPLE_VIM-1.0-4.tis.src.rpm EXAMPLE_SERVICE-1.0-2.tis.src.rpm EXAMPLE_RR-1.0-2.tis.src.rpm EXAMPLE_MTCE-1.0-4.tis.src.rpm EXAMPLE_0001-1.0-2.tis.src.rpm EXAMPLE_KUBELET-1.0-1.tis.src.rpm EXAMPLE_DC-1.0-3.tis.src.rpm EXAMPLE_0002-1.0-2.tis.src.rpm etcd-3.3.15-1.tis.7.src.rpm engtools-1.0-37.tis.src.rpm enable-dev-patch-1.0-4.tis.src.rpm docker-distribution-2.7.1-1.tis.13.src.rpm dwarves-1.22-1.tis.1.src.rpm 
drbd-9.15.1-0.tis.11.src.rpm dpkg-1.18.24-0.tis.2.src.rpm docker-config-1.0-5.tis.src.rpm dnsmasq-config-1.0-4.tis.src.rpm dnsmasq-2.76-7.el7.tis.7.src.rpm dhcp-config-1.0-8.tis.src.rpm dmesg-config-1.0-1.tis.src.rpm distributedcloud-client-1.0.0-1.tis.65.src.rpm distributedcloud-1.0.0-1.tis.422.src.rpm dhcp-4.2.5-82.el7.centos.tis.13.src.rpm dex-helm-1.0-10.tis.src.rpm controllerconfig-1.0-327.tis.src.rpm collector-1.0-69.tis.src.rpm containernetworking-plugins-1.0.1-1.tis.9.src.rpm containerd-config-1.0-4.tis.src.rpm containerd-1.4.11-22.tis.src.rpm config-gate-1.0-13.tis.src.rpm collectd-extensions-1.0-0.tis.85.src.rpm cloud-init-0.7.9-24.el7.centos.1.tis.4.src.rpm cgts-client-1.0-295.tis.src.rpm chartmuseum-0.12.0-6.tis.src.rpm ceph-manager-1.0-28.tis.src.rpm cgcs-patch-1.0-105.tis.src.rpm cert-mon-1.0-7.tis.src.rpm cert-manager-helm-1.0-15.tis.src.rpm cert-alarm-1.0-5.tis.src.rpm centos-release-config-1.0-3.tis.src.rpm build-info-1.0-4.tis.src.rpm bond-cni-1.0-bff6422.tis.3.src.rpm audit-config-1.0-4.tis.src.rpm armada-0.2.0-0.tis.14.src.rpm armada-helm-toolkit-1.0-8.tis.src.rpm rabbitmq-server-3.6.5-1.el7.tis.9.src.rpm parted-3.1-29.el7.tis.7.src.rpm qat17-4.14.0-00031.tis.60.src.rpm libvirt-4.7.0-1.tis.31.src.rpm grub2-2.02-0.86.el7.centos.tis.14.src.rpm openvswitch-2.11.0-0.tis.13.src.rpm mtce-guest-1.0-146.tis.src.rpm mtce-common-1.0-142.tis.src.rpm mtce-1.0-217.tis.src.rpm mlnx-ofa_kernel-5.5-OFED.5.5.1.0.3.1.tis.26.src.rpm ceph-14.2.22-0.el7.tis.35.src.rpm sudo-1.8.23-10.el7_9.1.tis.10.src.rpm qemu-kvm-ev-3.0.0-0.tis.20.src.rpm openssh-7.4p1-21.el7_4.tis.9.src.rpm mariadb-10.1.28-1.el7.tis.8.src.rpm kubernetes-unversioned-1.0-1.tis.9.src.rpm kubernetes-1.22.5-1.22.5-1.tis.9.src.rpm kubernetes-1.21.8-1.21.8-1.tis.16.src.rpm openldap-2.4.44-20.el7.tis.11.src.rpm setup-2.8.71-10.el7.tis.11.src.rpm bash-4.2.46-34.el7.tis.10.src.rpm systemd-219-78.el7_9.3.tis.19.src.rpm python-2.7.5-89.el7.tis.8.src.rpm kernel-5.10.99-200.42.tis.el7.src.rpm pxe-network-installer-1.0-35.tis.src.rpm ######## Sun May 15 20:16:49 UTC 2022: build-rpm-parallel --std failed with rc=1 Sun May 15 20:16:49 UTC 2022: build-rpm-parallel --std failed with rc=1 [stx_builder at 4df2aa3dafa0 toCOPY]$ Best regards, Mouli. -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Wed May 18 05:39:44 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 18 May 2022 01:39:44 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 2545 - Failure! 
Message-ID: <1567771536.20.1652852395894.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 2545 Status: Failure Timestamp: 20220518T053938Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220517T235521Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20220517T235521Z OS: centos DOCKER_BUILD_ID: jenkins-master-containers-20220517T235521Z-builder MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220517T235521Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20220517T235521Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers From build.starlingx at gmail.com Wed May 18 05:39:57 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 18 May 2022 01:39:57 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] master STX_build_docker_images_layered - Build # 291 - Still Failing! In-Reply-To: <1278888664.11.1652816458856.JavaMail.javamailuser@localhost> References: <1278888664.11.1652816458856.JavaMail.javamailuser@localhost> Message-ID: <864365146.23.1652852398076.JavaMail.javamailuser@localhost> Project: STX_build_docker_images_layered Build #: 291 Status: Still Failing Timestamp: 20220518T010701Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220517T235521Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20220517T235521Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220517T235521Z/logs MASTER_BUILD_NUMBER: 293 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20220517T235521Z/logs MASTER_JOB_NAME: STX_build_layer_containers_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers PUBLISH_TIMESTAMP: 20220517T235521Z DOCKER_BUILD_ID: jenkins-master-containers-20220517T235521Z-builder TIMESTAMP: 20220517T235521Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20220517T235521Z/inputs LAYER: containers PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20220517T235521Z/outputs From build.starlingx at gmail.com Wed May 18 05:39:59 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 18 May 2022 01:39:59 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 293 - Still Failing! 
In-Reply-To: <1069493224.14.1652816461465.JavaMail.javamailuser@localhost> References: <1069493224.14.1652816461465.JavaMail.javamailuser@localhost> Message-ID: <1456234772.26.1652852400401.JavaMail.javamailuser@localhost> Project: STX_build_layer_containers_master_master Build #: 293 Status: Still Failing Timestamp: 20220517T235521Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220517T235521Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: false From alexandru.dimofte at intel.com Wed May 18 10:01:59 2022 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 18 May 2022 10:01:59 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220517T213711Z Message-ID: Sanity Test from 2022-May-18 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220517T213711Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220517T213711Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz Bug fixed: https://bugs.launchpad.net/starlingx/+bug/1971981 - nginx-ingress-controller apply-failed during setup of the StarlingX OBS: The Setup stage is now passing on all configurations All BARE-METAL configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1973888 - StarlingX provision failed for all bare-metal configurations At least virtual configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready, OBS: All pods are fine now but stx-openstack apply still fails. I attached new logs. Kind regards, Validation team [Logo Description automatically generated] Dimofte Alexandru Software Engineer PMCE TEAM Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL:

From Greg.Waines at windriver.com  Wed May 18 15:24:53 2022
From: Greg.Waines at windriver.com (Waines, Greg)
Date: Wed, 18 May 2022 15:24:53 +0000
Subject: [Starlingx-discuss] StarlingX build environment errors
In-Reply-To:
References:
Message-ID:

I would talk to Davlet Panech or Scott Little.
Greg.

From: Burla, Balendu
Sent: Tuesday, May 17, 2022 6:41 PM
To: starlingx-discuss at lists.starlingx.io; Ho, Teresa
Cc: Panech, Davlet; Saracin, Mihnea; Khalil, Ghada; Shivashankara Belur, Nidhi; Li, Baoqian
Subject: Re: [Starlingx-discuss] StarlingX build environment errors

[Please note: This e-mail is from an EXTERNAL e-mail address]

Hi Ghada,

We are blocked by the below error for preparing the build environment. Who can be the right contact to help us in resolving the below errors? Is there another email, or a specific location, where I should post my request?

Thank you,
Best regards,
Mouli.
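Before escalating, one way to narrow down a 403 like the one below: from inside the build container, request the exact URLs mock is failing on and check whether the serving process can traverse the path. A diagnostic sketch only, with curl/ls/namei run against the paths taken from the log; the permissions interpretation is a common cause of lighttpd 403s, not a confirmed diagnosis for this build:

# does the local repo server answer at all?
curl -I http://127.0.0.1:8088/
# the exact file yum wanted
curl -I http://127.0.0.1:8088/localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml
# a 403 on a file that exists usually points at directory permissions
# somewhere along the served path
ls -ld /localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata
namei -l /localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml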
From: Burla, Balendu
Sent: Sunday, May 15, 2022 10:52 PM
To: starlingx-discuss at lists.starlingx.io; Ho, Teresa
Cc: davlet.panech at windriver.com; Saracin, Mihnea; Khalil, Ghada; Nidhi Shivashankara Belur (nidhi.shivashankara.belur at intel.com); Li, Baoqian; Burla, Balendu
Subject: StarlingX build environment errors

Hi,

I was trying to prepare a StarlingX build environment by following the steps captured in the below link:
https://docs.starlingx.io/developer_resources/build_guide.html#build-the-centos-mirror-repository

while building the packages, I see below errors (similar errors are observed for each package build). It seems I am missing some basic configuration, but am not sure what it is. Spent a decent amount of time trying to resolve the issue.. but no luck. Looking for your help.

cd $MY_REPO_ROOT_DIR/stx-tools/toCOPY
bash generate-centos-repo.sh /import/mirrors/CentOS/stx/CentOS/
build-pkgs

20:16:43 INFO: mock.py version 1.4.16 starting (python version = 3.6.8)...
20:16:43 ERROR: Command failed:
20:16:43 # /usr/bin/yum --installroot /localdisk/loadbuild/stx_builder/fec-operator/std/mock/b0/root/ --releasever 7 install @buildsys-build pigz lbzip2 bash yum python3
20:16:43 http://127.0.0.1:8088/localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden
------------------snip---------------------- (remainder of the quoted build log and failed-package list is identical to the May 15 report above)
libvirt-python-4.7.0-1.tis.6.src.rpm libnftnl-1.1.5-4.tis.1.src.rpm libfdt-1.4.4-0.tis.5.src.rpm libevent-2.0.21-4.el7.tis.3.src.rpm kvm-timer-advance-1.0-3.tis.src.rpm libbpf-0.5.0-1.tis.1.src.rpm libbnxt_re-220.0.5.0-rhel7u9.tis.3.src.rpm ldapscripts-2.0.8-0.tis.8.src.rpm kubernetes-1.23.1-1.23.1-1.tis.4.src.rpm kube-memory-1.0-8.tis.src.rpm kube-cpusets-1.0-6.tis.src.rpm istio-helm-1.13.3-2.tis.src.rpm kmod-bnxt_en-1.10.2-220.0.13.0.tis.19.src.rpm kiali-helm-1.45.0-3.tis.src.rpm kexec-tools-2.0.21-1.tis.2.src.rpm keepalived-2.1.5-6.tis.1.src.rpm k8s-pod-recovery-1.0-0.tis.15.src.rpm k8s-cni-cache-cleanup-1.0-0.tis.1.src.rpm isolcpus-device-plugin-1.0-5.tis.src.rpm iscsi-initiator-utils-config-1.0-4.tis.src.rpm iptables-config-1.0-4.tis.src.rpm initscripts-config-1.0-12.tis.src.rpm iptables-1.8.4-21.tis.6.src.rpm iproute-5.12.0-4.tis.4.src.rpm io-scheduler-1.0-6.tis.src.rpm initscripts-9.49.46-1.el7.tis.15.src.rpm inih-44-0.tis.1.src.rpm igb_uio-kmod-21.02-0.tis.57.src.rpm html5lib-python-1.0.1-1.el7.tis.4.src.rpm ice-kmod-1.8.3-1.tis.25.src.rpm iavf-kmod-4.4.2-1.tis.23.src.rpm i40e-kmod-2.18.9-1.tis.23.src.rpm helm-3.2.1-0.tis.17.src.rpm haproxy-config-1.0-5.tis.src.rpm haproxy-1.5.18-8.el7.tis.12.src.rpm golang-1.17.5-1.17.5-1.tis.1.src.rpm grubby-8.28-25.el7.tis.5.src.rpm gpu-operator-1.8.1-0.tis.4.src.rpm golang-dep-0.5.0-4.tis.src.rpm golang-1.16.12-1.16.12-2.tis.3.src.rpm fm-rest-api-1.0-72.tis.src.rpm fm-mgr-1.0-25.tis.src.rpm EXAMPLE_SYSINV-1.0-2.tis.src.rpm fm-doc-1.0-52.tis.src.rpm fm-common-1.0-69.tis.src.rpm fm-api-1.0-46.tis.src.rpm filesystem-scripts-1.0-4.tis.src.rpm facter-2.4.4-4.el7.tis.7.src.rpm EXAMPLE_VIM-1.0-4.tis.src.rpm EXAMPLE_SERVICE-1.0-2.tis.src.rpm EXAMPLE_RR-1.0-2.tis.src.rpm EXAMPLE_MTCE-1.0-4.tis.src.rpm EXAMPLE_0001-1.0-2.tis.src.rpm EXAMPLE_KUBELET-1.0-1.tis.src.rpm EXAMPLE_DC-1.0-3.tis.src.rpm EXAMPLE_0002-1.0-2.tis.src.rpm etcd-3.3.15-1.tis.7.src.rpm engtools-1.0-37.tis.src.rpm enable-dev-patch-1.0-4.tis.src.rpm docker-distribution-2.7.1-1.tis.13.src.rpm dwarves-1.22-1.tis.1.src.rpm drbd-9.15.1-0.tis.11.src.rpm dpkg-1.18.24-0.tis.2.src.rpm docker-config-1.0-5.tis.src.rpm dnsmasq-config-1.0-4.tis.src.rpm dnsmasq-2.76-7.el7.tis.7.src.rpm dhcp-config-1.0-8.tis.src.rpm dmesg-config-1.0-1.tis.src.rpm distributedcloud-client-1.0.0-1.tis.65.src.rpm distributedcloud-1.0.0-1.tis.422.src.rpm dhcp-4.2.5-82.el7.centos.tis.13.src.rpm dex-helm-1.0-10.tis.src.rpm controllerconfig-1.0-327.tis.src.rpm collector-1.0-69.tis.src.rpm containernetworking-plugins-1.0.1-1.tis.9.src.rpm containerd-config-1.0-4.tis.src.rpm containerd-1.4.11-22.tis.src.rpm config-gate-1.0-13.tis.src.rpm collectd-extensions-1.0-0.tis.85.src.rpm cloud-init-0.7.9-24.el7.centos.1.tis.4.src.rpm cgts-client-1.0-295.tis.src.rpm chartmuseum-0.12.0-6.tis.src.rpm ceph-manager-1.0-28.tis.src.rpm cgcs-patch-1.0-105.tis.src.rpm cert-mon-1.0-7.tis.src.rpm cert-manager-helm-1.0-15.tis.src.rpm cert-alarm-1.0-5.tis.src.rpm centos-release-config-1.0-3.tis.src.rpm build-info-1.0-4.tis.src.rpm bond-cni-1.0-bff6422.tis.3.src.rpm audit-config-1.0-4.tis.src.rpm armada-0.2.0-0.tis.14.src.rpm armada-helm-toolkit-1.0-8.tis.src.rpm rabbitmq-server-3.6.5-1.el7.tis.9.src.rpm parted-3.1-29.el7.tis.7.src.rpm qat17-4.14.0-00031.tis.60.src.rpm libvirt-4.7.0-1.tis.31.src.rpm grub2-2.02-0.86.el7.centos.tis.14.src.rpm openvswitch-2.11.0-0.tis.13.src.rpm mtce-guest-1.0-146.tis.src.rpm mtce-common-1.0-142.tis.src.rpm mtce-1.0-217.tis.src.rpm mlnx-ofa_kernel-5.5-OFED.5.5.1.0.3.1.tis.26.src.rpm ceph-14.2.22-0.el7.tis.35.src.rpm 
sudo-1.8.23-10.el7_9.1.tis.10.src.rpm qemu-kvm-ev-3.0.0-0.tis.20.src.rpm openssh-7.4p1-21.el7_4.tis.9.src.rpm mariadb-10.1.28-1.el7.tis.8.src.rpm kubernetes-unversioned-1.0-1.tis.9.src.rpm kubernetes-1.22.5-1.22.5-1.tis.9.src.rpm kubernetes-1.21.8-1.21.8-1.tis.16.src.rpm openldap-2.4.44-20.el7.tis.11.src.rpm setup-2.8.71-10.el7.tis.11.src.rpm bash-4.2.46-34.el7.tis.10.src.rpm systemd-219-78.el7_9.3.tis.19.src.rpm python-2.7.5-89.el7.tis.8.src.rpm kernel-5.10.99-200.42.tis.el7.src.rpm pxe-network-installer-1.0-35.tis.src.rpm ######## Sun May 15 20:16:49 UTC 2022: build-rpm-parallel --std failed with rc=1 Sun May 15 20:16:49 UTC 2022: build-rpm-parallel --std failed with rc=1 [stx_builder at 4df2aa3dafa0 toCOPY]$ Best regards, Mouli. -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Wed May 18 15:26:07 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 18 May 2022 11:26:07 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_helm_charts - Build # 2548 - Failure! Message-ID: <1808641508.30.1652887567958.JavaMail.javamailuser@localhost> Project: STX_build_helm_charts Build #: 2548 Status: Failure Timestamp: 20220518T152600Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220518T095404Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20220518T095404Z OS: centos DOCKER_BUILD_ID: jenkins-master-containers-20220518T095404Z-builder MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220518T095404Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20220518T095404Z/logs PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers From build.starlingx at gmail.com Wed May 18 15:26:09 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 18 May 2022 11:26:09 -0400 (EDT) Subject: [Starlingx-discuss] [stable] [build-report] master STX_build_docker_images_layered - Build # 292 - Still Failing! 
In-Reply-To: <2081721631.21.1652852396492.JavaMail.javamailuser@localhost> References: <2081721631.21.1652852396492.JavaMail.javamailuser@localhost> Message-ID: <488658401.33.1652887570151.JavaMail.javamailuser@localhost> Project: STX_build_docker_images_layered Build #: 292 Status: Still Failing Timestamp: 20220518T105157Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220518T095404Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-containers/20220518T095404Z OS: centos MUNGED_BRANCH: master MY_REPO: /localdisk/designer/jenkins/master-containers/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220518T095404Z/logs MASTER_BUILD_NUMBER: 294 PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/containers/20220518T095404Z/logs MASTER_JOB_NAME: STX_build_layer_containers_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master-containers PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/containers PUBLISH_TIMESTAMP: 20220518T095404Z DOCKER_BUILD_ID: jenkins-master-containers-20220518T095404Z-builder TIMESTAMP: 20220518T095404Z OS_VERSION: 7.5.1804 BUILD_STREAM: stable PUBLISH_INPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20220518T095404Z/inputs LAYER: containers PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/master/centos/containers/20220518T095404Z/outputs From build.starlingx at gmail.com Wed May 18 15:26:11 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 18 May 2022 11:26:11 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 294 - Still Failing! In-Reply-To: <865710934.24.1652852398577.JavaMail.javamailuser@localhost> References: <865710934.24.1652852398577.JavaMail.javamailuser@localhost> Message-ID: <1776481407.36.1652887572260.JavaMail.javamailuser@localhost> Project: STX_build_layer_containers_master_master Build #: 294 Status: Still Failing Timestamp: 20220518T095404Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220518T095404Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: true FORCE_BUILD: false From ildiko.vancsa at gmail.com Wed May 18 16:36:31 2022 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 18 May 2022 09:36:31 -0700 Subject: [Starlingx-discuss] StarlingX TSC election update and results Message-ID: <4597522A-7C63-49A0-9792-CB74A329A999@gmail.com> Hi StarlingX Community, I'm reaching out to you with announcements about the recent StarlingX TSC election. As the number of candidates[2] doesn't exceed the number of open seats and the new TSC group fulfills all the criteria we have listed on the governance page[3], we will skip the voting period for this election and form the new TSC group. To follow the original election timeline, the new and returning members' term starts on the week of May 30th and is approximately one year long. Following the above, I would hereby like to announce the election of Mingyuan Qi, who is a returning TSC member in this round. Please join the election officials[1] in congratulating him!
Thanks and Best Regards, [1] https://docs.starlingx.io/election/#election-officials [2] https://opendev.org/starlingx/election/src/branch/master/candidates/2022_H1/tsc [3] https://docs.starlingx.io/governance/reference/tsc/stx_charter.html#elections From balendu.burla at intel.com Wed May 18 17:35:18 2022 From: balendu.burla at intel.com (Burla, Balendu) Date: Wed, 18 May 2022 17:35:18 +0000 Subject: [Starlingx-discuss] StarlingX build environment errors In-Reply-To: References: Message-ID: Thanks Greg. Hi Davlet, We are trying to bring up our local build environment using the steps described at https://docs.starlingx.io/developer_resources/build_guide.html and we see build-pkgs fail for all the packages with the same error captured in my earlier email in this thread. It does appear that some configuration is missing on our setup. Any clue what is missing on our setup? NOTE: Our lab is enabled with a proxy configuration. I think I have taken care of the proxy settings, but I am not sure whether this problem is related to the proxy configuration or not. Thanks in advance, Best regards, Mouli. From: Waines, Greg Sent: Wednesday, May 18, 2022 10:25 AM To: Burla, Balendu; starlingx-discuss at lists.starlingx.io; Ho, Teresa Cc: Panech, Davlet; Term Saracin, Mihnea; Khalil, Ghada; Shivashankara Belur, Nidhi; Li, Baoqian Subject: RE: StarlingX build environment errors I would talk to Davlet Panech or Scott Little. Greg. From: Burla, Balendu Sent: Tuesday, May 17, 2022 6:41 PM To: starlingx-discuss at lists.starlingx.io; Ho, Teresa Cc: Panech, Davlet; Term Saracin, Mihnea; Khalil, Ghada; Shivashankara Belur, Nidhi; Li, Baoqian Subject: Re: [Starlingx-discuss] StarlingX build environment errors [Please note: This e-mail is from an EXTERNAL e-mail address] Hi Ghada, We are blocked by the build error captured in my earlier email while preparing the build environment. Who would be the right contact to help us resolve these errors? Is there another email alias, or a specific location, where I should post my request? Thank you, Best regards, Mouli.
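One quick way to test the proxy theory raised above (a sketch for illustration, not part of the original thread: the port and repo path are taken from the failing log, and it assumes the build container's tools honor the standard http_proxy/no_proxy environment variables) is to compare a proxied and an unproxied request to the builder's local repo server:

# Show any proxy settings visible inside the build container
env | grep -i proxy
# This request follows http_proxy if it is set; an intercepting corporate
# proxy can answer it with exactly this kind of 403
curl -sI http://127.0.0.1:8088/localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml
# The same request, forced to bypass any configured proxy
curl -sI --noproxy 127.0.0.1 http://127.0.0.1:8088/localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml

If only the --noproxy request succeeds, adding 127.0.0.1 and localhost to no_proxy inside the container should clear the 403.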
-------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Wed May 18 22:34:51 2022 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 18 May 2022 22:34:51 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - May 18/2022 Message-ID: Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases stx.7.0 - Release/Feature Planning: https://docs.google.com/spreadsheets/d/171PJAu9SykXm9h9Ny2IsZ8YMbhvEwOzuTbISWUMQhiE/edit#gid=1107209846 - Release Verification: - Folder: https://drive.google.com/drive/folders/1szAP-xVZq7ebSyGJ-EHTVebMmsupSY7w - Feature Testing: https://docs.google.com/spreadsheets/d/1hXJJ4LvxhWwLIF_PHpCyxlGpfKdXtcPjlyclfKFVxKo/edit#gid=968103774 - Regression Testing: https://docs.google.com/spreadsheets/d/19OQmmo5OfD1eHS8rp5uBgnVaS7i8J-NhQuiKVqcyZ0Y/edit#gid=1717644237 - Release Updates - Debian Update - Debian Builds on CENGN - Debian builds are scheduled daily. - Status: May 15: good / May 16: no change / May 17: stuck due to memory issue / May 18: in progress, results in 4hrs - Note: docker image builds are failing on the creation of helm charts.
Last successful docker image build was a week ago - Debian Container Image Builds (including Base Image) - Debian docker images will not be used in stx.7.0. The basic framework exists now; will finish this off in stx.8.0 - Feature Update -- Features with mid-May Code Merge Date - Kubernetes custom configuration support (partial) - Fcst: May 10 >> Code Merged on May 4. There are some bugs to address. - Armada Deprecation / Replacement - FluxCD - Fcst: May 20 >> Re-forecasted to May 27 to cover app upgrades and Debian integration - K8S & Container Components Refresh - k8s 1.22/1.23. Fcst: May 10 >> k8s 1.23 is merged, but not the default in the load. Re-forecasted to May 30 - Container CNI Component Refresh. Fcst: May 17 >> In progress, but need a few more days. Re-forecasted to May 24. - Platform Application Refresh - metric-server >> Need to follow up on FluxCD conversion status before feature testing starts - Test Update - Feature Testing - Still need to update the Release Planning spreadsheet based on the minutes from the last meeting - Regression Testing - Action: Rob to confirm regression start and end dates. Currently forecasted for Jun 6 & Jun 27 respectively. - Sanity - Sanity reports are still reported as Red. 1 issue addressed. 1 ongoing. 1 new. - Intel has extended their sanity contribution to May 20. - They're trying to help us get to a green sanity before they move on. - WR Sanity - Still trying to get a working sanity env. Team stretched due to other priorities. - TBD whether the same sanity cadence can be maintained. From Ghada.Khalil at windriver.com Wed May 18 23:21:29 2022 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Wed, 18 May 2022 23:21:29 +0000 Subject: [Starlingx-discuss] Minutes: Community Call (May 18 2022) Message-ID: Etherpad: https://etherpad.opendev.org/p/stx-status Minutes from the community call May 18 2022 Standing Topics - Build - CentOS builds - No successful build for a week due to a hung build that was not noticed. Build restarted now. - Docker image builds are failing on the creation of helm charts. Last successful docker image build was a week ago. - Issue is under investigation - Shouldn't have a large impact because most images are statically tagged. - Debian builds - Debian builds are scheduled daily. - Status: May 15: good / May 16: no change / May 17: stuck due to memory issue / May 18: in progress, results in 4hrs - Sanity - Sanity remains Red: - Latest report: http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/013026.html - https://bugs.launchpad.net/starlingx/+bug/1970645 - Issue under investigation, but the dev prime is still not able to reproduce. Suspect some kind of resource issue. - https://bugs.launchpad.net/starlingx/+bug/1971981 - Addressed as of May 17 - NEW: https://bugs.launchpad.net/starlingx/+bug/1973888 - Platform/flock team will start investigating - Gerrit Reviews in Need of Attention - Many doc reviews are pending: - https://review.opendev.org/q/project:starlingx%252Fdocs+status:open - Reference Links: - Active Branch (open): https://review.opendev.org/q/projects:starlingx+is:open+branch:+master - Active Branch (merged): https://review.opendev.org/q/projects:starlingx+is:merged+branch:master Topics for This Week - Sanity - Intel will stop running sanity on May 20 (already extended by 1wk) - WR team does not have a sanity env setup, so there will be a gap in coverage - TSC Elections - Nomination period closed yesterday.
- Mingyuan put in a nomination to renew his term; approval/merge in progress - Checkpoint: Removal of pip's legacy resolver - Mailing list thread: - http://lists.starlingx.io/pipermail/starlingx-discuss/2022-March/012822.html - http://lists.starlingx.io/pipermail/starlingx-discuss/2022-March/012827.html - LPs tracking fixes - stx/integ: https://bugs.launchpad.net/starlingx/+bug/1964372 - Prime: Ram S - stx/audit-armada-app: https://bugs.launchpad.net/starlingx/+bug/1966069 - Prime: Ghada Khalil - stx/openstack-armada-app: https://bugs.launchpad.net/starlingx/+bug/1966070 - Prime: Douglas Pereira << Fixed - stx/platform-armada-app: https://bugs.launchpad.net/starlingx/+bug/1966071 - Prime: Bob Church - stx/portieris-armada-app: https://bugs.launchpad.net/starlingx/+bug/1966072 - Prime: Ghada Khalil - stx/ptp-notification-armada-app: https://bugs.launchpad.net/starlingx/+bug/1966073 - Prime: Steve Webster - stx/snmp-armada-app: https://bugs.launchpad.net/starlingx/+bug/1966075 - Prime: Gustavo Dobro - stx/vault-armada-app: https://bugs.launchpad.net/starlingx/+bug/1966076 - Prime: Ghada Khalil - Most LPs are still open. - Currently there is no hard deadline for this. - Doesn't appear that any other open-infra projects are actively addressing these issues either. - Park until after stx.7.0 and target for stx.8.0 ARs from Previous Meetings - None Open Requests for Help - A number of questions from OutbackDingo at gmail.com - Discussions are ongoing on the mailing list - Three questions/issues from Embedded Devel - Build: http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/013011.html - Already answered by Scott: http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/013014.html - pxeboot: http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/013012.html - Appears to be the same topic sent by OutbackDingo at gmail.com previously - multi-AIO: http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/013013.html - Appears to be the same topic sent by OutbackDingo at gmail.com previously - Issue reported by balendu.burla at intel.com - Build Env Issue: http://lists.starlingx.io/pipermail/starlingx-discuss/2022-May/013022.html - Scott will review and respond today From alexandru.dimofte at intel.com Thu May 19 10:20:16 2022 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Thu, 19 May 2022 10:20:16 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220518T074830Z Message-ID: Sanity Test from 2022-May-18 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220518T074830Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220518T074830Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All BARE-METAL configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1973888 - StarlingX provision failed for all bare-metal configurations At least virtual configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready, OBS: All pods are fine now but stx-openstack apply still fails. I attached new logs. Kind regards, Validation team [Logo Description automatically generated] Dimofte Alexandru Software Engineer PMCE TEAM Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed...
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From bogdan-iulian.andrei at intel.com Thu May 19 14:03:07 2022 From: bogdan-iulian.andrei at intel.com (Andrei, Bogdan-Iulian) Date: Thu, 19 May 2022 14:03:07 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220519T032221Z Message-ID: Sanity Test from 2022-May-19 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220519T032221Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220519T032221Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All BARE-METAL configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1973888 - StarlingX provision failed for all bare-metal configurations At least virtual configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready, OBS: All pods are fine now but stx-openstack apply still fails. I attached new logs. Kind regards, Validation team [Logo Description automatically generated] Andrei Bogdan-Iulian Software Engineer PMCE TEAM Personal Mobile: +40 754905864 bogdan-iulian.andrei at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From scott.little at windriver.com Thu May 19 16:08:13 2022 From: scott.little at windriver.com (Scott Little) Date: Thu, 19 May 2022 12:08:13 -0400 Subject: [Starlingx-discuss] StarlingX build environment errors In-Reply-To: References: Message-ID: <66d89c59-bb62-e79c-f38a-4c70ed8787eb@windriver.com> RE your error... 20:16:43 http://127.0.0.1:8088/localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden 20:16:43 Trying other mirror. 20:16:43 http://127.0.0.1:8088/localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml: [Errno 14] HTTP Error 403 - Forbidden 20:16:43 Trying other mirror. I've never seen 'forbidden' before. From within your build environment, can you send the output of these commands:
id
ls -al /localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/repodata/
ls -al /localdisk/loadbuild/stx_builder/fec-operator/std/
ls -al /localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata
ls -al /localdisk/designer/stx_builder/fec-operator/cgcs-root/
Also, are you using a proxy setup?
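For anyone chasing the same 403 later: a web server can only serve a file if the file is readable and every directory above it is searchable by the server's user, so a compact way to run the permission checks Scott lists is namei from util-linux, which prints the owner and mode of each path component in one shot (a sketch for illustration, not part of the thread; the paths come from the failing log above):

# Walk each path component by component, showing owner and mode of each
namei -l /localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml
namei -l /localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/repodata/repomd.xml
# Compare the uid/gid reported here against the owners shown by namei
id

Any component that the serving user cannot read and search could explain the 403 seen in the build log, and would point at a chmod/chown fix rather than a yum configuration change.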
> > Thanks in advance, > > Best regards, > > Mouli. > > *From:* Waines, Greg > *Sent:* Wednesday, May 18, 2022 10:25 AM > *To:* Burla, Balendu ; > starlingx-discuss at lists.starlingx.io; Ho, Teresa > *Cc:* Panech, Davlet ; Term Saracin, > Mihnea ; Khalil, Ghada > ; Shivashankara Belur, Nidhi > ; Li, Baoqian > *Subject:* RE: StarlingX build environment errors > > I would talk to Davlet Panech or Scott Little. > > Greg. > > *From:* Burla, Balendu > > *Sent:* Tuesday, May 17, 2022 6:41 PM > *To:* starlingx-discuss at lists.starlingx.io > ; Ho, Teresa > > > *Cc:* Panech, Davlet >; Term Saracin, Mihnea > >; > Khalil, Ghada >; Shivashankara Belur, Nidhi > >; Li, Baoqian > > > *Subject:* Re: [Starlingx-discuss] StarlingX build environment errors > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Hi Ghada, > > We are blocked by the below error for preparing the build environment. > > Who can be the right contact to help us in resolving the below errors? > > Is there other email or, or specific location where I should post my > request? > > Thank you, > > Best regards, > > Mouli. > > *From:* Burla, Balendu > *Sent:* Sunday, May 15, 2022 10:52 PM > *To:* starlingx-discuss at lists.starlingx.io > ; Ho, Teresa > > > *Cc:* davlet.panech at windriver.com > ; Saracin, Mihnea > >; > Khalil, Ghada >; Nidhi Shivashankara Belur > (nidhi.shivashankara.belur at intel.com > ) > >; Li, Baoqian > >; Burla, Balendu > > > *Subject:* StarlingX build environment errors > > Hi, > > I was trying to prepare a StarlingX build environment by following the > steps captured in the below link: > > https://docs.starlingx.io/developer_resources/build_guide.html#build-the-centos-mirror-repository > > > while building the packages, I see below errors:? (similar errors are > observed for each package build). > > It seems, I am missing some basic configuration but not sure what it > is. ?Spent decent amount of time to try to resolve the issue.. but no > luck. Looking for your help. > > cd $MY_REPO_ROOT_DIR/stx-tools/toCOPY > > bash generate-centos-repo.sh > > /import/mirrors/CentOS/stx/CentOS/ > > build-pkgs > > 20:16:43 INFO: mock.py > > version 1.4.16 starting (python version = 3.6.8)... [899/1984] > > 20:16:43 Start: init plugins > > 20:16:43 INFO: selinux disabled > > 20:16:43 Finish: init plugins > > 20:16:43 Start: run > > 20:16:43 INFO: > Start(/localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/workerconfig-1.0-14.tis.src.rpm) > Config(mock/b0) > > 20:16:43 Start: chroot init > > 20:16:43 INFO: calling preinit hooks > > 20:16:43 INFO: enabled root cache > > 20:16:43 INFO: enabled yum cache > > 20:16:43 Start: cleaning yum metadata > > 20:16:43 Finish: cleaning yum metadata > > 20:16:43 INFO: enabled HW Info plugin > > 20:16:43 Mock Version: 1.4.16 > > 20:16:43 INFO: Mock Version: 1.4.16 > > 20:16:43 Start: yum install > > 20:16:43 ERROR: > Exception(/localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/workerconfig-1.0-14.tis.src.rpm) > Config(mock/b0) 0 minutes 0 seconds > > 20:16:43 INFO: Results and/or logs in: > /localdisk/loadbuild/stx_builder/fec-operator/std/results/stx_builder-fec-operator-4.0-std/workerconfig-1.0-14.tis > > 20:16:43 ERROR: Command failed: > > 20:16:43? 
# /usr/bin/yum --installroot > /localdisk/loadbuild/stx_builder/fec-operator/std/mock/b0/root/ > --releasever 7 install @buildsys-build pigz lbzip2 bash yum python3 > > 20:16:43 Failed to set locale, defaulting to C > > 20:16:43 > http://127.0.0.1:8088/localdisk/loadbuild/stx_builder/fec-operator/std/results/stx_builder-fec-operator-4.0-std/repodata/repomd.xml > : > [Errno 14] HTTP Error 403 - Forbidden > > 20:16:43 Trying other mirror. > > 20:16:43 To address this issue please refer to the below wiki article > > 20:16:43 > > 20:16:43 https://wiki.centos.org/yum-errors > > > 20:16:43 > > 20:16:43 If above article doesn't help to resolve this issue please > use https://bugs.centos.org/ > . > > 20:16:43 > > 20:16:43 > http://127.0.0.1:8088/localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/repodata/repomd.xml > : > [Errno 14] HTTP Error 403 - Forbidden > > 20:16:43 Trying other mirror. > > 20:16:43 > http://127.0.0.1:8088/localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml > : > [Errno 14] HTTP Error 403 - Forbidden > > 20:16:43 Trying other mirror. > > 20:16:43 > > 20:16:43 > > 20:16:43? One of the configured repositories failed (Stx-Centos-7-Distro), > > 20:16:43? and yum doesn't have enough cached data to continue. At this > point the only > > 20:16:43? safe thing yum can do is fail. There are a few ways to work > "fix" this: > > 20:16:43 > > 20:16:43????? 1. Contact the upstream for the repository and get them > to fix the problem. > > 20:16:43 [859/1984] > > 20:16:43????? 2. Reconfigure the baseurl/etc. for the repository, to > point to a working > > 20:16:43???????? upstream. This is most often useful if you are using > a newer > > 20:16:43???????? distribution release than is supported by the > repository (and the > > 20:16:43?????? ??packages for the previous distribution release still > work). > > 20:16:43 > > 20:16:43????? 3. Run the command with the repository temporarily disabled > > 20:16:43???????????? yum --disablerepo=StxCentos7Distro ... > > 20:16:43 > > 20:16:43????? 4. Disable the repository permanently, so yum won't use > it by default. Yum > > 20:16:43???????? will then just ignore the repository until you > permanently enable it > > 20:16:43?? ??????again or use --enablerepo for temporary usage: > > 20:16:43 > > 20:16:43 yum-config-manager --disable StxCentos7Distro > > 20:16:43???????? or > > 20:16:43 subscription-manager repos --disable=StxCentos7Distro > > 20:16:43 > > 20:16:43????? 5. Configure the failing repository to be skipped, if it > is unavailable. > > 20:16:43???????? Note that yum will try to contact the repo. when it > runs most commands, > > 20:16:43???????? so will have to try and fail each time (and thus. yum > will be be much > > 20:16:43????? ???slower). If it is a very temporary problem though, > this is often a nice > > 20:16:43???????? compromise: > > 20:16:43 > > 20:16:43 yum-config-manager --save > --setopt=StxCentos7Distro.skip_if_unavailable=true > > 20:16:43 > > 20:16:43 failure: repodata/repomd.xml from StxCentos7Distro: [Errno > 256] No more mirrors to try. > > 20:16:43 > http://127.0.0.1:8088/localdisk/designer/stx_builder/fec-operator/cgcs-root/centos-repo/Binary/repodata/repomd.xml > : > [Errno 14] HTTP Error 403 - Forbidden > > 20:16:43 > > 20:16:43 End build on 'b0': > /localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/workerconfig-1.0-14.tis.src.rpm > > 20:16:43 Error building workerconfig-1.0-14.tis.src.rpm on 'b0'. 
> > 20:16:43 Will try to build again (if some other package will succeed). > > 20:16:43 schedule2: no unbuilt deps for 'worker-utils', searching at > depth 3 > > 20:16:43 Start build on 'b0': > /localdisk/loadbuild/stx_builder/fec-operator/std/rpmbuild/SRPMS/worker-utils-1.0-27.tis.src.rpm > > 20:16:46 building worker-utils-1.0-27.tis.src.rpm > > 20:16:46 INFO: mock.py > > version 1.4.16 starting (python version = 3.6.8)... > > 20:16:46 Start: init plugins > > 20:16:46 INFO: selinux disabled > > 20:16:46 Finish: init plugins > > 20:16:46 Start: run > > 20:16:46 Start: chroot init > > 20:16:46 INFO: calling preinit hooks > > ?. > > 20:16:47 > > 20:16:47 Results out to: > /localdisk/loadbuild/stx_builder/fec-operator/std/results/stx_builder-fec-operator-4.0-std > > 20:16:47 > > 20:16:47 Pkgs built: 0 > > 20:16:47 dirname: missing operand > > 20:16:47 Try 'dirname --help' for more information. > > 20:16:47 > > 20:16:47 Auditing for obsolete srpms > > 20:16:47 waiting for srpm audit to complete > > 20:16:47 Auditing for obsolete rpms > > 20:16:47 waiting for rpm audit to complete > > 20:16:47 Audit complete > > 20:16:47 > > 20:16:47 Recreate repodata > > 20:16:49 > > 20:16:49 Failed to build packages:? worker-utils-1.0-27.tis.src.rpm > workerconfig-1.0-14.tis.src.rpm watchdog-5.13-12.el7.tis.8.src.rpm > vm-topology-1.0-18.tis.src.rpm tuned-config-1.0-4.tis.src.rpm > vault-helm-1.0-27.tis.src.rpm util-linux-config-1.0-5.tis.src.rpm > update-motd-1.0-7.tis.src.rpm tzdata-2021e-1.el7.tis.1.src.rpm > tss2-930-1.tis.2.src.rpm? tsconfig-1.0-60.tis.src.rpm > trident-installer-22.01.0-0.tis.8.src.rpm > systemd-config-1.0-12.tis.src.rpm tpm2-tools-3.0.4-2.el7.tis.6.src.rpm > tpm2-openssl-engine-1.0-3.tis.src.rpm tboot-1.9.6-3.el7.tis.5.src.rpm > syslog-ng-config-1.0-34.tis.src.rpm > sysinv-fpga-agent-1.0-13.tis.src.rpm sysinv-agent-1.0-15.tis.src.rpm > stx-ssl-1.0.0-15.tis.src.rpm sysinv-1.0-2684.tis.src.rpm > sudo-config-1.0-5.tis.src.rpm stx-vault-helm-1.0-27.tis.src.rpm > stx-snmp-helm-1.0-32.tis.src.rpm stx-sdo-helm-1.0-6.tis.src.rpm > stx-rook-ceph-1.0-17.tis.src.rpm > stx-ptp-notification-helm-1.0-57.tis.src.rpm > stx-ocf-scripts-1.0-11.tis.src.rpm > stx-portieris-helm-1.0-37.tis.src.rpm > stx-platform-helm-1.0-46.tis.src.rpm > stx-openstack-helm-1.0-199.tis.src.rpm > stx-oidc-auth-helm-1.0-64.tis.src.rpm > stx-nginx-ingress-controller-helm-1.1-25.tis.src.rpm > stx-monitor-helm-1.0-37.tis.src.rpm > stx-metrics-server-helm-1.0-11.tis.src.rpm > storageconfig-1.0-12.tis.src.rpm stx-istio-helm-1.0-4.tis.src.rpm > stx-extensions-1.0-7.tis.src.rpm > stx-cert-manager-helm-1.0-33.tis.src.rpm > stx-audit-helm-1.0-22.tis.src.rpm > starlingx-dashboard-1.0-307.tis.src.rpm > spectre-meltdown-checker-0.37+-3.tis.src.rpm > sm-tools-1.0-22.tis.src.rpm? sm-api-1.0-49.tis.src.rpm > sm-db-1.0.0-57.tis.src.rpm sm-common-1.0.0-32.tis.src.rpm > sm-client-1.0-34.tis.src.rpm? sm-1.0.0-55.tis.src.rpm > shim-signed-15-1.tis.5.src.rpm shim-15-1.el7.tis.7.src.rpm? 
> rpm-4.14.0-1.tis.6.src.rpm shadow-utils-config-1.0-6.tis.src.rpm > setup-config-1.0-4.tis.src.rpm rsync-config-1.0-5.tis.src.rpm > resource-agents-4.1.1-12.el7_6.7.tis.21.src.rpm > requests-toolbelt-0.9.1-0.tis.4.src.rpm > registry-token-server-1.0.0-1.tis.15.src.rpm > python-webencodings-0.5.1-1.el7.tis.4.src.rpm > Redfishtool-1.1.0-.tis.3.src.rpm > rdma-core-55mlnx37-1.55103.tis.21.src.rpm > rabbitmq-server-config-1.0-6.tis.src.rpm > python-wsme-0.9.2-1.el7.tis.5.src.rpm > python-voluptuous-0.8.9-1.el7.tis.2.src.rpm > python-siteconfig-1.0-1.tis.src.rpm python-ryu-4.19-0.tis.5.src.rpm > python-setuptools-38.5.1-1.el7.tis.2.src.rpm > python-openstacksdk-0.36.0-1.tis.33.src.rpm > python-psycopg2-2.5.1-3.el7.tis.2.src.rpm > python-pankoclient-0.7.0-1.tis.2.src.rpm > python-os-vif-1.9.1-1.el7.tis.2.src.rpm > python-oslo-messaging-5.30.6-1.el7.tis.6.src.rpm > python-openstackdocstheme-1.11.0-1.tis.2.src.rpm > python-openstackclient-4.0.0-1.tis.18.src.rpm > python-novaclient-15.1.0-1.tis.4.src.rpm > python-keystoneclient-3.21.0-2.tis.2.src.rpm > python-neutronclient-6.14.0-1.tis.4.src.rpm > python-lefthandclient-2.1.0-0.tis.3.src.rpm > python-kubernetes-8.0.0-8.el7.tis.1.src.rpm > python-keystoneauth1-3.17.1-2.tis.2.src.rpm > python-keyring-5.7.1-1.tis.6.src.rpm > python-k8sapp-vault-20.06-27.tis.src.rpm > python-k8sapp-portieris-1.0-37.tis.src.rpm > python-k8sapp-snmp-1.0-9.tis.src.rpm > python-k8sapp-rook-1.0-17.tis.src.rpm > python-k8sapp-ptp-notification-1.0-57.tis.src.rpm > python-k8sapp-platform-1.0-46.tis.src.rpm > python-k8sapp-oidc-1.0-64.tis.src.rpm > python-k8sapp-openstack-1.0-199.tis.src.rpm > python-k8sapp-auditd-1.0-22.tis.src.rpm > python-k8sapp-nginx-ingress-controller-1.0-14.tis.src.rpm > python-k8sapp-istio-1.0-4.tis.src.rpm > python-k8sapp-cert-manager-1.0-33.tis.src.rpm > python-ironicclient-3.1.0-1.tis.2.src.rpm > python-heatclient-1.18.0-1.tis.4.src.rpm > python-gnocchiclient-7.0.4-1.tis.31.src.rpm > python-daemon-2.2.3-7.el8.tis.4.src.rpm > python-glanceclient-2.17.0-1.tis.4.src.rpm > python-fmclient-1.0-35.tis.src.rpm > python-docker-3.3.0-1.el7.tis.6.src.rpm > python-django-horizon-15.1.0-1.tis.54.src.rpm > python-cinderclient-5.0.0-1.tis.6.src.rpm > python-cephclient-13.2.2.0-20.tis.src.rpm > python-barbicanclient-4.9.0-1.tis.3.src.rpm > puppet-sshd-1.0.0-9.tis.src.rpm > python-aodhclient-1.3.0-1.tis.1.src.rpm > python-3parclient-4.2.3-0.tis.3.src.rpm > puppet-sysinv-1.0.0-43.tis.src.rpm > puppet-stdlib-4.18.0-2.el7.tis.3.src.rpm > puppet-staging-1.0.4-1.b466d93git.el7.tis.4.src.rpm > puppet-smapi-1.0.0-7.tis.src.rpm > puppet-rabbitmq-5.6.0-4.5ac45degit.el7.tis.2.src.rpm > puppet-puppi-2.2.3-0.tis.4.src.rpm > puppet-openstacklib-11.5.0-1.el7.tis.8.src.rpm > puppet-postgresql-4.8.0-0.tis.5.src.rpm > puppet-patching-1.0.0-13.tis.src.rpm > puppet-oslo-11.3.0-1.el7.tis.2.src.rpm > puppet-nslcd-0.0.1-0.tis.4.src.rpm puppet-nfv-1.0.0-19.tis.src.rpm > puppet-network-1.0.2-0.tis.10.src.rpm > puppet-ldap-0.2.4-0.tis.4.src.rpm puppet-mtce-1.0.0-14.tis.src.rpm > puppet-manifests-1.0.0-1066.tis.src.rpm > puppet-lvm-0.5.0-0.tis.4.src.rpm > puppet-keystone-11.3.0-1.el7.tis.7.src.rpm > puppet-horizon-11.5.0-1.el7.tis.4.src.rpm > puppet-haproxy-1.5.0-4.6ffcb07git.el7.tis.5.src.rpm > puppet-dnsmasq-1.1.0-0.tis.4.src.rpm puppet-fm-1.0.0-17.tis.src.rpm > puppet-filemapper-1.1.3-0.tis.2.src.rpm > puppet-drbd-0.3.1-rc0.tis.4.src.rpm puppet-dcorch-1.0.0-29.tis.src.rpm > puppet-dcmanager-1.0.0-22.tis.src.rpm > puppet-dcdbsync-1.0.0-14.tis.src.rpm > portieris-helm-0.7.0-14.tis.src.rpm > 
puppet-create_resources-0.0.1-0.tis.2.src.rpm > puppet-ceph-2.4.1-1.el7.tis.9.src.rpm > puppet-boolean-1.0.2-1.tis.2.src.rpm puppet-4.8.2-1.el7.tis.3.src.rpm > playbookconfig-1.0-784.tis.src.rpm platform-util-1.0-89.tis.src.rpm > platform-kickstarts-1.0.0-291.tis.src.rpm > pam-config-1.0-10.tis.src.rpm pf-bb-config-21.6-0.tis.8.src.rpm > pci-irq-affinity-agent-1.0-33.tis.src.rpm > patch-alarm-1.0-26.tis.src.rpm openvswitch-config-1.0-5.tis.src.rpm > openstack-ras-1.0.0-0.tis.3.src.rpm > openstack-keystone-16.0.0-1.el7.tis.23.src.rpm > opae-intel-fpga-driver-2.0.1-10.tis.55.src.rpm > openstack-helm-infra-1.0-57.tis.src.rpm > openstack-helm-1.0-59.tis.src.rpm openssh-config-1.0-11.tis.src.rpm > oidcauthtools-1.0-5.tis.src.rpm openldap-config-1.0-17.tis.src.rpm > ntp-config-1.0-4.tis.src.rpm nova-api-proxy-1.0-38.tis.src.rpm > ntp-4.2.6p5-29.el7.centos.2.tis.9.src.rpm > net-tools-2.0-0.24.20131004git.el7.tis.6.src.rpm > nfv-1.0-233.tis.src.rpm nfs-utils-config-1.0-5.tis.src.rpm > nfscheck-1.0-5.tis.src.rpm namespace-utils-1.0-4.tis.src.rpm > multus-config-1.0-1.tis.src.rpm mtce-storage-1.0-11.tis.src.rpm > monitor-helm-elastic-1.0-19.tis.src.rpm > mtce-control-1.0-15.tis.src.rpm mtce-compute-1.0-17.tis.src.rpm > mstflint-4.16.0-1.55103.tis.2.src.rpm monitor-tools-1.0-10.tis.src.rpm > monitor-helm-1.0-25.tis.src.rpm > mlnx-tools-5.2.0-0.55103.tis.21.src.rpm > metrics-server-helm-1.0-1.tis.src.rpm > logrotate-3.8.6-17.el7.tis.6.src.rpm > memcached-custom-1.0-5.tis.src.rpm mechanize-0.4.5-1.el7.tis.3.src.rpm > logrotate-config-1.0-5.tis.src.rpm logmgmt-1.0-18.tis.src.rpm > lldpd-0.9.0-0.tis.9.src.rpm linuxptp-3.1.1-1.tis.5.src.rpm > libtpms-0.6.0-2.tis.2.src.rpm lighttpd-config-1.0-9.tis.src.rpm > lighttpd-1.4.54-1.el7.tis.12.src.rpm > libvirt-python-4.7.0-1.tis.6.src.rpm libnftnl-1.1.5-4.tis.1.src.rpm > libfdt-1.4.4-0.tis.5.src.rpm libevent-2.0.21-4.el7.tis.3.src.rpm > kvm-timer-advance-1.0-3.tis.src.rpm libbpf-0.5.0-1.tis.1.src.rpm > libbnxt_re-220.0.5.0-rhel7u9.tis.3.src.rpm > ldapscripts-2.0.8-0.tis.8.src.rpm > kubernetes-1.23.1-1.23.1-1.tis.4.src.rpm kube-memory-1.0-8.tis.src.rpm > kube-cpusets-1.0-6.tis.src.rpm istio-helm-1.13.3-2.tis.src.rpm > kmod-bnxt_en-1.10.2-220.0.13.0.tis.19.src.rpm > kiali-helm-1.45.0-3.tis.src.rpm kexec-tools-2.0.21-1.tis.2.src.rpm > keepalived-2.1.5-6.tis.1.src.rpm k8s-pod-recovery-1.0-0.tis.15.src.rpm > k8s-cni-cache-cleanup-1.0-0.tis.1.src.rpm > isolcpus-device-plugin-1.0-5.tis.src.rpm > iscsi-initiator-utils-config-1.0-4.tis.src.rpm > iptables-config-1.0-4.tis.src.rpm > initscripts-config-1.0-12.tis.src.rpm iptables-1.8.4-21.tis.6.src.rpm > iproute-5.12.0-4.tis.4.src.rpm io-scheduler-1.0-6.tis.src.rpm > initscripts-9.49.46-1.el7.tis.15.src.rpm inih-44-0.tis.1.src.rpm > igb_uio-kmod-21.02-0.tis.57.src.rpm > html5lib-python-1.0.1-1.el7.tis.4.src.rpm > ice-kmod-1.8.3-1.tis.25.src.rpm iavf-kmod-4.4.2-1.tis.23.src.rpm > i40e-kmod-2.18.9-1.tis.23.src.rpm helm-3.2.1-0.tis.17.src.rpm > haproxy-config-1.0-5.tis.src.rpm haproxy-1.5.18-8.el7.tis.12.src.rpm > golang-1.17.5-1.17.5-1.tis.1.src.rpm grubby-8.28-25.el7.tis.5.src.rpm > gpu-operator-1.8.1-0.tis.4.src.rpm golang-dep-0.5.0-4.tis.src.rpm > golang-1.16.12-1.16.12-2.tis.3.src.rpm fm-rest-api-1.0-72.tis.src.rpm > fm-mgr-1.0-25.tis.src.rpm EXAMPLE_SYSINV-1.0-2.tis.src.rpm > fm-doc-1.0-52.tis.src.rpm
fm-common-1.0-69.tis.src.rpm > fm-api-1.0-46.tis.src.rpm filesystem-scripts-1.0-4.tis.src.rpm > facter-2.4.4-4.el7.tis.7.src.rpm EXAMPLE_VIM-1.0-4.tis.src.rpm > EXAMPLE_SERVICE-1.0-2.tis.src.rpm EXAMPLE_RR-1.0-2.tis.src.rpm > EXAMPLE_MTCE-1.0-4.tis.src.rpm EXAMPLE_0001-1.0-2.tis.src.rpm > EXAMPLE_KUBELET-1.0-1.tis.src.rpm EXAMPLE_DC-1.0-3.tis.src.rpm > EXAMPLE_0002-1.0-2.tis.src.rpm etcd-3.3.15-1.tis.7.src.rpm > engtools-1.0-37.tis.src.rpm enable-dev-patch-1.0-4.tis.src.rpm > docker-distribution-2.7.1-1.tis.13.src.rpm > dwarves-1.22-1.tis.1.src.rpm drbd-9.15.1-0.tis.11.src.rpm > dpkg-1.18.24-0.tis.2.src.rpm docker-config-1.0-5.tis.src.rpm > dnsmasq-config-1.0-4.tis.src.rpm dnsmasq-2.76-7.el7.tis.7.src.rpm > dhcp-config-1.0-8.tis.src.rpm dmesg-config-1.0-1.tis.src.rpm > distributedcloud-client-1.0.0-1.tis.65.src.rpm > distributedcloud-1.0.0-1.tis.422.src.rpm > dhcp-4.2.5-82.el7.centos.tis.13.src.rpm dex-helm-1.0-10.tis.src.rpm > controllerconfig-1.0-327.tis.src.rpm collector-1.0-69.tis.src.rpm > containernetworking-plugins-1.0.1-1.tis.9.src.rpm > containerd-config-1.0-4.tis.src.rpm containerd-1.4.11-22.tis.src.rpm > config-gate-1.0-13.tis.src.rpm > collectd-extensions-1.0-0.tis.85.src.rpm > cloud-init-0.7.9-24.el7.centos.1.tis.4.src.rpm > cgts-client-1.0-295.tis.src.rpm chartmuseum-0.12.0-6.tis.src.rpm > ceph-manager-1.0-28.tis.src.rpm cgcs-patch-1.0-105.tis.src.rpm > cert-mon-1.0-7.tis.src.rpm cert-manager-helm-1.0-15.tis.src.rpm > cert-alarm-1.0-5.tis.src.rpm centos-release-config-1.0-3.tis.src.rpm > build-info-1.0-4.tis.src.rpm bond-cni-1.0-bff6422.tis.3.src.rpm > audit-config-1.0-4.tis.src.rpm armada-0.2.0-0.tis.14.src.rpm > armada-helm-toolkit-1.0-8.tis.src.rpm > rabbitmq-server-3.6.5-1.el7.tis.9.src.rpm > parted-3.1-29.el7.tis.7.src.rpm qat17-4.14.0-00031.tis.60.src.rpm > libvirt-4.7.0-1.tis.31.src.rpm > grub2-2.02-0.86.el7.centos.tis.14.src.rpm > openvswitch-2.11.0-0.tis.13.src.rpm mtce-guest-1.0-146.tis.src.rpm > mtce-common-1.0-142.tis.src.rpm mtce-1.0-217.tis.src.rpm > mlnx-ofa_kernel-5.5-OFED.5.5.1.0.3.1.tis.26.src.rpm > ceph-14.2.22-0.el7.tis.35.src.rpm > sudo-1.8.23-10.el7_9.1.tis.10.src.rpm > qemu-kvm-ev-3.0.0-0.tis.20.src.rpm > openssh-7.4p1-21.el7_4.tis.9.src.rpm > mariadb-10.1.28-1.el7.tis.8.src.rpm > kubernetes-unversioned-1.0-1.tis.9.src.rpm > kubernetes-1.22.5-1.22.5-1.tis.9.src.rpm > kubernetes-1.21.8-1.21.8-1.tis.16.src.rpm > openldap-2.4.44-20.el7.tis.11.src.rpm > setup-2.8.71-10.el7.tis.11.src.rpm bash-4.2.46-34.el7.tis.10.src.rpm > systemd-219-78.el7_9.3.tis.19.src.rpm > python-2.7.5-89.el7.tis.8.src.rpm > kernel-5.10.99-200.42.tis.el7.src.rpm > pxe-network-installer-1.0-35.tis.src.rpm > > ######## Sun May 15 20:16:49 UTC 2022: build-rpm-parallel --std failed > with rc=1 > > Sun May 15 20:16:49 UTC 2022: build-rpm-parallel --std failed with rc=1 > > [stx_builder at 4df2aa3dafa0 toCOPY]$ > > Best regards, > > Mouli. > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From erin at openstack.org Thu May 19 16:40:19 2022 From: erin at openstack.org (Erin Disney) Date: Thu, 19 May 2022 11:40:19 -0500 Subject: [Starlingx-discuss] Save the Date: PTG October 2022 Message-ID: We are very excited to announce our first in-person Project Teams Gathering (PTG) since Shanghai in 2019! 
Can't wait to get everyone back together again this October 17-20th at the Hyatt Regency in lovely Columbus, Ohio. The venue is located in the heart of downtown, within walking distance of local sports arenas and the Short North Arts District that hosts dozens of restaurants, coffee shops, bars, art galleries and shops. Kendall Nelson will be reaching out soon to start collecting team sign-ups so everyone knows who is planning on meeting in Columbus. We will also have registration, reduced-rate hotel block, and sponsorship information coming soon, all of which will be posted to openinfra.dev/ptg once available. Stay tuned and we can't wait to see you all in Columbus! Erin Disney Event Marketing Open Infrastructure Foundation -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Thu May 19 17:17:42 2022 From: scott.little at windriver.com (Scott Little) Date: Thu, 19 May 2022 13:17:42 -0400 Subject: [Starlingx-discuss] [build-report] master STX_build_layer_containers_master_master - Build # 292 - Failure! In-Reply-To: <1261773806.16.1652816463311.JavaMail.javamailuser@localhost> References: <1261773806.16.1652816463311.JavaMail.javamailuser@localhost> Message-ID: <37507a26-9d1f-983f-42d0-52f5895067c3@windriver.com> CentOS Container Builds fixed in the May 18 build. Thanks Davlet Scott On 2022-05-17 3:41 p.m., build.starlingx at gmail.com wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Project: STX_build_layer_containers_master_master > Build #: 292 > Status: Failure > Timestamp: 20220512T054038Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/containers/20220512T054038Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: true > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Ghada.Khalil at windriver.com Fri May 20 00:36:26 2022 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Fri, 20 May 2022 00:36:26 +0000 Subject: [Starlingx-discuss] No StarlingX TSC/Community Call on May 25 Message-ID: Hello community members, There will be no StarlingX TSC/Community call next week (May 25) as Greg and I have a conflict and cannot chair the call. We will reconvene on June 1st. Please update your calendars accordingly as I'm not the owner of the current invite. Regards, Ghada -------------- next part -------------- An HTML attachment was scrubbed...
URL: From alexandru.dimofte at intel.com Sun May 22 19:45:34 2022 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Sun, 22 May 2022 19:45:34 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220520T013752Z Message-ID: Sanity Test from 2022-May-20 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220520T013752Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220520T013752Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All BARE-METAL configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1973888 - StarlingX provision failed for all bare-metal configurations At least virtual configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready. Note: all pods are fine now, but stx-openstack apply still fails. I attached new logs. Kind regards, Validation team [Logo Description automatically generated] Dimofte Alexandru Software Engineer PMCE TEAM Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From alexandru.dimofte at intel.com Sun May 22 19:49:25 2022 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Sun, 22 May 2022 19:49:25 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220521T032715Z Message-ID: Sanity Test from 2022-May-21 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220521T032715Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220521T032715Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready Kind regards, Validation team [Logo Description automatically generated] Dimofte Alexandru Software Engineer PMCE TEAM Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From alexandru.dimofte at intel.com Sun May 22 19:51:08 2022 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Sun, 22 May 2022 19:51:08 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220522T013803Z Message-ID: Sanity Test from 2022-May-22 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220522T013803Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220522T013803Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz At least virtual configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready Kind regards, Validation team [Logo Description automatically generated] Dimofte Alexandru Software Engineer PMCE TEAM Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From build.starlingx at gmail.com Mon May 23 07:46:28 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 23 May 2022 03:46:28 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 1443 - Failure! Message-ID: <309817379.51.1653291991228.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 1443 Status: Failure Timestamp: 20220523T051650Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220523T043000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20220523T043000Z DOCKER_BUILD_ID: jenkins-master-20220523T043000Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220523T043000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20220523T043000Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Mon May 23 07:46:32 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 23 May 2022 03:46:32 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 1296 - Failure! Message-ID: <439741037.54.1653291993317.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 1296 Status: Failure Timestamp: 20220523T043000Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220523T043000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From build.starlingx at gmail.com Tue May 24 04:49:53 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 24 May 2022 00:49:53 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_flock_master_master - Build # 858 - Failure! 
Message-ID: <972794615.61.1653367795940.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 858 Status: Failure Timestamp: 20220524T034858Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220524T034858Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From Linda.Wang at windriver.com Tue May 24 05:02:12 2022 From: Linda.Wang at windriver.com (Wang, Linda) Date: Tue, 24 May 2022 05:02:12 +0000 Subject: [Starlingx-discuss] Bi-Weekly StarlingX OS Distro & Multi-OS meeting Minutes: May 11, 2022 Message-ID:

05/11/2022

Attendees: Frank Miller, Davlet Panech, Scott Little, Lucas Medeiros Cavalcante, Mark Asselstine, Steve Geary, Linda Wang

1. Build system
   * Dev build system architecture:
     Debian src repository --(fetch src packages)--> build system (produces debs)
     build system --(push built debs)--> repository manager (very transient)
     LAT --(uses the repository manager)--> images
   * How to improve the build time?
     * "Full build avoidance": to help with build time, don't build everything from source.
       * LAT can use the pre-existing built binaries, and not rebuild if there is no change.
       * Nightly builds can pick up only the changes, using full build avoidance.
     * Continuous integration: code checked in on the master branch.
       * 7.5 hours for a full build.
     * Would like to have Pulp transition to Aptly.

2. Kernel Features
   * Kernel patch review
   * Outstanding build patch:
     * https://review.opendev.org/c/starlingx/tools/+/837501 (-1 reviews; Haiqing needs to redo the patch)

Next Meeting: May 24, 2022

-------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Tue May 24 16:40:50 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 24 May 2022 12:40:50 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_flock_master_master - Build # 859 - Still Failing! In-Reply-To: <2131905391.59.1653367792214.JavaMail.javamailuser@localhost> References: <2131905391.59.1653367792214.JavaMail.javamailuser@localhost> Message-ID: <1586149011.67.1653410452002.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 859 Status: Still Failing Timestamp: 20220524T154217Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220524T154217Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Wed May 25 02:51:29 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 24 May 2022 22:51:29 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 2803 - Failure!
Message-ID: <1968829158.73.1653447090012.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 2803 Status: Failure Timestamp: 20220525T021152Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20220525T013203Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20220525T013203Z DOCKER_BUILD_ID: jenkins-master-distro-20220525T013203Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20220525T013203Z/logs BUILD_IMG: false FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20220525T013203Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro BUILD_ISO: false From build.starlingx at gmail.com Wed May 25 02:51:31 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 24 May 2022 22:51:31 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_distro_master_master - Build # 873 - Failure! Message-ID: <1005723701.76.1653447092057.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 873 Status: Failure Timestamp: 20220525T013203Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20220525T013203Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Wed May 25 08:20:26 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 25 May 2022 04:20:26 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 1445 - Failure! Message-ID: <49292360.79.1653466827653.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 1445 Status: Failure Timestamp: 20220525T051215Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220525T043000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20220525T043000Z DOCKER_BUILD_ID: jenkins-master-20220525T043000Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220525T043000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20220525T043000Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Wed May 25 08:20:29 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 25 May 2022 04:20:29 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 1298 - Failure! 
Message-ID: <39692217.82.1653466830301.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 1298 Status: Failure Timestamp: 20220525T043000Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220525T043000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From scott.little at windriver.com Wed May 25 15:35:44 2022 From: scott.little at windriver.com (Scott Little) Date: Wed, 25 May 2022 11:35:44 -0400 Subject: [Starlingx-discuss] [build-report] master STX_build_layer_distro_master_master - Build # 873 - Failure! In-Reply-To: <1005723701.76.1653447092057.JavaMail.javamailuser@localhost> References: <1005723701.76.1653447092057.JavaMail.javamailuser@localhost> Message-ID: Distro layer build failed on package puppet-ceph. Probable cause: https://review.opendev.org/c/starlingx/integ/+/837011 Launchpad: https://bugs.launchpad.net/starlingx/+bug/1975725 Scott On 2022-05-24 22:51, build.starlingx at gmail.com wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Project: STX_build_layer_distro_master_master > Build #: 873 > Status: Failure > Timestamp: 20220525T013203Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20220525T013203Z/logs > -------------------------------------------------------------------------------- > Parameters > > FULL_BUILD: false > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Wed May 25 15:38:11 2022 From: scott.little at windriver.com (Scott Little) Date: Wed, 25 May 2022 11:38:11 -0400 Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 1298 - Failure! In-Reply-To: <39692217.82.1653466830301.JavaMail.javamailuser@localhost> References: <39692217.82.1653466830301.JavaMail.javamailuser@localhost> Message-ID: Monolithic build failed on package puppet-ceph. Probable cause: https://review.opendev.org/c/starlingx/integ/+/837011 Launchpad: https://bugs.launchpad.net/starlingx/+bug/1975725 Scott On 2022-05-25 04:20, build.starlingx at gmail.com wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Project: STX_build_master_master > Build #: 1298 > Status: Failure > Timestamp: 20220525T043000Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220525T043000Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > FORCE_BUILD: true > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From scott.little at windriver.com Wed May 25 15:41:00 2022 From: scott.little at windriver.com (Scott Little) Date: Wed, 25 May 2022 11:41:00 -0400 Subject: [Starlingx-discuss] mirror.starlingx.cengn.ca reports 504 gateway timeout Message-ID: <35b566f6-2e2d-0c14-34cc-5e340efe4d33@windriver.com> mirror.starlingx.cengn.ca is currently unavailable due to '504 gateway timeout'. I have reported the problem to CENGN, and await their response. 
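For anyone who wants to re-check the mirror before retrying a download, a one-line probe is enough. A minimal sketch; any path under the mirror root will do:

    # Prints 504 while the gateway is timing out, 200 once the mirror is back.
    curl -sS -o /dev/null -w '%{http_code}\n' http://mirror.starlingx.cengn.ca/mirror/starlingx/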
Scott From scott.little at windriver.com Wed May 25 17:10:31 2022 From: scott.little at windriver.com (Scott Little) Date: Wed, 25 May 2022 13:10:31 -0400 Subject: [Starlingx-discuss] mirror.starlingx.cengn.ca reports 504 gateway timeout In-Reply-To: <35b566f6-2e2d-0c14-34cc-5e340efe4d33@windriver.com> References: <35b566f6-2e2d-0c14-34cc-5e340efe4d33@windriver.com> Message-ID: mirror.starlingx.cengn.ca is available again. Scott On 2022-05-25 11:41 a.m., Scott Little wrote: > mirror.starlingx.cengn.ca is currently unavailable due to '504 gateway > timeout'. > > I have reported the problem to CENGN, and await their response. > > Scott > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Thu May 26 02:47:51 2022 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 26 May 2022 02:47:51 +0000 Subject: [Starlingx-discuss] Canceled: Bi-weekly StarlingX Release Meeting (new time) Message-ID: Cancelling as I am out-of-office and cannot chair this meeting. New meeting series for the StarlingX Release Meeting Bi-weekly meeting on Wednesday 06:30AM PT / 09:30AM ET / 02:30PM UTC Zoom Link: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3258 bytes Desc: not available URL: From Ghada.Khalil at windriver.com Thu May 26 12:48:32 2022 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 26 May 2022 12:48:32 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Networking Sub-Project Meeting (bi-weekly) Message-ID: My apologies for the late message, but need to cancel the stx networking meeting today as neither Steve nor I are available. Re-sending with new zoom link Bi-weekly on Thursday 0615 PT / 0915 ET Zoom Link: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-networking Networking team wiki: https://wiki.openstack.org/wiki/StarlingX/Networking -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3523 bytes Desc: not available URL: From alexandru.dimofte at intel.com Thu May 26 15:15:27 2022 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Thu, 26 May 2022 15:15:27 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220526T031741Z Message-ID: Sanity Test from 2022-May-26 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220526T031741Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220526T031741Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready Kind regards, Validation team [Logo Description automatically generated] Dimofte Alexandru Software Engineer PMCE TEAM Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. 
Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From Juanita.Balaraj at windriver.com Thu May 26 15:55:22 2022 From: Juanita.Balaraj at windriver.com (Balaraj, Juanita) Date: Thu, 26 May 2022 15:55:22 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 25-May-22 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST. [1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings [2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation Thanks, Juanita Balaraj

============
25-May-22

* Open Gerrit Reviews: https://review.opendev.org/q/%2509starlingx/docs+status:open
  - Action: the Doc team to ensure all stakeholders review and provide feedback and close open reviews for the Stx 7.0 Release.
* Debian updates; see https://docs.google.com/spreadsheets/d/171PJAu9SykXm9h9Ny2IsZ8YMbhvEwOzuTbISWUMQhiE/edit#gid=1107209846
  - Doc team to refer to these stories to add details and create new Gerrit reviews for Debian updates.
* Verified Hardware updates - discussed H/W details; these need to be validated and updated for Stx 7.0: https://docs.starlingx.io/planning/kubernetes/verified-commercial-hardware.html
* Using inclusive language was discussed with reference to https://review.opendev.org/c/starlingx/docs/+/842559
  Details captured in earlier discussions: https://etherpad.opendev.org/p/divisivelanguage
* Operations Guide Archive - on hold until further clarifications are discussed - https://review.opendev.org/c/starlingx/docs/+/822030

_______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Fri May 27 10:54:40 2022 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Fri, 27 May 2022 10:54:40 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20220527T041457Z Message-ID: Sanity Test from 2022-May-27 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220527T041457Z/outputs/iso/) Status: RED Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220527T041457Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz All configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1970645 - Stx-openstack apply timeout because some pods are not ready Virtual Standard and Virtual Standard EXT configurations are affected by: https://bugs.launchpad.net/starlingx/+bug/1975921 - Provision failed: IndexError: Given index 0 is out of the range 0--1 Kind regards, Validation team [Logo Description automatically generated] Dimofte Alexandru Software Engineer PMCE TEAM Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Str. Iancu Fotea nr. 38, mun. Galati, jud. Galati, 800017, ROMANIA Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed...
Name: image001.png Type: image/png Size: 3903 bytes Desc: image001.png URL: From build.starlingx at gmail.com Mon May 30 00:21:31 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 29 May 2022 20:21:31 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 2818 - Failure! Message-ID: <1650759717.108.1653870092842.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 2818 Status: Failure Timestamp: 20220530T000501Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220529T230602Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20220529T230602Z DOCKER_BUILD_ID: jenkins-master-flock-20220529T230602Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220529T230602Z/logs BUILD_IMG: true FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20220529T230602Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master LAYER: flock MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock BUILD_ISO: true From build.starlingx at gmail.com Mon May 30 00:21:34 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Sun, 29 May 2022 20:21:34 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_flock_master_master - Build # 864 - Failure! Message-ID: <1308710218.111.1653870094674.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 864 Status: Failure Timestamp: 20220529T230602Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220529T230602Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Mon May 30 09:05:23 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 30 May 2022 05:05:23 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 1450 - Failure! Message-ID: <2024090964.114.1653901524595.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 1450 Status: Failure Timestamp: 20220530T051212Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220530T043000Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20220530T043000Z DOCKER_BUILD_ID: jenkins-master-20220530T043000Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220530T043000Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20220530T043000Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Mon May 30 09:05:27 2022 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 30 May 2022 05:05:27 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 1303 - Failure! 
Message-ID: <1767512066.117.1653901527916.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 1303 Status: Failure Timestamp: 20220530T043000Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20220530T043000Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From Davlet.Panech at windriver.com Mon May 30 17:14:11 2022 From: Davlet.Panech at windriver.com (Panech, Davlet) Date: Mon, 30 May 2022 17:14:11 +0000 Subject: [Starlingx-discuss] [build-report] master STX_build_layer_flock_master_master - Build # 864 - Failure! In-Reply-To: <1308710218.111.1653870094674.JavaMail.javamailuser@localhost> References: <1308710218.111.1653870094674.JavaMail.javamailuser@localhost> Message-ID: A fix for this problem was merged earlier today: https://review.opendev.org/c/starlingx/platform-armada-app/+/843852 I expect the next build to succeed. ________________________________ From: build.starlingx at gmail.com Sent: May 29, 2022 8:21 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] [build-report] master STX_build_layer_flock_master_master - Build # 864 - Failure! [Please note: This e-mail is from an EXTERNAL e-mail address] Project: STX_build_layer_flock_master_master Build #: 864 Status: Failure Timestamp: 20220529T230602Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20220529T230602Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue May 31 15:20:43 2022 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 31 May 2022 08:20:43 -0700 Subject: [Starlingx-discuss] KubeCon NA CFP deadline is approaching! Message-ID: Hi, I wanted to draw your attention to the CFP for KubeCon NA that is closing soon! You can find more information about the event here: https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/ The CFP deadline is __this Friday (June 3) at 11:59pm PDT__: https://events.linuxfoundation.org/kubecon-cloudnativecon-north-america/program/cfp/ Since StarlingX is integrating Kubernetes along with various other components from CNCF, I think it would be great to gain more visibility in that ecosystem. If you need any help with writing up your abstract or need someone to review it, please feel free to reach out to me. Best Regards, Ildikó --- Ildikó Váncsa Senior Manager, Community & Ecosystem Open Infrastructure Foundation From kennelson11 at gmail.com Tue May 31 15:53:07 2022 From: kennelson11 at gmail.com (Kendall Nelson) Date: Tue, 31 May 2022 10:53:07 -0500 Subject: [Starlingx-discuss] Fwd: [Forum] Meet the Projects Session! In-Reply-To: References: Message-ID: Hello :) I wanted to take a moment to invite project maintainers, core contributors, PTLs, governance officials, etc. to a meet-the-projects session (the projects being the OIF top-level ones: Kata, Zuul, OpenStack, StarlingX, and Airship) during the Forum at the upcoming Summit in Berlin. It will take place Tue, June 7, 11:20am - 12:30pm local in A - A06. The idea is to gather in a single place so that newer members of the community or people unfamiliar with the project can come and meet some of the active participants and ask questions.
It's all really informal, so there is no need to prepare anything ahead of time. I realize it is pretty late, as everyone is already planning their schedules for the summit, but if you have a few minutes to spare to come hang out and meet some people and network, please do! We would love to see you there. -Kendall -------------- next part -------------- An HTML attachment was scrubbed... URL: