From austin.sun at intel.com Thu Jul 1 01:14:15 2021
From: austin.sun at intel.com (Sun, Austin)
Date: Thu, 1 Jul 2021 01:14:15 +0000
Subject: [Starlingx-discuss] Canceled: StarlingX Distro-OpenStack: Bi-weekly Project Meeting(Summer Time)
Message-ID:

Hi folks,

This is a new series of bi-weekly project meetings on StarlingX Distro-OpenStack. Your participation in this meeting and/or other offline contributions by any means are highly appreciated!

Project Team Etherpad: https://etherpad.openstack.org/p/stx-distro-openstack-meetings

The Summer Time slot for this meeting:
CST: 9:00 PM (China, Shanghai)
PST: 7:00 AM (US West, Oregon)
EST: 9:00 AM (East Canada, Ottawa)

Thanks.
BR
Austin Sun.
-------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3559 bytes Desc: not available URL:

From austin.sun at intel.com Thu Jul 1 01:17:29 2021
From: austin.sun at intel.com (Sun, Austin)
Date: Thu, 1 Jul 2021 01:17:29 +0000
Subject: [Starlingx-discuss] Cancel StarlingX Distro-OpenStack: Bi-weekly Project Meeting -- 07/06
Message-ID:

Hi All:

The 07/06 OpenStack distro meeting is canceled due to personal affairs.

Thanks.
BR
Austin Sun
-------------- next part -------------- An HTML attachment was scrubbed... URL:

From lists at optimcloud.com Thu Jul 1 02:39:10 2021
From: lists at optimcloud.com (Embedded Devel)
Date: Thu, 01 Jul 2021 02:39:10 +0000
Subject: [Starlingx-discuss] rook-ceph deployment failure
Message-ID: <1625106963669.939601506.1886162123@optimcloud.com>

stx 5.0 simplex bare metal fails, so 1) how to resolve, or 2) how to back out and just use normal ceph?

kube-system   rook-ceph-crashcollector-controller-0-96bb8b6df-cz5qf   1/1   Running            1     45h
kube-system   rook-ceph-mgr-a-7b84cc65d6-bt5mq                        1/1   Running            55    45h
kube-system   rook-ceph-mon-a-65b58876db-98jg5                        0/1   CrashLoopBackOff   520   45h
kube-system   rook-ceph-operator-79fb8559-k7w8z                       1/1   Running            1     45h
kube-system   rook-ceph-tools-5778d7f6c-d5pwp                         1/1   Running            1     45h
kube-system   rook-discover-djv7r                                     1/1   Running            1     45h

controller-0:~$ cat /var/log/armada/rook-ceph-apps-apply_2021-06-30-12-41-10.log
2021-06-30 12:41:11.316 68 DEBUG armada.handlers.document [-] Resolving reference /tmp/manifests/rook-ceph-apps/1.0-5/rook-ceph-apps-manifest.yaml.
resolve_reference /usr/local/lib/python3.6/dist-packages/armada/handlers/document.py:49 2021-06-30 12:41:11.327 68 DEBUG armada.handlers.tiller [-] Using Tiller host IP: 127.0.0.1 _get_tiller_ip /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:165 2021-06-30 12:41:11.328 68 DEBUG armada.handlers.tiller [-] Using Tiller host port: 24134 _get_tiller_port /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:174 2021-06-30 12:41:11.328 68 DEBUG armada.handlers.tiller [-] Tiller getting gRPC insecure channel at 127.0.0.1:24134 with options: [grpc.max_send_message_length=429496729, grpc.max_receive_message_length=429496729] get_channel /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:127 2021-06-30 12:41:11.334 68 DEBUG armada.handlers.tiller [-] Armada is using Tiller at: 127.0.0.1:24134, namespace=kube-system, timeout=300 __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:107 2021-06-30 12:41:11.334 68 INFO armada.handlers.lock [-] Acquiring lock 2021-06-30 12:41:11.350 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] kube-system-rook-operator validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-30 12:41:11.351 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] kube-system-rook-ceph validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-30 12:41:11.351 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] kube-system-rook-ceph-provisioner validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-30 12:41:11.351 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] starlingx-rook-charts validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-30 12:41:11.351 68 DEBUG armada.utils.validate [-] Validating document [armada/Manifest/v1] rook-ceph-manifest validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-30 12:41:11.352 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] kube-system-rook-operator validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-30 12:41:11.352 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] kube-system-rook-ceph validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-30 12:41:11.352 68 DEBUG armada.utils.validate [-] Validating document [armada/Chart/v1] kube-system-rook-ceph-provisioner validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-30 12:41:11.353 68 DEBUG armada.utils.validate [-] Validating document [armada/ChartGroup/v1] starlingx-rook-charts validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-30 12:41:11.353 68 DEBUG armada.utils.validate [-] Validating document [armada/Manifest/v1] rook-ceph-manifest validate_armada_document /usr/local/lib/python3.6/dist-packages/armada/utils/validate.py:108 2021-06-30 12:41:11.353 68 INFO armada.handlers.armada [-] Performing pre-flight operations. 
2021-06-30 12:41:11.353 68 DEBUG armada.handlers.tiller [-] Using Tiller host IP: 127.0.0.1 _get_tiller_ip /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:165 2021-06-30 12:41:11.353 68 DEBUG armada.handlers.tiller [-] Getting Tiller Status: Tiller exists tiller_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:186 2021-06-30 12:41:11.353 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/stx-platform/rook-operator-0.1.0.tgz 2021-06-30 12:41:11.354 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-30 12:41:11.359 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/stx-platform/rook-ceph-0.1.0.tgz 2021-06-30 12:41:11.359 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-30 12:41:11.362 68 INFO armada.handlers.armada [-] Downloading tarball from: http://192.168.206.1:8080/helm_charts/stx-platform/rook-ceph-provisioner-0.1.0.tgz 2021-06-30 12:41:11.362 68 WARNING armada.handlers.armada [-] Disabling server validation certs to extract charts 2021-06-30 12:41:11.365 68 DEBUG armada.handlers.tiller [-] Tiller ListReleases() with timeout=300, request=limit: 32 status_codes: UNKNOWN status_codes: DEPLOYED status_codes: DELETED status_codes: DELETING status_codes: FAILED status_codes: PENDING_INSTALL status_codes: PENDING_UPGRADE status_codes: PENDING_ROLLBACK get_results /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:215 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release stx-rook-ceph, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release stx-rook-operator, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-mariadb, version 1, status: PENDING_INSTALL list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-ingress, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-nginx-ports-control, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release osh-openstack-psp-rolebinding, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release stx-cephfs-provisioner, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release stx-ceph-pools-audit, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release stx-rbd-provisioner, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release 
cm-cert-manager-psp-rolebinding, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release cm-cert-manager, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.402 68 DEBUG armada.handlers.tiller [-] Found release ic-nginx-ingress, version 1, status: DEPLOYED list_releases /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:276 2021-06-30 12:41:11.403 68 INFO armada.handlers.armada [-] Processing ChartGroup: starlingx-rook-charts (StarlingX Rook Ceph Charts), sequenced=True 2021-06-30 12:41:11.403 68 INFO armada.handlers.chart_deploy [-] [chart=kube-system-rook-operator]: Processing Chart, release=stx-rook-operator 2021-06-30 12:41:11.403 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-operator]: Resolved `wait.resources` list: [{'type': 'pod', 'labels': {'app': 'rook-ceph-operator'}}] __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:89 2021-06-30 12:41:11.404 68 INFO armada.handlers.chart_deploy [-] [chart=kube-system-rook-operator]: Existing release stx-rook-operator found in namespace kube-system 2021-06-30 12:41:11.406 68 INFO armada.handlers.chart_deploy [-] [chart=kube-system-rook-operator]: Checking for updates to chart release inputs. 2021-06-30 12:41:11.425 68 INFO armada.handlers.chart_deploy [-] [chart=kube-system-rook-operator]: Found no updates to chart release inputs 2021-06-30 12:41:11.425 68 INFO armada.handlers.wait [-] [chart=kube-system-rook-operator]: Waiting for resource type=pod, namespace=kube-system labels=app=rook-ceph-operator required=True for 1800s 2021-06-30 12:41:11.425 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-operator]: Starting to wait on: namespace=kube-system, resource type=pod, label_selector=(app=rook-ceph-operator), timeout=1800 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:367 2021-06-30 12:41:11.435 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-operator]: pod rook-ceph-operator-79fb8559-k7w8z is ready! handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:258 2021-06-30 12:41:11.436 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-operator]: Found no modified resources. wait /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:302 2021-06-30 12:41:11.436 68 INFO armada.handlers.chart_deploy [-] [chart=kube-system-rook-ceph]: Processing Chart, release=stx-rook-ceph 2021-06-30 12:41:11.436 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Resolved `wait.resources` list: [{'type': 'pod', 'labels': {'app': 'rook-ceph-mgr'}}, {'type': 'pod', 'labels': {'app': 'rook-ceph-mon'}}, {'type': 'pod', 'labels': {'app': 'rook-ceph-osd'}}] __init__ /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:89 2021-06-30 12:41:11.437 68 INFO armada.handlers.chart_deploy [-] [chart=kube-system-rook-ceph]: Existing release stx-rook-ceph found in namespace kube-system 2021-06-30 12:41:11.439 68 INFO armada.handlers.chart_deploy [-] [chart=kube-system-rook-ceph]: Checking for updates to chart release inputs. 
2021-06-30 12:41:11.448 68 INFO armada.handlers.chart_deploy [-] [chart=kube-system-rook-ceph]: Found no updates to chart release inputs 2021-06-30 12:41:11.448 68 INFO armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Waiting for resource type=pod, namespace=kube-system labels=app=rook-ceph-mgr required=True for 1800s 2021-06-30 12:41:11.448 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Starting to wait on: namespace=kube-system, resource type=pod, label_selector=(app=rook-ceph-mgr), timeout=1800 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:367 2021-06-30 12:41:11.476 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: pod rook-ceph-mgr-a-7b84cc65d6-bt5mq is ready! handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:258 2021-06-30 12:41:11.476 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Found no modified resources. wait /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:302 2021-06-30 12:41:11.476 68 INFO armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Waiting for resource type=pod, namespace=kube-system labels=app=rook-ceph-mon required=True for 1800s 2021-06-30 12:41:11.476 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Starting to wait on: namespace=kube-system, resource type=pod, label_selector=(app=rook-ceph-mon), timeout=1800 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:367 2021-06-30 12:41:11.483 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: pod rook-ceph-mon-a-65b58876db-98jg5 not ready: Waiting for pod rook-ceph-mon-a-65b58876db-98jg5 to be ready... handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:260 2021-06-30 12:41:22.131 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Watch event: type=MODIFIED, name=rook-ceph-mon-a-65b58876db-98jg5, namespace=kube-system,resource_version=2005320 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:412 2021-06-30 12:41:22.131 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: pod rook-ceph-mon-a-65b58876db-98jg5 not ready: Waiting for pod rook-ceph-mon-a-65b58876db-98jg5 to be ready... handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:260 2021-06-30 12:41:35.357 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Watch event: type=MODIFIED, name=rook-ceph-mon-a-65b58876db-98jg5, namespace=kube-system,resource_version=2005430 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:412 2021-06-30 12:41:35.358 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: pod rook-ceph-mon-a-65b58876db-98jg5 is ready! handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:258 2021-06-30 12:41:35.358 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Found modified resources: ['rook-ceph-mon-a-65b58876db-98jg5'] wait /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:299 2021-06-30 12:41:35.358 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Continuing to wait: 0 consecutive attempts without modified resources of 1 required. 
wait /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:310 2021-06-30 12:41:36.359 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Starting to wait on: namespace=kube-system, resource type=pod, label_selector=(app=rook-ceph-mon), timeout=1775 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:367 2021-06-30 12:41:36.367 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: pod rook-ceph-mon-a-65b58876db-98jg5 is ready! handle_resource /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:258 2021-06-30 12:41:36.367 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Found no modified resources. wait /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:302 2021-06-30 12:41:36.367 68 INFO armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Waiting for resource type=pod, namespace=kube-system labels=app=rook-ceph-osd required=True for 1775s 2021-06-30 12:41:36.367 68 DEBUG armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Starting to wait on: namespace=kube-system, resource type=pod, label_selector=(app=rook-ceph-osd), timeout=1775 _watch_resource_completions /usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py:367 2021-06-30 12:42:11.404 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:43:11.466 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:44:11.528 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:45:11.589 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:46:11.651 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:47:11.712 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:48:11.774 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:49:11.835 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:50:11.897 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:51:11.958 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:52:12.019 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:53:12.081 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:54:12.144 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:55:12.205 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:56:12.268 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:57:12.329 68 
DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:58:12.392 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 12:59:12.454 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:00:12.517 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:01:12.578 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:02:12.639 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:03:12.701 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:04:12.762 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:05:12.824 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:06:12.886 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:07:12.946 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:08:13.007 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:09:13.069 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:10:13.132 68 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:176 2021-06-30 13:11:11.371 68 ERROR armada.handlers.wait [-] [chart=kube-system-rook-ceph]: Timed out waiting for pods (namespace=kube-system, labels=(app=rook-ceph-osd)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`? 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada [-] Chart deploy [kube-system-rook-ceph] failed: armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=kube-system, labels=(app=rook-ceph-osd)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`? 
2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada Traceback (most recent call last): 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 225, in handle_result 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada result = get_result() 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 236, in 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada if (handle_result(chart, lambda: deploy_chart(chart))): 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 214, in deploy_chart 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada chart, cg_test_all_charts, prefix, known_releases) 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/chart_deploy.py", line 248, in execute 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada chart_wait.wait(timer) 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 134, in wait 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada wait.wait(timeout=timeout) 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 294, in wait 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada modified = self._wait(deadline) 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 354, in _wait 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada raise k8s_exceptions.KubernetesWatchTimeoutException(error) 2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=kube-system, labels=(app=rook-ceph-osd)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`? 
2021-06-30 13:11:11.371 68 ERROR armada.handlers.armada 2021-06-30 13:11:11.373 68 ERROR armada.handlers.armada [-] Chart deploy(s) failed: ['kube-system-rook-ceph'] 2021-06-30 13:11:12.193 68 INFO armada.handlers.lock [-] Releasing lock 2021-06-30 13:11:12.196 68 ERROR armada.cli [-] Caught internal exception: armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['kube-system-rook-ceph'] 2021-06-30 13:11:12.196 68 ERROR armada.cli Traceback (most recent call last): 2021-06-30 13:11:12.196 68 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/__init__.py", line 38, in safe_invoke 2021-06-30 13:11:12.196 68 ERROR armada.cli self.invoke() 2021-06-30 13:11:12.196 68 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 213, in invoke 2021-06-30 13:11:12.196 68 ERROR armada.cli resp = self.handle(documents, tiller) 2021-06-30 13:11:12.196 68 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py", line 81, in func_wrapper 2021-06-30 13:11:12.196 68 ERROR armada.cli return future.result() 2021-06-30 13:11:12.196 68 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result 2021-06-30 13:11:12.196 68 ERROR armada.cli return self.__get_result() 2021-06-30 13:11:12.196 68 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result 2021-06-30 13:11:12.196 68 ERROR armada.cli raise self._exception 2021-06-30 13:11:12.196 68 ERROR armada.cli File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run 2021-06-30 13:11:12.196 68 ERROR armada.cli result = self.fn(*self.args, **self.kwargs) 2021-06-30 13:11:12.196 68 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/cli/apply.py", line 256, in handle 2021-06-30 13:11:12.196 68 ERROR armada.cli return armada.sync() 2021-06-30 13:11:12.196 68 ERROR armada.cli File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 252, in sync 2021-06-30 13:11:12.196 68 ERROR armada.cli raise armada_exceptions.ChartDeployException(failures) 2021-06-30 13:11:12.196 68 ERROR armada.cli armada.exceptions.armada_exceptions.ChartDeployException: Exception deploying charts: ['kube-system-rook-ceph'] 2021-06-30 13:11:12.196 68 ERROR armada.cli command terminated with exit code 1 -- Sent with Vivaldi Mail. Download Vivaldi for free at vivaldi.com From alexandru.dimofte at intel.com Fri Jul 2 11:58:10 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Fri, 2 Jul 2021 11:58:10 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210701T015742Z Message-ID: Sanity Test from 2021-July-01 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210701T015742Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210701T015742Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 88 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 5155 bytes Desc: image002.png URL:

From haochuan.z.chen at intel.com Sun Jul 4 23:48:41 2021
From: haochuan.z.chen at intel.com (Chen, Haochuan Z)
Date: Sun, 4 Jul 2021 23:48:41 +0000
Subject: [Starlingx-discuss] Starlingx-discuss Digest, Vol 38, Issue 2
In-Reply-To: References: Message-ID:

Hi

You can check what's the failure by this command firstly.

$ kubectl describe po rook-ceph-mon-a-65b58876db-98jg5 -n kube-system

BR!
Martin, Chen
IOTG, Software Engineer
021-61164330
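A minimal follow-up sketch in the same vein, assuming a standard kubectl client and kubeconfig on the active controller; the pod names are the ones from the listing earlier in this thread, so adjust them to whatever `kubectl get pods -n kube-system` reports on your system:

# Events, restart count and last container state behind the CrashLoopBackOff
$ kubectl -n kube-system describe pod rook-ceph-mon-a-65b58876db-98jg5

# Logs from the current and the previously crashed mon container
$ kubectl -n kube-system logs rook-ceph-mon-a-65b58876db-98jg5
$ kubectl -n kube-system logs rook-ceph-mon-a-65b58876db-98jg5 --previous

# The operator log is the likely place to see why no rook-ceph-osd pods were created
$ kubectl -n kube-system logs rook-ceph-operator-79fb8559-k7w8z

The describe output shows why the mon pod keeps restarting, while the operator log should explain why Armada found no pods with label app=rook-ceph-osd before timing out.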
From openinfradn at gmail.com Mon Jul 5 05:49:04 2021
From: openinfradn at gmail.com (open infra)
Date: Mon, 5 Jul 2021 11:19:04 +0530
Subject: [Starlingx-discuss] (no subject)
Message-ID:

Hi,

As per the hardware requirements, we should disable the following settings in the BIOS. I would like to know what's the reason behind these settings.

- CPU C state control disabled
- Plug & play BMC detection disabled

[1] https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/dedicated_storage_hardware.html

Regards,
Danishka
-------------- next part -------------- An HTML attachment was scrubbed...
URL: From scott.little at windriver.com Mon Jul 5 13:54:58 2021 From: scott.little at windriver.com (Scott Little) Date: Mon, 5 Jul 2021 09:54:58 -0400 Subject: [Starlingx-discuss] Issues with mirror.starlingx.cengn.ca Message-ID: <18ddcf25-684c-2189-8b9f-333e0abc95b0@windriver.com> The /mirror/centos/epel/dl.fedoraproject.org subdirectory of mirror.starlingx.cengn.ca is damaged. I along with CENGN will be investigating and attempting to restore the missing content. This might take a bit of time... Scot From Barton.Wensley at windriver.com Mon Jul 5 16:01:06 2021 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Mon, 5 Jul 2021 16:01:06 +0000 Subject: [Starlingx-discuss] Moving on Message-ID: Hey everyone - I am moving on from my current role and will no longer be participating in the StarlingX project. As of Friday (July 9) I am resigning my position as the Technical Lead for the Distributed Cloud and Flock Services projects and as a core in the StarlingX repos. I wish you all the best in the future. Bart Wensley -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Tue Jul 6 13:01:57 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 6 Jul 2021 13:01:57 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210705T232640Z Message-ID: Sanity Test from 2021-July-06 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210705T232640Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210705T232640Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 90 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From Alexander.Williams at commscope.com Tue Jul 6 14:06:56 2021 From: Alexander.Williams at commscope.com (Williams, Alexander) Date: Tue, 6 Jul 2021 14:06:56 +0000 Subject: [Starlingx-discuss] SQL Database Access Message-ID: Hi All, I've seen that there is a SQL database responsible for storing all inventory details (hosts, nodes, etc). Is there any way to access this database either through python or the shell to look at/edit entries by hand? Thanks, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Tue Jul 6 14:44:29 2021 From: scott.little at windriver.com (Scott Little) Date: Tue, 6 Jul 2021 10:44:29 -0400 Subject: [Starlingx-discuss] Issues with mirror.starlingx.cengn.ca In-Reply-To: <18ddcf25-684c-2189-8b9f-333e0abc95b0@windriver.com> References: <18ddcf25-684c-2189-8b9f-333e0abc95b0@windriver.com> Message-ID: CENGN is now reporting that they don't have a backup.   It will take me several days to restore manually. Sorry for the inconvenience. 
Scott On 2021-07-05 9:54 a.m., Scott Little wrote: > The /mirror/centos/epel/dl.fedoraproject.org subdirectory of > mirror.starlingx.cengn.ca is damaged. > > I along with CENGN will be investigating and attempting to restore the > missing content. > > This might take a bit of time... > > > Scot > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From yvonne.ding at windriver.com Tue Jul 6 14:49:04 2021 From: yvonne.ding at windriver.com (Yvonne Ding) Date: Tue, 6 Jul 2021 10:49:04 -0400 Subject: [Starlingx-discuss] Goodbye from Yvonne Message-ID: Hi, I am moving on and will no longer be working on starlingx project. My last day at Wind River will be Friday July 9th. Thank you all for the support and I wish you all the best. Regards, Yvonne Ding -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Jul 6 21:02:47 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 6 Jul 2021 14:02:47 -0700 Subject: [Starlingx-discuss] Moving on In-Reply-To: References: Message-ID: Hi Bart, It is sad to see you moving on. I would like to use the opportunity to say thank you for all your contributions you’ve made to the project over the past few years! I wish you all the best with the new project you are taking on. Best Regards, Ildikó > On Jul 5, 2021, at 09:01, Wensley, Barton wrote: > > Hey everyone - I am moving on from my current role and will no longer be participating in the StarlingX project. As of Friday (July 9) I am resigning my position as the Technical Lead for the Distributed Cloud and Flock Services projects and as a core in the StarlingX repos. > > I wish you all the best in the future. > > Bart Wensley > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From ildiko.vancsa at gmail.com Tue Jul 6 21:04:04 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 6 Jul 2021 14:04:04 -0700 Subject: [Starlingx-discuss] Goodbye from Yvonne In-Reply-To: References: Message-ID: <5560C09E-5710-45D6-A044-8FEB19F67CE5@gmail.com> Hi Yvonne, It is very sad to see you leaving the project. Thank you for all your contributions! I wish you all the best with your future endeavors. Best Regards, Ildikó > On Jul 6, 2021, at 07:49, Yvonne Ding wrote: > > Hi, > > I am moving on and will no longer be working on starlingx project. My last day at Wind River will be Friday July 9th. > > Thank you all for the support and I wish you all the best. > > > > Regards, > > Yvonne Ding > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ramaswamy.Subramanian at windriver.com Tue Jul 6 21:45:22 2021 From: Ramaswamy.Subramanian at windriver.com (Subramanian, Ramaswamy) Date: Tue, 6 Jul 2021 21:45:22 +0000 Subject: [Starlingx-discuss] Self nomination for Project Lead Message-ID: Hi, Over the last few months, I have worked closely with many StarlingX team members to understand and learn more about the project. In particular, I have focused on the following StarlingX sub-projects to understand objectives, feature planning, project management, etc. 
* DistCloud * Flock Services * Config With this email, I would like to nominate myself as a project lead for the above sub-projects. I am looking forward to working together and contributing to the StarlingX journey. Thanks. Regards, Ram -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Wed Jul 7 00:44:32 2021 From: austin.sun at intel.com (Sun, Austin) Date: Wed, 7 Jul 2021 00:44:32 +0000 Subject: [Starlingx-discuss] Moving on In-Reply-To: References: Message-ID: Hi Bart: Thank you for all you contributions and your suggestion/advice for code/solution reviews . Wish you all the best for your new journey. BR Austin sun. From: Wensley, Barton Sent: Tuesday, July 6, 2021 12:01 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Moving on Hey everyone - I am moving on from my current role and will no longer be participating in the StarlingX project. As of Friday (July 9) I am resigning my position as the Technical Lead for the Distributed Cloud and Flock Services projects and as a core in the StarlingX repos. I wish you all the best in the future. Bart Wensley -------------- next part -------------- An HTML attachment was scrubbed... URL: From zhipengs.liu at intel.com Wed Jul 7 01:46:04 2021 From: zhipengs.liu at intel.com (Liu, ZhipengS) Date: Wed, 7 Jul 2021 01:46:04 +0000 Subject: [Starlingx-discuss] Moving on In-Reply-To: References: Message-ID: Hi Bart, Thanks for your great help and wish you all the best in the future! Regard, Zhipeng From: Wensley, Barton Sent: 2021年7月6日 0:01 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Moving on Hey everyone - I am moving on from my current role and will no longer be participating in the StarlingX project. As of Friday (July 9) I am resigning my position as the Technical Lead for the Distributed Cloud and Flock Services projects and as a core in the StarlingX repos. I wish you all the best in the future. Bart Wensley -------------- next part -------------- An HTML attachment was scrubbed... URL: From mingyuan.qi at intel.com Wed Jul 7 05:55:31 2021 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Wed, 7 Jul 2021 05:55:31 +0000 Subject: [Starlingx-discuss] Moving on In-Reply-To: References: Message-ID: Hi Bart, It's sad to see you leave. I want to thank you for your contribution to the community, you have been playing the key role since the community ramping up. Your review comments have always been solid and helpful. I wish you all the best in your new journey. Mingyuan From: Wensley, Barton Sent: Tuesday, July 6, 2021 0:01 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Moving on Hey everyone - I am moving on from my current role and will no longer be participating in the StarlingX project. As of Friday (July 9) I am resigning my position as the Technical Lead for the Distributed Cloud and Flock Services projects and as a core in the StarlingX repos. I wish you all the best in the future. Bart Wensley -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Wed Jul 7 09:16:09 2021 From: openinfradn at gmail.com (open infra) Date: Wed, 7 Jul 2021 14:46:09 +0530 Subject: [Starlingx-discuss] Issues with mirror.starlingx.cengn.ca In-Reply-To: References: <18ddcf25-684c-2189-8b9f-333e0abc95b0@windriver.com> Message-ID: I also experience the same while building the base docker image. 
Is there any alternative way of get the patch server up and running without this step/ This system is not registered with an entitlement server. You can use subscription-manager to register. Cannot open: http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-1.4.16-1.el7.noarch.rpm. Skipping. Cannot open: http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-core-configs-31.6-1.el7.noarch.rpm. Skipping. Error: Nothing to do The command '/bin/sh -c groupadd -g 751 cgts && echo "mock:x:751:root" >> /etc/group && echo "mockbuild:x:9001:" >> /etc/group && yum install -y http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-1.4.16-1.el7.noarch.rpm http://mirror.starlingx.cengn.ca/mirror/centos/epel/dl.fedoraproject.org/pub/epel/7/x86_64/Packages/m/mock-core-configs-31.6-1.el7.noarch.rpm' returned a non-zero code: 1 On Tue, Jul 6, 2021 at 8:18 PM Scott Little wrote: > CENGN is now reporting that they don't have a backup. It will take me > several days to restore manually. > > Sorry for the inconvenience. > > Scott > > > On 2021-07-05 9:54 a.m., Scott Little wrote: > > The /mirror/centos/epel/dl.fedoraproject.org subdirectory of > > mirror.starlingx.cengn.ca is damaged. > > > > I along with CENGN will be investigating and attempting to restore the > > missing content. > > > > This might take a bit of time... > > > > > > Scot > > > > > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Wed Jul 7 10:48:13 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 7 Jul 2021 10:48:13 +0000 Subject: [Starlingx-discuss] Self nomination for Project Lead In-Reply-To: References: Message-ID: +1 thanks for stepping up Ram, Greg. From: Subramanian, Ramaswamy Sent: Tuesday, July 6, 2021 5:45 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Self nomination for Project Lead Hi, Over the last few months, I have worked closely with many StarlingX team members to understand and learn more about the project. In particular, I have focused on the following StarlingX sub-projects to understand objectives, feature planning, project management, etc. * DistCloud * Flock Services * Config With this email, I would like to nominate myself as a project lead for the above sub-projects. I am looking forward to working together and contributing to the StarlingX journey. Thanks. Regards, Ram -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jul 7 13:07:35 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 7 Jul 2021 13:07:35 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (July 7, 2021) Message-ID: Hi all, reminder of the weekly TSC/Community calls coming up soon. Please feel free to add other items to the agenda [0] for the community call. Bill... 
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210707T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Barton.Wensley at windriver.com Wed Jul 7 14:23:06 2021 From: Barton.Wensley at windriver.com (Wensley, Barton) Date: Wed, 7 Jul 2021 14:23:06 +0000 Subject: [Starlingx-discuss] SQL Database Access In-Reply-To: References: Message-ID: Alex, You can use psql (PostgreSQL interactive terminal) to interact with the database as follows: sudo -u postgres psql Bart From: Williams, Alexander Sent: Tuesday, July 6, 2021 10:07 AM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] SQL Database Access [Please note: This e-mail is from an EXTERNAL e-mail address] Hi All, I've seen that there is a SQL database responsible for storing all inventory details (hosts, nodes, etc). Is there any way to access this database either through python or the shell to look at/edit entries by hand? Thanks, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jul 7 14:26:03 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 7 Jul 2021 14:26:03 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (July 7, 2021) In-Reply-To: References: Message-ID: >From today's meeting... * Standing Topics * Build/Sanity * sanity has been green, but we've only seen 2 sanities since last week - not sure why not more than that * issues with mirror.starlingx.cengn.ca * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011717.html * per Scott, this should only impact people that are trying to set up new work environments, Scott's working on resolution * Scott also noted that CENGN hasn't been doing backups - AR: Anthony Nowell, new community member taking over build management from Frank, will follow up on this * Gerrit Reviews in Need of Attention * nothing this week * Topics for this Week * nothing this week * ARs from Previous Meetings * DockerHub * Ildiko has reached out to Docker about our application for OSS status * 3-admin limit - per Scott, this is an issue now - AR: Bill will ask Docker about this * (Ildiko)The open source program doesn't seem to say anything about the admin-limit, it might be separate: https://www.docker.com/blog/expanded-support-for-open-source-software-projects/ * Open Requests for Help * rook-ceph deployment failure * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011713.html * Mingyuan will ask Martin to respond * bios settings * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011716.html * Bill will ask Chris Friesen to respond * SQL Database Access * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011720.html * Bill was going to ask Al Bailey or John Kung to respond, but Bart already did * Build Matters (if required) * see above re: issues with mirror.starlingx.cengn.ca -----Original Message----- From: Zvonar, Bill Sent: Wednesday, July 7, 2021 9:08 AM To: starlingx-discuss at lists.starlingx.io Subject: Community (& TSC) Call (July 7, 2021) Hi all, reminder of the weekly TSC/Community calls coming up soon. Please feel free to add other items to the agenda [0] for the community call. Bill... 
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210707T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From chris.friesen at windriver.com Wed Jul 7 19:57:25 2021 From: chris.friesen at windriver.com (Chris Friesen) Date: Wed, 7 Jul 2021 13:57:25 -0600 Subject: [Starlingx-discuss] BIOS settings (was: no subject) In-Reply-To: References: Message-ID: <4ddd3fa2-1ab2-d8fb-2c8f-0ccac51772f9@windriver.com> I believe that the CPU C-state recommendation is to default to a minimum-latency configuration since that is what most users want and it's required to hit the published latency numbers. The OS can override the default setting, and we will do so for nodes configured as low-latency (essentially we limit it to C1). If the latency is not a concern, then allowing deeper C-states shouldn't break anything. I'm not sure about the "Plug & play BMC detection" setting, but it's still recommended in the R6 docs at https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/dedicated_storage_hardware.html Chris On 7/4/2021 11:49 PM, open infra wrote: > > Hi, > > As per the hardware requirements, we should disable following settings > in the BIOS. I would like to know what's the reason behind these settings. > > * CPU C state control disabled > > * Plug & play BMC detection disabled > > [1] > https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/dedicated_storage_hardware.html > > > Regards, > Danishka > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From openinfradn at gmail.com Thu Jul 8 08:06:32 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 8 Jul 2021 13:36:32 +0530 Subject: [Starlingx-discuss] Image upload failed Message-ID: Hi I was supposed to upload Windows (qcow2) image, but it failed. I was able to upload both fedora and centos images. Windows image is around 100GB. http://paste.openstack.org/show/807238/ controller-1:/images$ openstack endpoint list | grep glance | 6c7c6662b5274e61afec25a67e2f13a8 | RegionOne | glance | image | True | public | http://glance.openstack.svc.cluster.local/ | | 855c1127607749faa5896417a559a47a | RegionOne | glance | image | True | internal | http://glance.openstack.svc.cluster.local/ | | cc5ca27d344c48e287657feee449916b | RegionOne | glance | image | True | admin | http://glance.openstack.svc.cluster.local/ | controller-1:/images$ $ openstack image list +--------------------------------------+-----------+--------+ | ID | Name | Status | +--------------------------------------+-----------+--------+ | 0079f06a-498c-453c-a8fd-0ade7ac2f258 | CentOS-8 | active | | dad6dfdc-b281-4592-928b-cf64b47f5b3a | Fedora-34 | active | I appreciate if I can get any hint or guide to resolve this. Regard Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From mingyuan.qi at intel.com Thu Jul 8 08:21:06 2021 From: mingyuan.qi at intel.com (Qi, Mingyuan) Date: Thu, 8 Jul 2021 08:21:06 +0000 Subject: [Starlingx-discuss] Image upload failed In-Reply-To: References: Message-ID: Danishka, You will need to extend kubelet-lv to enlarge glance pod’s disk cache. 
For example: system host-fs-modify controller-0 kubelet=100 system host-fs-modify controller-1 kubelet=100 Mingyuan From: open infra Sent: Thursday, July 8, 2021 16:07 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Image upload failed Hi I was supposed to upload Windows (qcow2) image, but it failed. I was able to upload both fedora and centos images. Windows image is around 100GB. http://paste.openstack.org/show/807238/ controller-1:/images$ openstack endpoint list | grep glance | 6c7c6662b5274e61afec25a67e2f13a8 | RegionOne | glance | image | True | public | http://glance.openstack.svc.cluster.local/ | | 855c1127607749faa5896417a559a47a | RegionOne | glance | image | True | internal | http://glance.openstack.svc.cluster.local/ | | cc5ca27d344c48e287657feee449916b | RegionOne | glance | image | True | admin | http://glance.openstack.svc.cluster.local/ | controller-1:/images$ $ openstack image list +--------------------------------------+-----------+--------+ | ID | Name | Status | +--------------------------------------+-----------+--------+ | 0079f06a-498c-453c-a8fd-0ade7ac2f258 | CentOS-8 | active | | dad6dfdc-b281-4592-928b-cf64b47f5b3a | Fedora-34 | active | I appreciate if I can get any hint or guide to resolve this. Regard Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Jul 8 08:28:05 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 8 Jul 2021 13:58:05 +0530 Subject: [Starlingx-discuss] BIOS settings (was: no subject) In-Reply-To: <4ddd3fa2-1ab2-d8fb-2c8f-0ccac51772f9@windriver.com> References: <4ddd3fa2-1ab2-d8fb-2c8f-0ccac51772f9@windriver.com> Message-ID: Thanks you Chris! On Thu, Jul 8, 2021 at 1:30 AM Chris Friesen wrote: > > I believe that the CPU C-state recommendation is to default to a > minimum-latency configuration since that is what most users want and > it's required to hit the published latency numbers. The OS can override > the default setting, and we will do so for nodes configured as > low-latency (essentially we limit it to C1). If the latency is not a > concern, then allowing deeper C-states shouldn't break anything. > > I'm not sure about the "Plug & play BMC detection" setting, but it's > still recommended in the R6 docs at > > https://docs.starlingx.io/deploy_install_guides/r6_release/bare_metal/dedicated_storage_hardware.html > > Chris > > On 7/4/2021 11:49 PM, open infra wrote: > > > > Hi, > > > > As per the hardware requirements, we should disable following settings > > in the BIOS. I would like to know what's the reason behind these > settings. 
> > > > * CPU C state control disabled > > > > * Plug & play BMC detection disabled > > > > [1] > > > https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/dedicated_storage_hardware.html > > < > https://urldefense.com/v3/__https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/dedicated_storage_hardware.html__;!!AjveYdw8EvQ!OBRG6R8mijUCXJMVYKMUsSFSivC7CkbRz3zxIbfr1iQg17M6fo95aMBKfOfbaB_2VHEC$ > > > > > > Regards, > > Danishka > > > > _______________________________________________ > > Starlingx-discuss mailing list > > Starlingx-discuss at lists.starlingx.io > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Jul 8 08:51:28 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 8 Jul 2021 14:21:28 +0530 Subject: [Starlingx-discuss] Image upload failed In-Reply-To: References: Message-ID: Hi Mingyuan, I have increased kubelet size from 10 to 200. But still no luck. http://paste.openstack.org/show/807252/ tried with --debug option http://paste.openstack.org/show/807253/ BTW I found errors inthe openstack_glace-api- log. http://paste.openstack.org/show/807250/ http://paste.openstack.org/show/807254/ Regards, Danishka On Thu, Jul 8, 2021 at 1:51 PM Qi, Mingyuan wrote: > Danishka, > > > > You will need to extend kubelet-lv to enlarge glance pod’s disk cache. > > > > For example: > > system host-fs-modify controller-0 kubelet=100 > > system host-fs-modify controller-1 kubelet=100 > > > > Mingyuan > > > > *From:* open infra > *Sent:* Thursday, July 8, 2021 16:07 > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] Image upload failed > > > > Hi > > > > I was supposed to upload Windows (qcow2) image, but it failed. > > I was able to upload both fedora and centos images. Windows image is > around 100GB. > > http://paste.openstack.org/show/807238/ > > > > > > controller-1:/images$ openstack endpoint list | grep glance > | 6c7c6662b5274e61afec25a67e2f13a8 | RegionOne | glance | image | True | > public | http://glance.openstack.svc.cluster.local/ > > | > | 855c1127607749faa5896417a559a47a | RegionOne | glance | image | True | > internal | http://glance.openstack.svc.cluster.local/ > > | > | cc5ca27d344c48e287657feee449916b | RegionOne | glance | image | True | > admin | http://glance.openstack.svc.cluster.local/ > > | > controller-1:/images$ > > > > $ openstack image list > +--------------------------------------+-----------+--------+ > | ID | Name | Status | > +--------------------------------------+-----------+--------+ > | 0079f06a-498c-453c-a8fd-0ade7ac2f258 | CentOS-8 | active | > | dad6dfdc-b281-4592-928b-cf64b47f5b3a | Fedora-34 | active | > > > > > > I appreciate if I can get any hint or guide to resolve this. > > > > Regard > > Danishka > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Jul 8 12:14:41 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 8 Jul 2021 17:44:41 +0530 Subject: [Starlingx-discuss] Image upload failed In-Reply-To: References: Message-ID: I was able to upload CentOS image which is around 1GB. http://paste.openstack.org/show/807260/ Is there a quota related to image upload? 
Where exactly I can alter if such limit exist. On Thu, Jul 8, 2021 at 2:21 PM open infra wrote: > Hi Mingyuan, > > I have increased kubelet size from 10 to 200. But still no luck. > http://paste.openstack.org/show/807252/ > tried with --debug option http://paste.openstack.org/show/807253/ > > BTW I found errors inthe openstack_glace-api- log. > http://paste.openstack.org/show/807250/ > http://paste.openstack.org/show/807254/ > > Regards, > Danishka > > On Thu, Jul 8, 2021 at 1:51 PM Qi, Mingyuan wrote: > >> Danishka, >> >> >> >> You will need to extend kubelet-lv to enlarge glance pod’s disk cache. >> >> >> >> For example: >> >> system host-fs-modify controller-0 kubelet=100 >> >> system host-fs-modify controller-1 kubelet=100 >> >> >> >> Mingyuan >> >> >> >> *From:* open infra >> *Sent:* Thursday, July 8, 2021 16:07 >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* [Starlingx-discuss] Image upload failed >> >> >> >> Hi >> >> >> >> I was supposed to upload Windows (qcow2) image, but it failed. >> >> I was able to upload both fedora and centos images. Windows image is >> around 100GB. >> >> http://paste.openstack.org/show/807238/ >> >> >> >> >> >> controller-1:/images$ openstack endpoint list | grep glance >> | 6c7c6662b5274e61afec25a67e2f13a8 | RegionOne | glance | image | True | >> public | http://glance.openstack.svc.cluster.local/ >> >> | >> | 855c1127607749faa5896417a559a47a | RegionOne | glance | image | True | >> internal | http://glance.openstack.svc.cluster.local/ >> >> | >> | cc5ca27d344c48e287657feee449916b | RegionOne | glance | image | True | >> admin | http://glance.openstack.svc.cluster.local/ >> >> | >> controller-1:/images$ >> >> >> >> $ openstack image list >> +--------------------------------------+-----------+--------+ >> | ID | Name | Status | >> +--------------------------------------+-----------+--------+ >> | 0079f06a-498c-453c-a8fd-0ade7ac2f258 | CentOS-8 | active | >> | dad6dfdc-b281-4592-928b-cf64b47f5b3a | Fedora-34 | active | >> >> >> >> >> >> I appreciate if I can get any hint or guide to resolve this. >> >> >> >> Regard >> >> Danishka >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Jul 8 15:43:29 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 8 Jul 2021 21:13:29 +0530 Subject: [Starlingx-discuss] Image upload failed In-Reply-To: References: Message-ID: Is it possible to update the value of *image_size_cap* in Glance config under StarlingX (stx-openstack)? On Thu, Jul 8, 2021 at 5:44 PM open infra wrote: > I was able to upload CentOS image which is around 1GB. > http://paste.openstack.org/show/807260/ > > > Is there a quota related to image upload? > Where exactly I can alter if such limit exist. > > > > On Thu, Jul 8, 2021 at 2:21 PM open infra wrote: > >> Hi Mingyuan, >> >> I have increased kubelet size from 10 to 200. But still no luck. >> http://paste.openstack.org/show/807252/ >> tried with --debug option http://paste.openstack.org/show/807253/ >> >> BTW I found errors inthe openstack_glace-api- log. >> http://paste.openstack.org/show/807250/ >> http://paste.openstack.org/show/807254/ >> >> Regards, >> Danishka >> >> On Thu, Jul 8, 2021 at 1:51 PM Qi, Mingyuan >> wrote: >> >>> Danishka, >>> >>> >>> >>> You will need to extend kubelet-lv to enlarge glance pod’s disk cache. 
>>> >>> >>> >>> For example: >>> >>> system host-fs-modify controller-0 kubelet=100 >>> >>> system host-fs-modify controller-1 kubelet=100 >>> >>> >>> >>> Mingyuan >>> >>> >>> >>> *From:* open infra >>> *Sent:* Thursday, July 8, 2021 16:07 >>> *To:* starlingx-discuss at lists.starlingx.io >>> *Subject:* [Starlingx-discuss] Image upload failed >>> >>> >>> >>> Hi >>> >>> >>> >>> I was supposed to upload Windows (qcow2) image, but it failed. >>> >>> I was able to upload both fedora and centos images. Windows image is >>> around 100GB. >>> >>> http://paste.openstack.org/show/807238/ >>> >>> >>> >>> >>> >>> controller-1:/images$ openstack endpoint list | grep glance >>> | 6c7c6662b5274e61afec25a67e2f13a8 | RegionOne | glance | image | True | >>> public | http://glance.openstack.svc.cluster.local/ >>> >>> | >>> | 855c1127607749faa5896417a559a47a | RegionOne | glance | image | True | >>> internal | http://glance.openstack.svc.cluster.local/ >>> >>> | >>> | cc5ca27d344c48e287657feee449916b | RegionOne | glance | image | True | >>> admin | http://glance.openstack.svc.cluster.local/ >>> >>> | >>> controller-1:/images$ >>> >>> >>> >>> $ openstack image list >>> +--------------------------------------+-----------+--------+ >>> | ID | Name | Status | >>> +--------------------------------------+-----------+--------+ >>> | 0079f06a-498c-453c-a8fd-0ade7ac2f258 | CentOS-8 | active | >>> | dad6dfdc-b281-4592-928b-cf64b47f5b3a | Fedora-34 | active | >>> >>> >>> >>> >>> >>> I appreciate if I can get any hint or guide to resolve this. >>> >>> >>> >>> Regard >>> >>> Danishka >>> >> -------------- next part -------------- An HTML attachment was scrubbed... URL: From ashlee at openstack.org Thu Jul 8 15:42:04 2021 From: ashlee at openstack.org (Ashlee Ferguson) Date: Thu, 8 Jul 2021 10:42:04 -0500 Subject: [Starlingx-discuss] PTG October 2021 Team Signup Message-ID: <231B4E30-ABA6-4A04-83E7-CFAECC9985EE@openstack.org> Hi everyone, Last week, we announced the next PTG will held virtually from Monday, October 18 to Friday, October 22, 2021. We will have the same schedule set up available as last time with three windows of time spread across the day to cover all timezones with breaks in between. To signup your team, you must complete BOTH the survey[1] AND reserve time in the ethercalc[2] by end of day July 21. We ask that the PTL/SIG Chair/Team lead sign up for time to have their discussions with 3 rules/guidelines: 1. Cross project discussions (like SIGs or support project teams) should be scheduled towards the start of the week so that any discussions that might shape those of other teams happen first. 2. No team should sign up for more than 4 hours per UTC day to help keep participants actively engaged. 3. No team should sign up for more than 16 hours across all time slots to avoid burning out our contributors and to enable participation in multiple teams discussions. Again, you need to fill out BOTH the ethercalc AND the survey to complete your team's sign up. If you have any issues with signing up your team, due to conflict or otherwise, please let me know! While we are trying to empower you to make your own decisions as to when you meet and for how long (after all, you know your needs and teams timezones better than we do), we are here to help! Once your team is signed up, please register[3]! And remind your team to register! 
Registration is free, but it's important that you sign up to let us know you'll be attending because that's how you'll receive event details, passwords, and other relevant information about the PTG. Continue to visit openstack.org/ptg for updates. Ashlee [1] Team Survey: https://openinfrafoundation.formstack.com/forms/oct2021_vptg_survey [2] Ethercalc Signup: https://ethercalc.openstack.org/8tum5yl1bx43 [3] PTG Registration: https://openinfra-ptg.eventbrite.com From nicolae.jascanu at intel.com Thu Jul 8 16:42:21 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Thu, 8 Jul 2021 16:42:21 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210708T020055Z Message-ID: Sanity Test from 2021-July-08 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210708T020055Z/outputs/iso/ ) Status: GREEN Executed on BARE METAL DUPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210708T020055Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolae.jascanu at intel.com Fri Jul 9 12:39:18 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Fri, 9 Jul 2021 12:39:18 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210709T015731Z Message-ID: Sanity Test from 2021-July-09 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210709T015731Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL STANDARD Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210709T015731Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolae.jascanu at intel.com Sat Jul 10 08:35:32 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Sat, 10 Jul 2021 08:35:32 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210710T020121Z Message-ID: Sanity Test from 2021-July-10 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210710T020121Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL SIMPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210710T020121Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolae.jascanu at intel.com Sun Jul 11 07:13:16 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Sun, 11 Jul 2021 07:13:16 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210710T020121Z Message-ID: Sanity Test from 2021-July-10 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210710T020121Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL DUPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210710T020121Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 64 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Mon Jul 12 12:17:18 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Mon, 12 Jul 2021 12:17:18 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210710T020121Z Message-ID: Sanity Test from 2021-July-12 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210710T020121Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210710T020121Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 89 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From Thiago.Brito at windriver.com Mon Jul 12 20:38:01 2021 From: Thiago.Brito at windriver.com (Brito, Thiago) Date: Mon, 12 Jul 2021 20:38:01 +0000 Subject: [Starlingx-discuss] Image upload failed In-Reply-To: References: , Message-ID: Hi Mingyuan, I think you would be able to do that with something like system helm-override-update glance openstack --set conf.glance.DEFAULT.image_size_cap=2 With that said, the default size of the image is 1TB, so I don't think that's your problem: https://docs.openstack.org/glance/ussuri/configuration/configuring.html#configuring-glance-image-size-limit >From the logs you provided, it doesn't look like a quota issue. Your DB is gone. Can you provide more details about your environment and also check the status of the mariadb pods? Thiago ________________________________ From: open infra Sent: Thursday, July 8, 2021 12:43 PM To: Qi, Mingyuan Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Image upload failed [Please note: This e-mail is from an EXTERNAL e-mail address] Is it possible to update the value of image_size_cap in Glance config under StarlingX (stx-openstack)? On Thu, Jul 8, 2021 at 5:44 PM open infra > wrote: I was able to upload CentOS image which is around 1GB. 
http://paste.openstack.org/show/807260/ Is there a quota related to image upload? Where exactly I can alter if such limit exist. On Thu, Jul 8, 2021 at 2:21 PM open infra > wrote: Hi Mingyuan, I have increased kubelet size from 10 to 200. But still no luck. http://paste.openstack.org/show/807252/ tried with --debug option http://paste.openstack.org/show/807253/ BTW I found errors inthe openstack_glace-api- log. http://paste.openstack.org/show/807250/ http://paste.openstack.org/show/807254/ Regards, Danishka On Thu, Jul 8, 2021 at 1:51 PM Qi, Mingyuan > wrote: Danishka, You will need to extend kubelet-lv to enlarge glance pod’s disk cache. For example: system host-fs-modify controller-0 kubelet=100 system host-fs-modify controller-1 kubelet=100 Mingyuan From: open infra > Sent: Thursday, July 8, 2021 16:07 To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Image upload failed Hi I was supposed to upload Windows (qcow2) image, but it failed. I was able to upload both fedora and centos images. Windows image is around 100GB. http://paste.openstack.org/show/807238/ controller-1:/images$ openstack endpoint list | grep glance | 6c7c6662b5274e61afec25a67e2f13a8 | RegionOne | glance | image | True | public | http://glance.openstack.svc.cluster.local/ | | 855c1127607749faa5896417a559a47a | RegionOne | glance | image | True | internal | http://glance.openstack.svc.cluster.local/ | | cc5ca27d344c48e287657feee449916b | RegionOne | glance | image | True | admin | http://glance.openstack.svc.cluster.local/ | controller-1:/images$ $ openstack image list +--------------------------------------+-----------+--------+ | ID | Name | Status | +--------------------------------------+-----------+--------+ | 0079f06a-498c-453c-a8fd-0ade7ac2f258 | CentOS-8 | active | | dad6dfdc-b281-4592-928b-cf64b47f5b3a | Fedora-34 | active | I appreciate if I can get any hint or guide to resolve this. Regard Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Mon Jul 12 22:30:13 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Mon, 12 Jul 2021 15:30:13 -0700 Subject: [Starlingx-discuss] PTG October 2021 Team Signup In-Reply-To: <231B4E30-ABA6-4A04-83E7-CFAECC9985EE@openstack.org> References: <231B4E30-ABA6-4A04-83E7-CFAECC9985EE@openstack.org> Message-ID: <5D6AD755-11E1-4AEF-80CB-319E823A17D3@gmail.com> Hi StarlingX Community, It is a friendly reminder that we need to decide on how much time we would like to reserve for StarlingX discussions at the upcoming PTG. I created a new etherpad to capture time slot preferences as well as discussion topics: https://etherpad.opendev.org/p/stx-ptg-planning-october-2021 Please add your thoughts and ideas to the etherpad or reply to this mail thread. I also added the PTG topic to this Wednesday’s TSC call so we can check on the preferences to allow enough time to submit the requests by July 21st. Please let me know if you have any questions. Thanks, Ildikó > On Jul 8, 2021, at 08:42, Ashlee Ferguson wrote: > > Hi everyone, > > Last week, we announced the next PTG will held virtually from Monday, October 18 to Friday, October 22, 2021. > > We will have the same schedule set up available as last time with three windows of time spread across the day to cover all timezones with breaks in between. > > To signup your team, you must complete BOTH the survey[1] AND reserve time in the ethercalc[2] by end of day July 21. 
> > We ask that the PTL/SIG Chair/Team lead sign up for time to have their discussions with 3 rules/guidelines: > > 1. Cross project discussions (like SIGs or support project teams) should be scheduled towards the start of the week so that any discussions that might shape those of other teams happen first. > 2. No team should sign up for more than 4 hours per UTC day to help keep participants actively engaged. > 3. No team should sign up for more than 16 hours across all time slots to avoid burning out our contributors and to enable participation in multiple teams discussions. > > Again, you need to fill out BOTH the ethercalc AND the survey to complete your team's sign up. > > If you have any issues with signing up your team, due to conflict or otherwise, please let me know! While we are trying to empower you to make your own decisions as to when you meet and for how long (after all, you know your needs and teams timezones better than we do), we are here to help! > > Once your team is signed up, please register[3]! And remind your team to register! Registration is free, but it's important that you sign up to let us know you'll be attending because that's how you'll receive event details, passwords, and other relevant information about the PTG. > > Continue to visit openstack.org/ptg for updates. > > Ashlee > > > [1] Team Survey: https://openinfrafoundation.formstack.com/forms/oct2021_vptg_survey > [2] Ethercalc Signup: https://ethercalc.openstack.org/8tum5yl1bx43 > [3] PTG Registration: https://openinfra-ptg.eventbrite.com > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From build.starlingx at gmail.com Mon Jul 12 23:20:40 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 12 Jul 2021 19:20:40 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 1832 - Failure! Message-ID: <1374206642.275.1626132051263.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 1832 Status: Failure Timestamp: 20210712T231220Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210712T230217Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210712T230217Z DOCKER_BUILD_ID: jenkins-master-distro-20210712T230217Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210712T230217Z/logs BUILD_IMG: false FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210712T230217Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro BUILD_ISO: false From build.starlingx at gmail.com Mon Jul 12 23:20:52 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Mon, 12 Jul 2021 19:20:52 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_distro_master_master - Build # 557 - Failure! 
Message-ID: <1273675474.278.1626132053942.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 557 Status: Failure Timestamp: 20210712T230217Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210712T230217Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Tue Jul 13 06:38:48 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 13 Jul 2021 02:38:48 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 1151 - Failure! Message-ID: <1281388477.281.1626158329338.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 1151 Status: Failure Timestamp: 20210713T044111Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210713T043006Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20210713T043006Z DOCKER_BUILD_ID: jenkins-master-20210713T043006Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210713T043006Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210713T043006Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Tue Jul 13 06:38:50 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 13 Jul 2021 02:38:50 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 982 - Failure! Message-ID: <1660423179.284.1626158331608.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 982 Status: Failure Timestamp: 20210713T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210713T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From openinfradn at gmail.com Tue Jul 13 12:04:40 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 13 Jul 2021 17:34:40 +0530 Subject: [Starlingx-discuss] Image upload failed In-Reply-To: References: Message-ID: Hi Thiago, Thanks for your reply. I have a Starlingx release 5, bare-metal deployment with dedicated storage. Currently, I have only one worker node and nova-local size of 133GB. I can see allocated storage for nova-local is visible as the space available at hypervisor. I am expecting to run several Windows VMs with 100-200GB of C drive and 1TB as a shared D: drive. Over 23TB is available in Ceph storage. 4 OSDs and 2 storage hosts. http://paste.openstack.org/show/807427/ On Tue, Jul 13, 2021 at 2:08 AM Brito, Thiago wrote: > Hi Mingyuan, > > I think you would be able to do that with something like > > system helm-override-update glance openstack --set conf.glance.DEFAULT.*image_size_cap*=2 > > With that said, the default size of the image is 1TB, so I don't think > that's your problem: > https://docs.openstack.org/glance/ussuri/configuration/configuring.html#configuring-glance-image-size-limit > > From the logs you provided, it doesn't look like a quota issue. Your DB is > gone. 
Can you provide more details about your environment and also check > the status of the mariadb pods? > > Thiago > ------------------------------ > *From:* open infra > *Sent:* Thursday, July 8, 2021 12:43 PM > *To:* Qi, Mingyuan > *Cc:* starlingx-discuss at lists.starlingx.io < > starlingx-discuss at lists.starlingx.io> > *Subject:* Re: [Starlingx-discuss] Image upload failed > > > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Is it possible to update the value of *image_size_cap* in Glance config > under StarlingX (stx-openstack)? > > On Thu, Jul 8, 2021 at 5:44 PM open infra wrote: > > I was able to upload CentOS image which is around 1GB. > http://paste.openstack.org/show/807260/ > > > Is there a quota related to image upload? > Where exactly I can alter if such limit exist. > > > > On Thu, Jul 8, 2021 at 2:21 PM open infra wrote: > > Hi Mingyuan, > > I have increased kubelet size from 10 to 200. But still no luck. > http://paste.openstack.org/show/807252/ > > tried with --debug option http://paste.openstack.org/show/807253/ > > > BTW I found errors inthe openstack_glace-api- log. > http://paste.openstack.org/show/807250/ > > http://paste.openstack.org/show/807254/ > > > Regards, > Danishka > > On Thu, Jul 8, 2021 at 1:51 PM Qi, Mingyuan wrote: > > Danishka, > > > > You will need to extend kubelet-lv to enlarge glance pod’s disk cache. > > > > For example: > > system host-fs-modify controller-0 kubelet=100 > > system host-fs-modify controller-1 kubelet=100 > > > > Mingyuan > > > > *From:* open infra > *Sent:* Thursday, July 8, 2021 16:07 > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] Image upload failed > > > > Hi > > > > I was supposed to upload Windows (qcow2) image, but it failed. > > I was able to upload both fedora and centos images. Windows image is > around 100GB. > > http://paste.openstack.org/show/807238/ > > > > > > > controller-1:/images$ openstack endpoint list | grep glance > | 6c7c6662b5274e61afec25a67e2f13a8 | RegionOne | glance | image | True | > public | http://glance.openstack.svc.cluster.local/ > > | > | 855c1127607749faa5896417a559a47a | RegionOne | glance | image | True | > internal | http://glance.openstack.svc.cluster.local/ > > | > | cc5ca27d344c48e287657feee449916b | RegionOne | glance | image | True | > admin | http://glance.openstack.svc.cluster.local/ > > | > controller-1:/images$ > > > > $ openstack image list > +--------------------------------------+-----------+--------+ > | ID | Name | Status | > +--------------------------------------+-----------+--------+ > | 0079f06a-498c-453c-a8fd-0ade7ac2f258 | CentOS-8 | active | > | dad6dfdc-b281-4592-928b-cf64b47f5b3a | Fedora-34 | active | > > > > > > I appreciate if I can get any hint or guide to resolve this. > > > > Regard > > Danishka > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Jul 13 13:19:31 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 13 Jul 2021 06:19:31 -0700 Subject: [Starlingx-discuss] Fwd: Your application has been approved References: Message-ID: <5E79DAAA-5D57-416D-84A1-28634B86546B@gmail.com> Hi StarlingX Community, I’m reaching out to you as I’ve just received this email from the DockerHub team letting me know that they approved our application! From the email it looks like that there is still some time till the changes take affect, but hopefully the limits will go away soon now. 
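Once the whitelisting is active, one way to spot-check it from a client is to look at the rate-limit headers Docker Hub returns on a manifest request. A rough sketch only (REPO is a placeholder for any repository under the starlingx namespace; it assumes curl and jq are available, and the header behaviour is inferred from Docker's published rate-limit check rather than verified against the OSS exemption):

REPO=starlingx/<image>   # substitute a real repository name
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${REPO}:pull" | jq -r .token)
curl -sI -H "Authorization: Bearer ${TOKEN}" "https://registry-1.docker.io/v2/${REPO}/manifests/latest" | grep -i ratelimit

# While the normal limits apply, ratelimit-limit and ratelimit-remaining are returned;
# once the namespace is exempt they should stop counting down or disappear.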
Thanks, Ildikó > Begin forwarded message: > > Subject: Your application has been approved > Date: July 13, 2021 at 06:16:07 PDT > To: ildiko at openinfra.dev > > Hello Ildiko , > > Welcome to the Docker Open Source Program! We are very excited to have you as a part of our great community. We have whitelisted your namespace "starlingx" and this should come into effect in the next week or so. With this whitelisting, the Docker data pull rate policies that went into effect last November, will not apply to the users pulling images from your namespace. > > Thank you for your support for Docker and your open source contributions. > > Many thanks, > > Aurelien Suarez > > Docker Marketing Team. > > > > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Jul 13 13:29:15 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 13 Jul 2021 13:29:15 +0000 Subject: [Starlingx-discuss] Fwd: Your application has been approved In-Reply-To: <5E79DAAA-5D57-416D-84A1-28634B86546B@gmail.com> References: <5E79DAAA-5D57-416D-84A1-28634B86546B@gmail.com> Message-ID: Great news, thanks for following up on this Ildiko… From: Ildiko Vancsa Sent: Tuesday, July 13, 2021 9:20 AM To: StarlingX ML Subject: [Starlingx-discuss] Fwd: Your application has been approved [Please note: This e-mail is from an EXTERNAL e-mail address] Hi StarlingX Community, I’m reaching out to you as I’ve just received this email from the DockerHub team letting me know that they approved our application! From the email it looks like that there is still some time till the changes take affect, but hopefully the limits will go away soon now. Thanks, Ildikó Begin forwarded message: Subject: Your application has been approved Date: July 13, 2021 at 06:16:07 PDT To: ildiko at openinfra.dev Hello Ildiko , Welcome to the Docker Open Source Program! We are very excited to have you as a part of our great community. We have whitelisted your namespace "starlingx" and this should come into effect in the next week or so. With this whitelisting, the Docker data pull rate policies that went into effect last November, will not apply to the users pulling images from your namespace. Thank you for your support for Docker and your open source contributions. Many thanks, Aurelien Suarez Docker Marketing Team. -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Tue Jul 13 13:38:45 2021 From: scott.little at windriver.com (Scott Little) Date: Tue, 13 Jul 2021 09:38:45 -0400 Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 1832 - Failure! In-Reply-To: <1374206642.275.1626132051263.JavaMail.javamailuser@localhost> References: <1374206642.275.1626132051263.JavaMail.javamailuser@localhost> Message-ID: Package 'python-docker' failes to build after update https://review.opendev.org/c/starlingx/integ/+/799708 a candidate fix has been posted ... 
https://review.opendev.org/c/starlingx/integ/+/800523 Scott On 2021-07-12 7:20 p.m., build.starlingx at gmail.com wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Project: STX_build_pre_installer_layered > Build #: 1832 > Status: Failure > Timestamp: 20210712T231220Z > Branch: > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210712T230217Z/logs > -------------------------------------------------------------------------------- > Parameters > > MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210712T230217Z > DOCKER_BUILD_ID: jenkins-master-distro-20210712T230217Z-builder > OS: centos > MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root > PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210712T230217Z/logs > BUILD_IMG: false > FULL_BUILD: false > PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210712T230217Z/logs > MASTER_JOB_NAME: STX_build_layer_distro_master_master > LAYER: distro > MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro > BUILD_ISO: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Tue Jul 13 14:22:11 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 13 Jul 2021 10:22:11 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_distro_master_master - Build # 558 - Still Failing! In-Reply-To: <2062353349.276.1626132051891.JavaMail.javamailuser@localhost> References: <2062353349.276.1626132051891.JavaMail.javamailuser@localhost> Message-ID: <1859189517.289.1626186132800.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 558 Status: Still Failing Timestamp: 20210713T141304Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210713T141304Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From alexandru.dimofte at intel.com Tue Jul 13 14:35:11 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 13 Jul 2021 14:35:11 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210710T020121Z Message-ID: Sanity Test from 2021-July-13 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210710T020121Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210710T020121Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 90 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From openinfradn at gmail.com Tue Jul 13 15:54:09 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 13 Jul 2021 21:24:09 +0530 Subject: [Starlingx-discuss] Installing packages and making persistent changes in nodes Message-ID: Let's say I need to run a script on several (*existing*) worker nodes and/or controller nodes. The script is supposed to install rpms (rpm packages not included in CentOS or StarlingX repo) and execute several commands. At the end, I am expecting persistent changes on the worker/controller nodes where the script was executed. What is the best and easiest way to deploy such a script/package, etc.? Since I already deployed StarlingX, I am not expecting to have the script inside the boot image. Regards, -- Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Wed Jul 14 01:41:19 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Tue, 13 Jul 2021 21:41:19 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_distro_master_master - Build # 559 - Still Failing! In-Reply-To: <531393395.287.1626186131041.JavaMail.javamailuser@localhost> References: <531393395.287.1626186131041.JavaMail.javamailuser@localhost> Message-ID: <1452151895.294.1626226879931.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 559 Status: Still Failing Timestamp: 20210714T013205Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210714T013205Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From build.starlingx at gmail.com Wed Jul 14 04:41:05 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 14 Jul 2021 00:41:05 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_download_mirror - Build # 1306 - Failure! Message-ID: <635627684.297.1626237666682.JavaMail.javamailuser@localhost> Project: STX_download_mirror Build #: 1306 Status: Failure Timestamp: 20210714T043541Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210714T043006Z/logs -------------------------------------------------------------------------------- Parameters BRANCH: master DOCKER_BUILD_ID: jenkins-master-20210714T043006Z-builder PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210714T043006Z/logs MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210714T043006Z/logs MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Wed Jul 14 04:41:08 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 14 Jul 2021 00:41:08 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 983 - Still Failing! 
In-Reply-To: <391105773.282.1626158329857.JavaMail.javamailuser@localhost> References: <391105773.282.1626158329857.JavaMail.javamailuser@localhost> Message-ID: <228712348.300.1626237668596.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 983 Status: Still Failing Timestamp: 20210714T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210714T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From openinfradn at gmail.com Wed Jul 14 09:00:49 2021 From: openinfradn at gmail.com (open infra) Date: Wed, 14 Jul 2021 14:30:49 +0530 Subject: [Starlingx-discuss] [Docs] Discrepancy in number of workers Message-ID: Hi, I found a discrepancy in the documentation. The number of worker nodes is 200 in [1], but 10 in [2]. [1] https://docs.starlingx.io/deploy/deployment-and-configuration-options-standard-configuration-with-controller-storage.html [2] https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/controller_storage.html Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jul 14 12:37:48 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 14 Jul 2021 12:37:48 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (July 14, 2021) Message-ID: Hi all, reminder of the weekly TSC/Community calls coming up later today. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210714T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From anyrude10 at gmail.com Wed Jul 14 13:31:07 2021 From: anyrude10 at gmail.com (Anirudh Gupta) Date: Wed, 14 Jul 2021 19:01:07 +0530 Subject: [Starlingx-discuss] [Kolla] [ Kolla-Ansible] Existing Workloads getting Impacted on Upgrade Message-ID: Hi Team, We have deployed the Openstack Victoria release using Multinode Kolla Ansible deployment. There are 2 nodes - one for compute and the other for Controller+Network Following the below link, we also upgraded the setup from Victoria to Wallaby https://docs.openstack.org/kolla-ansible/latest/user/operating-kolla.html During the Upgrade process, we figured out that the VM which was spawned before Upgrade went to shut off during the middle of the upgrade process, and post completion of the upgrade process, we had to manually start the VM again. Ideally we were expecting that the existing Workloads would remain unimpacted in this process. Can someone please suggest if our observation is correct or any steps that need to be taken in order to avoid this? Regards Anirudh Gupta -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From ildiko.vancsa at gmail.com Wed Jul 14 14:24:49 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 14 Jul 2021 07:24:49 -0700 Subject: [Starlingx-discuss] PTG October 2021 Team Signup In-Reply-To: <5D6AD755-11E1-4AEF-80CB-319E823A17D3@gmail.com> References: <231B4E30-ABA6-4A04-83E7-CFAECC9985EE@openstack.org> <5D6AD755-11E1-4AEF-80CB-319E823A17D3@gmail.com> Message-ID: <58450613-2D78-4846-A348-994A044DC872@gmail.com> Hi, On the TSC/Community call today we came up with a proposal to look into making the same time slot booking as last time. For it upcoming PTG it would translate to the following: * Tuesday, October 19 - 1300 UTC - 1700 UTC * Wednesday, October 20 - 1300 UTC - 1700 UTC * Thursday, October 21 - 1300 UTC - 1700 UTC Would this work for the community to go ahead and book? Thanks, Ildikó > On Jul 12, 2021, at 15:30, Ildiko Vancsa wrote: > > Hi StarlingX Community, > > It is a friendly reminder that we need to decide on how much time we would like to reserve for StarlingX discussions at the upcoming PTG. > > I created a new etherpad to capture time slot preferences as well as discussion topics: https://etherpad.opendev.org/p/stx-ptg-planning-october-2021 > > Please add your thoughts and ideas to the etherpad or reply to this mail thread. > > I also added the PTG topic to this Wednesday’s TSC call so we can check on the preferences to allow enough time to submit the requests by July 21st. > > Please let me know if you have any questions. > > Thanks, > Ildikó > > >> On Jul 8, 2021, at 08:42, Ashlee Ferguson wrote: >> >> Hi everyone, >> >> Last week, we announced the next PTG will held virtually from Monday, October 18 to Friday, October 22, 2021. >> >> We will have the same schedule set up available as last time with three windows of time spread across the day to cover all timezones with breaks in between. >> >> To signup your team, you must complete BOTH the survey[1] AND reserve time in the ethercalc[2] by end of day July 21. >> >> We ask that the PTL/SIG Chair/Team lead sign up for time to have their discussions with 3 rules/guidelines: >> >> 1. Cross project discussions (like SIGs or support project teams) should be scheduled towards the start of the week so that any discussions that might shape those of other teams happen first. >> 2. No team should sign up for more than 4 hours per UTC day to help keep participants actively engaged. >> 3. No team should sign up for more than 16 hours across all time slots to avoid burning out our contributors and to enable participation in multiple teams discussions. >> >> Again, you need to fill out BOTH the ethercalc AND the survey to complete your team's sign up. >> >> If you have any issues with signing up your team, due to conflict or otherwise, please let me know! While we are trying to empower you to make your own decisions as to when you meet and for how long (after all, you know your needs and teams timezones better than we do), we are here to help! >> >> Once your team is signed up, please register[3]! And remind your team to register! Registration is free, but it's important that you sign up to let us know you'll be attending because that's how you'll receive event details, passwords, and other relevant information about the PTG. >> >> Continue to visit openstack.org/ptg for updates. 
>> >> Ashlee >> >> >> [1] Team Survey: https://openinfrafoundation.formstack.com/forms/oct2021_vptg_survey >> [2] Ethercalc Signup: https://ethercalc.openstack.org/8tum5yl1bx43 >> [3] PTG Registration: https://openinfra-ptg.eventbrite.com >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > From cboylan at sapwetik.org Wed Jul 14 14:55:46 2021 From: cboylan at sapwetik.org (Clark Boylan) Date: Wed, 14 Jul 2021 07:55:46 -0700 Subject: [Starlingx-discuss] Take care when deleting email addresses in Gerrit Message-ID: <9846d528-aeb1-468c-960b-09ebaa69664e@www.fastmail.com> Hello everyone, We have discovered a bug in Gerrit [0] that allows you to delete your account's Ubuntu One OpenID association if you delete the email from your account associated with that OpenID. When you do this the next time you log in Gerrit will create a new account for you. Fixing this after the fact is currently difficult. We are working to correct this but the required corrections take time. We would like to avoid these issues in the first place. Please avoid deleting email addresses from your accounts unless you are sure they are not associated with your OpenID. If you would like help double checking that for you please let us know and one of our Gerrit admins can double check. We are working to make this fixable after the fact, and will try to work with the Gerrit upstream to prevent you from deleting Ubuntu One OpenID associations from your account in the first place. Until then we apologize for the inconvenience. Finally, this message and others like it are sent to service-announce at lists.opendev.org. I am sending this email to a much broader set of mailing lists because the service-announce list membership is quite small. All of our users should be subscribed to that list to get important notices like this one. We keep the traffic low and only send mail there when we feel it is important for you to see it. [0] https://bugs.chromium.org/p/gerrit/issues/detail?id=14776 Clark From alexandru.dimofte at intel.com Wed Jul 14 15:09:31 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 14 Jul 2021 15:09:31 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210710T020121Z Message-ID: Sanity Test from 2021-July-14 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210710T020121Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210710T020121Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 71 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 83 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From Bill.Zvonar at windriver.com Wed Jul 14 15:25:20 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 14 Jul 2021 15:25:20 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (July 14, 2021) In-Reply-To: References: Message-ID: >From today's meeting... * Standing Topics * Build/Sanity * some build issues this week, could have been avoided if developers had done a fresh build, AR: Bill will follow up on root cause * Gerrit Reviews in Need of Attention * nothing this week * Topics for this Week * Stopping the meeting recordings (Ildiko) - http://lists.starlingx.io/pipermail/starlingx-discuss/2021-June/011702.html * Ildiko will stop recording after today * ARs from Previous Meetings * DockerHub Open Source designation - we have it now (!) http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011754.html * CENGN Backups: Scott/Anthony working with CENGN on this, fallback is to do our own backups if CENGN cost to do them is too high * Open Requests for Help * Discrepancy in number of workers * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011763.html * Danishka to raise a Launchpad to the Docs team for this * Installing packages and making persistent changes in nodes * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011759.html * Danishka & Greg discussed... Danishka to look into containerizing the packages * Build Matters (if required) * nothing besides the comments above -----Original Message----- From: Zvonar, Bill Sent: Wednesday, July 14, 2021 8:38 AM To: StarlingX ML Subject: Community (& TSC) Call (July 14, 2021) Hi all, reminder of the weekly TSC/Community calls coming up later today. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210714T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From Linda.Wang at windriver.com Wed Jul 14 16:33:19 2021 From: Linda.Wang at windriver.com (Linda Wang) Date: Wed, 14 Jul 2021 09:33:19 -0700 Subject: [Starlingx-discuss] Bi-Weekly StarlingX OS Distro & Multi-OS Meeting Minutes: July 7, 2021 Message-ID: <86eb9bfe-4d10-3e3e-0661-aacd7a16272f@windriver.com> 07/07/2021 Agenda items: kernel repo size/clone options alternatives Attendees: Scott Little, Bart Wensley, Jason Norton, Bill Zvonar, Charles Short, Mark Asselstine 1. Kernel Repo Size * Temporary measures to deal with the larger kernel download size seem to be holding (Thanks Scott) * Discussions underway to determine best approach to use as a perm solution * Ideally using a safe transport (https) * Ideally includes git history * Could review again as part of Debian transition (separate repotool manifest) 2. Be aware that parts of BPF and CONFIG_PREEMPT_RT are incompatible * https://lwn.net/Articles/802884/ * STX will have to make a recommendation to users regarding this 3. OS Distro * Work continues and related reviews will be available on Gerrit * Switch to Aptly instead of Pulp mostly done. 
A review of the state of the Aptly project shows that the project is not 'healthy' (https://github.com/aptly-dev/aptly/issues/920), so Aptly is definitely just a temporary move until Pulp has the needed features added * Kernel v5.10 work continues, many reviews now on Gerrit 4. Python Status * Work continues Next Meeting: July 21, 2021 Meeting Minutes: https://etherpad.opendev.org/p/stx-distro-other -------------- next part -------------- An HTML attachment was scrubbed... URL: From Michiel.Seuren at windriver.com Wed Jul 14 17:01:37 2021 From: Michiel.Seuren at windriver.com (Seuren, Michiel) Date: Wed, 14 Jul 2021 17:01:37 +0000 Subject: [Starlingx-discuss] Self nomination for Project Lead Test Message-ID: Hi, I'd like to nominate myself as the Project Lead for Test. I have recently started as the Cloud Platform test automation manager at Wind River. In this capacity I am working closely with Product Verification and other groups. One of my areas of focus is the StarlingX 6.0 feature/regression testing and automation. Looking forward to our collaboration on the StarlingX project. Regards, Michiel -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Wed Jul 14 18:11:14 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 14 Jul 2021 11:11:14 -0700 Subject: [Starlingx-discuss] Etherpad reconnects Message-ID: <1B8341FF-3F18-47D8-B86A-6515E96D7EC8@gmail.com> Hi, I had a chat with the OpenDev team on the #opendev IRC channel[1] about the etherpad challenges a couple of you are facing. I learned about the following things that can cause etherpad to trigger a reconnect: * The supported browsers are Chrome and Firefox! (In other browsers it will face issues that can force you to reconnect the pad frequently) * Having multiple instances open in the same browser (even if it is in separate browser windows!) * Network issues * Etherpad cannot handle network issues and flaky connections; if it gets out of sync it will force you to reconnect * You can try to have the etherpad open in two different browsers and see if they both get disconnected at the same time, which could point to a network issue on your end If you experience an issue despite eliminating all the factors listed above, please reach out to the OpenDev team directly. You can do that on their IRC channel #opendev on OFTC or on their mailing list: service-discuss at lists.opendev.org (subscribe here: http://lists.opendev.org/cgi-bin/mailman/listinfo/service-discuss) The OpenDev team has limited resources, therefore the more information you can provide about the issue, the better they will be able to debug the problem. You can also see if there’s someone around on their IRC channel at the time when your reconnects happen; that way they can help you debug by checking the errors on the server side. It is hard to do retrospectively due to how etherpad creates its log entries. I hope this is helpful. Please let me know if I can help with anything further. 
Thanks, Ildikó [1] https://meetings.opendev.org/irclogs/%23opendev/%23opendev.2021-07-14.log.html#t2021-07-14T16:19:54 From Alexander.Williams at commscope.com Wed Jul 14 20:52:52 2021 From: Alexander.Williams at commscope.com (Williams, Alexander) Date: Wed, 14 Jul 2021 20:52:52 +0000 Subject: [Starlingx-discuss] Modifying host-add for bare metal edge cloud Message-ID: Hi all, I'm looking into modifying the host add endpoint for personal use, and wanted to ask a few clarifying questions: If I understand it correctly, the POST v1/ihosts endpoint is how hosts are added, and corresponds to the post/ _do_post commands in this file: https://github.com/starlingx-staging/stx-config/blob/master/sysinv/sysinv/sysinv/sysinv/api/controllers/v1/host.py Related to this particular file: * When the host is set to action state HAS_REINSTALLING, is this to reinstall the base .iso (unconfigured, but of the correct kind)? If not, what is this responsible for doing? If so, when does this trigger on the added host? * Where can I find this file/other API related files in the filesystem? More generally: * If I'm adding controller-1 on an edge cloud, the pxeboot step handles both unconfigured .iso installation, followed immediately by the bootstrap (using the same overrides as controller-0(?)). Are these two separate in any way, or handled by a single config? Is this different from the central cloud, since the bootstrap override file isn't ever present on an edge cloud machine * Is there a mechanism to avoid pxe-booting while still having a host as part of the inventory (is this just host-add)? * Is there a way to run the bootstrap on the second host without pxebooting/ reinstalling the base .iso in the process? I'm aware that this is not supported/ tested, but any help on any of the above questions would be greatly appreciated. (I am also aware of the INSTALL_UUID potentially presenting an issue, but if any of the above are possible that would be great to know.) Thanks! Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Thu Jul 15 11:04:57 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 15 Jul 2021 11:04:57 +0000 Subject: [Starlingx-discuss] Self nomination for Project Lead Test In-Reply-To: References: Message-ID: +1 From: Seuren, Michiel Sent: Wednesday, July 14, 2021 1:02 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Self nomination for Project Lead Test Hi, I'd like to nominate myself as the Project Lead for Test. I have recently started as the Cloud Platform test automation manager at Wind River. In this capacity I am working closely with Product Verification and other groups. One of my areas of focus is the StarlingX 6.0 feature/regression testing and automation. Looking forward to our collaboration on the StarlingX project. Regards, Michiel -------------- next part -------------- An HTML attachment was scrubbed... URL: From radoslaw.piliszek at gmail.com Thu Jul 15 18:36:12 2021 From: radoslaw.piliszek at gmail.com (=?UTF-8?Q?Rados=C5=82aw_Piliszek?=) Date: Thu, 15 Jul 2021 20:36:12 +0200 Subject: [Starlingx-discuss] [Kolla] [ Kolla-Ansible] Existing Workloads getting Impacted on Upgrade In-Reply-To: References: Message-ID: Hello Anirudh, This is not the correct mailing list for Kolla nor Kolla Ansible. The correct is: openstack-discuss at lists.openstack.org Please retry there. As for your question, the observed behaviour should not be happening. 
Perhaps the VM was under resource pressure and got killed? Anyway, please restart this thread on the proper mailing list. Kind regards, -yoctozepto On Wed, Jul 14, 2021 at 3:31 PM Anirudh Gupta wrote: > > Hi Team, > > We have deployed the Openstack Victoria release using Multinode Kolla Ansible deployment. > There are 2 nodes - one for compute and the other for Controller+Network > > Following the below link, we also upgraded the setup from Victoria to Wallaby > > https://docs.openstack.org/kolla-ansible/latest/user/operating-kolla.html > > During the Upgrade process, we figured out that the VM which was spawned before Upgrade went to shut off during the middle of the upgrade process, and post completion of the upgrade process, we had to manually start the VM again. > > Ideally we were expecting that the existing Workloads would remain unimpacted in this process. > > Can someone please suggest if our observation is correct or any steps that need to be taken in order to avoid this? > > Regards > Anirudh Gupta > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss From Ghada.Khalil at windriver.com Thu Jul 15 19:55:04 2021 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 15 Jul 2021 19:55:04 +0000 Subject: [Starlingx-discuss] Minutes: StarlingX Release Meeting - July 14/2021 Message-ID: Agenda/Minutes are posted at: https://etherpad.openstack.org/p/stx-releases Release Team Meeting - Jul 14 2021 stx.5.0 - stx.5.0 Docs - Complete. The r/stx.5.0 branch has been tagged with the doc content. TBC with Mary Camp once back from vacation stx.6.0 - Milestone-1 is this week - Milestone Criteria - https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Milestones - Release priorities and major features defined. - High level resourcing secured. - Team agreed that the milestone is met. Action: Ghada to send the announcement to the mailing list - Release Planning Spreadsheet: https://docs.google.com/spreadsheets/d/13p0BMlBgJXUVForOFsblAJq9jA1-FMBlmhV5TIc70IE/edit#gid=1107209846 - Most features are resourced and are updated with target dates - New Features - One small enhancement added this week: https://storyboard.openstack.org/#!/story/2009036 - Prep features in support of the Debian OS transition - Target end of July - Verification Plans for the release - Introducing Michiel Seuren - WR Test Manager - Michiel will help coordinate the verification activities for the stx.6.0 release - Should Michiel nominate himself as the new PL for the test team? Bill to verify who the current test PL is (Yang or Nick?) - stx.6.0 Testing Docs Created: https://drive.google.com/drive/folders/1lfm073XaFGxHZSAf0RPS1DGK6QCReO9u - Intel Test Responsibilities - Intel will continue to provide sanity testing - For regression testing, Intel is still working through their plans, including migration from robot to pyTest From Ghada.Khalil at windriver.com Thu Jul 15 19:58:18 2021 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 15 Jul 2021 19:58:18 +0000 Subject: [Starlingx-discuss] stx.6.0 Milestone-1 Declared Message-ID: Hello all, Based on the stx.6.0 release/feature reviews during the StarlingX bi-weekly release planning meeting, we are declaring Milestone-1 with a date of July 12/2021. Milestone-1 Criteria: https://wiki.openstack.org/wiki/StarlingX/Release_Plan#Release_Milestones - Release priorities and major features defined. - High level resourcing secured. 
A list of proposed features is available at: https://docs.google.com/spreadsheets/d/13p0BMlBgJXUVForOFsblAJq9jA1-FMBlmhV5TIc70IE/edit#gid=1107209846 Thank you to all community members who helped achieve this milestone. Regards, Ghada & the stx release team From Ghada.Khalil at windriver.com Thu Jul 15 19:59:25 2021 From: Ghada.Khalil at windriver.com (Khalil, Ghada) Date: Thu, 15 Jul 2021 19:59:25 +0000 Subject: [Starlingx-discuss] Canceled: Bi-weekly StarlingX Release Meeting (new time) Message-ID: Cancelling due to vacation. New meeting series for the StarlingX Release Meeting Bi-weekly meeting on Wednesday 06:30AM PT / 09:30AM ET / 02:30PM UTC Zoom Link: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 Meeting agenda/minutes: https://etherpad.openstack.org/p/stx-releases -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 2445 bytes Desc: not available URL: From nicolae.jascanu at intel.com Fri Jul 16 12:37:26 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Fri, 16 Jul 2021 12:37:26 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210715T013533Z Message-ID: Sanity Test from 2021-July-15 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210715T013533Z/outputs/iso/ ) Status: GREEN Executed on BARE METAL DUPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210715T013533Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolae.jascanu at intel.com Fri Jul 16 12:39:37 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Fri, 16 Jul 2021 12:39:37 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210716T015847Z Message-ID: Sanity Test from 2021-July-16 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210716T015847Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL STANDARD Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210716T015847Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From nicolae.jascanu at intel.com Sat Jul 17 14:29:56 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Sat, 17 Jul 2021 14:29:56 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210717T022719Z Message-ID: Sanity Test from 2021-July-17 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210717T022719Z/outputs/iso/ ) Status: GREEN Executed on VIRTUAL SIMPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210717T022719Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 49 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 61 TCs ] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Mon Jul 19 02:16:16 2021 From: austin.sun at intel.com (Sun, Austin) Date: Mon, 19 Jul 2021 02:16:16 +0000 Subject: [Starlingx-discuss] Cancel StarlingX Distro-OpenStack: Bi-weekly Project Meeting -- 07/20 Message-ID: Hi All: Cancel 07/20 openstack distro meeting due to personal affairs as no new bug and other topic. Thanks. BR Austin Sun -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Mon Jul 19 02:15:26 2021 From: austin.sun at intel.com (Sun, Austin) Date: Mon, 19 Jul 2021 02:15:26 +0000 Subject: [Starlingx-discuss] Canceled: StarlingX Distro-OpenStack: Bi-weekly Project Meeting(Summer Time) Message-ID: Hi folks, This is a new series of bi-weekly project meeting on StarlingX Distro-OpenStack. Your participation to this meeting and/or other offline contribution by all means are highly appreciated! Project Team Etherpad: https://etherpad.openstack.org/p/stx-distro-openstack-meetings The Summer Time Slot for this meeting : CST: 9:00 PM (China, Shanghai ) PST: 7:00 AM (US West , US, Oregon) EST: 9:00 AM (East Canada , Canada Ottawa) Thanks. BR Austin Sun. -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: not available Type: text/calendar Size: 3678 bytes Desc: not available URL: From alexandru.dimofte at intel.com Mon Jul 19 19:28:52 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Mon, 19 Jul 2021 19:28:52 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210717T022719Z Message-ID: Sanity Test from 2021-July-19 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210717T022719Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210717T022719Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 89 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image002.png Type: image/png Size: 5155 bytes Desc: image002.png URL: From ashlee at openstack.org Mon Jul 19 20:46:00 2021 From: ashlee at openstack.org (Ashlee Ferguson) Date: Mon, 19 Jul 2021 15:46:00 -0500 Subject: [Starlingx-discuss] PTG October 2021 Team Signup In-Reply-To: <231B4E30-ABA6-4A04-83E7-CFAECC9985EE@openstack.org> References: <231B4E30-ABA6-4A04-83E7-CFAECC9985EE@openstack.org> Message-ID: <992B0762-600C-4193-84D0-21A9B3282642@openstack.org> Hi everyone, Don't forget to sign your team up for the next Project Teams Gathering (PTG), which will be held virtually from Monday, October 18 to Friday, October 22, 2021! If you haven't already done so, please complete *BOTH the survey[1] AND reserve time in the ethercalc[2] by end of day July 21.* Then make sure to register[3] for the PTG because that's how you'll receive event details, passwords, and other relevant information about the PTG. Thanks! Ashlee [1] Team Survey: https://openinfrafoundation.formstack.com/forms/oct2021_vptg_survey [2] Ethercalc Signup: https://ethercalc.openstack.org/8tum5yl1bx43 [3] PTG Registration: https://openinfra-ptg.eventbrite.com > On Jul 8, 2021, at 10:42 AM, Ashlee Ferguson wrote: > > Hi everyone, > > Last week, we announced the next PTG will held virtually from Monday, October 18 to Friday, October 22, 2021. > > We will have the same schedule set up available as last time with three windows of time spread across the day to cover all timezones with breaks in between. > > To signup your team, you must complete BOTH the survey[1] AND reserve time in the ethercalc[2] by end of day July 21. > > We ask that the PTL/SIG Chair/Team lead sign up for time to have their discussions with 3 rules/guidelines: > > 1. Cross project discussions (like SIGs or support project teams) should be scheduled towards the start of the week so that any discussions that might shape those of other teams happen first. > 2. No team should sign up for more than 4 hours per UTC day to help keep participants actively engaged. > 3. No team should sign up for more than 16 hours across all time slots to avoid burning out our contributors and to enable participation in multiple teams discussions. > > Again, you need to fill out BOTH the ethercalc AND the survey to complete your team's sign up. > > If you have any issues with signing up your team, due to conflict or otherwise, please let me know! While we are trying to empower you to make your own decisions as to when you meet and for how long (after all, you know your needs and teams timezones better than we do), we are here to help! > > Once your team is signed up, please register[3]! And remind your team to register! Registration is free, but it's important that you sign up to let us know you'll be attending because that's how you'll receive event details, passwords, and other relevant information about the PTG. > > Continue to visit openstack.org/ptg for updates. 
> > Ashlee > > > [1] Team Survey: https://openinfrafoundation.formstack.com/forms/oct2021_vptg_survey > [2] Ethercalc Signup: https://ethercalc.openstack.org/8tum5yl1bx43 > [3] PTG Registration: https://openinfra-ptg.eventbrite.com From alexandru.dimofte at intel.com Tue Jul 20 07:58:39 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 20 Jul 2021 07:58:39 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210719T230603Z Message-ID: Sanity Test from 2021-July-20 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210719T230603Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210719T230603Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 90 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From Michiel.Seuren at windriver.com Tue Jul 20 14:31:56 2021 From: Michiel.Seuren at windriver.com (Seuren, Michiel) Date: Tue, 20 Jul 2021 14:31:56 +0000 Subject: [Starlingx-discuss] Weekly StarlingX Test meeting In-Reply-To: References: Message-ID: Hi, I am cancelling the StarlingX Test meeting today (20 July 2021) due to a parallel corporate event. Please let me know if you have any concerns or questions. Thank you, Michiel Seuren StarlingX Project Test Lead @ Wind River -----Original Appointment----- From: Liu, Yang (YOW) Sent: Monday, June 21, 2021 7:29 AM To: Liu, Yang (YOW); Seuren, Michiel; starlingx-discuss at lists.starlingx.io Cc: Jones, Bruce E; Castelino, Jessica; Eslimi, Dariush; Cobbley, David A; Ansari, Sabeel; Trica, Mihail-Laurentiu; Albescu, Cosmin; Jascanu, Nicolae; Waines, Greg; Bittner, Bernd; Stock, Ruediger; Hellmann, Gil; Kopec, Gerald (Gerry); Winnicki, Chris; Lovaszi, Oliver; Young, Ken; Yao, Le; Bailey, Henry Albert (Al); Dale, Kristal; Nehls, Daniel; Kumar, Sharath; Song, Gongjun; Fiuczynski, Marc; Camp, MaryX; Church, Robert; Ramirez Martinez, Mawrer A; Richard, Joseph; Mukherjee, Sanjay K; Zhu, Vivian; Ding, Jian-feng; Dobranici, Emilia; Pestek, Haris; Bujold, Kristine; 张志国; MacDonald, Eric; Arjun Sundararajan; Fisher, Charlie; Little, Scott; Bhat, Gopalkrishna; Pike, Jason; Wold, Saul Subject: [Starlingx-discuss] Weekly StarlingX Test meeting When: Tuesday, July 20, 2021 11:00 AM-12:00 PM (UTC-05:00) Eastern Time (US & Canada). 
Where: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 -----Original Appointment----- From: Liu, Yang (YOW) > Sent: Tuesday, March 30, 2021 9:04 AM To: Liu, Yang (YOW); starlingx-discuss at lists.starlingx.io Cc: Jones, Bruce E; Castelino, Jessica; Eslimi, Dariush; Cobbley, David A; Ansari, Sabeel; Trica, Mihail-Laurentiu; Albescu, Cosmin; Jascanu, Nicolae; Waines, Greg; Bittner, Bernd; Stock, Ruediger; Hellmann, Gil; Kopec, Gerald (Gerry); Winnicki, Chris; Lovaszi, Oliver; Young, Ken; Yao, Le; Bailey, Henry Albert (Al); Dale, Kristal; Nehls, Daniel; Kumar, Sharath; Song, Gongjun; Fiuczynski, Marc; Camp, MaryX; Church, Robert; Ramirez Martinez, Mawrer A; Richard, Joseph; Mukherjee, Sanjay K; Zhu, Vivian; Ding, Jian-feng; Dobranici, Emilia; Pestek, Haris; Bujold, Kristine; 张志国; MacDonald, Eric; Arjun Sundararajan; Fisher, Charlie; Little, Scott; Bhat, Gopalkrishna; Pike, Jason; Wold, Saul Subject: [Starlingx-discuss] Weekly StarlingX Test meeting When: Occurs every Tuesday effective 3/10/2020 from 11:00 AM to 12:00 PM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 8am Pacific - Test Team Call Call details • https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 • Dialing in from phone: o Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 o Meeting ID: 342 730 236 o Passcode: 419405 o International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes • The agenda and notes for each call are here: https://etherpad.openstack.org/p/stx-test • Call recordings: https://wiki.openstack.org/wiki/Starlingx/Meeting_Logs#Test_Team_Call StarlingX Meeting schedules: https://wiki.openstack.org/wiki/Starlingx/Meetings -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Tue Jul 20 15:05:16 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 20 Jul 2021 15:05:16 +0000 Subject: [Starlingx-discuss] Weekly StarlingX Test meeting In-Reply-To: References: Message-ID: Hi, Ok Michiel, it is fine. Anyway we don’t have any bad news. The sanity is GREEN. BR, Alex From: Seuren, Michiel Sent: Tuesday, July 20, 2021 5:32 PM To: starlingx-discuss at lists.starlingx.io Cc: Jascanu, Nicolae ; Dimofte, Alexandru Subject: RE: [Starlingx-discuss] Weekly StarlingX Test meeting Hi, I am cancelling the StarlingX Test meeting today (20 July 2021) due to a parallel corporate event. Please let me know if you have any concerns or questions. 
Thank you, Michiel Seuren StarlingX Project Test Lead @ Wind River -----Original Appointment----- From: Liu, Yang (YOW) Sent: Monday, June 21, 2021 7:29 AM To: Liu, Yang (YOW); Seuren, Michiel; starlingx-discuss at lists.starlingx.io Cc: Jones, Bruce E; Castelino, Jessica; Eslimi, Dariush; Cobbley, David A; Ansari, Sabeel; Trica, Mihail-Laurentiu; Albescu, Cosmin; Jascanu, Nicolae; Waines, Greg; Bittner, Bernd; Stock, Ruediger; Hellmann, Gil; Kopec, Gerald (Gerry); Winnicki, Chris; Lovaszi, Oliver; Young, Ken; Yao, Le; Bailey, Henry Albert (Al); Dale, Kristal; Nehls, Daniel; Kumar, Sharath; Song, Gongjun; Fiuczynski, Marc; Camp, MaryX; Church, Robert; Ramirez Martinez, Mawrer A; Richard, Joseph; Mukherjee, Sanjay K; Zhu, Vivian; Ding, Jian-feng; Dobranici, Emilia; Pestek, Haris; Bujold, Kristine; 张志国; MacDonald, Eric; Arjun Sundararajan; Fisher, Charlie; Little, Scott; Bhat, Gopalkrishna; Pike, Jason; Wold, Saul Subject: [Starlingx-discuss] Weekly StarlingX Test meeting When: Tuesday, July 20, 2021 11:00 AM-12:00 PM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 -----Original Appointment----- From: Liu, Yang (YOW) > Sent: Tuesday, March 30, 2021 9:04 AM To: Liu, Yang (YOW); starlingx-discuss at lists.starlingx.io Cc: Jones, Bruce E; Castelino, Jessica; Eslimi, Dariush; Cobbley, David A; Ansari, Sabeel; Trica, Mihail-Laurentiu; Albescu, Cosmin; Jascanu, Nicolae; Waines, Greg; Bittner, Bernd; Stock, Ruediger; Hellmann, Gil; Kopec, Gerald (Gerry); Winnicki, Chris; Lovaszi, Oliver; Young, Ken; Yao, Le; Bailey, Henry Albert (Al); Dale, Kristal; Nehls, Daniel; Kumar, Sharath; Song, Gongjun; Fiuczynski, Marc; Camp, MaryX; Church, Robert; Ramirez Martinez, Mawrer A; Richard, Joseph; Mukherjee, Sanjay K; Zhu, Vivian; Ding, Jian-feng; Dobranici, Emilia; Pestek, Haris; Bujold, Kristine; 张志国; MacDonald, Eric; Arjun Sundararajan; Fisher, Charlie; Little, Scott; Bhat, Gopalkrishna; Pike, Jason; Wold, Saul Subject: [Starlingx-discuss] Weekly StarlingX Test meeting When: Occurs every Tuesday effective 3/10/2020 from 11:00 AM to 12:00 PM (UTC-05:00) Eastern Time (US & Canada). Where: https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 8am Pacific - Test Team Call Call details · https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 · Dialing in from phone: · Dial(for higher quality, dial a number based on your current location): US: +1 669 900 6833 or +1 646 876 9923 · Meeting ID: 342 730 236 · Passcode: 419405 · International numbers available: https://zoom.us/u/ed95sU7aQ Agenda and meeting minutes · The agenda and notes for each call are here: https://etherpad.openstack.org/p/stx-test · Call recordings: https://wiki.openstack.org/wiki/Starlingx/Meeting_Logs#Test_Team_Call StarlingX Meeting schedules: https://wiki.openstack.org/wiki/Starlingx/Meetings -------------- next part -------------- An HTML attachment was scrubbed... URL: From ildiko.vancsa at gmail.com Tue Jul 20 16:04:59 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 20 Jul 2021 09:04:59 -0700 Subject: [Starlingx-discuss] Stop recording the meetings In-Reply-To: <1A210C2E-5A39-4369-BB25-5AD5CCE45E7F@gmail.com> References: <1A210C2E-5A39-4369-BB25-5AD5CCE45E7F@gmail.com> Message-ID: Hi, I just wanted to give a heads up that based on our former discussions and my earlier emails I now stopped the meeting recordings, starting this Monday Zoom will __not__ automatically record the weekly meetings. 
Thanks, Ildikó > On Jun 30, 2021, at 07:35, Ildiko Vancsa wrote: > > Hi, > > We’ve discussed this briefly on the Community Call today and there were no objections to this approach. > > The plan we outlined is the following: > * Stop recording meetings next Monday (July 5) > * Mark the current recordings page archived and point to the meeting notes > > If you have any comments or objections please reply to this thread or reach out to me __before the end of this week (July 4)__. > > Thanks, > Ildikó > > >> On Jun 10, 2021, at 13:14, Ildiko Vancsa wrote: >> >> Hi, >> >> I’m reaching out to you about the StarlingX meeting recordings. >> >> If you take a look at the meeting wiki[1] you will see that the most recent links to recordings are over a year old. During this time I haven’t received any requests or complaints until very recently. But this recent outreach was also about to check on the recordings in general just to understand if the meetings are still happening or not and not to listen back on either of them. >> >> Following the mailing list you can also see that most teams are posting their meeting logs that are usually on their meeting etherpads which gives everyone a chance to catch up on what was discussed and is a primary way to keep meeting history. >> >> Based on the above I would like to propose to stop recording the meetings. >> >> Please respond to this thread by the end of next week (June 20) if you have any questions or concerns to take into account before taking action. >> >> Thanks and Best Regards, >> Ildikó >> >> [1] https://wiki.openstack.org/wiki/Starlingx/Meetings >> >> > From ildiko.vancsa at gmail.com Tue Jul 20 17:56:15 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Tue, 20 Jul 2021 10:56:15 -0700 Subject: [Starlingx-discuss] Project teams structure Message-ID: <83669306-7933-4D61-BCE2-27D8EF31DD41@gmail.com> Hi, During the last PTG we touched on the topic of maybe re-structuring the project teams compared to how tasks and people are organized currently. For reference you can find the current list of projects under StarlingX here: https://docs.starlingx.io/governance/reference/tsc/projects/index.html Is that structure still accurate? Does the community want to still discuss if we can organize teams in a way that better suits the activities within StarlingX? Thanks, Ildikó From Bill.Zvonar at windriver.com Wed Jul 21 11:04:10 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 21 Jul 2021 11:04:10 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (July 21, 2021) Message-ID: Hi all, reminder of the weekly TSC/Community calls coming up later today. Please feel free to add other items to the agenda [0] for the community call. Bill... 
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210721T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From alexandru.dimofte at intel.com Wed Jul 21 14:03:04 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 21 Jul 2021 14:03:04 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210721T013651Z Message-ID: Sanity Test from 2021-July-21 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210721T013651Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210721T013651Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 71 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 83 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From Bill.Zvonar at windriver.com Wed Jul 21 14:48:47 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 21 Jul 2021 14:48:47 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (July 21, 2021) In-Reply-To: References: Message-ID: >From today's meeting... * Standing Topics * Build/Sanity * sanity all green since last week * per Scott, nothing significant to report this week - some blips w/ container downloads (retries happen later) * Gerrit Reviews in Need of Attention * nothing this week * Topics for this Week * Core Reviewers * cores should check their set of core reviewers & adjust as appropriate - remove those who are no longer active on the project & add new reviewers as required (and follow the governance rules for adding new cores - it's up to the PL/TL and the team to decide) * Project Teams * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011790.html * which projects are active & which are not? * PLs think about which projects to combine or retire * target to close on this by the Sep 8 community call * DockerHub - any issues of late? * haven't heard any issues of late, but Scott's still looking to sort out the admin list - AR: Bill to escalate with DockerHub * ARs from Previous Meetings * nothing to discuss this week * Open Requests for Help * Modifying host-add for bare metal edge cloud * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011773.html * Ram will follow up on this * Build Matters (if required) * nothing to discuss this week -----Original Message----- From: Zvonar, Bill Sent: Wednesday, July 21, 2021 7:04 AM To: StarlingX ML Subject: Community (& TSC) Call (July 21, 2021) Hi all, reminder of the weekly TSC/Community calls coming up later today. Please feel free to add other items to the agenda [0] for the community call. Bill... 
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210721T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From ildiko.vancsa at gmail.com Wed Jul 21 17:50:04 2021 From: ildiko.vancsa at gmail.com (Ildiko Vancsa) Date: Wed, 21 Jul 2021 10:50:04 -0700 Subject: [Starlingx-discuss] Project teams structure In-Reply-To: <83669306-7933-4D61-BCE2-27D8EF31DD41@gmail.com> References: <83669306-7933-4D61-BCE2-27D8EF31DD41@gmail.com> Message-ID: <4F667849-47DA-48BD-83ED-79D19C36D6D9@gmail.com> Hi, As an update we touched on this topic a bit on the community call today and agreed to keep discussing and exploring what would be the best way forward. As a reminder I would just mention that the goal here is to make it easier for people in the community as well as for newcomers to understand how we organize people and distribute work items. This way if someone has a question about an area or would like to get involved they will know which team to look for and who to turn to. The Summer vacation period is on now therefore we don’t plan to make a decision before the end of August. If you have a suggestion, question or opinion please respond to this thread so we can get a better view about what changes to make. Thanks, Ildikó > On Jul 20, 2021, at 10:56, Ildiko Vancsa wrote: > > Hi, > > During the last PTG we touched on the topic of maybe re-structuring the project teams compared to how tasks and people are organized currently. > > For reference you can find the current list of projects under StarlingX here: https://docs.starlingx.io/governance/reference/tsc/projects/index.html > > Is that structure still accurate? Does the community want to still discuss if we can organize teams in a way that better suits the activities within StarlingX? > > Thanks, > Ildikó > > From John.Kung at windriver.com Wed Jul 21 19:02:31 2021 From: John.Kung at windriver.com (Kung, John) Date: Wed, 21 Jul 2021 19:02:31 +0000 Subject: [Starlingx-discuss] Modifying host-add for bare metal edge Message-ID: Alex, host-add will add a host of the specified personality (e.g. controller, worker, experimental edgeworker ...) to the system/cluster. The host may also be detected automatically and added to inventory when it attempts to pxeboot. The added host will attempt to pxeboot the active image, which may be served from the controller; or in the case of edgeworker, the image would need to be already installed. After a host has been added, it is possible to reinstall the host, however, this will lead to the host's disk being wiped. host-reinstall is required only for certain exceptional host configuration changes as specified here: https://docs.starlingx.io/deploy_install_guides/r5_release/bare_metal/reinstalling-a-system-or-a-host.html https://docs.starlingx.io/cli_ref/system.html#host-configuration Thus, 'system host-reinstall' will trigger the reinstall of the host (sets the HAS_REINSTALLING state). Otherwise, a regular host-lock/unlock/reboot will boot from disk the already installed load (from the host-add). 
Experimentally, there is also an 'edgeworker' personality where the host OS is not installed/reinstalled by the controller: https://docs.starlingx.io/deploy/deploy-edgeworker-nodes.html regards, John ----------------------------------------------------------------------------------------- Message: 2 Date: Wed, 14 Jul 2021 20:52:52 +0000 From: "Williams, Alexander" > To: "starlingx-discuss at lists.starlingx.io" > Subject: [Starlingx-discuss] Modifying host-add for bare metal edge cloud Message-ID: > Content-Type: text/plain; charset="utf-8" Hi all, I'm looking into modifying the host add endpoint for personal use, and wanted to ask a few clarifying questions: If I understand it correctly, the POST v1/ihosts endpoint is how hosts are added, and corresponds to the post/ _do_post commands in this file: https://github.com/starlingx-staging/stx-config/blob/master/sysinv/sysinv/sysinv/sysinv/api/controllers/v1/host.py Related to this particular file: * When the host is set to action state HAS_REINSTALLING, is this to reinstall the base .iso (unconfigured, but of the correct kind)? If not, what is this responsible for doing? If so, when does this trigger on the added host? * Where can I find this file/other API related files in the filesystem? JK> This ihost.py file is installed in the filesystem under /usr/lib64/python2.7/site-packages/sysinv/api/controllers/v1/ . More generally: * If I'm adding controller-1 on an edge cloud, the pxeboot step handles both unconfigured .iso installation, followed immediately by the bootstrap (using the same overrides as controller-0(?)). Are these two separate in any way, or handled by a single config? Is this different from the central cloud, since the bootstrap override file isn't ever present on an edge cloud machine JK> Additional configuration would be required for controller-1. E.g. https://docs.starlingx.io/deploy_install_guides/r5_release/virtual/aio_duplex_install_kubernetes.html#install-software-on-controller-1-node * Is there a mechanism to avoid pxe-booting while still having a host as part of the inventory (is this just host-add)? JK> the host-add will typically enable the host for pxeboot based on the mgmt_mac address; unless it is an edgeworker personality. * Is there a way to run the bootstrap on the second host without pxebooting/ reinstalling the base .iso in the process? JK> There is an experimental feature for 'edgeworker' personality, where the OS of the node is not installed by the StarlingX controller; currently supports only Ubuntu: https://docs.starlingx.io/deploy/deploy-edgeworker-nodes.html I'm aware that this is not supported/ tested, but any help on any of the above questions would be greatly appreciated. (I am also aware of the INSTALL_UUID potentially presenting an issue, but if any of the above are possible that would be great to know.) Thanks! Best, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- An HTML attachment was scrubbed... URL: From maryx.camp at intel.com Wed Jul 21 21:15:02 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 21 Jul 2021 21:15:02 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 21-July-21 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.   
[1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings   [2]   Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 21-July-21 All -- reviews merged since last meeting: 18 Status/questions/opens WRO - the team chatted with Greg about WRO features/updates. Decision: WRO reviews will not be cherry picked into R5 but will be done in Master branch. Upcoming WR releases - 3 bug fix releases next week, will need to be cherry picked. Cherry picking needs to be done again before they stack up too high. Add to regular agenda so we can tackle some each week. Notes from Release Team Meeting - Jul 14 2021 - Complete. The r/stx.5.0 branch has been tagged with the doc content. TBC with Mary Camp once back from vacation We think Scott has completed his tasks and there are no more ARs for anyone on this. AR Mary send email to community - asking for R5 release retrospective info (docs specific). Cherry picking We cherry picked 4 reviews to R5 branch. Stopped on this one: https://review.opendev.org/c/starlingx/docs/+/800308 AR Ron to follow up with Greg to confirm R5 files need to be merged into R5 branch (by starting new review). From a2zhariprasad at gmail.com Thu Jul 22 02:21:31 2021 From: a2zhariprasad at gmail.com (Hari Prasad Vendra) Date: Thu, 22 Jul 2021 07:51:31 +0530 Subject: [Starlingx-discuss] Running StralingX Central Controller on AWS/Azure Cloud Message-ID: Is it possible to run StarlingX Central Controller in any other vendor cloud like AWS/Azure to manage StarlingX Edge Clouds ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From Greg.Waines at windriver.com Thu Jul 22 10:57:57 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 22 Jul 2021 10:57:57 +0000 Subject: [Starlingx-discuss] Running StralingX Central Controller on AWS/Azure Cloud In-Reply-To: References: Message-ID: No. But this has been discussed as a relevant use case to consider. Greg. From: Hari Prasad Vendra Sent: Wednesday, July 21, 2021 10:22 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Running StralingX Central Controller on AWS/Azure Cloud [Please note: This e-mail is from an EXTERNAL e-mail address] Is it possible to run StarlingX Central Controller in any other vendor cloud like AWS/Azure to manage StarlingX Edge Clouds ? -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Thu Jul 22 12:18:53 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Thu, 22 Jul 2021 12:18:53 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210722T013405Z Message-ID: Sanity Test from 2021-July-22 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210722T013405Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210722T013405Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 88 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... 
URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From scott.little at windriver.com Thu Jul 22 16:01:01 2021 From: scott.little at windriver.com (Scott Little) Date: Thu, 22 Jul 2021 12:01:01 -0400 Subject: [Starlingx-discuss] linux-yocto failures from 'repo sync' Message-ID: It appears that the upstream linux-yocto git repo has either deleted or renamed the v5.10/standard/intel-x86 branch that we relied upon. I'm trying to work out a manifest update now. Scott From maryx.camp at intel.com Fri Jul 23 00:41:03 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Fri, 23 Jul 2021 00:41:03 +0000 Subject: [Starlingx-discuss] [docs] Requesting feedback on STX Release 5 - Docs only Message-ID: Hello all, The StarlingX Docs team is looking for retrospective feedback from the community for Release 5 Documentation. New for Release 5 is the ability to view R5 documentation, using the Version button on the main docs site: https://docs.starlingx.io/ The direct URL for R5 docs is: https://docs.starlingx.io/r/stx.5.0/ >From the Docs team's point of view, we had new team members, lots of new upstreamed content, and significant website changes to deal with. We have started an internal list of items to address. We'd like to hear from others in the STX community about what went well during R5 docs development, where we can improve docs processes, and other general feedback you may have on StarlingX docs. Thanks in advance, Mary Camp StarlingX Docs Project Lead | maryx.camp at intel.com -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Fri Jul 23 09:01:47 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Fri, 23 Jul 2021 09:01:47 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210722T013405Z Message-ID: Sanity Test from 2021-July-23 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210722T013405Z/outputs/iso/ ) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210722T013405Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment Standard (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 65 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From Alexander.Williams at commscope.com Fri Jul 23 20:39:51 2021 From: Alexander.Williams at commscope.com (Williams, Alexander) Date: Fri, 23 Jul 2021 20:39:51 +0000 Subject: [Starlingx-discuss] Edge Cloud questions Message-ID: Hi all, A few quick questions on Edge Cloud setups, both assuming AIO duplex configuration: 1. If either controller goes out of service (temporary or permanent), would it still be possible to add new hosts to the system (of any workload)? * Would it require a new controller to be brought online? 2. 
On an edge cloud, since an OAM ip range is specified, is it possible to have more than 2 controllers for extra redundancy? For example, having 3 controllers ready, and if one were to go down permanently, all control would be handed over to the other two? Thanks, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Mon Jul 26 07:10:31 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Mon, 26 Jul 2021 07:10:31 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210724T020743Z Message-ID: Sanity Test from 2021-July-24 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210724T020743Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210724T020743Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment Standard External Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 66 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From yue.tao at windriver.com Mon Jul 26 07:33:39 2021 From: yue.tao at windriver.com (ytao) Date: Mon, 26 Jul 2021 15:33:39 +0800 Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder Message-ID: Hello Everyone: You may have known Wind River team is working on the transition to Debian OS, more details of the project can be found at https://docs.starlingx.io/specs/specs/stx-6.0/. My purpose is launching a discussion about the userspace packages transition to Debian. The spec can be found at https://docs.starlingx.io/specs/specs/stx-6.0/approved/starlingx_2008704_debian_transition.html In order to inherit the existing userspace construction as much as possible, our proposal is creating a 'debian' folder in same directory with 'centos' folder for each package. For example, the dhcp package in integ repo, it's centos folder is at: https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/centos We will create a debian folder in same location https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/debian. All materials in this folder control building dhcp under Debian host. The "stx_deb_folder_layout.rst" is the layout of debian folder. I also attach a couple of samples to demonstrate how to fill the debian folder for a debian package and a 3rd package. This layout is not the final version, I'm appreciated for any suggestion from you. Thanks, ytao -------------- next part -------------- A non-text attachment was scrubbed... Name: stx_deb_folder_layout.rst Type: text/x-rst Size: 7028 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: integ-base-libfdt-debian.tar.gz Type: application/gzip Size: 12021 bytes Desc: not available URL: -------------- next part -------------- A non-text attachment was scrubbed... 
Name: integ-base-dhcp-debian.tar.gz Type: application/gzip Size: 1939 bytes Desc: not available URL: From alexandru.dimofte at intel.com Mon Jul 26 11:52:39 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Mon, 26 Jul 2021 11:52:39 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210724T020743Z Message-ID: Sanity Test from 2021-July-26 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210724T020743Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210724T020743Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Virtual Environment Standard - Local Storage (2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 08 TCs [PASS] TOTAL: [ 89 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From openinfradn at gmail.com Mon Jul 26 12:58:27 2021 From: openinfradn at gmail.com (open infra) Date: Mon, 26 Jul 2021 18:28:27 +0530 Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus Message-ID: Hi, After rebooting the entire stx (r5 standard dedicated storage) environment, noticed that OpenStack vm can not start and hypervisor status is down (we have only one worker node). Furthermore, openstack-apply was failed as nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and status is CrashLoopBackOff [2]. Here is the description of the pod [3]. VMs were created using nova-local and mounted a shared volume, which is a ceph based volume. Lock and Unlocking + re-applying of stx-openstack didn't fix the issue. I would like to know any hint or suggestion to fix this issue and avoid similar issue in future. [1] https://paste.opendev.org/show/807707/ [2] https://paste.opendev.org/show/807705/ [3] https://paste.opendev.org/show/807704/ Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From Charles.Short at windriver.com Mon Jul 26 23:05:49 2021 From: Charles.Short at windriver.com (Short, Charles) Date: Mon, 26 Jul 2021 23:05:49 +0000 Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder In-Reply-To: References: Message-ID: Hi, I had a look through the document and it looks okay. I just have a couple of comments/questions about the folder content: * **debver** - This is the package version of the debian package? This is generated from the debian/changelog when the package is built. I guess the problem is that you want to differentiate from a starlingx package and native package? My suggestion would be what the Ubuntu developers do. For example, if we have patches to apply for systemd, the debian version would be 247.3-3, while the StarlingX package would be 247.3-3stx0. Then a user or developer can quickly see that they are working with a native debian package or a starlingx modified package. 
* **deb_patches** - As a developer, if the package has its debian/patches I would rather use the debian/patches directory. If the directory doesnt exist I am more likely to create a debian/patches directory and then send a patch to debian to fix a problem. * **dl_path** - Debian already does this for you already, the debian/watch file keeps track of where to download a tarball, or a zip file, or a python wheel for you. You can use uscan(1), which is a part of devscripts package. Debian/Ubuntu developers have been doing this for years. (https://wiki.debian.org/debian/watch) If you have any questions please let me know. Regards chuck debian/watch - Debian Wiki debian/watch. The file named watch in the debian directory is used to check for newer versions of upstream software is available and to download it if necessary. The download itself will be performed with the uscan program from the devscripts package. It takes the path to the debian directory that uses the watch file as an argument or searches the directories underneath the current working ... wiki.debian.org ________________________________ From: ytao Sent: July 26, 2021 3:33 AM To: starlingx-discuss at lists.starlingx.io ; Asselstine, Mark Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder Hello Everyone: You may have known Wind River team is working on the transition to Debian OS, more details of the project can be found at https://docs.starlingx.io/specs/specs/stx-6.0/. My purpose is launching a discussion about the userspace packages transition to Debian. The spec can be found at https://docs.starlingx.io/specs/specs/stx-6.0/approved/starlingx_2008704_debian_transition.html In order to inherit the existing userspace construction as much as possible, our proposal is creating a 'debian' folder in same directory with 'centos' folder for each package. For example, the dhcp package in integ repo, it's centos folder is at: https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/centos We will create a debian folder in same location https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/debian. All materials in this folder control building dhcp under Debian host. The "stx_deb_folder_layout.rst" is the layout of debian folder. I also attach a couple of samples to demonstrate how to fill the debian folder for a debian package and a 3rd package. This layout is not the final version, I'm appreciated for any suggestion from you. Thanks, ytao -------------- next part -------------- An HTML attachment was scrubbed... URL: From yue.tao at windriver.com Tue Jul 27 02:52:44 2021 From: yue.tao at windriver.com (ytao) Date: Tue, 27 Jul 2021 10:52:44 +0800 Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder In-Reply-To: References: Message-ID: <61fc5f83-49dd-5c31-d9db-5e4d986ab2c6@windriver.com> On 7/27/21 7:05 AM, Short, Charles wrote: > Hi,       First of all, I'm appreciated for your input. > > I had a look through the document and it looks okay. I just have a > couple of comments/questions about the folder content: > > * **debver** - This is the package version of the debian package? > This is generated from the debian/changelog when the package is > built. I guess the problem is that you want to differentiate from > a starlingx package and native package? My suggestion would be > what the Ubuntu developers do. For example, if we have patches to > apply for systemd, the debian version would be 247.3-3, while the > StarlingX package would be 247.3-3stx0. 
Then a user or developer > can quickly see that they are working with a native debian package > or a starlingx modified package. >        For a Debian native package, the *debver* is the debian version. We use it to download the src via "apt-source packagname=debver", for a starlingx package, it is up to the developer. I observed in CentOS version, they added a patch to change the package version, for example integ/base/lighttpd/centos/meta_patches/Update-package-versioning-for-TIS-format.patch, which appends a 'tis' to package version, that requests developer to update the version manually. After transiting to debian, OBS can achieve it automatically. It can append 'tis' or 'stx' to a package, so don't need developers to update it manually. > * **deb_patches** - As a developer, if the package has its > debian/patches I would rather use the debian/patches directory. If > the directory doesnt exist I am more likely to create a > debian/patches directory and then send a patch to debian to fix a > problem. >        Exactly, we should send a fix to debian community and then uprev the package to get the fix, but we also need to consider local changes. The *patches" folder contains the local change for source codes, that will be copied to debian/patches (or create it if doesn't exist), and update the debian/patches/series. The *deb_patches* contains the patches to change debian folder, for example, we want to customize the installation of a package, we need to update override_dh_install stage in debian/rules. The patches will be applied to package/debian folder directly.        The *deb_patches* is similar with centos/meta_patches        The *patches* is similar with centos/patches > * > > > * **dl_path** - Debian already does this for you already, the > debian/watch file keeps track of where to download a tarball, or a > zip file, or a python wheel for you. You can use uscan(1), which > is a part of devscripts package. Debian/Ubuntu developers have > been doing this for years. (https://wiki.debian.org/debian/watch) >          Thanks you for pointing me the debian/watch. I will take some time to learn about debian/watch and go back soon. It looks a debian file for monitoring the upstream version change. I am not sure if it is suitable for a no debian package. In our internal discussion, Davlet told me a special case, that the url of a tar ball likes this https://api.github.com/repos/ceph/jerasure/tarball/96c76b89d661c163f65a014b8042c9354ccf7f31           As Davlet's suggestion, the *dl_path* refers to example file and the script that parses it . I will do assessment for debian/watch. Thanks, ytao > If you have any questions please let me know. > > Regards > chuck > > debian/watch - Debian Wiki > debian/watch. The file named watch in the debian directory is used > to check for newer versions of upstream software is available and > to download it if necessary. The download itself will be performed > with the uscan program from the devscripts package. It takes the > path to the debian directory that uses the watch file as an > argument or searches the directories underneath the current > working ... 
> wiki.debian.org > > > > > > > > ------------------------------------------------------------------------ > *From:* ytao > *Sent:* July 26, 2021 3:33 AM > *To:* starlingx-discuss at lists.starlingx.io > ; Asselstine, Mark > > *Subject:* [Starlingx-discuss] [Debian Build]: Layout of the debian > folder > Hello Everyone: > > You may have known Wind River team is working on the transition to > Debian OS, more details of the project can be found at > https://docs.starlingx.io/specs/specs/stx-6.0/. My purpose is launching > a discussion about the userspace packages transition to Debian. The spec > can be found at > https://docs.starlingx.io/specs/specs/stx-6.0/approved/starlingx_2008704_debian_transition.html > > In order to inherit the existing userspace construction as much as > possible, our proposal is creating a 'debian' folder in same directory > with 'centos' folder for each package. For example, the dhcp package in > integ repo, it's centos folder is at: > > https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/centos > > We will create a debian folder in same location > > https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/debian. > > All materials in this folder control building dhcp under Debian host. > The "stx_deb_folder_layout.rst" is the layout of debian folder. I also > attach a couple of samples to demonstrate how to fill the debian folder > for a debian package and a 3rd package. > > This layout is not the final version, I'm appreciated for any suggestion > from you. > > > Thanks, > > ytao > -------------- next part -------------- An HTML attachment was scrubbed... URL: From alexandru.dimofte at intel.com Tue Jul 27 10:34:21 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Tue, 27 Jul 2021 10:34:21 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210726T230541Z Message-ID: Sanity Test from 2021-July-27 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210726T230541Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210726T230541Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Baremetal Standard External - Dedicated Storage (2+2+2) Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 76 TCs [PASS] Sanity Platform 09 TCs [PASS] TOTAL: [ 90 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From Frank.Miller at windriver.com Tue Jul 27 15:17:51 2021 From: Frank.Miller at windriver.com (Miller, Frank) Date: Tue, 27 Jul 2021 15:17:51 +0000 Subject: [Starlingx-discuss] Merging python3 changes onto master branch Message-ID: StarlingX Cores: A team of developers has been working on the f/centos8 branch to convert StarlingX python code to use python3. We're now at the point where most of those changes are ready to merge onto the master branch. Chuck Short has volunteered to gather the commits and merge them onto master [1] and you will see these commits now up for reviews. 
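(A side note for reviewers: a cheap local smoke check before digging into the diffs is to confirm the converted modules at least byte-compile under python3, for example:

# run from the root of the repo under review; exits non-zero on syntax-level py2 leftovers
python3 -m compileall -q .

This is only a generic sketch and is not a substitute for the sanity and functional testing described below.)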
By tomorrow we expect to be ready for you to review and allow these commits to merge. This email is to let you know of the testing that is being done on these commits so you can have confidence in the changes. * Testing was completed on the f/centos8 branch as much as possible on an AIO-SX system * All the commits [1] were built into a master branch ISO which was brought up in VBox and confirmed the controller went enabled, applications applied, no alarms present * The same ISO was installed on an AIO-SX hardware lab * Sanity is in progress on 2 hardware labs, simplex and duplex, and results expected today * Some functional testing is underway for some domains (eg: SM, FM). If you have any questions let Chuck or I know. We look forward to these commits merging as python3 is a requirement for the project to be able to move to Debian. Frank PL for Build and Containers projects [1] https://review.opendev.org/q/topic:%2522py3%2522+(status:open) -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Tue Jul 27 16:59:35 2021 From: openinfradn at gmail.com (open infra) Date: Tue, 27 Jul 2021 22:29:35 +0530 Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus In-Reply-To: References: Message-ID: Tried to start VMs using the hypervisor but no luck. worker-0:~# virsh start instance-00000005 error: Failed to start domain instance-00000005 error: Secret not found: no secret with matching uuid '457eb676-33da-42ec-9a8c-9293d545c337' worker-0:~# virsh start instance-00000052 error: Failed to start domain instance-00000052 error: Secret not found: no secret with matching uuid '457eb676-33da-42ec-9a8c-9293d545c337' Could not figure what object is having the uuid of 457eb676-33da-42ec-9a8c-9293d545c337. On Mon, Jul 26, 2021 at 6:28 PM open infra wrote: > Hi, > > After rebooting the entire stx (r5 standard dedicated storage) > environment, noticed that OpenStack vm can not start and hypervisor status > is down (we have only one worker node). > > Furthermore, openstack-apply was failed as > nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and status is > CrashLoopBackOff [2]. Here is the description of the pod [3]. > > > VMs were created using nova-local and mounted a shared volume, which is a > ceph based volume. Lock and Unlocking + re-applying of stx-openstack didn't > fix the issue. > > I would like to know any hint or suggestion to fix this issue and avoid > similar issue in future. > > [1] https://paste.opendev.org/show/807707/ > [2] https://paste.opendev.org/show/807705/ > [3] https://paste.opendev.org/show/807704/ > > Regards, > Danishka > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Tue Jul 27 19:26:13 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Tue, 27 Jul 2021 19:26:13 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (July 28, 2021) Message-ID: Hi all, reminder of the weekly TSC/Community calls coming up tomorrow. Please feel free to add other items to the agenda [0] for the community call. Bill... 
[0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210728T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From alexandru.dimofte at intel.com Wed Jul 28 11:31:26 2021 From: alexandru.dimofte at intel.com (Dimofte, Alexandru) Date: Wed, 28 Jul 2021 11:31:26 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210728T020149Z Message-ID: Sanity Test from 2021-July-28 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210728T020149Z/outputs/iso/) Status: GREEN Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210728T020149Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Baremetal AIO - Simplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 71 TCs [PASS] Sanity Platform 07 TCs [PASS] TOTAL: [ 83 TCs ] Kind Regards, Alexandru Dimofte [Logo Description automatically generated] Dimofte Alexandru Software Engineer STARLINGX TEAM Skype no: +40 336403734 Personal Mobile: +40 743167456 alexandru.dimofte at intel.com Intel Romania -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: image001.png Type: image/png Size: 5155 bytes Desc: image001.png URL: From Greg.Waines at windriver.com Wed Jul 28 14:42:34 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Wed, 28 Jul 2021 14:42:34 +0000 Subject: [Starlingx-discuss] Edge Cloud questions In-Reply-To: References: Message-ID: see in-lined below, Greg. From: Williams, Alexander Sent: Friday, July 23, 2021 4:40 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Edge Cloud questions [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, A few quick questions on Edge Cloud setups, both assuming AIO duplex configuration: 1. If either controller goes out of service (temporary or permanent), would it still be possible to add new hosts to the system (of any workload)? [Greg] Yes you can still add new worker/compute nodes if only a single controller is available. * Would it require a new controller to be brought online? 1. On an edge cloud, since an OAM ip range is specified, is it possible to have more than 2 controllers for extra redundancy? For example, having 3 controllers ready, and if one were to go down permanently, all control would be handed over to the other two? [Greg] Nope. Currently the max number of controllers is 2. There have been some discussions about supporting a more traditional 3x node controller cluster. But that is not currently being worked on. Greg. Thanks, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From Bill.Zvonar at windriver.com Wed Jul 28 14:56:26 2021 From: Bill.Zvonar at windriver.com (Zvonar, Bill) Date: Wed, 28 Jul 2021 14:56:26 +0000 Subject: [Starlingx-discuss] Community (& TSC) Call (July 28, 2021) In-Reply-To: References: Message-ID: >From today's call... 
* Standing Topics * Build/Sanity * sanity green since last week * Scott encountered an issue that blocked Jenkins jobs - disk was filling up - has been cleared up now & we're back to good * builds can be slow now re: cloning yocto kernel * a variation of the issue we discussed a few weeks ago * this will be apparent over the next few weeks as the new 5.10 kernel is merged (see below) & should be back to quicker clone times by mid-August * Gerrit Reviews in Need of Attention * nothing this week * Topics for this Week * 5.10 Kernel * it's going to merge very soon, as early as today * Python 3 Changes Coming (Frank) * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011811.html * many reviews https://review.opendev.org/q/topic:%2522py3%2522+(status:open * Layout of Debian Folders (Yue) * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011805.html * open for comments! * Docs Retrospective * Requesting feedback on STX Release 5 - Docs only * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011801.html * community encouraged to provide feedback * ARs from Previous Meetings * DockerHub - still no update on the Admin changes and we have had instances of us hitting the limit, Bill will follow up with DockerHub again * CENGN Backups - Anthony working on it, not concluded yet * Open Requests for Help * Edge Cloud questions * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011803.html * Greg will respond * After lock/unlock, Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus * http://lists.starlingx.io/pipermail/starlingx-discuss/2021-July/011807.html * OpenStack team to respond -----Original Message----- From: Zvonar, Bill Sent: Tuesday, July 27, 2021 3:26 PM To: StarlingX ML Subject: Community (& TSC) Call (July 28, 2021) Hi all, reminder of the weekly TSC/Community calls coming up tomorrow. Please feel free to add other items to the agenda [0] for the community call. Bill... [0] etherpad: https://etherpad.openstack.org/p/stx-status [1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_Steering_Committee_.26_Community_Call [2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210728T1400 [3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09 From scott.little at windriver.com Wed Jul 28 15:41:40 2021 From: scott.little at windriver.com (Scott Little) Date: Wed, 28 Jul 2021 11:41:40 -0400 Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder In-Reply-To: References: Message-ID: <1b8e7404-0aef-cc97-39d5-7f69ebc76f61@windriver.com> debname:    The text is unclear.    I think you are saying that the deb built out of directory "integ/base/dhcp" would get the name "dhcp" by defult.  However the Debian convention is to call this package "isc-dhcp" so we must include a "integ/base/dhcp/debian/debname" file to force the package we build to receive the name "isc-dhcp" deb_folder:    This overrides the upstream folder entirely?  It can't be used to add files or override specific upstream files?  If it does override the entire folder, perhaps we need a warning from the package builder that the override has occurred. src_path:    Should support some common macros rather than relying strictly on relative paths.  This aids in code restructuring, both within and across gits.    e.g. we have macros like... 
PKG_BASE, GIT_BASE, STX_BASE general:    At some point it would be desirable to have a tool to aid in patch creation.  It should use git.    Currently build-pkg --edit tries to do this, but it's not very reliable as rpm's approach to patching fairly free-form and hard to automate.    e.g. stx edit --pkg-type=debian --package=dhcp    = it copies 'debian' to a working directory and does   git init, git add --all, git commit    = It copies tarball source into a second working directory and does   git init, git add --all, git commit    = It applies 'patches' and 'deb_patches' to the correct gits, in the correct order, and creates a git commit for each.    The user can then edit files, cherry-pick a fix from elsewhere, re-order or delete a patch... etc.    Finally the user uses git format-patch to regenerate the patch and deb-patch series ... or perhaps this step is covered by the 'stx' tool ...          stx edit --publish --pkg-type=debian --package=dhcp    One major use of tis is to 'de-fuzz' patches when moving to a new upstream version/tarball On 2021-07-26 3:33 a.m., ytao wrote: > Hello Everyone: > > You may have known Wind River team is working on the transition to > Debian OS, more details of the project can be found at > https://docs.starlingx.io/specs/specs/stx-6.0/. My purpose is > launching a discussion about the userspace packages transition to > Debian. The spec can be found at > https://docs.starlingx.io/specs/specs/stx-6.0/approved/starlingx_2008704_debian_transition.html > > In order to inherit the existing userspace construction as much as > possible, our proposal is creating a 'debian' folder in same directory > with 'centos' folder for each package. For example, the dhcp package > in integ repo, it's centos folder is at: > > https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/centos > > We will create a debian folder in same location > > https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/debian. > > All materials in this folder control building dhcp under Debian host. > The "stx_deb_folder_layout.rst" is the layout of debian folder. I also > attach a couple of samples to demonstrate how to fill the debian > folder for a debian package and a 3rd package. > > This layout is not the final version, I'm appreciated for any > suggestion from you. > > > Thanks, > > ytao > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Charles.Short at windriver.com Wed Jul 28 15:49:02 2021 From: Charles.Short at windriver.com (Short, Charles) Date: Wed, 28 Jul 2021 15:49:02 +0000 Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder In-Reply-To: <1b8e7404-0aef-cc97-39d5-7f69ebc76f61@windriver.com> References: <1b8e7404-0aef-cc97-39d5-7f69ebc76f61@windriver.com> Message-ID: Comments inline From: Scott Little Sent: Wednesday, July 28, 2021 11:42 AM To: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] [Debian Build]: Layout of the debian folder debname: The text is unclear. I think you are saying that the deb built out of directory "integ/base/dhcp" would get the name "dhcp" by defult. 
However the Debian convention is to call this package "isc-dhcp" so we must include a "integ/base/dhcp/debian/debname" file to force the package we build to receive the name "isc-dhcp" deb_folder: This overrides the upstream folder entirely? It can't be used to add files or override specific upstream files? If it does override the entire folder, perhaps we need a warning from the package builder that the override has occurred. src_path: Should support some common macros rather than relying strictly on relative paths. This aids in code restructuring, both within and across gits. e.g. we have macros like... PKG_BASE, GIT_BASE, STX_BASE general: At some point it would be desirable to have a tool to aid in patch creation. It should use git. Currently build-pkg --edit tries to do this, but it's not very reliable as rpm's approach to patching fairly free-form and hard to automate. e.g. stx edit --pkg-type=debian --package=dhcp = it copies 'debian' to a working directory and does git init, git add --all, git commit = It copies tarball source into a second working directory and does git init, git add --all, git commit = It applies 'patches' and 'deb_patches' to the correct gits, in the correct order, and creates a git commit for each. The user can then edit files, cherry-pick a fix from elsewhere, re-order or delete a patch... etc. Finally the user uses git format-patch to regenerate the patch and deb-patch series ... or perhaps this step is covered by the 'stx' tool ... stx edit --publish --pkg-type=debian --package=dhcp One major use of tis is to 'de-fuzz' patches when moving to a new upstream version/tarball We could leverage git-buildpackage to do this for us: https://wiki.debian.org/PackagingWithGit chuck On 2021-07-26 3:33 a.m., ytao wrote: Hello Everyone: You may have known Wind River team is working on the transition to Debian OS, more details of the project can be found at https://docs.starlingx.io/specs/specs/stx-6.0/. My purpose is launching a discussion about the userspace packages transition to Debian. The spec can be found at https://docs.starlingx.io/specs/specs/stx-6.0/approved/starlingx_2008704_debian_transition.html In order to inherit the existing userspace construction as much as possible, our proposal is creating a 'debian' folder in same directory with 'centos' folder for each package. For example, the dhcp package in integ repo, it's centos folder is at: https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/centos We will create a debian folder in same location https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/debian. All materials in this folder control building dhcp under Debian host. The "stx_deb_folder_layout.rst" is the layout of debian folder. I also attach a couple of samples to demonstrate how to fill the debian folder for a debian package and a 3rd package. This layout is not the final version, I'm appreciated for any suggestion from you. Thanks, ytao _______________________________________________ Starlingx-discuss mailing list Starlingx-discuss at lists.starlingx.io http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Alexander.Williams at commscope.com Wed Jul 28 15:51:17 2021 From: Alexander.Williams at commscope.com (Williams, Alexander) Date: Wed, 28 Jul 2021 15:51:17 +0000 Subject: [Starlingx-discuss] Edge Cloud questions In-Reply-To: References: Message-ID: Thanks, Greg. 
One followup question – if a controller were to go down permanently, would it be possible to add a new one by pxebooting from the active one? If so, are there any procedure changes from the standard install? Thanks again for your help Best, Alex From: Waines, Greg Sent: Wednesday, July 28, 2021 10:43 AM To: Williams, Alexander ; starlingx-discuss at lists.starlingx.io Subject: RE: Edge Cloud questions see in-lined below, Greg. From: Williams, Alexander > Sent: Friday, July 23, 2021 4:40 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Edge Cloud questions [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, A few quick questions on Edge Cloud setups, both assuming AIO duplex configuration: 1. If either controller goes out of service (temporary or permanent), would it still be possible to add new hosts to the system (of any workload)? [Greg] Yes you can still add new worker/compute nodes if only a single controller is available. * Would it require a new controller to be brought online? 1. On an edge cloud, since an OAM ip range is specified, is it possible to have more than 2 controllers for extra redundancy? For example, having 3 controllers ready, and if one were to go down permanently, all control would be handed over to the other two? [Greg] Nope. Currently the max number of controllers is 2. There have been some discussions about supporting a more traditional 3x node controller cluster. But that is not currently being worked on. Greg. Thanks, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Wed Jul 28 16:10:37 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 28 Jul 2021 12:10:37 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 1881 - Failure! Message-ID: <640665518.329.1627488641996.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 1881 Status: Failure Timestamp: 20210728T160757Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210728T155708Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210728T155708Z DOCKER_BUILD_ID: jenkins-master-distro-20210728T155708Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210728T155708Z/logs BUILD_IMG: false FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210728T155708Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro BUILD_ISO: false From build.starlingx at gmail.com Wed Jul 28 16:10:44 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 28 Jul 2021 12:10:44 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_distro_master_master - Build # 575 - Failure!
Message-ID: <2050463089.332.1627488645746.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 575 Status: Failure Timestamp: 20210728T155708Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210728T155708Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From maryx.camp at intel.com Wed Jul 28 20:13:52 2021 From: maryx.camp at intel.com (Camp, MaryX) Date: Wed, 28 Jul 2021 20:13:52 +0000 Subject: [Starlingx-discuss] [docs] [meeting] Docs team notes 28-July-21 Message-ID: Hello all, Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.   [1]   Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings   [2]   Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation thanks, Mary Camp ========== 28-July-21 All -- reviews merged since last meeting: 12 Status/questions/opens Lots of reviews coming through quickly for WR releases. Doc retrospective feedback for R5 - announced at today's community call. No feedback received as yet. DEFER 1 week. Launchpad issue Virtual AIO-SX install documentation presents data interface step inside optional storage section [https://bugs.launchpad.net/starlingx/+bug/1932314] DEFER 1 week. Cherry picking One of the reviews we cherry picked last week has failed [ https://review.opendev.org/c/starlingx/docs/+/801587 ] It failed because it references a file that doesn't exist in R5 branch, because a different older review wasn't cherry picked [ https://review.opendev.org/c/starlingx/docs/+/788551 ] I believe we can fix the issue, by first cherry picking 788551 (the old one) and then retrying the one that failed. From austin.sun at intel.com Thu Jul 29 00:55:12 2021 From: austin.sun at intel.com (Sun, Austin) Date: Thu, 29 Jul 2021 00:55:12 +0000 Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus In-Reply-To: References: Message-ID: Hi Danishka: I checked the three pieces log you shared, but it’s hard to find any hint to triage the issue. But most likely some wrong in worker nodes. Would you like report a bug [1] and upload all logs for controller/workers. FYI: One way to collect log is to run “collect –all” from controller-0 which will collect all necessary info from system. [1] https://bugs.launchpad.net/starlingx/+bugs Thanks. BR Austin Sun. From: open infra Sent: Monday, July 26, 2021 8:58 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus Hi, After rebooting the entire stx (r5 standard dedicated storage) environment, noticed that OpenStack vm can not start and hypervisor status is down (we have only one worker node). Furthermore, openstack-apply was failed as nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and status is CrashLoopBackOff [2]. Here is the description of the pod [3]. VMs were created using nova-local and mounted a shared volume, which is a ceph based volume. Lock and Unlocking + re-applying of stx-openstack didn't fix the issue. I would like to know any hint or suggestion to fix this issue and avoid similar issue in future. 
[1] https://paste.opendev.org/show/807707/ [2] https://paste.opendev.org/show/807705/ [3] https://paste.opendev.org/show/807704/ Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Jul 29 01:47:09 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 28 Jul 2021 21:47:09 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer_layered - Build # 1883 - Failure! Message-ID: <318675810.336.1627523230530.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer_layered Build #: 1883 Status: Failure Timestamp: 20210729T014421Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210729T013319Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-distro/20210729T013319Z DOCKER_BUILD_ID: jenkins-master-distro-20210729T013319Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master-distro/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210729T013319Z/logs BUILD_IMG: false FULL_BUILD: false PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/distro/20210729T013319Z/logs MASTER_JOB_NAME: STX_build_layer_distro_master_master LAYER: distro MY_REPO_ROOT: /localdisk/designer/jenkins/master-distro BUILD_ISO: false From build.starlingx at gmail.com Thu Jul 29 01:47:11 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Wed, 28 Jul 2021 21:47:11 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_distro_master_master - Build # 576 - Still Failing! In-Reply-To: <1180739180.330.1627488644128.JavaMail.javamailuser@localhost> References: <1180739180.330.1627488644128.JavaMail.javamailuser@localhost> Message-ID: <771623387.339.1627523232648.JavaMail.javamailuser@localhost> Project: STX_build_layer_distro_master_master Build #: 576 Status: Still Failing Timestamp: 20210729T013319Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/distro/20210729T013319Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From Greg.Waines at windriver.com Thu Jul 29 02:13:57 2021 From: Greg.Waines at windriver.com (Waines, Greg) Date: Thu, 29 Jul 2021 02:13:57 +0000 Subject: [Starlingx-discuss] Edge Cloud questions In-Reply-To: References: Message-ID: Yes ... - if a controller were to go down permanently, you would first delete the old failed controller host and add a new one by pxebooting from the active one. No procedure changes from the standard install. Greg. From: Williams, Alexander Sent: Wednesday, July 28, 2021 11:51 AM To: Waines, Greg Cc: starlingx-discuss at lists.starlingx.io Subject: RE: Edge Cloud questions [Please note: This e-mail is from an EXTERNAL e-mail address] Thanks, Greg. One followup question - if a controller were to go down permanently, would it be possible to add a new one by pxebooting from the active one? If so, are there any procedure changes from the standard install? Thanks again for your help Best, Alex From: Waines, Greg > Sent: Wednesday, July 28, 2021 10:43 AM To: Williams, Alexander >; starlingx-discuss at lists.starlingx.io Subject: RE: Edge Cloud questions see in-lined below, Greg. 
From: Williams, Alexander > Sent: Friday, July 23, 2021 4:40 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Edge Cloud questions [Please note: This e-mail is from an EXTERNAL e-mail address] Hi all, A few quick questions on Edge Cloud setups, both assuming AIO duplex configuration: 1. If either controller goes out of service (temporary or permanent), would it still be possible to add new hosts to the system (of any workload)? [Greg] Yes you can still add new worker/compute nodes if only a single controller is available. * Would it require a new controller to be brought online? 1. On an edge cloud, since an OAM ip range is specified, is it possible to have more than 2 controllers for extra redundancy? For example, having 3 controllers ready, and if one were to go down permanently, all control would be handed over to the other two? [Greg] Nope. Currently the max number of controllers is 2. There have been some discussions about supporting a more traditional 3x node controller cluster. But that is not currently being worked on. Greg. Thanks, Alex -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Thu Jul 29 03:15:59 2021 From: openinfradn at gmail.com (open infra) Date: Thu, 29 Jul 2021 08:45:59 +0530 Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus In-Reply-To: References: Message-ID: Thanks, Austin. I will file a bug. On Thu, Jul 29, 2021 at 6:25 AM Sun, Austin wrote: > Hi Danishka: > > I checked the three pieces log you shared, but it’s hard to find any hint > to triage the issue. > > But most likely some wrong in worker nodes. > > Would you like report a bug [1] and upload all logs for > controller/workers. > > > > FYI: One way to collect log is to run “collect –all” from controller-0 > which will collect all necessary info from system. > > > > [1] https://bugs.launchpad.net/starlingx/+bugs > > > > > > Thanks. > > BR > Austin Sun. > > > > *From:* open infra > *Sent:* Monday, July 26, 2021 8:58 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] Openstack applying failed and > nova-compute-worker-0 pod with CrashLoopBackOff staus > > > > Hi, > > > > After rebooting the entire stx (r5 standard dedicated storage) > environment, noticed that OpenStack vm can not start and hypervisor status > is down (we have only one worker node). > > > > Furthermore, openstack-apply was failed as > nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and status is > CrashLoopBackOff [2]. Here is the description of the pod [3]. > > > > > > VMs were created using nova-local and mounted a shared volume, which is a > ceph based volume. Lock and Unlocking + re-applying of stx-openstack didn't > fix the issue. > > > > I would like to know any hint or suggestion to fix this issue and avoid > similar issue in future. > > > > [1] https://paste.opendev.org/show/807707/ > > [2] https://paste.opendev.org/show/807705/ > > [3] https://paste.opendev.org/show/807704/ > > > > Regards, > > Danishka > -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Jul 29 04:53:49 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 29 Jul 2021 00:53:49 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_pre_installer - Build # 1166 - Failure! 
Message-ID: <485625758.342.1627534430632.JavaMail.javamailuser@localhost> Project: STX_build_pre_installer Build #: 1166 Status: Failure Timestamp: 20210729T044211Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210729T043006Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master/20210729T043006Z DOCKER_BUILD_ID: jenkins-master-20210729T043006Z-builder OS: centos MY_REPO: /localdisk/designer/jenkins/master/cgcs-root PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210729T043006Z/logs PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/monolithic/20210729T043006Z/logs MASTER_JOB_NAME: STX_build_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master From build.starlingx at gmail.com Thu Jul 29 04:53:52 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 29 Jul 2021 00:53:52 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 998 - Failure! Message-ID: <566091548.345.1627534432712.JavaMail.javamailuser@localhost> Project: STX_build_master_master Build #: 998 Status: Failure Timestamp: 20210729T043006Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210729T043006Z/logs -------------------------------------------------------------------------------- Parameters BUILD_CONTAINERS_DEV: false BUILD_CONTAINERS_STABLE: false FORCE_BUILD: true From scott.little at windriver.com Thu Jul 29 13:43:57 2021 From: scott.little at windriver.com (Scott Little) Date: Thu, 29 Jul 2021 09:43:57 -0400 Subject: [Starlingx-discuss] [build-report] master STX_build_master_master - Build # 998 - Failure! In-Reply-To: <566091548.345.1627534432712.JavaMail.javamailuser@localhost> References: <566091548.345.1627534432712.JavaMail.javamailuser@localhost> Message-ID: <8430a12b-6d32-81ec-168b-1cf3c9595bf9@windriver.com> The build failure is attributed to the introduction of the Linux 5.10 kernel.  The failure looks like this ... 01:47:04 b5: ===== Build SRPM for 'kernel' ===== 01:47:04 b5: PKG_BASE=/localdisk/designer/jenkins/master-distro/cgcs-root/stx/kernel/kernel-std 01:47:04 b5: WORK_BASE=/localdisk/loadbuild/jenkins/master-distro/20210729T013319Z/std/inputs/stx/kernel/kernel-std 01:47:04 b5: RPMBUILD_BASE=/localdisk/loadbuild/jenkins/master-distro/20210729T013319Z/std/inputs/stx/kernel/kernel-std/rpmbuild 01:47:04 b5: fatal: Invalid revision range a8808e541750d4ed34105f615e295f6fbd9950fa..HEAD 01:47:04 b5: ERROR: srpm_source_build_data (3534): Failed to calculate GITREVCOUNT 01:47:04 b5: ERROR: build_dir_spec (969): failed to source centos/build_srpm.data 01:47:04 ERROR: reaper (1316): Failed to build src.rpm from source at 'b5' Two additional fixes have been merged that should address all known problems. https://review.opendev.org/c/starlingx/kernel/*/802820 https://review.opendev.org/c/starlingx/kernel/*/802818 Another CENGN build is underway to verify the fixes. 
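For anyone hitting this locally after a 'repo sync', the failing GITREVCOUNT step is essentially a revision count against a fixed base commit recorded in kernel-std/centos/build_srpm.data, roughly along these lines (a simplified sketch of what the build scripts do, not the literal code; the SHA is the one from the log above):

TIS_BASE_SRCREV=a8808e541750d4ed34105f615e295f6fbd9950fa
GITREVCOUNT=$(git rev-list --count ${TIS_BASE_SRCREV}..HEAD)

If the linux-yocto tree pulled in by the manifest no longer contains that base commit, the range cannot be resolved, git aborts with the "Invalid revision range" error shown above, and the SRPM build stops; that is what the two kernel reviews are meant to address.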
Scott On 2021-07-29 12:53 a.m., build.starlingx at gmail.com wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Project: STX_build_master_master > Build #: 998 > Status: Failure > Timestamp: 20210729T043006Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/monolithic/20210729T043006Z/logs > -------------------------------------------------------------------------------- > Parameters > > BUILD_CONTAINERS_DEV: false > BUILD_CONTAINERS_STABLE: false > FORCE_BUILD: true > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From build.starlingx at gmail.com Thu Jul 29 16:44:53 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 29 Jul 2021 12:44:53 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] STX_build_installer_layered - Build # 561 - Failure! Message-ID: <936346063.350.1627577094292.JavaMail.javamailuser@localhost> Project: STX_build_installer_layered Build #: 561 Status: Failure Timestamp: 20210729T164446Z Branch: Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210729T160527Z/logs -------------------------------------------------------------------------------- Parameters MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-flock/20210729T160527Z DOCKER_BUILD_ID: jenkins-master-flock-20210729T160527Z-builder OS: centos PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210729T160527Z/logs MY_REPO: /localdisk/designer/jenkins/master-flock/cgcs-root PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/flock/20210729T160527Z/logs MASTER_JOB_NAME: STX_build_layer_flock_master_master MY_REPO_ROOT: /localdisk/designer/jenkins/master-flock From build.starlingx at gmail.com Thu Jul 29 16:44:55 2021 From: build.starlingx at gmail.com (build.starlingx at gmail.com) Date: Thu, 29 Jul 2021 12:44:55 -0400 (EDT) Subject: [Starlingx-discuss] [build-report] master STX_build_layer_flock_master_master - Build # 562 - Failure! Message-ID: <700444321.353.1627577096356.JavaMail.javamailuser@localhost> Project: STX_build_layer_flock_master_master Build #: 562 Status: Failure Timestamp: 20210729T160527Z Branch: master Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210729T160527Z/logs -------------------------------------------------------------------------------- Parameters FULL_BUILD: false FORCE_BUILD: false From scott.little at windriver.com Thu Jul 29 17:48:45 2021 From: scott.little at windriver.com (Scott Little) Date: Thu, 29 Jul 2021 13:48:45 -0400 Subject: [Starlingx-discuss] [build-report] master STX_build_layer_flock_master_master - Build # 562 - Failure! In-Reply-To: <700444321.353.1627577096356.JavaMail.javamailuser@localhost> References: <700444321.353.1627577096356.JavaMail.javamailuser@localhost> Message-ID: The build has now failed within the build of the installer. Investigating ... 
Scott On 2021-07-29 12:44 p.m., build.starlingx at gmail.com wrote: > [Please note: This e-mail is from an EXTERNAL e-mail address] > > Project: STX_build_layer_flock_master_master > Build #: 562 > Status: Failure > Timestamp: 20210729T160527Z > Branch: master > > Check logs at: > http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210729T160527Z/logs > -------------------------------------------------------------------------------- > Parameters > > FULL_BUILD: false > FORCE_BUILD: false > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Thu Jul 29 18:43:39 2021 From: scott.little at windriver.com (Scott Little) Date: Thu, 29 Jul 2021 14:43:39 -0400 Subject: [Starlingx-discuss] [build-report] master STX_build_layer_flock_master_master - Build # 562 - Failure! In-Reply-To: References: <700444321.353.1627577096356.JavaMail.javamailuser@localhost> Message-ID: Two issues have been identified. Reviews will be posted shortly. Scott On 2021-07-29 1:48 p.m., Scott Little wrote: > The build has now failed within the build of the installer. > > Investigating ... > > Scott > > > On 2021-07-29 12:44 p.m., build.starlingx at gmail.com wrote: >> [Please note: This e-mail is from an EXTERNAL e-mail address] >> >> Project: STX_build_layer_flock_master_master >> Build #: 562 >> Status: Failure >> Timestamp: 20210729T160527Z >> Branch: master >> >> Check logs at: >> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210729T160527Z/logs >> -------------------------------------------------------------------------------- >> Parameters >> >> FULL_BUILD: false >> FORCE_BUILD: false >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Fri Jul 30 00:28:00 2021 From: austin.sun at intel.com (Sun, Austin) Date: Fri, 30 Jul 2021 00:28:00 +0000 Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus In-Reply-To: References: Message-ID: Hi Danishka: Would you like know the bug number once you created ? Thanks. BR Austin Sun. From: open infra Sent: Thursday, July 29, 2021 11:16 AM To: Sun, Austin Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus Thanks, Austin. I will file a bug. On Thu, Jul 29, 2021 at 6:25 AM Sun, Austin > wrote: Hi Danishka: I checked the three pieces log you shared, but it’s hard to find any hint to triage the issue. But most likely some wrong in worker nodes. Would you like report a bug [1] and upload all logs for controller/workers. FYI: One way to collect log is to run “collect –all” from controller-0 which will collect all necessary info from system. [1] https://bugs.launchpad.net/starlingx/+bugs Thanks. BR Austin Sun. 
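For anyone triaging a similar CrashLoopBackOff before filing the bug, a generic first-pass sketch (the pod name is the one from this thread; the "openstack" namespace is an assumption for stx-openstack pods, so adjust to your system):

    # Assumption: stx-openstack pods run in the "openstack" namespace.
    $ kubectl -n openstack get pods -o wide | grep nova-compute
    $ kubectl -n openstack describe pod nova-compute-worker-0-13cc482d-7t9kq
    # Logs from the previous (crashed) container instance; add -c <container>
    # if the pod has more than one container.
    $ kubectl -n openstack logs --previous nova-compute-worker-0-13cc482d-7t9kq
    # Then gather the full bundle for the Launchpad bug, as suggested above:
    $ collect --all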
From: open infra > Sent: Monday, July 26, 2021 8:58 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus Hi, After rebooting the entire stx (r5 standard dedicated storage) environment, noticed that OpenStack vm can not start and hypervisor status is down (we have only one worker node). Furthermore, openstack-apply was failed as nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and status is CrashLoopBackOff [2]. Here is the description of the pod [3]. VMs were created using nova-local and mounted a shared volume, which is a ceph based volume. Lock and Unlocking + re-applying of stx-openstack didn't fix the issue. I would like to know any hint or suggestion to fix this issue and avoid similar issue in future. [1] https://paste.opendev.org/show/807707/ [2] https://paste.opendev.org/show/807705/ [3] https://paste.opendev.org/show/807704/ Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Fri Jul 30 07:45:54 2021 From: openinfradn at gmail.com (open infra) Date: Fri, 30 Jul 2021 13:15:54 +0530 Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus In-Reply-To: References: Message-ID: Hi Austin, Sorry for the delay. Bug ID is 1938508. Is there Google Drive or similar location available to upload the tar file of the 'collect' output? Regards, Danishka On Fri, Jul 30, 2021 at 5:58 AM Sun, Austin wrote: > Hi Danishka: > > Would you like know the bug number once you created ? > > > > Thanks. > > BR > Austin Sun. > > > > *From:* open infra > *Sent:* Thursday, July 29, 2021 11:16 AM > *To:* Sun, Austin > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Openstack applying failed and > nova-compute-worker-0 pod with CrashLoopBackOff staus > > > > Thanks, Austin. I will file a bug. > > > > On Thu, Jul 29, 2021 at 6:25 AM Sun, Austin wrote: > > Hi Danishka: > > I checked the three pieces log you shared, but it’s hard to find any hint > to triage the issue. > > But most likely some wrong in worker nodes. > > Would you like report a bug [1] and upload all logs for > controller/workers. > > > > FYI: One way to collect log is to run “collect –all” from controller-0 > which will collect all necessary info from system. > > > > [1] https://bugs.launchpad.net/starlingx/+bugs > > > > > > Thanks. > > BR > Austin Sun. > > > > *From:* open infra > *Sent:* Monday, July 26, 2021 8:58 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] Openstack applying failed and > nova-compute-worker-0 pod with CrashLoopBackOff staus > > > > Hi, > > > > After rebooting the entire stx (r5 standard dedicated storage) > environment, noticed that OpenStack vm can not start and hypervisor status > is down (we have only one worker node). > > > > Furthermore, openstack-apply was failed as > nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and status is > CrashLoopBackOff [2]. Here is the description of the pod [3]. > > > > > > VMs were created using nova-local and mounted a shared volume, which is a > ceph based volume. Lock and Unlocking + re-applying of stx-openstack didn't > fix the issue. > > > > I would like to know any hint or suggestion to fix this issue and avoid > similar issue in future. 
> > > > [1] https://paste.opendev.org/show/807707/ > > [2] https://paste.opendev.org/show/807705/ > > [3] https://paste.opendev.org/show/807704/ > > > > Regards, > > Danishka > > -------------- next part -------------- An HTML attachment was scrubbed... URL: From austin.sun at intel.com Fri Jul 30 08:00:38 2021 From: austin.sun at intel.com (Sun, Austin) Date: Fri, 30 Jul 2021 08:00:38 +0000 Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus In-Reply-To: References: Message-ID: You can directly Click “Add attachment or patch” in bottom of bug link . From: open infra Sent: Friday, July 30, 2021 3:46 PM To: Sun, Austin Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus Hi Austin, Sorry for the delay. Bug ID is 1938508. Is there Google Drive or similar location available to upload the tar file of the 'collect' output? Regards, Danishka On Fri, Jul 30, 2021 at 5:58 AM Sun, Austin > wrote: Hi Danishka: Would you like know the bug number once you created ? Thanks. BR Austin Sun. From: open infra > Sent: Thursday, July 29, 2021 11:16 AM To: Sun, Austin > Cc: starlingx-discuss at lists.starlingx.io Subject: Re: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus Thanks, Austin. I will file a bug. On Thu, Jul 29, 2021 at 6:25 AM Sun, Austin > wrote: Hi Danishka: I checked the three pieces log you shared, but it’s hard to find any hint to triage the issue. But most likely some wrong in worker nodes. Would you like report a bug [1] and upload all logs for controller/workers. FYI: One way to collect log is to run “collect –all” from controller-0 which will collect all necessary info from system. [1] https://bugs.launchpad.net/starlingx/+bugs Thanks. BR Austin Sun. From: open infra > Sent: Monday, July 26, 2021 8:58 PM To: starlingx-discuss at lists.starlingx.io Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus Hi, After rebooting the entire stx (r5 standard dedicated storage) environment, noticed that OpenStack vm can not start and hypervisor status is down (we have only one worker node). Furthermore, openstack-apply was failed as nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and status is CrashLoopBackOff [2]. Here is the description of the pod [3]. VMs were created using nova-local and mounted a shared volume, which is a ceph based volume. Lock and Unlocking + re-applying of stx-openstack didn't fix the issue. I would like to know any hint or suggestion to fix this issue and avoid similar issue in future. [1] https://paste.opendev.org/show/807707/ [2] https://paste.opendev.org/show/807705/ [3] https://paste.opendev.org/show/807704/ Regards, Danishka -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Fri Jul 30 08:45:09 2021 From: openinfradn at gmail.com (open infra) Date: Fri, 30 Jul 2021 14:15:09 +0530 Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus In-Reply-To: References: Message-ID: Just asked, as the file size is almost 1GB. Let me try. On Fri, Jul 30, 2021 at 1:30 PM Sun, Austin wrote: > You can directly Click “Add attachment or patch > ” in > bottom of bug link . 
> > > > > > *From:* open infra > *Sent:* Friday, July 30, 2021 3:46 PM > *To:* Sun, Austin > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Openstack applying failed and > nova-compute-worker-0 pod with CrashLoopBackOff staus > > > > Hi Austin, > > > > Sorry for the delay. Bug ID is 1938508. > > Is there Google Drive or similar location available to upload the tar file > of the 'collect' output? > > > > Regards, > > Danishka > > > > On Fri, Jul 30, 2021 at 5:58 AM Sun, Austin wrote: > > Hi Danishka: > > Would you like know the bug number once you created ? > > > > Thanks. > > BR > Austin Sun. > > > > *From:* open infra > *Sent:* Thursday, July 29, 2021 11:16 AM > *To:* Sun, Austin > *Cc:* starlingx-discuss at lists.starlingx.io > *Subject:* Re: [Starlingx-discuss] Openstack applying failed and > nova-compute-worker-0 pod with CrashLoopBackOff staus > > > > Thanks, Austin. I will file a bug. > > > > On Thu, Jul 29, 2021 at 6:25 AM Sun, Austin wrote: > > Hi Danishka: > > I checked the three pieces log you shared, but it’s hard to find any hint > to triage the issue. > > But most likely some wrong in worker nodes. > > Would you like report a bug [1] and upload all logs for > controller/workers. > > > > FYI: One way to collect log is to run “collect –all” from controller-0 > which will collect all necessary info from system. > > > > [1] https://bugs.launchpad.net/starlingx/+bugs > > > > > > Thanks. > > BR > Austin Sun. > > > > *From:* open infra > *Sent:* Monday, July 26, 2021 8:58 PM > *To:* starlingx-discuss at lists.starlingx.io > *Subject:* [Starlingx-discuss] Openstack applying failed and > nova-compute-worker-0 pod with CrashLoopBackOff staus > > > > Hi, > > > > After rebooting the entire stx (r5 standard dedicated storage) > environment, noticed that OpenStack vm can not start and hypervisor status > is down (we have only one worker node). > > > > Furthermore, openstack-apply was failed as > nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and status is > CrashLoopBackOff [2]. Here is the description of the pod [3]. > > > > > > VMs were created using nova-local and mounted a shared volume, which is a > ceph based volume. Lock and Unlocking + re-applying of stx-openstack didn't > fix the issue. > > > > I would like to know any hint or suggestion to fix this issue and avoid > similar issue in future. > > > > [1] https://paste.opendev.org/show/807707/ > > [2] https://paste.opendev.org/show/807705/ > > [3] https://paste.opendev.org/show/807704/ > > > > Regards, > > Danishka > -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yue.Tao at windriver.com Fri Jul 30 10:50:16 2021 From: Yue.Tao at windriver.com (Tao, Yue) Date: Fri, 30 Jul 2021 10:50:16 +0000 Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder In-Reply-To: <61fc5f83-49dd-5c31-d9db-5e4d986ab2c6@windriver.com> References: <61fc5f83-49dd-5c31-d9db-5e4d986ab2c6@windriver.com> Message-ID: Hi Charles: I did some investigation of debian/watch file, which is the input file for uscan command. It’s purpose is to detect if upstream has a newer release. I tried the uscan, which is limited to run with a full debian folder, not only a dedbian/watch. I also try to the option “—watchfile” to specify a watch file, but it still checks the debian/changelog file. $uscan –watchfile path/watch uscan die: Are you in the source code tree? Cannot find readable debian/changelog anywhere! 
at And the url in watch file must be a regex expression, for example, Allow http://ftp.isc.org/isc/dhcp/(\d.\d.\d*)/dhcp-(\d.*)\.(?:tgz|tbz2|txz|tar\.(?:gz|bz2|xz)) Not allow http://ftp.isc.org/isc/dhcp/4.4.2/dhcp-4.4.2.tar.gz A developer only care about a special version, but he has to build a regex expression, of cause uscan has –download-version to specify a version, but watch file can’t meet a unregular url, like https://api.github.com/repos/ceph/jerasure/tarball/96c76b89d661c163f65a014b8042c9354ccf7f31, which is a downloading url in Starlingx repositories. I think it’s better not to use the uscan+watch to download 3rd part packages. Thanks, ytao From: Tao, Yue Sent: Tuesday, July 27, 2021 10:53 AM To: Short, Charles ; starlingx-discuss at lists.starlingx.io; Asselstine, Mark ; Bai, Haiqing ; Panech, Davlet Subject: Re: [Starlingx-discuss] [Debian Build]: Layout of the debian folder On 7/27/21 7:05 AM, Short, Charles wrote: Hi, First of all, I'm appreciated for your input. I had a look through the document and it looks okay. I just have a couple of comments/questions about the folder content: * **debver** - This is the package version of the debian package? This is generated from the debian/changelog when the package is built. I guess the problem is that you want to differentiate from a starlingx package and native package? My suggestion would be what the Ubuntu developers do. For example, if we have patches to apply for systemd, the debian version would be 247.3-3, while the StarlingX package would be 247.3-3stx0. Then a user or developer can quickly see that they are working with a native debian package or a starlingx modified package. For a Debian native package, the *debver* is the debian version. We use it to download the src via "apt-source packagname=debver", for a starlingx package, it is up to the developer. I observed in CentOS version, they added a patch to change the package version, for example integ/base/lighttpd/centos/meta_patches/Update-package-versioning-for-TIS-format.patch, which appends a 'tis' to package version, that requests developer to update the version manually. After transiting to debian, OBS can achieve it automatically. It can append 'tis' or 'stx' to a package, so don't need developers to update it manually. * **deb_patches** - As a developer, if the package has its debian/patches I would rather use the debian/patches directory. If the directory doesnt exist I am more likely to create a debian/patches directory and then send a patch to debian to fix a problem. Exactly, we should send a fix to debian community and then uprev the package to get the fix, but we also need to consider local changes. The *patches" folder contains the local change for source codes, that will be copied to debian/patches (or create it if doesn't exist), and update the debian/patches/series. The *deb_patches* contains the patches to change debian folder, for example, we want to customize the installation of a package, we need to update override_dh_install stage in debian/rules. The patches will be applied to package/debian folder directly. The *deb_patches* is similar with centos/meta_patches The *patches* is similar with centos/patches * * **dl_path** - Debian already does this for you already, the debian/watch file keeps track of where to download a tarball, or a zip file, or a python wheel for you. You can use uscan(1), which is a part of devscripts package. Debian/Ubuntu developers have been doing this for years. 
(https://wiki.debian.org/debian/watch) Thanks you for pointing me the debian/watch. I will take some time to learn about debian/watch and go back soon. It looks a debian file for monitoring the upstream version change. I am not sure if it is suitable for a no debian package. In our internal discussion, Davlet told me a special case, that the url of a tar ball likes this https://api.github.com/repos/ceph/jerasure/tarball/96c76b89d661c163f65a014b8042c9354ccf7f31 As Davlet's suggestion, the *dl_path* refers to example file and the script that parses it. I will do assessment for debian/watch. Thanks, ytao If you have any questions please let me know. Regards chuck debian/watch - Debian Wiki debian/watch. The file named watch in the debian directory is used to check for newer versions of upstream software is available and to download it if necessary. The download itself will be performed with the uscan program from the devscripts package. It takes the path to the debian directory that uses the watch file as an argument or searches the directories underneath the current working ... wiki.debian.org ________________________________ From: ytao Sent: July 26, 2021 3:33 AM To: starlingx-discuss at lists.starlingx.io ; Asselstine, Mark Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder Hello Everyone: You may have known Wind River team is working on the transition to Debian OS, more details of the project can be found at https://docs.starlingx.io/specs/specs/stx-6.0/. My purpose is launching a discussion about the userspace packages transition to Debian. The spec can be found at https://docs.starlingx.io/specs/specs/stx-6.0/approved/starlingx_2008704_debian_transition.html In order to inherit the existing userspace construction as much as possible, our proposal is creating a 'debian' folder in same directory with 'centos' folder for each package. For example, the dhcp package in integ repo, it's centos folder is at: https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/centos We will create a debian folder in same location https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/debian. All materials in this folder control building dhcp under Debian host. The "stx_deb_folder_layout.rst" is the layout of debian folder. I also attach a couple of samples to demonstrate how to fill the debian folder for a debian package and a 3rd package. This layout is not the final version, I'm appreciated for any suggestion from you. Thanks, ytao -------------- next part -------------- An HTML attachment was scrubbed... URL: From nicolae.jascanu at intel.com Fri Jul 30 11:26:59 2021 From: nicolae.jascanu at intel.com (Jascanu, Nicolae) Date: Fri, 30 Jul 2021 11:26:59 +0000 Subject: [Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210728T020149Z Message-ID: Sanity Test from 2021-July-28 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210728T020149Z/outputs/iso/bootimage.iso ) Status: GREEN Executed on BARE METAL DUPLEX Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210728T020149Z/outputs/helm-charts/helm-charts-stx-openstack-centos-stable-versioned.tgz =========================================== Sanity Test executed on Bare Metal AIO - Duplex Setup 04 TCs [PASS] Provisioning 01 TCs [PASS] Sanity OpenStack 52 TCs [PASS] Sanity Platform 07 TCs [PASS] Regards, Nicolae Jascanu, Ph.D. Software Engineer INTEL IOTG Galati, Romania -------------- next part -------------- An HTML attachment was scrubbed... 
URL: From Charles.Short at windriver.com Fri Jul 30 13:06:39 2021 From: Charles.Short at windriver.com (Short, Charles) Date: Fri, 30 Jul 2021 13:06:39 +0000 Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder In-Reply-To: References: <61fc5f83-49dd-5c31-d9db-5e4d986ab2c6@windriver.com> Message-ID: Hi, You can specify the version you wish to download, I did the following: * apt-get source isc-dhcp * cd isc-dhcp-4.4.1 * uscan –download-version 4.3.5 * Regards chuck From: Tao, Yue Sent: Friday, July 30, 2021 6:50 AM To: Tao, Yue ; Short, Charles ; starlingx-discuss at lists.starlingx.io; Asselstine, Mark ; Bai, Haiqing ; Panech, Davlet Subject: RE: [Starlingx-discuss] [Debian Build]: Layout of the debian folder Hi Charles: I did some investigation of debian/watch file, which is the input file for uscan command. It’s purpose is to detect if upstream has a newer release. I tried the uscan, which is limited to run with a full debian folder, not only a dedbian/watch. I also try to the option “—watchfile” to specify a watch file, but it still checks the debian/changelog file. $uscan –watchfile path/watch uscan die: Are you in the source code tree? Cannot find readable debian/changelog anywhere! at And the url in watch file must be a regex expression, for example, Allow http://ftp.isc.org/isc/dhcp/(\d.\d.\d*)/dhcp-(\d.*)\.(?:tgz|tbz2|txz|tar\.(?:gz|bz2|xz)) Not allow http://ftp.isc.org/isc/dhcp/4.4.2/dhcp-4.4.2.tar.gz A developer only care about a special version, but he has to build a regex expression, of cause uscan has –download-version to specify a version, but watch file can’t meet a unregular url, like https://api.github.com/repos/ceph/jerasure/tarball/96c76b89d661c163f65a014b8042c9354ccf7f31, which is a downloading url in Starlingx repositories. I think it’s better not to use the uscan+watch to download 3rd part packages. Thanks, ytao From: Tao, Yue > Sent: Tuesday, July 27, 2021 10:53 AM To: Short, Charles >; starlingx-discuss at lists.starlingx.io; Asselstine, Mark >; Bai, Haiqing >; Panech, Davlet > Subject: Re: [Starlingx-discuss] [Debian Build]: Layout of the debian folder On 7/27/21 7:05 AM, Short, Charles wrote: Hi, First of all, I'm appreciated for your input. I had a look through the document and it looks okay. I just have a couple of comments/questions about the folder content: * **debver** - This is the package version of the debian package? This is generated from the debian/changelog when the package is built. I guess the problem is that you want to differentiate from a starlingx package and native package? My suggestion would be what the Ubuntu developers do. For example, if we have patches to apply for systemd, the debian version would be 247.3-3, while the StarlingX package would be 247.3-3stx0. Then a user or developer can quickly see that they are working with a native debian package or a starlingx modified package. For a Debian native package, the *debver* is the debian version. We use it to download the src via "apt-source packagname=debver", for a starlingx package, it is up to the developer. I observed in CentOS version, they added a patch to change the package version, for example integ/base/lighttpd/centos/meta_patches/Update-package-versioning-for-TIS-format.patch, which appends a 'tis' to package version, that requests developer to update the version manually. After transiting to debian, OBS can achieve it automatically. It can append 'tis' or 'stx' to a package, so don't need developers to update it manually. 
* **deb_patches** - As a developer, if the package has its debian/patches I would rather use the debian/patches directory. If the directory doesnt exist I am more likely to create a debian/patches directory and then send a patch to debian to fix a problem. Exactly, we should send a fix to debian community and then uprev the package to get the fix, but we also need to consider local changes. The *patches" folder contains the local change for source codes, that will be copied to debian/patches (or create it if doesn't exist), and update the debian/patches/series. The *deb_patches* contains the patches to change debian folder, for example, we want to customize the installation of a package, we need to update override_dh_install stage in debian/rules. The patches will be applied to package/debian folder directly. The *deb_patches* is similar with centos/meta_patches The *patches* is similar with centos/patches * * **dl_path** - Debian already does this for you already, the debian/watch file keeps track of where to download a tarball, or a zip file, or a python wheel for you. You can use uscan(1), which is a part of devscripts package. Debian/Ubuntu developers have been doing this for years. (https://wiki.debian.org/debian/watch) Thanks you for pointing me the debian/watch. I will take some time to learn about debian/watch and go back soon. It looks a debian file for monitoring the upstream version change. I am not sure if it is suitable for a no debian package. In our internal discussion, Davlet told me a special case, that the url of a tar ball likes this https://api.github.com/repos/ceph/jerasure/tarball/96c76b89d661c163f65a014b8042c9354ccf7f31 As Davlet's suggestion, the *dl_path* refers to example file and the script that parses it. I will do assessment for debian/watch. Thanks, ytao If you have any questions please let me know. Regards chuck debian/watch - Debian Wiki debian/watch. The file named watch in the debian directory is used to check for newer versions of upstream software is available and to download it if necessary. The download itself will be performed with the uscan program from the devscripts package. It takes the path to the debian directory that uses the watch file as an argument or searches the directories underneath the current working ... wiki.debian.org ________________________________ From: ytao Sent: July 26, 2021 3:33 AM To: starlingx-discuss at lists.starlingx.io ; Asselstine, Mark Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder Hello Everyone: You may have known Wind River team is working on the transition to Debian OS, more details of the project can be found at https://docs.starlingx.io/specs/specs/stx-6.0/. My purpose is launching a discussion about the userspace packages transition to Debian. The spec can be found at https://docs.starlingx.io/specs/specs/stx-6.0/approved/starlingx_2008704_debian_transition.html In order to inherit the existing userspace construction as much as possible, our proposal is creating a 'debian' folder in same directory with 'centos' folder for each package. For example, the dhcp package in integ repo, it's centos folder is at: https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/centos We will create a debian folder in same location https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/debian. All materials in this folder control building dhcp under Debian host. The "stx_deb_folder_layout.rst" is the layout of debian folder. 
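A rough sketch of how such a per-package debian folder could look, pieced together only from the items discussed in this thread (debver, dl_path, deb_patches, patches); the attached stx_deb_folder_layout.rst remains the authoritative description, and the patch file names below are placeholders:

    base/dhcp/debian/
        debver                  # Debian source version, fetched with "apt-get source <pkg>=<debver>"
        dl_path                 # download URL (plus md5 checksum) for non-Debian, 3rd-party tarballs
        deb_patches/            # patches applied to the package's debian/ folder (rules, control, ...)
            0001-example.patch  # placeholder name
        patches/                # patches applied to the upstream sources, added to debian/patches/series
            0001-example.patch  # placeholder name

In this sketch, deb_patches plays the role of centos/meta_patches and patches plays the role of centos/patches, as noted above in the thread.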
I also attach a couple of samples to demonstrate how to fill the debian folder for a debian package and a 3rd package. This layout is not the final version, I'm appreciated for any suggestion from you. Thanks, ytao -------------- next part -------------- An HTML attachment was scrubbed... URL: From openinfradn at gmail.com Fri Jul 30 13:17:29 2021 From: openinfradn at gmail.com (open infra) Date: Fri, 30 Jul 2021 18:47:29 +0530 Subject: [Starlingx-discuss] Openstack applying failed and nova-compute-worker-0 pod with CrashLoopBackOff staus In-Reply-To: References: Message-ID: I am unable to upload the file. On Fri, Jul 30, 2021 at 2:15 PM open infra wrote: > Just asked, as the file size is almost 1GB. > Let me try. > > On Fri, Jul 30, 2021 at 1:30 PM Sun, Austin wrote: > >> You can directly Click “Add attachment or patch >> ” in >> bottom of bug link . >> >> >> >> >> >> *From:* open infra >> *Sent:* Friday, July 30, 2021 3:46 PM >> *To:* Sun, Austin >> *Cc:* starlingx-discuss at lists.starlingx.io >> *Subject:* Re: [Starlingx-discuss] Openstack applying failed and >> nova-compute-worker-0 pod with CrashLoopBackOff staus >> >> >> >> Hi Austin, >> >> >> >> Sorry for the delay. Bug ID is 1938508. >> >> Is there Google Drive or similar location available to upload the tar >> file of the 'collect' output? >> >> >> >> Regards, >> >> Danishka >> >> >> >> On Fri, Jul 30, 2021 at 5:58 AM Sun, Austin wrote: >> >> Hi Danishka: >> >> Would you like know the bug number once you created ? >> >> >> >> Thanks. >> >> BR >> Austin Sun. >> >> >> >> *From:* open infra >> *Sent:* Thursday, July 29, 2021 11:16 AM >> *To:* Sun, Austin >> *Cc:* starlingx-discuss at lists.starlingx.io >> *Subject:* Re: [Starlingx-discuss] Openstack applying failed and >> nova-compute-worker-0 pod with CrashLoopBackOff staus >> >> >> >> Thanks, Austin. I will file a bug. >> >> >> >> On Thu, Jul 29, 2021 at 6:25 AM Sun, Austin wrote: >> >> Hi Danishka: >> >> I checked the three pieces log you shared, but it’s hard to find any hint >> to triage the issue. >> >> But most likely some wrong in worker nodes. >> >> Would you like report a bug [1] and upload all logs for >> controller/workers. >> >> >> >> FYI: One way to collect log is to run “collect –all” from controller-0 >> which will collect all necessary info from system. >> >> >> >> [1] https://bugs.launchpad.net/starlingx/+bugs >> >> >> >> >> >> Thanks. >> >> BR >> Austin Sun. >> >> >> >> *From:* open infra >> *Sent:* Monday, July 26, 2021 8:58 PM >> *To:* starlingx-discuss at lists.starlingx.io >> *Subject:* [Starlingx-discuss] Openstack applying failed and >> nova-compute-worker-0 pod with CrashLoopBackOff staus >> >> >> >> Hi, >> >> >> >> After rebooting the entire stx (r5 standard dedicated storage) >> environment, noticed that OpenStack vm can not start and hypervisor status >> is down (we have only one worker node). >> >> >> >> Furthermore, openstack-apply was failed as >> nova-compute-worker-0-13cc482d-7t9kq pod was not ready [1] and status is >> CrashLoopBackOff [2]. Here is the description of the pod [3]. >> >> >> >> >> >> VMs were created using nova-local and mounted a shared volume, which is a >> ceph based volume. Lock and Unlocking + re-applying of stx-openstack didn't >> fix the issue. >> >> >> >> I would like to know any hint or suggestion to fix this issue and avoid >> similar issue in future. 
>> >> >> >> [1] https://paste.opendev.org/show/807707/ >> >> [2] https://paste.opendev.org/show/807705/ >> >> [3] https://paste.opendev.org/show/807704/ >> >> >> >> Regards, >> >> Danishka >> > -------------- next part -------------- An HTML attachment was scrubbed... URL: From scott.little at windriver.com Fri Jul 30 14:00:36 2021 From: scott.little at windriver.com (Scott Little) Date: Fri, 30 Jul 2021 10:00:36 -0400 Subject: [Starlingx-discuss] [build-report] master STX_build_layer_flock_master_master - Build # 562 - Failure! In-Reply-To: References: <700444321.353.1627577096356.JavaMail.javamailuser@localhost> Message-ID: <4d8f9d6b-4f56-68ad-2fd2-e817f250cbbe@windriver.com> Two more fixes merged.    https://review.opendev.org/c/starlingx/root/+/800989    https://review.opendev.org/c/starlingx/root/+/800990 CENGN builds have all passed. Scott On 2021-07-29 2:43 p.m., Scott Little wrote: > Two issues have been identified. > > Reviews will be posted shortly. > > Scott > > > > On 2021-07-29 1:48 p.m., Scott Little wrote: >> The build has now failed within the build of the installer. >> >> Investigating ... >> >> Scott >> >> >> On 2021-07-29 12:44 p.m., build.starlingx at gmail.com wrote: >>> [Please note: This e-mail is from an EXTERNAL e-mail address] >>> >>> Project: STX_build_layer_flock_master_master >>> Build #: 562 >>> Status: Failure >>> Timestamp: 20210729T160527Z >>> Branch: master >>> >>> Check logs at: >>> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210729T160527Z/logs >>> -------------------------------------------------------------------------------- >>> Parameters >>> >>> FULL_BUILD: false >>> FORCE_BUILD: false >>> >>> _______________________________________________ >>> Starlingx-discuss mailing list >>> Starlingx-discuss at lists.starlingx.io >>> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss >> >> >> >> _______________________________________________ >> Starlingx-discuss mailing list >> Starlingx-discuss at lists.starlingx.io >> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss > > > > _______________________________________________ > Starlingx-discuss mailing list > Starlingx-discuss at lists.starlingx.io > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss -------------- next part -------------- An HTML attachment was scrubbed... URL: From Yue.Tao at windriver.com Sat Jul 31 14:04:24 2021 From: Yue.Tao at windriver.com (Tao, Yue) Date: Sat, 31 Jul 2021 14:04:24 +0000 Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder In-Reply-To: References: <61fc5f83-49dd-5c31-d9db-5e4d986ab2c6@windriver.com> Message-ID: Hi Exactly, --download-version specify a version to download, but the restriction of uscan is the url in debian/watch must be a regex expression $cat isc-dhcp-4.4.1/debian/watch version=3 opts="uversionmangle=s/(rc|a|b|c)/~$1/,pgpsigurlmangle=s/$/.sha512.asc/" \ http://ftp.isc.org/isc/dhcp/(\d.\d.\d*)/dhcp-(\d.*)\.(?:tgz|tbz2|txz|tar\.(?:gz|bz2|xz)) As I mentioned before, some urls of 3rd part packages can’t be converted to a regex expression, so I think uscan may not suitable for downloading 3rd part packages. But uscan is a very powerful tool, probably we use it in other places. FYI the special urls. 
http://git.yoctoproject.org/cgit/cgit.cgi/linux-yocto/snapshot/linux-yocto-b44437fb32fe50b5664afd12098d928e1aaee111.tar.bz2 https://api.github.com/repos/ceph/jerasure/tarball/96c76b89d661c163f65a014b8042c9354ccf7f31 BTW, I attach a revision layout, which updates md5 checksum in dl_patch. Thanks, Ytao From: Short, Charles Sent: Friday, July 30, 2021 9:07 PM To: Tao, Yue ; starlingx-discuss at lists.starlingx.io; Asselstine, Mark ; Bai, Haiqing ; Panech, Davlet Subject: RE: [Starlingx-discuss] [Debian Build]: Layout of the debian folder Hi, You can specify the version you wish to download, I did the following: * apt-get source isc-dhcp * cd isc-dhcp-4.4.1 * uscan –download-version 4.3.5 * Regards chuck From: Tao, Yue > Sent: Friday, July 30, 2021 6:50 AM To: Tao, Yue >; Short, Charles >; starlingx-discuss at lists.starlingx.io; Asselstine, Mark >; Bai, Haiqing >; Panech, Davlet > Subject: RE: [Starlingx-discuss] [Debian Build]: Layout of the debian folder Hi Charles: I did some investigation of debian/watch file, which is the input file for uscan command. It’s purpose is to detect if upstream has a newer release. I tried the uscan, which is limited to run with a full debian folder, not only a dedbian/watch. I also try to the option “—watchfile” to specify a watch file, but it still checks the debian/changelog file. $uscan –watchfile path/watch uscan die: Are you in the source code tree? Cannot find readable debian/changelog anywhere! at And the url in watch file must be a regex expression, for example, Allow http://ftp.isc.org/isc/dhcp/(\d.\d.\d*)/dhcp-(\d.*)\.(?:tgz|tbz2|txz|tar\.(?:gz|bz2|xz)) Not allow http://ftp.isc.org/isc/dhcp/4.4.2/dhcp-4.4.2.tar.gz A developer only care about a special version, but he has to build a regex expression, of cause uscan has –download-version to specify a version, but watch file can’t meet a unregular url, like https://api.github.com/repos/ceph/jerasure/tarball/96c76b89d661c163f65a014b8042c9354ccf7f31, which is a downloading url in Starlingx repositories. I think it’s better not to use the uscan+watch to download 3rd part packages. Thanks, ytao From: Tao, Yue > Sent: Tuesday, July 27, 2021 10:53 AM To: Short, Charles >; starlingx-discuss at lists.starlingx.io; Asselstine, Mark >; Bai, Haiqing >; Panech, Davlet > Subject: Re: [Starlingx-discuss] [Debian Build]: Layout of the debian folder On 7/27/21 7:05 AM, Short, Charles wrote: Hi, First of all, I'm appreciated for your input. I had a look through the document and it looks okay. I just have a couple of comments/questions about the folder content: * **debver** - This is the package version of the debian package? This is generated from the debian/changelog when the package is built. I guess the problem is that you want to differentiate from a starlingx package and native package? My suggestion would be what the Ubuntu developers do. For example, if we have patches to apply for systemd, the debian version would be 247.3-3, while the StarlingX package would be 247.3-3stx0. Then a user or developer can quickly see that they are working with a native debian package or a starlingx modified package. For a Debian native package, the *debver* is the debian version. We use it to download the src via "apt-source packagname=debver", for a starlingx package, it is up to the developer. 
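To make the debver-driven download just described concrete, a small example (the version string is illustrative rather than the real file content, and deb-src entries must be enabled in the apt sources):

    # Hypothetical debver value, for illustration only.
    $ cat base/dhcp/debian/debver
    4.4.1-2.3
    # apt-get source accepts package=version to fetch exactly that Debian source revision.
    $ apt-get source isc-dhcp=4.4.1-2.3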
I observed in CentOS version, they added a patch to change the package version, for example integ/base/lighttpd/centos/meta_patches/Update-package-versioning-for-TIS-format.patch, which appends a 'tis' to package version, that requests developer to update the version manually. After transiting to debian, OBS can achieve it automatically. It can append 'tis' or 'stx' to a package, so don't need developers to update it manually. * **deb_patches** - As a developer, if the package has its debian/patches I would rather use the debian/patches directory. If the directory doesnt exist I am more likely to create a debian/patches directory and then send a patch to debian to fix a problem. Exactly, we should send a fix to debian community and then uprev the package to get the fix, but we also need to consider local changes. The *patches" folder contains the local change for source codes, that will be copied to debian/patches (or create it if doesn't exist), and update the debian/patches/series. The *deb_patches* contains the patches to change debian folder, for example, we want to customize the installation of a package, we need to update override_dh_install stage in debian/rules. The patches will be applied to package/debian folder directly. The *deb_patches* is similar with centos/meta_patches The *patches* is similar with centos/patches * * **dl_path** - Debian already does this for you already, the debian/watch file keeps track of where to download a tarball, or a zip file, or a python wheel for you. You can use uscan(1), which is a part of devscripts package. Debian/Ubuntu developers have been doing this for years. (https://wiki.debian.org/debian/watch) Thanks you for pointing me the debian/watch. I will take some time to learn about debian/watch and go back soon. It looks a debian file for monitoring the upstream version change. I am not sure if it is suitable for a no debian package. In our internal discussion, Davlet told me a special case, that the url of a tar ball likes this https://api.github.com/repos/ceph/jerasure/tarball/96c76b89d661c163f65a014b8042c9354ccf7f31 As Davlet's suggestion, the *dl_path* refers to example file and the script that parses it. I will do assessment for debian/watch. Thanks, ytao If you have any questions please let me know. Regards chuck debian/watch - Debian Wiki debian/watch. The file named watch in the debian directory is used to check for newer versions of upstream software is available and to download it if necessary. The download itself will be performed with the uscan program from the devscripts package. It takes the path to the debian directory that uses the watch file as an argument or searches the directories underneath the current working ... wiki.debian.org ________________________________ From: ytao Sent: July 26, 2021 3:33 AM To: starlingx-discuss at lists.starlingx.io ; Asselstine, Mark Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder Hello Everyone: You may have known Wind River team is working on the transition to Debian OS, more details of the project can be found at https://docs.starlingx.io/specs/specs/stx-6.0/. My purpose is launching a discussion about the userspace packages transition to Debian. The spec can be found at https://docs.starlingx.io/specs/specs/stx-6.0/approved/starlingx_2008704_debian_transition.html In order to inherit the existing userspace construction as much as possible, our proposal is creating a 'debian' folder in same directory with 'centos' folder for each package. 
For example, the dhcp package in integ repo, it's centos folder is at: https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/centos We will create a debian folder in same location https://opendev.org/starlingx/integ/src/branch/master/base/dhcp/debian. All materials in this folder control building dhcp under Debian host. The "stx_deb_folder_layout.rst" is the layout of debian folder. I also attach a couple of samples to demonstrate how to fill the debian folder for a debian package and a 3rd package. This layout is not the final version, I'm appreciated for any suggestion from you. Thanks, ytao -------------- next part -------------- An HTML attachment was scrubbed... URL: -------------- next part -------------- A non-text attachment was scrubbed... Name: stx_deb_folder_layout.rst Type: application/octet-stream Size: 7122 bytes Desc: stx_deb_folder_layout.rst URL: From fungi at yuggoth.org Sat Jul 31 14:35:35 2021 From: fungi at yuggoth.org (Jeremy Stanley) Date: Sat, 31 Jul 2021 14:35:35 +0000 Subject: [Starlingx-discuss] [Debian Build]: Layout of the debian folder In-Reply-To: References: <61fc5f83-49dd-5c31-d9db-5e4d986ab2c6@windriver.com> Message-ID: <20210731143533.qzvfcuxihurqbkyk@yuggoth.org> On 2021-07-31 14:04:24 +0000 (+0000), Tao, Yue wrote: [...] > As I mentioned before, some urls of 3rd part packages can’t be > converted to a regex expression, so I think uscan may not suitable > for downloading 3rd part packages. But uscan is a very powerful > tool, probably we use it in other places. > > FYI the special urls. > http://git.yoctoproject.org/cgit/cgit.cgi/linux-yocto/snapshot/linux-yocto-b44437fb32fe50b5664afd12098d928e1aaee111.tar.bz2 > https://api.github.com/repos/ceph/jerasure/tarball/96c76b89d661c163f65a014b8042c9354ccf7f31 [...] I recommend reviewing the (very extensive) uscan manpage as the tool has many options for working around such cases as well as convenience modes for things like directly accessing Git repositories or interfacing with popular but known-problematic sites (SourceForge, GitHub, PyPI, NPM, Google Code). Check the examples section for some watchfile options which may be exactly like what you need for these cases. -- Jeremy Stanley -------------- next part -------------- A non-text attachment was scrubbed... Name: signature.asc Type: application/pgp-signature Size: 963 bytes Desc: not available URL: From Linda.Wang at windriver.com Sat Jul 31 01:40:03 2021 From: Linda.Wang at windriver.com (Linda Wang) Date: Fri, 30 Jul 2021 18:40:03 -0700 Subject: [Starlingx-discuss] Bi-Weekly StarlingX OS Distro & Multi-OS Meeting Minutes: July 21, 2021 Message-ID: 07/21/2021 Agenda items: Attendees: Charles Short,Steve Geary, Linda Wang, Mark Asselstine, Scott Little, Jason Norton, Bill Zvonar, Davlet P., Frank Miller 1. 
OS Distro (Mark)
   * Continue to get patch reviews via Gerrit: 2 reviews left
     o Specific to the 5.10 kernel work
     o 100% transferable to Debian packaging
   * Container docker files and the build container are out for review
     o Get them built on CENGN so that developers don't need to build them themselves and can pull them from Docker Hub
     o Then we can circle back to get these pushed into the repository
   * Individual components are going well
     o We have shared the ISO image
     o We have it applied and available for LAT
     o Main focus now is to get these components to work well together, not individually
   * Debian equivalent of the CentOS build folder
     o 3 ways to use the Debian build folders
     o Had code review on those build folders
     o Get the build folders sent out for Xts environment to review
       + AI: get Yue Tao to send that out (cc Scott and Charles)
   * Python Status
     o Lots of python scripts have been merged into Fedora head's branch
     o Backporting python3 changes to master
2. Multi-OS
   * N/A
-------------- next part --------------
An HTML attachment was scrubbed...
URL: