Starlingx-discuss

starlingx-discuss@lists.starlingx.io

April 2021

  • 36 participants
  • 164 discussions
[Starlingx-discuss] [build-report] STX_build_lst_audit - Build # 1817 - Failure!
by build.starlingx@gmail.com 30 Apr '21

Project: STX_build_lst_audit
Build #: 1817
Status: Failure
Timestamp: 20210430T181013Z
Branch:
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20…
--------------------------------------------------------------------------------
Parameters

MY_WORKSPACE: /localdisk/loadbuild/jenkins/master-compiler/20210430T180309Z
DOCKER_BUILD_ID: jenkins-master-compiler-20210430T180309Z-builder
OS: centos
MY_REPO: /localdisk/designer/jenkins/master-compiler/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/compiler/20…
PUBLISH_LOGS_BASE: /export/mirror/starlingx/master/centos/compiler/20210430T180309Z/logs
MASTER_JOB_NAME: STX_build_layer_compiler_master_master
LAYER: compiler
MY_REPO_ROOT: /localdisk/designer/jenkins/master-compiler
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/master/centos/compiler
[Starlingx-discuss] LP1918420 issue debug and fix solution proposal
by Chen, Haochuan Z 30 Apr '21

Hi Bob and Qian bin,

I have checked this issue; here is the failure analysis. You can download the log file to check:
https://bugs.launchpad.net/starlingx/+bug/1918420
Log file: https://bugs.launchpad.net/starlingx/+bug/1918420/+attachment/5493137/+file…

In the above log file, at 14:19 controller-0 rebooted; at that time controller-0 was active. Then controller-1 began to swact to active. From controller-1's sm.log file, sm on controller-1 enabled services and launched the enabled-service audit, but the mgr-restful-plugin audit failed.

2021-04-28T15:19:50.000 controller-1 sm: debug time[993.980] log<816> INFO: sm[91418]: sm_node_swact_monitor.cpp(37): Swact update: controller-1 is now active
2021-04-28T15:19:50.000 controller-1 sm: debug time[993.980] log<817> INFO: sm[91418]: sm_node_swact_monitor.cpp(57): Swact has completed successfully.
2021-04-28T15:19:50.000 controller-1 sm: debug time[993.980] log<818> INFO: sm[91418]: sm_service_group_fsm.c(904): Service group (vim-services) was in the go-active state and is now in the active state.
...
2021-04-28T15:19:51.000 controller-1 sm: debug time[994.849] log<820> INFO: sm[91418]: sm_main_event_handler.c(170): Set node (controller-1) requested, action=2, admin_state=unlocked, oper_state=enabled, avail_status=available, seqno=1.
2021-04-28T15:19:51.000 controller-1 sm: debug time[995.073] log<821> INFO: sm[91418]: sm_main_event_handler.c(170): Set node (controller-0) requested, action=2, admin_state=unlocked, oper_state=enabled, avail_status=available, seqno=2.
2021-04-28T15:19:51.000 controller-1 sm: debug time[995.329] log<822> INFO: sm[91418]: sm_service_audit.c(176): Action (audit-enabled) timeout with result (failed), state (unknown), status (unknown), and condition (unknown) for service (mgr-restful-plugin), reason_text=, exit_code=-65534.
2021-04-28T15:19:51.000 controller-1 sm: debug time[995.329] log<823> INFO: sm[91418]: sm_service_action.c(345): Aborting service (mgr-restful-plugin) with kill signal, pid=527592.
2021-04-28T15:19:51.000 controller-1 sm: debug time[995.329] log<824> INFO: sm[91418]: sm_service_audit.c(75): Max retires not met for action (audit-enabled) of service (mgr-restful-plugin), attempts=1.
2021-04-28T15:19:51.000 controller-1 sm: debug time[995.330] log<825> ERROR: sm[91418]: sm_service_audit.c(227): Failed to query service based on pid (527592), error=NOT_FOUND.

From controller-1's mgr-restful-plugin.log, at 15:19:51, just after the swact, "ceph mgr services list -format json" still returned the link "https://controller-0:7999" (it takes a while before it returns "controller-1:7999"). As controller-0 was rebooting at that time, mgr-restful-plugin's ping to the ceph mgr service did not work, which made the sm audit fail.

2021-04-28 15:19:47,994 99448 WARNING mgr-restful-plugin REST API ping failed: reason=HTTPSConnectionPool(host='controller-0', port=7999): Max retries exceeded with url: / (Caused by ConnectTimeoutError(<urllib3.connection.VerifiedHTTPSConnection object at 0x7f4d0cd23e90>, 'Connection to controller-0 timed out. (connect timeout=15)'))
2021-04-28 15:19:47,994 99448 INFO mgr-restful-plugin REST API ping failure count=0

After the audit failed twice, sm disabled all services and tried to swact back to controller-0, but as controller-0 had still not recovered, sm kept looping, enabling and then disabling services.

2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1678> INFO: sm[91418]: sm_service_domain_filter.c(338): Uncontrolled swact start
2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1679> INFO: sm[91418]: sm_node_swact_monitor.cpp(29): Swact has started, host will be standby
2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1680> INFO: sm[91418]: sm_service_domain_filter.c(338): Uncontrolled swact start
2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1681> INFO: sm[91418]: sm_node_swact_monitor.cpp(29): Swact has started, host will be standby
2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1682> INFO: sm[91418]: sm_service_domain_filter.c(338): Uncontrolled swact start
2021-04-28T15:22:30.000 controller-1 sm: debug time[1153.993] log<1683> INFO: sm[91418]: sm_node_swact_monitor.cpp(29): Swact has started, host will be standby

In this short period the user sent a request to sysinv-conductor to remove a system application; as the cluster-host-ip service had been disabled by sm, the application removal failed.

sysinv 2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes [-] Exception generating bootstrap token: CalledProcessError: Command '['kubeadm', '--kubeconfig=/etc/kubernetes/admin.conf', 'config', 'view']' returned non-zero exit status 1
sysinv 2021-04-28 15:35:21.663 1072608 WARNING sysinv.conductor.kube_app [-] Failed to clear Armada locks.: MaxRetryError: HTTPSConnectionPool(host='192.168.206.1', port=6443): Max retries exceeded with url: /apis/armada.process/v1/namespaces/kube-system/locks/locks.armada.process.lock (Caused by NewConnectionError('<urllib3.connection.VerifiedHTTPSConnection object at 0x7fb256c28650>: Failed to establish a new connection: [Errno 111] ECONNREFUSED',))
2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes Traceback (most recent call last):
2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes   File "/usr/lib64/python2.7/site-packages/sysinv/puppet/kubernetes.py", line 168, in _get_kubernetes_join_cmd
2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes     subprocess.check_call(cmd, stdout=f)  # pylint: disable=not-callable
2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes   File "/usr/lib64/python2.7/subprocess.py", line 542, in check_call
2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes     raise CalledProcessError(retcode, cmd)
2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes CalledProcessError: Command '['kubeadm', '--kubeconfig=/etc/kubernetes/admin.conf', 'config', 'view']' returned non-zero exit status 1
2021-04-28 15:35:27.492 1072608 ERROR sysinv.puppet.kubernetes

In mgr-restful-plugin.py, the timer that checks whether the ceph-mgr service is ready counts to 5 with a 15 s timeout; on failure it waits 3 s and checks again. In sm, the mgr-restful-plugin audit allows 2 attempts with a 15 s timeout, which is shorter than the plugin's own recovery window. I propose changing the sm audit timeout to 20 s (longer than 15 s + 3 s) and the number of attempts to 3.

INSERT INTO "SERVICE_ACTIONS" VALUES('mgr-restful-plugin','audit-enabled','lsb-script','','mgr-restful-plugin','status','',2,2,2,15,40);

My patch, for your review: https://review.opendev.org/c/starlingx/ha/+/788897

Thanks!

Martin, Chen
IOTG, Software Engineer
021-61164330
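A minimal sketch in Python (not part of the original mail or patch; the constants are simply the timings quoted in the analysis above) of why the current 15 s audit window loses the race against the plugin's 15 s + 3 s recovery loop, while the proposed 20 s window does not:

# Timings quoted in the analysis above (assumed here for illustration only).
PING_TIMEOUT = 15      # mgr-restful-plugin REST API ping timeout, seconds
RETRY_WAIT = 3         # plugin waits this long before re-checking ceph-mgr
worst_case_recovery = PING_TIMEOUT + RETRY_WAIT   # 18 s until the plugin can answer again

CURRENT_AUDIT_TIMEOUT = 15    # sm audit-enabled timeout today
PROPOSED_AUDIT_TIMEOUT = 20   # proposed value, chosen to exceed 15 s + 3 s

assert CURRENT_AUDIT_TIMEOUT < worst_case_recovery    # audit gives up before the plugin can recover
assert PROPOSED_AUDIT_TIMEOUT > worst_case_recovery   # proposed window leaves ~2 s of headroom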
[Starlingx-discuss] [build-report] r/stx.5.0 STX_5.0_build_layer_containers - Build # 8 - Still Failing!
by build.starlingx@gmail.com 29 Apr '21

Project: STX_5.0_build_layer_containers
Build #: 8
Status: Still Failing
Timestamp: 20210430T000932Z
Branch: r/stx.5.0
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/5.0/centos/containers/…
--------------------------------------------------------------------------------
Parameters

BUILD_CONTAINERS_DEV: false
BUILD_CONTAINERS_STABLE: true
FORCE_BUILD: false
[Starlingx-discuss] [stable] [build-report] r/stx.5.0 STX_build_docker_images_layered - Build # 116 - Still Failing!
by build.starlingx@gmail.com 29 Apr '21

Project: STX_build_docker_images_layered
Build #: 116
Status: Still Failing
Timestamp: 20210430T002152Z
Branch: r/stx.5.0
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/5.0/centos/containers/…
--------------------------------------------------------------------------------
Parameters

BRANCH: r/stx.5.0
MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-5.0-containers/20210430T000932Z
OS: centos
MUNGED_BRANCH: rc-5.0
MY_REPO: /localdisk/designer/jenkins/rc-5.0-containers/cgcs-root
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/5.0/centos/containers/…
MASTER_BUILD_NUMBER: 8
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/5.0/centos/containers/20210430T000932Z/logs
MASTER_JOB_NAME: STX_5.0_build_layer_containers
MY_REPO_ROOT: /localdisk/designer/jenkins/rc-5.0-containers
PUBLISH_DISTRO_BASE: /export/mirror/starlingx/rc/5.0/centos/containers
PUBLISH_TIMESTAMP: 20210430T000932Z
DOCKER_BUILD_ID: jenkins-rc-5.0-containers-20210430T000932Z-builder
TIMESTAMP: 20210430T000932Z
OS_VERSION: 7.5.1804
BUILD_STREAM: stable
PUBLISH_INPUTS_BASE: /export/mirror/starlingx/rc/5.0/centos/containers/20210430T000932Z/inputs
LAYER: containers
PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/rc/5.0/centos/containers/20210430T000932Z/outputs
[Starlingx-discuss] [stable] [build-report] STX_build_docker_flock_images - Build # 367 - Still Failing!
by build.starlingx@gmail.com 29 Apr '21

Project: STX_build_docker_flock_images
Build #: 367
Status: Still Failing
Timestamp: 20210430T004955Z
Branch:
Check logs at: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/5.0/centos/containers/…
--------------------------------------------------------------------------------
Parameters

WEB_HOST: mirror.starlingx.cengn.ca
MY_WORKSPACE: /localdisk/loadbuild/jenkins/rc-5.0-containers/20210430T000932Z
OS: centos
MY_REPO: /localdisk/designer/jenkins/rc-5.0-containers/cgcs-root
BASE_VERSION: rc-5.0-stable-20210430T000932Z
PUBLISH_LOGS_URL: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/5.0/centos/containers/…
REGISTRY_USERID: slittlewrs
LATEST_PREFIX: rc-5.0
PUBLISH_LOGS_BASE: /export/mirror/starlingx/rc/5.0/centos/containers/20210430T000932Z/logs
PUBLISH_TIMESTAMP: 20210430T000932Z
FLOCK_VERSION: rc-5.0-centos-stable-20210430T000932Z
WEB_HOST_PORT: 80
PREFIX: rc-5.0
TIMESTAMP: 20210430T000932Z
BUILD_STREAM: stable
REGISTRY_ORG: starlingx
PUBLISH_OUTPUTS_BASE: /export/mirror/starlingx/rc/5.0/centos/containers/20210430T000932Z/outputs
REGISTRY: docker.io
[Starlingx-discuss] [docs] [meeting] Docs team notes 28-Apr-21
by Camp, MaryX 29 Apr '21

Hello all,

Here are this week's docs team meeting minutes (short form). Details in [2]. Join us if you have interest in StarlingX docs! We meet on Wednesdays 12:30 PST.
  [1] Call logistics: https://wiki.openstack.org/wiki/Starlingx/Meetings
  [2] Tracking Etherpad: https://etherpad.openstack.org/p/stx-documentation
thanks,
Mary Camp

==========
28-Apr-21

All -- reviews merged since last meeting: 6
All -- bug status -- 17 total. The team agrees to defer all low-priority LPs until the upstreaming effort is completed.

Status/questions/opens

PTG feedback - there was good discussion about StarlingX docs on the first day of the PTG. After R5, we should take a look at the doc set as a whole: organization, contributor guide updates, and other things. We need to do a pass through all the upstreamed content - is it organized in a logical way, especially for new users? Where to go after the first install? A new question on the mailing list raises a similar point: http://lists.starlingx.io/pipermail/starlingx-discuss/2021-April/011286.html What happened to the sample application guide from the older Wind River downstream docs - could we bring that back?

Release 5 preparation - May 5th is the release target. In the release meeting we clarified that the STX docs are not frozen or included in the software release. They are tagged/branched at the same time but released completely separately. [This answers Ghada's questions about cherry-picking.]

Cherry pick decisions - we agreed that the person who submitted the review should be the one to set the 'cherry pick' option, add reviewers, then submit like a typical review. Cores need to watch for cherry picks coming through so they can be approved quickly.

Version drop-down update - Kevin is working with Jeremy from OpenStack to update the Zuul promotion jobs to make this work. Stay tuned for more updates.
[Starlingx-discuss] StarlingX Kubernetes Certification - wrong category?
by Radosław Piliszek 29 Apr '21

Hi All,

I have noticed that StarlingX is, at [1], categorised as a *Hosted* platform. I am pretty sure this is wrong and it should be *Distribution* instead (like Airship is).

[1] https://www.cncf.io/certification/software-conformance/

Kind regards,
Radosław Piliszek (yoctozepto)
[Starlingx-discuss] Sanity RC 5.0 Test LAYERED build ISO 20210428T230321Z
by Dimofte, Alexandru 29 Apr '21

Sanity Test from 2021-April-28 (http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/5.0/centos/flock/20210…)

Status: YELLOW

The Setup and Provision are PASSING. This might be related to: https://bugs.launchpad.net/starlingx/+bug/1918420

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/rc/5.0/centos/flock/20210…

Regards,
Alexandru Dimofte

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte@intel.com
Intel Romania
[Starlingx-discuss] Sanity RC 5.0 Test LAYERED build ISO 20210429T040444Z
by Dimofte, Alexandru 29 Apr '21

Sanity Test from 2021-April-29 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210…)

Status: RED

This image is affected by: https://bugs.launchpad.net/starlingx/+bug/1926029 - Compute-0 install failure: Configuration failure, threshold reached, Lock/Unlock to retry

Critical alarm Id: 200.004 - compute-0 experienced a service-affecting failure. Auto-recovery in progress. Manual Lock and Unlock may be required if auto-recovery is unsuccessful.
Critical alarm Id: 200.011 - compute-0 experienced a configuration failure.

OBS: This issue is observed only on the MASTER branch (RC 5.0 is not affected).

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210…

Regards,
Alexandru Dimofte

Dimofte Alexandru
Software Engineer
STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte@intel.com
Intel Romania
[Starlingx-discuss] virtlogd: End of file while reading data: Input/output error
by Longqian Zhao 29 Apr '21

Hi,

Thank you very much for your help. I followed https://docs.starlingx.io/deploy_install_guides/r4_release/virtual/controll…. After I run `bash setup_configuration.sh -c controllerstorage -i ./bootimage.iso`, I see the error message below via the `sudo systemctl status virtlogd` command. Could you please help me? Thanks.

Error message: "virtlogd: End of file while reading data: Input/output error"

=====================================
My Environment:
1. I installed StarlingX in a virtual environment.
2. StarlingX version: R4.0
3. stx-tools: stx.4.0