Starlingx-discuss

starlingx-discuss@lists.starlingx.io

May 2021

  • 25 participants
  • 144 discussions
[Starlingx-discuss] patch review to enable bandit for project utilities and ansible-playbooks
by Chen, Haochuan Z 26 May '21

Hi,

I added bandit code scanning for the utilities and ansible-playbooks projects:

https://review.opendev.org/c/starlingx/utilities/+/793256
https://review.opendev.org/c/starlingx/ansible-playbooks/+/793258

BR!
Martin, Chen
IOTG, Software Engineer
021-61164330
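
For anyone who wants to try the scan locally before reviewing, a minimal sketch; the checkout and flags below are illustrative, and the authoritative tox/Zuul wiring is whatever the two reviews actually add:

    # Run bandit by hand against one of the repos under review.
    pip install bandit
    git clone https://opendev.org/starlingx/utilities
    cd utilities
    # -r: recurse into the tree; -ll: report only medium severity and above
    bandit -r . -ll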
[Starlingx-discuss] OpenDev IRC services are moving to OFTC this weekend
by Jeremy Stanley 26 May '21

As a majority of our constituent projects have voiced a preference for enacting our long-standing evacuation plan, the OpenDev Collaboratory's IRC service bots will be switching from Freenode to the OFTC network this weekend (May 29-30, 2021). We understand this is short notice, but multiple projects have requested that we act quickly. Please expect some gaps in channel logging and notifications from our various bots over the course of the weekend.

I have provided a much more detailed writeup to the service-discuss mailing list, and encourage anyone with questions to read it and follow up there if needed. Subsequent updates will be sent only to service-discuss, in order to limit noise for individual project lists and keep further discussion focused in one place as much as possible: http://lists.opendev.org/pipermail/service-discuss/2021-May/000249.html

-- Jeremy Stanley
[Starlingx-discuss] Sanity Master Test LAYERED build ISO 20210526T013449Z
by Dimofte, Alexandru 26 May '21

Sanity Test from 2021-May-26 (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210…)

Status: GREEN

Helm-Chart used: http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20210…

OBS: Today LP1918420 was not seen on any configuration, bare metal or virtual, using this master image, so I am sending this report GREEN. LP1918420 is sporadic, so the coming days will show whether the issue has disappeared or will be observed again.

===========================================
Sanity Test executed on Bare Metal

AIO - Simplex
  Setup             04 TCs [PASS]
  Provisioning      01 TCs [PASS]
  Sanity OpenStack  49 TCs [PASS]
  Sanity Platform   07 TCs [PASS]
  TOTAL: [ 61 TCs ]

AIO - Duplex
  Setup             04 TCs [PASS]
  Provisioning      01 TCs [PASS]
  Sanity OpenStack  52 TCs [PASS]
  Sanity Platform   07 TCs [PASS]
  TOTAL: [ 64 TCs ]

Standard - Local Storage (2+2)
  Setup             04 TCs [PASS]
  Provisioning      01 TCs [PASS]
  Sanity OpenStack  52 TCs [PASS]
  Sanity Platform   08 TCs [PASS]
  TOTAL: [ 65 TCs ]

Standard External - Dedicated Storage (2+2+2)
  Setup             04 TCs [PASS]
  Provisioning      01 TCs [PASS]
  Sanity OpenStack  52 TCs [PASS]
  Sanity Platform   09 TCs [PASS]
  TOTAL: [ 66 TCs ]

===========================================
Sanity Test executed on Virtual Environment

AIO - Simplex
  Setup             04 TCs [PASS]
  Provisioning      01 TCs [PASS]
  Sanity OpenStack  49 TCs [PASS]
  Sanity Platform   07 TCs [PASS]
  TOTAL: [ 61 TCs ]

AIO - Duplex
  Setup             04 TCs [PASS]
  Provisioning      01 TCs [PASS]
  Sanity OpenStack  52 TCs [PASS]
  Sanity Platform   07 TCs [PASS]
  TOTAL: [ 64 TCs ]

Standard (2+2)
  Setup             04 TCs [PASS]
  Provisioning      01 TCs [PASS]
  Sanity OpenStack  52 TCs [PASS]
  Sanity Platform   08 TCs [PASS]
  TOTAL: [ 65 TCs ]

Standard External Storage (2+2+2)
  Setup             04 TCs [PASS]
  Provisioning      01 TCs [PASS]
  Sanity OpenStack  52 TCs [PASS]
  Sanity Platform   09 TCs [PASS]
  TOTAL: [ 66 TCs ]

Kind Regards,

Dimofte Alexandru
Software Engineer, STARLINGX TEAM
Skype no: +40 336403734
Personal Mobile: +40 743167456
alexandru.dimofte(a)intel.com
Intel Romania
[Starlingx-discuss] [Distributed Edge cloud] Edge cloud: kubelet taking 100% CPU in edge cloud
by Dharwadkar, Sriram 26 May '21

Hi Team,

We have deployed Distributed StarlingX on one of our systems (with the 20.06 load). Below is the version:

OS="centos"
SW_VERSION="20.06"
BUILD_TARGET="Host Installer"
BUILD_TYPE="Formal"
BUILD_ID="r/stx.4.0"
JOB="STX_4.0_build_layer_flock"
BUILD_BY="starlingx.build(a)cengn.ca"
BUILD_NUMBER="22"
BUILD_HOST="starlingx_mirror"
BUILD_DATE="2020-08-05 12:25:52 +0000"

The edge cloud is running in AIO-Duplex low latency mode on 2 controllers with 24 cores each. When we deploy CPU-intensive pods attached to SR-IOV VFs, we observe an issue where kubelet takes 100% of a CPU. Details are captured below.

controller-1:/proc/2455541/fd# ps -ealf | grep -i kubelet
4 S root 2453203 1 39 80 0 - 265440 - 12:47 ? 00:31:18 /usr/bin/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --kubeconfig=/etc/kubernetes/kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=/var/run/containerd/containerd.sock --cni-bin-dir=/usr/libexec/cni --node-ip=10.222.35.3 --volume-plugin-dir=/usr/libexec/kubernetes/kubelet-plugins/volume/exec/ --feature-gates TopologyManager=true --cpu-manager-policy=static --topology-manager-policy=single-numa-node --system-reserved=memory=7000Mi --reserved-cpus=0-2 --container-runtime=remote --container-runtime-endpoint=unix:///var/run/containerd/containerd.sock
0 S root 3554919 2551522 0 80 0 - 28181 - 14:06 pts/2 00:00:00 grep --color=auto -i kubelet

controller-1:/proc/2455541/fd# ps -eLP | grep 2453203
2453203 2453203 0 ? 00:00:00 kubelet
2453203 2453206 2 ? 00:00:19 kubelet
2453203 2453207 1 ? 00:00:00 kubelet
2453203 2453208 1 ? 00:00:12 kubelet
2453203 2453209 0 ? 00:00:00 kubelet
2453203 2453210 1 ? 00:00:00 kubelet
2453203 2453211 1 ? 00:00:00 kubelet
2453203 2453215 2 ? 00:00:00 kubelet
2453203 2453218 2 ? 00:00:00 kubelet
2453203 2453254 1 ? 00:00:00 kubelet
2453203 2453255 1 ? 00:00:00 kubelet
2453203 2453287 0 ? 00:00:00 kubelet
2453203 2453290 1 ? 00:00:19 kubelet
2453203 2455519 1 ? 00:00:17 kubelet
2453203 2455540 2 ? 00:00:14 kubelet
2453203 2455541 0 ? 00:28:24 kubelet
.....

There are around 31 kubelet threads running. One of the threads (2455541) is taking 100% CPU. Running strace on kubelet shows the output below; it is continuously trying to read from fd 32:

controller-1:/proc/2455541/fd# strace -p 2455541
strace: Process 2455541 attached
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
read(32, 0xc002125900, 4608) = ? ERESTARTNOINTR (To be restarted)
(the same read repeats indefinitely)

controller-1:/proc/2455541# cd fd
controller-1:/proc/2455541/fd# ls
0 1 10 11 12 13 14 15 16 17 18 19 2 20 21 22 23 24 25 26 27 29 3 30 31 32 34 4 5 6 7 8 9

fd 32 resolves to a calico virtual interface. Kubelet spins on this file continuously, which drives CPU usage to 100%:

controller-1:/proc/2455541/fd# ls -l 32
lr-x------ 1 root root 64 May 21 13:44 32 -> /sys/devices/virtual/net/cali0db93ba21d0/speed

Within a few minutes we can no longer SSH to controller-1; on the console, top shows kubelet taking 100%.

Any clues on this issue would be of great help; let me know if any further information is required.

Regards,
Sriram
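
For readers triaging a similar symptom, the inspection above condenses into a short shell recipe. The TID (2455541), fd number (32) and calico interface name are from this report and will differ on other systems:

    # Watch per-thread CPU for kubelet (-H lists threads, not processes).
    top -H -p "$(pidof kubelet)"

    # For the hot thread, see the syscall it is stuck in, then resolve the
    # file descriptor it keeps re-reading.
    strace -p 2455541
    ls -l /proc/2455541/fd/32

    # Virtual interfaces (e.g. calico veths) often cannot report a link
    # speed, which is the sysfs attribute being read here.
    cat /sys/devices/virtual/net/cali0db93ba21d0/speed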
[Starlingx-discuss] Issue - Host Switchover failure. [732]
by Rai, Ankush 26 May '21

Hi,

We are seeing a node switchover failure.

Issue: The "Swact Host" action on the Central Cloud active controller is failing. The StarlingX GUI shows no failure, but after progressing for a while it returns to the old state, i.e. the same controller node remains Active.

Steps to reproduce:
(1) Under the Hosts tab for the active controller, click on the Actions dropdown and select "Swact Host".
(2) Verify the Controller-0 and Controller-1 personality in the Host Inventory section.

Expected result:
(1) At step 5, the swact host action should be successful.
(2) At step 6, whichever controller was active before should now be displayed as Standby.

Note: StarlingX GUI issue screenshots are attached.

From sysinv.log:

sysinv 2021-03-29 12:35:48.468 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 1. delta_handle ['action']
sysinv 2021-03-29 12:35:48.850 29411 INFO sysinv.api.controllers.v1.rest_api [-] PATCH cmd:http://controller-1:7777/v1/servicenode/controller-1 hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:{"origin": "sysinv", "action": "swact-pre-check", "admin": "unknown", "oper": "unknown", "avail": ""}
sysinv 2021-03-29 12:35:48.940 29411 INFO sysinv.api.controllers.v1.rest_api [-] Response={u'origin': u'sm', u'oper': u'unknown', u'admin': u'unknown', u'hostname': u'controller-1', u'avail': u'', u'error_details': None, u'action': u'swact-pre-check', u'error_code': u'0'}
sysinv 2021-03-29 12:35:48.942 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 Action staged: swact
sysinv 2021-03-29 12:35:48.942 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 post action_stage hostupdate action=swact notify_vim=False notify_mtc=True skip_notify_mtce=False
sysinv 2021-03-29 12:35:48.942 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 2. delta_handle ['action']
sysinv 2021-03-29 12:35:48.943 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 apply ihost_val {'task': u'Swacting'}
sysinv 2021-03-29 12:35:48.957 29411 INFO sysinv.api.controllers.v1.host [-] controller-1 Action swact perform notify_mtce cmd:http://localhost:2112/v1/hosts/6cf35736-20dd-4921-a5ed-faebbfa036b4 hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:{"tboot": "false", "ttys_dcd": null, "subfunctions": "controller,worker,lowlatency", "bm_ip": null, "install_state": "completed+", "rootfs_device": "/dev/sda", "bm_username": null, "clock_synchronization": "ntp", "operation": "modify", "serialid": null, "id": 2, "console": "ttyS0,115200", "uuid": "6cf35736-20dd-4921-a5ed-faebbfa036b4", "mgmt_ip": "10.222.21.3", "software_load": "20.06", "config_status": null, "hostname": "controller-1", "iscsi_initiator_name": "iqn.1994-05.com.redhat:4e6911b5176d", "capabilities": {"stor_function": "monitor"}, "install_output": "text", "device_image_update": null, "location": {}, "availability": "available", "invprovision": "provisioned", "peer_id": null, "administrative": "unlocked", "personality": "controller", "recordtype": "standard", "reboot_needed": false, "bm_mac": null, "inv_state": "inventoried", "mtce_info": null, "isystem_uuid": "73b38f1a-8b20-436e-9eba-9a619a448cf4", "boot_device": "/dev/sda", "install_state_info": null, "mgmt_mac": "2c:ea:7f:65:a8:a6", "subfunction_oper": "enabled", "target_load": "20.06", "vsc_controllers": null, "operational": "enabled", "subfunction_avail": "available", "action": "swact", "bm_type": null}
sysinv 2021-03-29 12:36:34.304 10553 ERROR sysinv.openstack.common.rpc.common [-] Failed to consume message from queue: [Errno 104] Connection reset by peer: error: [Errno 104] Connection reset by peer

There are some alarms raised during the switchover process, and it seems the peer node is not responding. It is observed that after some time the original role is restored.

Please let me know how to debug the issue further.

Thanks,
Ankush
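
As a hedged sketch of first-line checks when a swact silently reverts: system, fm and sm-dump are standard StarlingX controller tools, and the log paths are assumed from a default install:

    # Retry the swact from the CLI so any rejection reason is printed directly.
    source /etc/platform/openrc
    fm alarm-list                     # alarms that may block or explain the swact
    system host-swact controller-1    # use whichever controller is currently active

    # On the active controller, check the service-management view and logs.
    sudo sm-dump
    sudo tail -n 200 /var/log/sm.log /var/log/sysinv.log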
[Starlingx-discuss] Passing boot parameters to nodes other than controller-0
by open infra 26 May '21

Hi,

I am trying to deploy a StarlingX node that has an NVMe drive as the primary disk. As mentioned in the documentation under 'Configure NVMe Drive as Primary Disk' [1], we can edit the boot parameters of controller-0 at boot. Is it possible to alter the boot parameters on the PXE server to deploy a specific node?

[1] https://docs.starlingx.io/deploy_install_guides/nvme_config.html

Regards,
Danishka
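
Not an answer from the official docs, but as an illustration of the idea: a per-node PXE entry can carry the same parameters the linked guide has you type at the controller-0 boot prompt. Everything below except boot_device/rootfs_device is a placeholder (the 01-<mac> file name convention keys the entry to one node's MAC; the tftp path and kernel/initrd names are assumptions):

    # Hypothetical per-node pxelinux entry appending the NVMe boot parameters.
    cat > /var/lib/tftpboot/pxelinux.cfg/01-aa-bb-cc-dd-ee-ff <<'EOF'
    DEFAULT stx-nvme
    LABEL stx-nvme
      KERNEL vmlinuz
      APPEND initrd=initrd.img console=ttyS0,115200 boot_device=/dev/nvme0n1 rootfs_device=/dev/nvme0n1
    EOF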
[Starlingx-discuss] Community (& TSC) Call (May 26, 2021)
by Zvonar, Bill 26 May '21

Hi all, reminder of the weekly TSC/Community calls coming up later today. We'll talk about the final steps for stx.5.0. Please feel free to add other items to the agenda [0] for the community call.

Bill...

[0] etherpad: https://etherpad.openstack.org/p/stx-status
[1] call details: https://wiki.openstack.org/wiki/Starlingx/Meetings#7am_Pacific_-_Technical_…
[2] meeting start time in various time-zones: https://www.timeanddate.com/worldclock/fixedtime.html?iso=20210526T1400
[3] https://zoom.us/j/342730236?pwd=N21CUXNXVlJXMlcyZjZ0SE96cVNjQT09
[Starlingx-discuss] Fwd: Your application to the Docker Open Source Program
by Ildiko Vancsa 26 May '21

Hi StarlingX TSC and Community,

As you know, I submitted an application to the DockerHub open source program to solve the rate limit issues by using their offer for open source projects. Below is their response with more information about the program. Please check the details and raise any issues and concerns you may see. If there is no objection I will reply back and ask them to proceed with the next steps of the process.

Thanks and Best Regards,
Ildikó

> Begin forwarded message:
> To: ildiko(a)openinfra.dev
>
> Hello Ildiko,
>
> Thank you for submitting your application to the Docker Open Source Program. We are excited to have you as a part of our tremendously talented developer community, and we appreciate that you took the time to fill out this application. We are in the process of reviewing your application, and we will get back to you as soon as we have any updates. Please bear with us as we go through this process, since this program has generated a lot of excitement, and we are moving through the review process as quickly as we can.
>
> Docker New Image Retention and Data Egress Policies
>
> In August, we announced that we are creating new policies for image retention <https://www.docker.com/blog/scaling-dockers-business-to-serve-millions-more…> and data pull rates <https://www.docker.com/blog/scaling-docker-to-serve-millions-more-developer…>. We made these changes to build Docker into a sustainable business for the long term, so that we can continue supporting our community and our ecosystem. We got great feedback from our extensive user base, and adjusted <https://www.docker.com/blog/docker-hub-image-retention-policy-delayed-and-s…> our policies to suspend the policies on image retention. The plan for data pull rates is moving forward, and on Nov 2 we started to gradually change the existing pull rates to the new limits:
>
> - Unauthenticated users will be restricted to 100 pulls every 6 hours
> - Authenticated free users will be restricted to 200 pulls every 6 hours
>
> Docker Open Source Policy
>
> Docker remains highly committed to providing a platform where non-commercial open source developers can continue collaborating, innovating and pushing this industry in new directions. For the approved, non-commercial, open source namespaces, Docker will suspend data pull rate restrictions, with no egress restrictions applying to any Docker users pulling images from those namespaces.
>
> Open Source Qualification Criteria
>
> To qualify the Publisher namespace for the Open Source Program status, the Publisher namespace:
>
> - Must be shared in public repos
> - Is not funded by a commercial company or organization
> - Meets the Open Source Initiative definition (defined here <https://opensource.org/docs/osd>), including definitions for free distribution, source code, derived works, integrity of source code, licensing and no tolerance for discrimination
> - Distributes images under an OSI approved open source license <https://opensource.org/licenses/alphabetical>
>
> Review and Approval
>
> The process for applying for Open Source status is summarized below:
>
> - The Publisher submits the Open Source Community Application form <https://www.docker.com/community/open-source/application>.
> - Docker reviews the form, and determines if the Publisher qualifies for open source status.
> - If Docker approves the Publisher's application, Docker will waive the pull rate policy for the Publisher's namespace, for a period of one year.
> - Every 12 months, Docker will review the Publisher's namespace, to verify that the Publisher continues to qualify with the Docker Open Source Program criteria, and will extend the Open Source Program status for another 12 months.
> - Docker may, at its discretion, also review eligibility criteria within the 12-month period, and, depending on changes in the Publisher's compliance with program criteria, revise the Publisher's Open Source Program status.
> - The Publisher may have other namespaces that either partially comply or do not comply with open source policy requirements, and therefore will not qualify for open source status.
>
> Joint Marketing Programs
>
> While the Publisher retains the Open Source project status, the Publisher agrees to:
>
> - Become a Docker public reference for press releases, blogs, webinars, etc.
> - Create joint blogs, webinars and other marketing content
> - Create explicit links to their Docker Hub repos, with no 'wrapping' or hiding sources of their images
> - Include information about Docker on the website and in documentation
>
> Please let me know if you have any questions on anything in this letter.
>
> Please reply to this email if your open source project complies with the above criteria, and you would like to move forward with the review and approval of your Open Source Program.
>
> Thank you for all your support for Docker and the Docker community.
[Starlingx-discuss] Debugging and Troubleshooting documentation
by Rai, Ankush 25 May '21

Please provide some documentation references for debugging and troubleshooting StarlingX. There are many log files under /var/log/, but it is not clear which log file to consult for which module or error scenario. Is there any documentation for these log files?

Thanks,
Ankush
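
Pending an official reference, a rough community map of the controller logs that come up most often on this list (file names as on a default install; treat as a starting point rather than documentation):

    less /var/log/sysinv.log      # system inventory / configuration REST API
    less /var/log/mtcAgent.log    # maintenance: host state, heartbeat, lock/unlock
    less /var/log/sm.log          # service management: active/standby services, swact
    ls   /var/log/puppet/         # per-run configuration-apply (manifest) logs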
[Starlingx-discuss] StarlingX TSC election - Nomination period ended
by Ildiko Vancsa 25 May '21

Hi StarlingX Community,

I would like to inform you that the nomination period [1] for the StarlingX TSC election has ended. This time around we have not received any nominations from any active contributors of the StarlingX community during the official nomination period. The election officials [2] will discuss next steps with the StarlingX TSC and get back to you with further updates.

Thank you,

[1] https://docs.starlingx.io/election/
[2] https://docs.starlingx.io/election/#election-officials