[Starlingx-discuss] Trying to debug a stx-openstack-apply failure

Bailey, Henry Albert (Al) Al.Bailey at windriver.com
Thu Apr 18 15:16:14 UTC 2019


From what I have seen so far, there are no crashed pods, but when Armada gets to the compute-kit chart group (nova, libvirt, openvswitch, nova-api-proxy, neutron), that entire section takes more than 30 minutes to apply.
Currently it times out on the openvswitch chart because that uses the default timer (15 minutes), but even if you increase that to 30 minutes, it will still time out.
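
If you want to experiment with the timeout, it is the per-chart wait timeout in the Armada manifest. A rough sketch of the relevant fields (names and values here are illustrative; check the actual stx-openstack manifest):

  schema: armada/Chart/v1
  metadata:
    schema: metadata/Document/v1
    name: openvswitch
  data:
    chart_name: openvswitch
    namespace: openstack
    wait:
      # bump from Armada's 900-second default; illustrative value
      timeout: 1800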

On my vbox env, the load average during that chart section is over 50.
All the processes are running on only 1 of the 4 virtual CPUs, while the other 3 CPUs are idle.
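
An easy way to confirm which CPUs a given process is allowed to run on (standard kernel interfaces, nothing StarlingX-specific; PID 1234 is just an example):

  # allowed CPUs via /proc
  grep Cpus_allowed_list /proc/1234/status
  # same thing via taskset
  taskset -cp 1234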

I have not tried experimenting with the newly changed /etc/systemd/system.conf.d/platform-cpuaffinity.conf to see if that makes a difference.
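
For anyone who wants to poke at it first, a drop-in of that name would take the usual systemd form (the contents below are a guess, not the shipped file):

  # /etc/systemd/system.conf.d/platform-cpuaffinity.conf
  [Manager]
  # pin systemd and everything it spawns to the listed CPUs
  CPUAffinity=0 1

Manager-level settings need a 'systemctl daemon-reexec' (or a reboot) to take effect.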

Al


-----Original Message-----
From: Saul Wold [mailto:sgw at linux.intel.com] 
Sent: Thursday, April 18, 2019 11:06 AM
To: Miller, Frank; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Trying to debug a stx-openstack-apply failure



On 4/18/19 7:50 AM, Miller, Frank wrote:
> Saul:
> 
> I'll let the community members more familiar with debugging answer the specific debug questions, but it looks like you are hitting this LP reported in sanity:
> https://bugs.launchpad.net/starlingx/+bug/1825045
> 
I looked at that one yesterday. Since this is a simplex setup, I don't 
have a neutron-ovs-agent-compute pod, and I could not find any 
CrashLoop-related messages. The logs from what might be the closest 
match, neutron-ovs-agent-controller, do show this:

> kubectl logs neutron-ovs-agent-controller-0-9626473e-jrlzv -n openstack -c neutron-ovs-agent-init
> + chown neutron: /run/openvswitch/db.sock
> + neutron-sanity-check --version
> + timeout 3m neutron-sanity-check --config-file /etc/neutron/neutron.conf --config-file /etc/neutron/plugins/ml2/openvswitch_agent.ini --ovsdb_native --nokeepalived_ipv6_support
> 2019-04-18 02:37:29.837 41 INFO neutron.common.config [-] Logging enabled!
> 2019-04-18 02:37:29.837 41 INFO neutron.common.config [-] /var/lib/openstack/bin/neutron-sanity-check version 14.0.0.0b4.dev16
> 2019-04-18 02:37:30.922 41 ERROR ovsdbapp.backend.ovs_idl.idlutils [-] Unable to open stream to tcp:127.0.0.1:6640 to retrieve schema: Connection refused

Maybe this is the problem; I'm not sure if it's the same as the LP you 
mentioned.
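
Before digging further I may just check whether ovsdb-server is listening on that port at all; standard OVS tooling should be enough (these commands are generic, not from the FAQ):

  # anything bound to the ovsdb manager port?
  ss -tlnp | grep 6640
  # ask the local ovsdb-server which databases it is serving
  ovs-appctl -t ovsdb-server ovsdb-server/list-dbs

The remainder of the init log, for context: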


> 2019-04-18 02:37:32.748 41 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-file', '/etc/neutron/plugins/ml2/openvswitch_agent.ini', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmpxbjXcQ/privsep.sock']
> 2019-04-18 02:37:36.263 41 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
> ++ sed 's/[{}"]//g' /tmp/auto_bridge_add
> ++ tr , '\n'
> + for bmap in '`sed '\''s/[{}"]//g'\'' /tmp/auto_bridge_add | tr "," "\n"`'
> + bridge=br-phy0
> + iface=eth1000
> + ovs-vsctl --no-wait --may-exist add-br br-phy0
> + '[' -n eth1000 ']'
> + '[' eth1000 '!=' null ']'
> + ovs-vsctl --no-wait --may-exist add-port br-phy0 eth1000
> + ip link set dev eth1000 up
> + for bmap in '`sed '\''s/[{}"]//g'\'' /tmp/auto_bridge_add | tr "," "\n"`'
> + bridge=br-phy1
> + iface=eth1001
> + ovs-vsctl --no-wait --may-exist add-br br-phy1
> + '[' -n eth1001 ']'
> + '[' eth1001 '!=' null ']'
> + ovs-vsctl --no-wait --may-exist add-port br-phy1 eth1001
> + ip link set dev eth1001 up
> + tunnel_interface=docker0
> + '[' -z docker0 ']'
> ++ ip a s docker0
> ++ grep 'inet '
> ++ awk '{print $2}'
> ++ awk -F / '{print $1}'
> + LOCAL_IP=172.17.0.1
> + '[' -z 172.17.0.1 ']'
> + tee
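
For anyone following that set -x trace, here is roughly what the init container is doing, reconstructed from the trace itself (the real openstack-helm script will differ in detail):

  # /tmp/auto_bridge_add looks like {"br-phy0":"eth1000","br-phy1":"eth1001"}
  for bmap in $(sed 's/[{}"]//g' /tmp/auto_bridge_add | tr ',' '\n'); do
      bridge=${bmap%%:*}    # text before the colon
      iface=${bmap##*:}     # text after the colon
      ovs-vsctl --no-wait --may-exist add-br "$bridge"
      if [ -n "$iface" ] && [ "$iface" != "null" ]; then
          ovs-vsctl --no-wait --may-exist add-port "$bridge" "$iface"
          ip link set dev "$iface" up
      fi
  done
  # local tunnel endpoint is the tunnel interface's first IPv4 address
  tunnel_interface=docker0
  LOCAL_IP=$(ip a s "$tunnel_interface" | grep 'inet ' \
             | awk '{print $2}' | awk -F/ '{print $1}')

Note that it picked docker0 as the tunnel interface, ending up with LOCAL_IP=172.17.0.1; I don't know whether that is intended on a simplex setup.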

> That one does not yet have a solution.
> 
> Frank
> 
> -----Original Message-----
> From: Saul Wold [mailto:sgw at linux.intel.com]
> Sent: Wednesday, April 17, 2019 10:49 PM
> To: starlingx-discuss at lists.starlingx.io
> Subject: [Starlingx-discuss] Trying to debug a stx-openstack-apply failure
> 
> 
> Hi Folks,
> 
> I have been trying to get a deployment up in a libvirt/qemu environment (non-proxy), and I am seeing the following issue. I am using the image that (mostly) passed the Sanity Test on Monday 4/15 [0].
> 
> I am setting this up in AIO-Simplex mode and have not set up any kind of registry. It seems to start all the containers, and kubectl get pods shows all the pods as Running or Completed. I retrieved the stx-openstack-apply.log from Armada as recommended by the Container Debug FAQ [1].
> I see multiple errors indicating the application apply aborted due to what seem like download failures. As I said, I am not behind any proxy or firewall.
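> 
> For reference, the overall apply state can also be watched from the sysinv CLI (assuming the usual commands):
> 
>    system application-list
>    system application-show stx-openstack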
> 
> It seems to fail during processing chart: osh-openstack-neutron at 65%
> 
> Not sure what the next steps are to debug this issue.
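> 
> In case it helps anyone reproduce, the neutron pods can be inspected with the usual kubectl commands, e.g.:
> 
>    kubectl -n openstack get pods -o wide | grep neutron
>    kubectl -n openstack describe pod <neutron-pod-name>
>    kubectl -n openstack logs <neutron-pod-name> --all-containers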
> 
> Thanks
>      Sau!
> 
> 
> [0]
> http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190415T233001Z/
> [1] https://wiki.openstack.org/wiki/StarlingX/Containers/FAQ

_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss

