[Starlingx-discuss] Help understanding an application-apply failure

Saul Wold sgw at linux.intel.com
Wed Aug 7 22:43:20 UTC 2019



On 8/7/19 3:25 PM, Rowsell, Brent wrote:
> kubectl get nodes --show-labels
> source /etc/platform/openrc
> system show
> 

controller-0:~$ kubectl get nodes --show-labels
NAME           STATUS   ROLES    AGE   VERSION   LABELS
controller-0   Ready    master   33h   v1.13.5   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/hostname=controller-0,node-role.kubernetes.io/master=,openstack-compute-node=enabled,openstack-control-plane=enabled,sriov=enabled
controller-0:~$ source /etc/platform/openrc
[sysadmin@controller-0 ~(keystone_admin)]$ system show
+----------------------+--------------------------------------+
| Property             | Value                                |
+----------------------+--------------------------------------+
| contact              | None                                 |
| created_at           | 2019-08-06T13:16:31.064956+00:00     |
| description          | None                                 |
| https_enabled        | False                                |
| location             | None                                 |
| name                 | bd080471-05ba-444e-a43a-f571916f9736 |
| region_name          | RegionOne                            |
| sdn_enabled          | False                                |
| security_feature     | spectre_meltdown_v1                  |
| service_project_name | services                             |
| software_version     | 19.01                                |
| system_mode          | simplex                              |
| system_type          | All-in-one                           |
| timezone             | UTC                                  |
| updated_at           | 2019-08-06T13:17:30.333232+00:00     |
| uuid                 | 8a10d157-eb91-4a14-9661-42f449d3d6da |
| vswitch_type         | none                                 |
+----------------------+--------------------------------------+
[sysadmin@controller-0 ~(keystone_admin)]$
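
In case it helps narrow this down, here is roughly what I plan to check next to see whether the openvswitch release created anything at all. This is just a sketch; the release_group label is the one Armada reports in its timeout message, so I'm assuming that is what the chart actually applies to its pods:

controller-0:~$ kubectl -n openstack get daemonset | grep -i openvswitch
controller-0:~$ kubectl -n openstack get pods -l release_group=osh-openstack-openvswitch
controller-0:~$ kubectl -n openstack get events --sort-by=.metadata.creationTimestamp | tail -20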


Sau!

> -----Original Message-----
> From: Saul Wold [mailto:sgw at linux.intel.com]
> Sent: Wednesday, August 7, 2019 5:29 PM
> To: Richard, Joseph <Joseph.Richard at windriver.com>; starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] Help understanding an application-apply failure
> 
> 
> 
> On 8/7/19 11:25 AM, Richard, Joseph wrote:
>> The issue is with openvswitch not coming up, so the helm charts that depend on it (neutron, nova, libvirt) can't come up.
>>
>> If you run `kubectl -n openstack get pods`, do you see the openvswitch(not neutron-ovs-agent) pod?  What state is it in?
>>
> I do not see openvswitch at all in kubectl output.
> 
>> Did you remove and then reapply the application?
>> If you run `helm list`, do you see osh-openstack-openvswitch?
> 
> osh-openstack-openvswitch   1   Tue Aug  6 23:30:15 2019   DEPLOYED   openvswitch-0.1.0      openstack
> 
>> If you run `helm delete osh-openstack-openvswitch --purge` and then reapply the stx-openstack application, does it come up?
>>
> I tried this just now and it's stopped at 58% again. It did restart the osh-openstack-openvswitch helm chart, but when I checked with kubectl, there was still no openvswitch pod.
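> 
> For reference, the reapply sequence was roughly:
> 
>    helm delete osh-openstack-openvswitch --purge
>    system application-apply stx-openstack
>    system application-list          # apply progress stalls at 58%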
> 
> More thoughts, suggestions?
> 
> Sau!
> 
>> -----Original Message-----
>> From: Saul Wold [mailto:sgw at linux.intel.com]
>> Sent: Wednesday, August 7, 2019 1:09 PM
>> To: starlingx-discuss at lists.starlingx.io
>> Subject: [Starlingx-discuss] Help understanding an application-apply failure
>>
>>
>> Folks,
>>
>> I am getting a failure during application-apply on a new bare-metal install; I installed from the image built over the weekend.
>>
>> It appears to be a timeout (see the attached log).
>>
>> Any thoughts or suggestions on how to debug this further, and whether I should file a bug?
>>
>> Thanks
>>        Sau!
>>
>>
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller [-] [chart=openstack-libvirt]: Error while installing release osh-openstack-libvirt: grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
>>>           status = StatusCode.UNKNOWN
>>>           details = "release osh-openstack-libvirt failed: timed out waiting for the condition"
>>>           debug_error_string = "{"created":"@1565150558.052442767","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release osh-openstack-libvirt failed: timed out waiting for the condition","grpc_status":2}"
>>> >
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller Traceback (most recent call last):
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 465, in install_release
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller     metadata=self.metadata)
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller   File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller     return _end_unary_response_blocking(state, call, False, None)
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller   File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller     raise _Rendezvous(state, None, None, deadline)
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller      status = StatusCode.UNKNOWN
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller      details = "release osh-openstack-libvirt failed: timed out waiting for the condition"
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller      debug_error_string = "{"created":"@1565150558.052442767","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release osh-openstack-libvirt failed: timed out waiting for the condition","grpc_status":2}"
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller >
>>> 2019-08-07 04:02:38.052 85578 ERROR armada.handlers.tiller
>>> 2019-08-07 04:02:38.053 85578 DEBUG armada.handlers.tiller [-]
>>> [chart=openstack-libvirt]: Helm getting release status for
>>> release=osh-openstack-libvirt, version=0 get_release_status
>>> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:531
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.wait [-]
>>> [chart=openstack-openvswitch]: Timed out waiting for pods
>>> (namespace=openstack,
>>> labels=(release_group=osh-openstack-openvswitch)). None found! Are
>>> `wait.labels` correct? Does `wait.resources` need to exclude `type:
>>> pod`?
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada [-] Chart deploy [openstack-openvswitch] failed: armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-openvswitch)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`?
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada Traceback (most recent call last):
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 223, in handle_result
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     result = get_result()
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/lib/python3.6/concurrent/futures/_base.py", line 425, in result
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     return self.__get_result()
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/lib/python3.6/concurrent/futures/_base.py", line 384, in __get_result
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     raise self._exception
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/lib/python3.6/concurrent/futures/thread.py", line 56, in run
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     result = self.fn(*self.args, **self.kwargs)
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/armada.py", line 212, in deploy_chart
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     prefix, known_releases)
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/chart_deploy.py", line 242, in execute
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     chart_wait.wait(timer)
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 130, in wait
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     wait.wait(timeout=timeout)
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 292, in wait
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     modified = self._wait(deadline)
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/wait.py", line 349, in _wait
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada     raise k8s_exceptions.KubernetesWatchTimeoutException(error)
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada armada.exceptions.k8s_exceptions.KubernetesWatchTimeoutException: Timed out waiting for pods (namespace=openstack, labels=(release_group=osh-openstack-openvswitch)). None found! Are `wait.labels` correct? Does `wait.resources` need to exclude `type: pod`?
>>> 2019-08-07 04:02:38.129 85578 ERROR armada.handlers.armada
>>> 2019-08-07 04:02:38.162 85578 DEBUG armada.handlers.tiller [-] [chart=openstack-libvirt]: GetReleaseStatus= name: "osh-openstack-libvirt"
>>> info {
>>>     status {
>>>       code: FAILED
>>>     }
>>>     first_deployed {
>>>       seconds: 1565148757
>>>       nanos: 968267140
>>>     }
>>>     last_deployed {
>>>       seconds: 1565148757
>>>       nanos: 968267140
>>>     }
>>>     Description: "Release \"osh-openstack-libvirt\" failed: timed out waiting for the condition"
>>> }
>>> namespace: "openstack"
>>>    get_release_status
>>> /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:539
>>
> 
> _______________________________________________
> Starlingx-discuss mailing list
> Starlingx-discuss at lists.starlingx.io
> http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss
> 


