[Starlingx-discuss] DUPLEX: applying stx-openstack fails with timeout for mariadb

Marcel Schaible marcel at schaible-consulting.de
Thu May 9 15:54:17 UTC 2019


Hi,

I do not see rbd-provisioner-storage-init-...

Is there a way to (re)-create it?

Thanks

Marcel

> "Hu, Yong" <yong.hu at intel.com> hat am 9. Mai 2019 um 08:08 geschrieben:
> 
> 
> Marcel,
> AFAIK, the secret "ceph-pool-kube-rbd" is supposed to be created by the job "rbd-provisioner-storage-init", 
> which should have run to completion in a pod named "rbd-provisioner-storage-init-xyz" (replace "xyz" with the actual suffix).
> 
> You might start with (rough commands sketched below):
> 1). Look into the pod: kubectl get pods -n openstack | grep rbd
> 2). Dump the pod: kubectl describe pod rbd-provisioner-storage-init-xyz -n openstack
> 3). Find a way to manually run the container used in the pod, based on the description from step 2).
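> 
> A rough shell sketch of those steps (the pod-name suffix and the namespace are assumptions; adjust them to whatever step 1 actually shows):
> 
>     kubectl get pods -n openstack | grep rbd                              # 1) is there an init pod at all?
>     kubectl get jobs --all-namespaces | grep rbd-provisioner              # does the job object itself exist?
>     kubectl describe pod rbd-provisioner-storage-init-xyz -n openstack    # 2) events, containers, image used
>     kubectl logs rbd-provisioner-storage-init-xyz -n openstack            # logs usually show why the secret was never created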
> 
> In addition, you might have a look at these items (also sketched below):
> 1. ceph osd tree
> 2. ceph osd pool ls
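> 
> If the job never ran, or failed part-way, the secret will simply be missing. One possible manual workaround, assuming the Ceph cluster itself is healthy and that the chart expects an rbd-style secret holding a client key (the client name, pool parameters and secret layout below are assumptions, not the exact StarlingX procedure):
> 
>     ceph osd pool ls | grep kube-rbd                      # does the pool exist?
>     ceph osd pool create kube-rbd 64 64                   # only if it is really missing
>     kubectl create secret generic ceph-pool-kube-rbd \
>         --namespace=default --type=kubernetes.io/rbd \
>         --from-literal=key="$(ceph auth get-key client.admin)"
> 
> Re-applying stx-openstack afterwards may then let the mariadb volumes mount.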
> 
> -Yong
> 
> On 08/05/2019, 11:59 PM, "Marcel Schaible" <marcel at schaible-consulting.de> wrote:
> 
>     The installation went fine so far. I am now seeing the following errors in /var/daemon.log:
>     
>     2019-05-08T17:55:23.616 controller-0 kubelet[10395]: info E0508 17:55:23.612220   10395 pod_workers.go:190] Error syncing pod 81a0d9c3-71a1-11e9-b537-ec9ecd1f7eb0 ("mariadb-server-2_default(81a0d9c3-71a1-11e9-b537-ec9ecd1f7eb0)"), skipping: timeout expired waiting for volumes to attach or mount for pod "default"/"mariadb-server-2". list of unmounted volumes=[mysql-data]. list of unattached volumes=[mysql-data mycnfd mariadb-bin mariadb-etc mariadb-secrets listening-dragon-mariadb-token-7tbpp]
>     2019-05-08T17:55:47.655 controller-0 kubelet[10395]: info E0508 17:55:47.655708   10395 upgradeaware.go:343] Error proxying data from client to backend: readfrom tcp 127.0.0.1:49779->127.0.0.1:53550: write tcp 127.0.0.1:49779->127.0.0.1:53550: write: broken pipe
>     2019-05-08T17:55:50.591 controller-0 systemd[1]: info Started Session 35 of user root.
>     2019-05-08T17:55:53.673 controller-0 kubelet[10395]: info E0508 17:55:53.673694   10395 httpstream.go:251] error forwarding port 44134 to pod 0126d6453a4779bf8754eb5ee470fa59c595eec8da11150d444484e31f0de73e, uid : write tcp 127.0.0.1:53550->127.0.0.1:49929: write: broken pipe:
>     2019-05-08T17:56:14.108 controller-0 kubelet[10395]: info E0508 17:56:14.108916   10395 container_manager_linux.go:98] Unable to ensure the docker processes run in the desired containers: errors moving "docker-containerd" pid: failed to find pid namespace of process 'ࣧ'
>     2019-05-08T17:56:20.212 controller-0 collectd[8165]: info ptp plugin PTP Service Disabled
>     2019-05-08T17:56:23.623 controller-0 kubelet[10395]: info W0508 17:56:23.623056   10395 kubelet_pods.go:832] Unable to retrieve pull secret default/default-registry-key for default/mariadb-ingress-764fb78974-w9fjp due to secrets "default-registry-key" not found.  The image pull may not succeed.
>     2019-05-08T17:56:26.390 controller-0 kubelet[10395]: info E0508 17:56:26.390652   10395 nestedpendingoperations.go:267] Operation for "\"kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-81a30662-71a1-11e9-8bdf-06f25f691339\"" failed. No retries permitted until 2019-05-08 17:58:28.390606979 +0200 CEST m=+9734.977918759 (durationBeforeRetry 2m2s). Error: "MountVolume.NewMounter initialization failed for volume \"pvc-81a017eb-71a1-11e9-b537-ec9ecd1f7eb0\" (UniqueName: \"kubernetes.io/rbd/kube-rbd:kubernetes-dynamic-pvc-81a30662-71a1-11e9-8bdf-06f25f691339\") pod \"mariadb-server-2\" (UID: \"81a0d9c3-71a1-11e9-b537-ec9ecd1f7eb0\") : Couldn't get secret default/ceph-pool-kube-rbd err: secrets \"ceph-pool-kube-rbd\" not found"
>     2019-05-08T17:56:48.544 controller-0 kubelet[10395]: info E0508 17:56:48.544404   10395 upgradeaware.go:357] Error proxying data from backend to client: write tcp 172.27.1.101:10250->172.27.1.101:51232: write: connection reset by peer
>     
>     Especially the error "Couldn't get secret default/ceph-pool-kube-rbd err" makes me nervous.
>     Any idea what to do with it?
>     
>     Thanks
>     
>     Marcel
>     
>     
>     
>     > "Hu, Yong" <yong.hu at intel.com> hat am 8. Mai 2019 um 04:22 geschrieben:
>     > 
>     > 
>     > FYI: Here are some things I usually do to debug this kind of problem (rough commands sketched below):
>     > 1. look into the helm chart and understand which docker images are used.
>     > 2. on the controller, check whether those images were successfully pulled, using "docker images".
>     > 3. if the images do exist, look into the YAML to see which objects kubectl is supposed to launch, for example Job, Pod, DaemonSet, Service, etc.
>     > 4. try launching those objects manually with "kubectl run" or "docker run" and see what goes wrong.
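>     > 
>     > For example (illustrative only; the layout of the application tarball and the image names are assumptions, and the inner chart archives may need extracting as well):
>     > 
>     >     mkdir -p /tmp/stx-app && tar xzf stx-openstack-1.0-11-centos-stable-latest.tgz -C /tmp/stx-app
>     >     grep -rn "image" /tmp/stx-app --include="values.yaml"     # which images the charts reference
>     >     docker images | grep mariadb                              # were those images actually pulled on the controller?
>     >     docker run --rm -it <mariadb-image:tag> /bin/bash         # try an image by hand and watch for errors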
>     > 
>     > Regards,
>     > Yong
>     > 
>     > On 08/05/2019, 12:17 AM, "Marcel Schaible" <marcel at schaible-consulting.de> wrote:
>     > 
>     >     Hi Don,
>     >     
>     >     I just tried the matching stable-versioned tar file. Unfortunately, mariadb still refuses to install.
>     >     Is there a way to install the db manually and get more debug information?
>     >     
>     >     Thanks
>     >     
>     >     Marcel
>     >     
>     >     > "Penney, Don" <Don.Penney at windriver.com> hat am 7. Mai 2019 um 15:54 geschrieben:
>     >     > 
>     >     > 
>     >     > It is highly recommended you use helm-charts-manifest-centos-stable-versioned.tgz
>     >     > 
>     >     > The "stable" images are built on stable/stein and are the supported images for this release. The "dev" images are "bleeding edge", pointing to the latest master branches.
>     >     > 
>     >     > "latest" is the set of images tagged with a "latest" tag, which is updated on each build.
>     >     > "versioned" is the set of images with a timestamp-based version tag that will be unique to the build.
>     >     > 
>     >     > -----Original Message-----
>     >     > From: Marcel Schaible [mailto:marcel at schaible-consulting.de] 
>     >     > Sent: Tuesday, May 07, 2019 9:14 AM
>     >     > To: Hu, Yong; starlingx-discuss at lists.starlingx.io
>     >     > Subject: Re: [Starlingx-discuss] DUPLEX: applying stx-openstack fails with timeout for mariadb
>     >     > 
>     >     > Doesn't help. Failed with the same mariadb error.
>     >     > 
>     >     > In the Directory http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190429T233000Z/outputs/helm-charts/ 
>     >     > 
>     >     > are several different (?) helm-charts.
>     >     > 
>     >     > What are the differences between them:
>     >     > 
>     >     > helm-charts-manifest-centos-dev-latest.tgz
>     >     > helm-charts-manifest-centos-dev-versioned.tgz
>     >     > helm-charts-manifest-centos-stable-latest.tgz
>     >     > helm-charts-manifest-centos-stable-versioned.tgz
>     >     > 
>     >     > And which one is recommended for the 20190429 image?
>     >     > 
>     >     > 
>     >     > > "Hu, Yong" <yong.hu at intel.com> hat am 7. Mai 2019 um 14:41 geschrieben:
>     >     > > 
>     >     > > 
>     >     > > "latest_docker_image_build" is a floating target which could be updated day by day.
>     >     > > You can see the stx-openstack helm-chart's "Last Modified:" right now is indicated "2019-May-07 00:34:30".
>     >     > > 
>     >     > > So, when you are using 0429 boot image, you need to use 0429 helm chart, because the versions/tags of docker images might be different day by day.
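>     >     > > 
>     >     > > One way to confirm which build a controller was installed from, and therefore which dated directory to take the helm charts from on the CENGN mirror (the file path is an assumption based on typical StarlingX installs):
>     >     > > 
>     >     > >     grep -E "BUILD_ID|BUILD_DATE" /etc/build.info   # e.g. BUILD_ID="20190429T233000Z"
>     >     > >     # then fetch the charts from .../starlingx/master/centos/<that BUILD_ID>/outputs/helm-charts/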
>     >     > >   
>     >     > > 
>     >     > > 
>     >     > > On 07/05/2019, 8:32 PM, "Marcel Schaible" <marcel at schaible-consulting.de> wrote:
>     >     > > 
>     >     > >     I have used the one mentioned in the installation doc, from here:
>     >     > >     
>     >     > >     http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_docker_image_build/outputs/helm-charts/stx-openstack-1.0-11-centos-stable-latest.tgz
>     >     > >     
>     >     > >     
>     >     > >     
>     >     > >     
>     >     > >     > "Hu, Yong" <yong.hu at intel.com> hat am 7. Mai 2019 um 14:28 geschrieben:
>     >     > >     > 
>     >     > >     > 
>     >     > >     > A few weeks ago I hit a similar issue, where the apply got stuck on the osh-openstack-mariadb helm chart.
>     >     > >     > At that time the failure was caused by mariadb's dependency on a Ceph OSD pool (kube-rbd) that was not available.
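>     >     > >     > 
>     >     > >     > To see whether you are hitting the same dependency, it may help to check whether the mariadb volumes ever got provisioned and bound (the PVC name pattern below is illustrative):
>     >     > >     > 
>     >     > >     >     ceph -s                                                # overall cluster health
>     >     > >     >     kubectl get pvc --all-namespaces | grep mysql-data     # Pending here usually points at the rbd provisioner
>     >     > >     >     kubectl get events --all-namespaces | grep -i rbd      # provisioning errors show up as events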
>     >     > >     > 
>     >     > >     > Given that you were using the 0429 image (http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190429T233000Z/outputs/iso/bootimage.iso),
>     >     > >     > did you download helm chart from the same build? - http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190429T233000Z/outputs/helm-charts/helm-charts-manifest-centos-dev-latest.tgz
>     >     > >     > These two have to be aligned.
>     >     > >     > 
>     >     > >     > 
>     >     > >     > On 07/05/2019, 7:42 PM, "Marcel Schaible" <marcel at schaible-consulting.de> wrote:
>     >     > >     > 
>     >     > >     >     Hi,
>     >     > >     >     
>     >     > >     >     we have got the image from 20190429 up and running in a duplex configuration.
>     >     > >     >     
>     >     > >     >     
>     >     > >     >     When I try to bring up the application services with:
>     >     > >     >     
>     >     > >     >     system application-apply stx-openstack
>     >     > >     >     
>     >     > >     >     I am getting the error shown below; the relevant log output follows.
>     >     > >     >     
>     >     > >     >     # system application-list
>     >     > >     >     stx-openstack | armada-manifest | manifest.yaml | apply-failed | operation aborted, check logs for detail |
>     >     > >     >     
>     >     > >     >     I am using this stx-application: stx-openstack-1.0-11-centos-stable-latest.tgz
>     >     > >     >     
>     >     > >     >     Thanks
>     >     > >     >     
>     >     > >     >     Marcel
>     >     > >     >     
>     >     > >     >     2019-05-06 09:13:34.447 11348 INFO armada.handlers.test [-] [chart=ceph-pools-audit]: PASSED: osh-openstack-ceph-pools-audit
>     >     > >     >     2019-05-06 09:13:34.447 11348 INFO armada.handlers.armada [-] All Charts applied in ChartGroup provisioner.
>     >     > >     >     2019-05-06 09:13:34.447 11348 INFO armada.handlers.armada [-] Processing ChartGroup: openstack-mariadb (Mariadb), sequenced=True
>     >     > >     >     2019-05-06 09:13:34.447 11348 INFO armada.handlers.chart_deploy [-] [chart=mariadb]: Processing Chart, release=osh-openstack-mariadb
>     >     > >     >     2019-05-06 09:13:34.447 11348 INFO armada.handlers.chart_deploy [-] [chart=mariadb]: known: [], release_name: osh-openstack-mariadb
>     >     > >     >     2019-05-06 09:13:34.448 11348 INFO armada.handlers.chartbuilder [-] [chart=mariadb]: Building dependency chart helm-toolkit for release openstack-mariadb.
>     >     > >     >     2019-05-06 09:13:34.463 11348 INFO armada.handlers.chart_deploy [-] [chart=mariadb]: Installing release osh-openstack-mariadb in namespace openstack, wait=True, timeout=1800s
>     >     > >     >     2019-05-06 09:13:34.466 11348 INFO armada.handlers.tiller [-] [chart=mariadb]: Helm install release: wait=True, timeout=1800
>     >     > >     >     2019-05-06 09:13:59.605 11348 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:155
>     >     > >     >     2019-05-06 09:14:59.668 11348 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:155
>     >     > >     >     [... the same "Updating lock update_lock" DEBUG message repeats once per minute until 09:43 ...]
>     >     > >     >     2019-05-06 09:43:01.385 11348 DEBUG armada.handlers.lock [-] Updating lock update_lock /usr/local/lib/python3.6/dist-packages/armada/handlers/lock.py:155
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller [-] [chart=mariadb]: Error while installing release osh-openstack-mariadb: grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
>     >     > >     >             status = StatusCode.UNKNOWN
>     >     > >     >             details = "release osh-openstack-mariadb failed: timed out waiting for the condition"
>     >     > >     >             debug_error_string = "{"created":"@1557135814.815037033","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release osh-openstack-mariadb failed: timed out waiting for the condition","grpc_status":2}"
>     >     > >     >     >
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller Traceback (most recent call last):
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 462, in install_release
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller     metadata=self.metadata)
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller   File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller     return _end_unary_response_blocking(state, call, False, None)
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller   File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller     raise _Rendezvous(state, None, None, deadline)
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller      status = StatusCode.UNKNOWN
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller      details = "release osh-openstack-mariadb failed: timed out waiting for the condition"
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller      debug_error_string = "{"created":"@1557135814.815037033","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release osh-openstack-mariadb failed: timed out waiting for the condition","grpc_status":2}"
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller >
>     >     > >     >     2019-05-06 09:43:34.815 11348 ERROR armada.handlers.tiller
>     >     > >     >     2019-05-06 09:43:34.816 11348 DEBUG armada.handlers.tiller [-] [chart=mariadb]: Helm getting release status for release=osh-openstack-mariadb, version=0 get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:528
>     >     > >     >     2019-05-06 09:43:34.995 11348 DEBUG armada.handlers.tiller [-] [chart=mariadb]: GetReleaseStatus= name: "osh-openstack-mariadb"
>     >     > >     >     info {
>     >     > >     >       status {
>     >     > >     >         code: FAILED
>     >     > >     >       }
>     >     > >     >       first_deployed {
>     >     > >     >         seconds: 1557134014
>     >     > >     >         nanos: 472287926
>     >     > >     >       }
>     >     > >     >       last_deployed {
>     >     > >     >         seconds: 1557134014
>     >     > >     >         nanos: 472287926
>     >     > >     >       }
>     >     > >     >       Description: "Release \"osh-openstack-mariadb\" failed: timed out waiting for the condition"
>     >     > >     >     }
>     >     > >     >     namespace: "openstack"
>     >     > >     >      get_release_status /usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py:536
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada [-] Chart deploy [mariadb] failed: armada.exceptions.tiller_exceptions.ReleaseException: Failed to Install release: osh-openstack-mariadb - Tiller Message: b'Release "osh-openstack-mariadb" failed: timed out waiting for the condition'
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada Traceback (most recent call last):
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/armada/handlers/tiller.py", line 462, in install_release
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada     metadata=self.metadata)
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 533, in __call__
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada     return _end_unary_response_blocking(state, call, False, None)
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada   File "/usr/local/lib/python3.6/dist-packages/grpc/_channel.py", line 467, in _end_unary_response_blocking
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada     raise _Rendezvous(state, None, None, deadline)
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada grpc._channel._Rendezvous: <_Rendezvous of RPC that terminated with:
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada      status = StatusCode.UNKNOWN
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada      details = "release osh-openstack-mariadb failed: timed out waiting for the condition"
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada      debug_error_string = "{"created":"@1557135814.815037033","description":"Error received from peer","file":"src/core/lib/surface/call.cc","file_line":1017,"grpc_message":"release osh-openstack-mariadb failed: timed out waiting for the condition","grpc_status":2}"
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada >
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada During handling of the above exception, another exception occurred:
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada
>     >     > >     >     2019-05-06 09:43:34.995 11348 ERROR armada.handlers.armada Traceback (most recent call last):
>     >     > >     >     
>     >     > >     >     
>     >     > >     >
>     >     > >     
>     >     > >
>     >     > 
>     >     
>     >
>     
>


