[Starlingx-discuss] Openstack reapply failing

Austin Gillmann ji at sibyl.li
Sun Mar 15 03:04:10 UTC 2020


Furthermore, I get the following traceback when attempting to apply or
delete the application after letting the cluster sit:

```
sysinv 2020-03-15 02:43:38.049 1094205 INFO sysinv.conductor.kube_app [-] Application stx-openstack (1.0-19-centos-stable-latest) apply started.
sysinv 2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app [-] File does not exists: /tmp/tmpz3U7_z: ConfigException: File does not exists: /tmp/tmpz3U7_z
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app Traceback (most recent call last):
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app   File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1947, in perform_app_apply
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app     self._create_local_registry_secrets(app.name)
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app   File "/usr/lib64/python2.7/site-packages/sysinv/conductor/kube_app.py", line 1002, in _create_local_registry_secrets
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app     self._kube.kube_get_secret(DOCKER_REGISTRY_SECRET, ns)):
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app   File "/usr/lib64/python2.7/site-packages/sysinv/common/kubernetes.py", line 192, in kube_get_secret
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app     c = self._get_kubernetesclient_core()
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app   File "/usr/lib64/python2.7/site-packages/sysinv/common/kubernetes.py", line 78, in _get_kubernetesclient_core
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app     self._load_kube_config()
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app   File "/usr/lib64/python2.7/site-packages/sysinv/common/kubernetes.py", line 63, in _load_kube_config
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app     config.load_kube_config('/etc/kubernetes/admin.conf')
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app   File "/usr/lib/python2.7/site-packages/kubernetes/config/kube_config.py", line 531, in load_kube_config
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app     loader.load_and_set(config)
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app   File "/usr/lib/python2.7/site-packages/kubernetes/config/kube_config.py", line 413, in load_and_set
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app     self._load_cluster_info()
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app   File "/usr/lib/python2.7/site-packages/kubernetes/config/kube_config.py", line 392, in _load_cluster_info
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app     file_base_path=self._config_base_path).as_file()
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app   File "/usr/lib/python2.7/site-packages/kubernetes/config/kube_config.py", line 116, in as_file
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app     raise ConfigException("File does not exists: %s" % self._file)
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app ConfigException: File does not exists: /tmp/tmpz3U7_z
2020-03-15 02:43:38.832 1094205 ERROR sysinv.conductor.kube_app
```
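
For what it's worth, the temp path in that ConfigException appears to come from the kubernetes Python client writing certificate data embedded in the kubeconfig out to files under /tmp and caching those paths; if something cleans /tmp while sysinv-conductor keeps running, the cached path goes stale. A minimal sketch to confirm the kubeconfig itself still loads from a fresh process (assuming the kubernetes Python client is available on the controller):

```
# Minimal sketch: load the same kubeconfig the conductor uses in a fresh
# process and do a trivial read, to distinguish a broken admin.conf from a
# stale temp-file cache inside the long-running conductor process.
from kubernetes import client, config

config.load_kube_config('/etc/kubernetes/admin.conf')
core = client.CoreV1Api()

# Any cheap read confirms the credentials in admin.conf still work.
for ns in core.list_namespace().items:
    print(ns.metadata.name)
```

If that works from a fresh shell, the conductor is most likely holding a cached temp file that was removed from /tmp, and restarting the sysinv-conductor service may clear it.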

On Sat, Mar 14, 2020 at 8:12 PM Austin Gillmann <ji at sibyl.li> wrote:
>
> Hello again Mingyuan and team,
>
> While this workaround still works fine for me, the reapply issue
> persists, and it gets annoying having to wipe network and disk images
> just to make a helm edit. As mentioned before, it seems it's not just
> the PDB that is out of Helm's control, but everything including
> secrets, service accounts, and RBAC (see the sketch below for what I
> mean). I am curious whether running SAS disks on my OSDs has anything
> to do with this... If there is a possible fix or an associated bug I
> can track, please let me know!
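>
> A minimal sketch of the kind of check I have been doing, assuming the
> kubernetes Python client and the ownership labels that Helm charts
> typically set (e.g. `heritage` / `app.kubernetes.io/managed-by`); it
> just lists neutron-related secrets and service accounts in the
> openstack namespace with their labels:
>
> ```
> # Sketch: list neutron-related secrets and service accounts in the
> # openstack namespace along with their labels, to see which ones still
> # carry the Helm ownership labels. Assumes /etc/kubernetes/admin.conf
> # and the kubernetes Python client.
> from kubernetes import client, config
>
> config.load_kube_config('/etc/kubernetes/admin.conf')
> core = client.CoreV1Api()
>
> for secret in core.list_namespaced_secret('openstack').items:
>     if 'neutron' in secret.metadata.name:
>         print(secret.metadata.name, secret.metadata.labels)
>
> for sa in core.list_namespaced_service_account('openstack').items:
>     if 'neutron' in sa.metadata.name:
>         print(sa.metadata.name, sa.metadata.labels)
> ```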
>
> Thank you!
> Austin Gillmann
>
> On Thu, Feb 13, 2020 at 9:51 PM Qi, Mingyuan <mingyuan.qi at intel.com> wrote:
> >
> > Yes, you can always remove the stx-openstack application if you do not need to keep the data within the mariadb database.
> >
> >
> >
> > Mingyuan
> >
> >
> >
> > From: Austin Gillmann <ji at sibyl.li>
> > Sent: Friday, February 14, 2020 8:36
> > To: Qi, Mingyuan <mingyuan.qi at intel.com>; starlingx-discuss at lists.starlingx.io
> > Subject: Re: [Starlingx-discuss] Openstack reapply failing
> >
> >
> >
> > Hi again!
> >
> >
> >
> > It seems I found a workaround for my issue: removing the application cleans up everything well enough to apply successfully again. Destructive, but it has worked well (rough sketch of the cycle below).
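> >
> > The cycle was just a remove followed by a fresh apply; a quick sketch
> > (command names assumed from the standard StarlingX sysinv CLI, run as
> > sysadmin with the platform openrc sourced):
> >
> > ```
> > # Sketch of the remove / re-apply cycle via the sysinv CLI (command
> > # names assumed; adjust for your environment).
> > import subprocess
> >
> > subprocess.check_call(['system', 'application-remove', 'stx-openstack'])
> > # ...wait for the application to reach the "uploaded" state, then:
> > subprocess.check_call(['system', 'application-apply', 'stx-openstack'])
> > ```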
> >
> >
> >
> > Best regards,
> >
> > Austin
> >
> >
> >
> > On Thu, Feb 13, 2020 at 8:28 AM Austin Gillmann <ji at sibyl.li> wrote:
> >
> > Hi Mingyuan,
> >
> > I checked the PDB list and neutron-server was indeed listed, so I
> > deleted it; however, Kubernetes seems to be playing whack-a-mole with
> > me. So far I have had to delete secrets, service accounts, and RBAC
> > roles as well, since Helm does not appear to have control of *any*
> > neutron-related items.
> >
> > Best regards,
> > Austin
> >
> > On Thu, Feb 13, 2020 at 6:42 AM Qi, Mingyuan <mingyuan.qi at intel.com> wrote:
> > >
> > > Hi Austin,
> > >
> > > Could you check the PDB resources in the openstack namespace when reapplying the stx-openstack app? `$ kubectl -n openstack get pdb`
> > > I suspect the neutron-server PDB is out of the Helm chart's control for some reason. You could try deleting only that PDB and re-applying the app (a Python equivalent is sketched below).
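> > >
> > > If kubectl is awkward to script, the same check and delete can be done
> > > from Python; a rough sketch, assuming the kubernetes Python client on
> > > the controller:
> > >
> > > ```
> > > # Rough sketch: list the PDBs in the openstack namespace and delete the
> > > # neutron-server one (same effect as the kubectl commands above).
> > > from kubernetes import client, config
> > >
> > > config.load_kube_config('/etc/kubernetes/admin.conf')
> > > policy = client.PolicyV1beta1Api()
> > >
> > > for pdb in policy.list_namespaced_pod_disruption_budget('openstack').items:
> > >     print(pdb.metadata.name)
> > >
> > > policy.delete_namespaced_pod_disruption_budget('neutron-server', 'openstack')
> > > ```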
> > >
> > > Mingyuan
> > >
> > > -----Original Message-----
> > > From: Austin Gillmann <ji at sibyl.li>
> > > Sent: Sunday, February 9, 2020 2:00
> > > To: starlingx-discuss at lists.starlingx.io
> > > Subject: [Starlingx-discuss] Openstack reapply failing
> > >
> > > Hi all,
> > >
> > > My earlier email never received a reply, so I will try to re-word it:
> > >
> > > I recently deployed StarlingX 3.0 as a standard cluster with dedicated storage (the storage servers running on Dell R740xd machines with HDDs, and the compute/controllers on HP BL460c G8 blades with 2 SSDs). The stx-openstack application applies successfully on the first try, but aborts upon reapply after a config change or the like.
> > >
> > > The errors Armada puts out are: details = "release osh-openstack-neutron
> > > failed: poddisruptionbudgets.policy "neutron-server" already exists"
> > > and "getting deployed release "osh-openstack-neutron": release:
> > > "osh-openstack-neutron" not found"
> > >
> > > Everything else in the logs looks normal to me, but let me know if you need any files attached! If anyone has any idea how to get past this, please let me know!
> > >
> > > Best wishes,
> > > Austin Gillmann
> > >
> > > _______________________________________________
> > > Starlingx-discuss mailing list
> > > Starlingx-discuss at lists.starlingx.io
> > > http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss


