[Starlingx-discuss] issue for backup and restore

Chen, Haochuan Z haochuan.z.chen at intel.com
Tue Jun 30 07:47:55 UTC 2020


Hi Dan

Currently the backup and restore function is broken again, apart from the issue I reported last time. I worked around it by adding "--overwrite=true" in bootstrap/bringup-essential-services/tasks/bringup_helm.yml, line 242.
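For reference, the effect of that workaround is roughly the following (a sketch only; the actual task in bringup_helm.yml wraps this kubectl call, and the node name is from my simplex setup):

# --overwrite lets the label be re-applied when it already exists from the backed-up cluster state
$ kubectl --kubeconfig=/etc/kubernetes/admin.conf label node controller-0 armada=enabled --overwrite=true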

And there is another issue: the armada-api pod could not launch and stays in Pending status.
I also worked around this by deleting the pod before this task runs:
TASK [bootstrap/bringup-essential-services : Wait for 120 seconds to ensure kube-system pods are all started]
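The manual workaround was roughly the following (the pod name is from this particular run and will differ):

# delete the stuck pod so its ReplicaSet creates a fresh one before the wait task runs
$ kubectl --kubeconfig=/etc/kubernetes/admin.conf -n armada delete pod armada-api-6b76cfdbf4-9rm9c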

I propose you check the latest code for B&R.


TASK [bootstrap/bringup-essential-services : Fail if any of the Kubernetes component, Networking or Armada pods are not ready by this time] *************************************************
failed: [localhost] (item={'_ansible_parsed': True, 'stderr_lines': [u'error: timed out waiting for the condition on pods/armada-api-6b76cfdbf4-9rm9c'], u'changed': True, u'stderr': u'error: timed out waiting for the condition on pods/armada-api-6b76cfdbf4-9rm9c', u'ansible_job_id': u'567509288348.112224', u'stdout': u'', '_ansible_item_result': True, u'invocation': {u'module_args': {u'creates': None, u'executable': None, u'_uses_shell': False, u'_raw_params': u'kubectl --kubeconfig=/etc/kubernetes/admin.conf wait --namespace=armada --for=condition=Ready pods --selector application=armada --timeout=30s', u'removes': None, u'argv': None, u'warn': True, u'chdir': None, u'stdin': None}}, 'attempts': 1, u'delta': u'0:00:30.122867', 'stdout_lines': [], 'failed_when_result': False, '_ansible_no_log': False, u'end': u'2020-06-30 07:26:37.030731', '_ansible_item_label': {'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_item_label': u'application=armada', u'ansible_job_id': u'567509288348.112224', 'item': u'application=armada', u'started': 1, 'changed': True, 'failed': False, u'finished': 0, u'results_file': u'/root/.ansible_async/567509288348.112224', '_ansible_ignore_errors': None, '_ansible_no_log': False}, u'start': u'2020-06-30 07:26:06.907864', u'cmd': [u'kubectl', u'--kubeconfig=/etc/kubernetes/admin.conf', u'wait', u'--namespace=armada', u'--for=condition=Ready', u'pods', u'--selector', u'application=armada', u'--timeout=30s'], u'finished': 1, u'failed': False, 'item': {'_ansible_parsed': True, '_ansible_item_result': True, '_ansible_no_log': False, u'ansible_job_id': u'567509288348.112224', 'item': u'application=armada', u'started': 1, 'changed': True, 'failed': False, u'finished': 0, u'results_file': u'/root/.ansible_async/567509288348.112224', '_ansible_ignore_errors': None, '_ansible_item_label': u'application=armada'}, u'rc': 1, u'msg': u'non-zero return code', '_ansible_ignore_errors': None}) => {"changed": false, "item": {"ansible_job_id": "567509288348.112224", "attempts": 1, "changed": true, "cmd": ["kubectl", "--kubeconfig=/etc/kubernetes/admin.conf", "wait", "--namespace=armada", "--for=condition=Ready", "pods", "--selector", "application=armada", "--timeout=30s"], "delta": "0:00:30.122867", "end": "2020-06-30 07:26:37.030731", "failed": false, "failed_when_result": false, "finished": 1, "invocation": {"module_args": {"_raw_params": "kubectl --kubeconfig=/etc/kubernetes/admin.conf wait --namespace=armada --for=condition=Ready pods --selector application=armada --timeout=30s", "_uses_shell": false, "argv": null, "chdir": null, "creates": null, "executable": null, "removes": null, "stdin": null, "warn": true}}, "item": {"ansible_job_id": "567509288348.112224", "changed": true, "failed": false, "finished": 0, "item": "application=armada", "results_file": "/root/.ansible_async/567509288348.112224", "started": 1}, "msg": "non-zero return code", "rc": 1, "start": "2020-06-30 07:26:06.907864", "stderr": "error: timed out waiting for the condition on pods/armada-api-6b76cfdbf4-9rm9c", "stderr_lines": ["error: timed out waiting for the condition on pods/armada-api-6b76cfdbf4-9rm9c"], "stdout": "", "stdout_lines": []}, "msg": "Pod application=armada is still not ready."}

localhost:~$ sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf --namespace=armada get po
Password:
NAME                          READY   STATUS    RESTARTS   AGE
armada-api-6b76cfdbf4-9rm9c   0/2     Pending   0          17m
localhost:~$
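
If it helps, the scheduling reason for the Pending state should be visible with something like the following (standard kubectl diagnostics, pod name from this run):

$ sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf -n armada describe pod armada-api-6b76cfdbf4-9rm9c
$ sudo kubectl --kubeconfig=/etc/kubernetes/admin.conf -n armada get events --sort-by=.lastTimestamp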

BR!

Martin, Chen
IOTG, Software Engineer
021-61164330

From: Voiculeasa, Dan <Dan.Voiculeasa at windriver.com>
Sent: Friday, June 19, 2020 4:20 AM
To: Chen, Haochuan Z <haochuan.z.chen at intel.com>; starlingx-discuss at lists.starlingx.io
Subject: Re: issue for backup and restore

Hello Martin,

No, it is the first time seeing it.

But I see that logic was introduced by:
Project: starlingx/ansible-playbooks
Commit 514d4e7262f80a73ab37e0132f9e3b30088d14ad
CommitDate: Wed Jun 10 13:17:00 2020 -0400
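
If you want to look at that change, something like this should show it from a starlingx/ansible-playbooks checkout (repository path assumed):

$ git -C ansible-playbooks show 514d4e7262f80a73ab37e0132f9e3b30088d14ad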

Thanks,
Dan Voiculeasa
________________________________
From: Chen, Haochuan Z <haochuan.z.chen at intel.com>
Sent: Thursday, June 18, 2020 4:18 AM
To: Voiculeasa, Dan <Dan.Voiculeasa at windriver.com>; starlingx-discuss at lists.starlingx.io
Subject: RE: issue for backup and restore


Hi Dan



I checked restore with the latest code, and it fails with the log below. When I last checked against the Jun 5 master branch code base, there was no such issue.

Do you know about this?



TASK [bootstrap/bringup-essential-services : Create Armada node label] **********************************************************************************************************************

fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["kubectl", "label", "node", "controller-0", "armada=enabled"], "delta": "0:00:00.102152", "end": "2020-06-18 00:57:32.563552", "msg": "non-zero return code", "rc": 1, "start": "2020-06-18 00:57:32.461400", "stderr": "error: 'armada' already has a value (enabled), and --overwrite is false", "stderr_lines": ["error: 'armada' already has a value (enabled), and --overwrite is false"], "stdout": "", "stdout_lines": []}
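
For what it's worth, the failing task just labels the node without --overwrite, so it errors once the label is already present from the backed-up state. That can be confirmed with something like:

$ kubectl --kubeconfig=/etc/kubernetes/admin.conf get node controller-0 --show-labels | grep armada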



PLAY RECAP **********************************************************************************************************************************************************************************

localhost                  : ok=354  changed=156  unreachable=0    failed=1



[sysadmin at controller-0 ~(keystone_admin)]$



BR!



Martin, Chen

IOTG, Software Engineer

021-61164330



From: Chen, Haochuan Z
Sent: Thursday, June 11, 2020 10:53 AM
To: Voiculeasa, Dan <Dan.Voiculeasa at windriver.com>; starlingx-discuss at lists.starlingx.io
Subject: RE: issue for backup and restore



Hi Voiculeasa



I confirm backup and restore works without a ceph backend.



This issue was caused by my improper provisioning steps.



BR!



Martin, Chen

IOTG, Software Engineer

021-61164330



From: Voiculeasa, Dan <Dan.Voiculeasa at windriver.com>
Sent: Tuesday, June 9, 2020 5:54 PM
To: Chen, Haochuan Z <haochuan.z.chen at intel.com>; starlingx-discuss at lists.starlingx.io
Subject: Re: issue for backup and restore



Hello Martin,



I didn't encounter that issue when testing, but I also haven't tested recently without a ceph backend.



Are you using a locally built ISO? Are you testing some change in the source code? Any prior successful restore on a simplex with ceph / simplex without ceph?





Thanks,
Dan Voiculeasa

________________________________

From: Chen, Haochuan Z <haochuan.z.chen at intel.com>
Sent: Monday, June 8, 2020 5:21 AM
To: Voiculeasa, Dan <Dan.Voiculeasa at windriver.com>; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] issue for backup and restore



Hi Voiculeasa



When you restore the system, do you hit this issue? I deployed the system as a simplex, without adding the ceph storage backend.



Restore process

$ sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Local.123 admin_password=Local.123 backup_filename=localhost_platform_backup_2020_06_08_00_25_30.tgz"

$ source /etc/platform/openrc

$ system host-unlock 1



9
Traceback (most recent call last):

  File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/amqp.py", line 437, in _process_data
    **args)

  File "/usr/lib64/python2.7/site-packages/sysinv/openstack/common/rpc/dispatcher.py", line 172, in dispatch
    result = getattr(proxyobj, method)(ctxt, **kwargs)

  File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1691, in configure_ihost
    self._configure_controller_host(context, host)

  File "/usr/lib64/python2.7/site-packages/sysinv/conductor/manager.py", line 1325, in _configure_controller_host
    self._puppet.update_host_config(host)

  File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 31, in _wrapper
    func(self, *args, **kwargs)

  File "/usr/lib64/python2.7/site-packages/sysinv/puppet/puppet.py", line 148, in update_host_config
    config.update(puppet_plugin.obj.get_host_config(host))

  File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 111, in get_host_config
    generate_driver_config(context, config)

  File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1412, in generate_driver_config
    generate_mlx4_core_options(context, config)

  File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1389, in generate_mlx4_core_options
    num_vfs_options = build_mlx4_num_vfs_options(context)

  File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1358, in build_mlx4_num_vfs_options
    ifaces = find_sriov_interfaces_by_driver(context, constants.DRIVER_MLX_CX3)

  File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 1264, in find_sriov_interfaces_by_driver
    port = get_interface_port(context, iface)

  File "/usr/lib64/python2.7/site-packages/sysinv/puppet/interface.py", line 515, in get_interface_port
    return interface.get_interface_port(context, iface)

  File "/usr/lib64/python2.7/site-packages/sysinv/common/interface.py", line 105, in get_interface_port
    return context['ports'][iface['id']]

KeyError: 9

[sysadmin at localhost playbooks(keystone_admin)]$
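
The traceback ends in sysinv's puppet interface plugin while looking up the port for an SR-IOV interface, and interface id 9 has no matching port in the restored data (KeyError: 9). If it is useful, the interface/port view sysinv has can be compared with something like the following (standard sysinv CLI, host id 1 for simplex):

$ source /etc/platform/openrc
$ system host-if-list 1
$ system host-port-list 1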



Thanks



Martin, Chen

IOTG, Software Engineer

021-61164330



From: Voiculeasa, Dan <Dan.Voiculeasa at windriver.com>
Sent: Tuesday, June 2, 2020 9:23 PM
To: Chen, Haochuan Z <haochuan.z.chen at intel.com>; starlingx-discuss at lists.starlingx.io
Subject: Re: issue for backup and restore



Hello,



What does /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log say?
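
If it helps, something like this should pull out just the failures (assuming the usual puppet log format):

$ sudo grep -E "Error|Warning" /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log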



If you have the setup in the reproduced state, send me a zoom meeting invite starting in the interval [now, +7 hours].



Thanks,
Dan Voiculeasa

________________________________

From: Chen, Haochuan Z <haochuan.z.chen at intel.com>
Sent: Sunday, May 24, 2020 4:08 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] issue for backup and restore



Hi



I followed this guide to check backup and restore:

https://opendev.org/starlingx/docs/src/commit/adc24ba565a58cf7e20639d4630d6d1893337bbb/doc/source/developer_resources/backup_restore.rst



But when I run this command to restore the system, it fails with the error log below.

sudo ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=sysadmin admin_password=sysadmin backup_filename=localhost_platform_backup_2020_05_23_23_43_40.tgz"



TASK [bootstrap/apply-bootstrap-manifest : Applying puppet bootstrap manifest] *******************************************************************************

fatal: [localhost]: FAILED! => {"changed": true, "cmd": ["/usr/local/bin/puppet-manifest-apply.sh", "/tmp/hieradata", "192.188.204.3", "controller", "ansible_bootstrap", ">", "/tmp/apply_manifest.log"], "delta": "0:02:18.526028", "end": "2020-05-24 12:53:09.811312", "msg": "non-zero return code", "rc": 1, "start": "2020-05-24 12:50:51.285284", "stderr": "cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory\ncp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory\ncp: cannot stat '>': No such file or directory", "stderr_lines": ["cp: cannot stat '/tmp/hieradata/192.188.204.3.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/system.yaml': No such file or directory", "cp: cannot stat '/tmp/hieradata/secure_system.yaml': No such file or directory", "cp: cannot stat '>': No such file or directory"], "stdout": "Applying puppet ansible_bootstrap manifest...\n[WARNING]\nWarnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details", "stdout_lines": ["Applying puppet ansible_bootstrap manifest...", "[WARNING]", "Warnings found. See /var/log/puppet/2020-05-24-12-50-51_controller/puppet.log for details"]}
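
One thing visible in the log: '>' and '/tmp/apply_manifest.log' reach the script as literal arguments (the Ansible command module does not perform shell redirection), which is why cp later complains it cannot stat '>'. Roughly, the task ran the equivalent of:

# '>' is passed as an argv element here, not interpreted as a redirection
/usr/local/bin/puppet-manifest-apply.sh /tmp/hieradata 192.188.204.3 controller ansible_bootstrap '>' /tmp/apply_manifest.log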



Any idea about this?



Thanks!



Martin, Chen

IOTG, Software Engineer

021-61164330

