[Starlingx-discuss] New feature available: StarlingX Platform Backup & Restore

Poncea, Ovidiu Ovidiu.Poncea at windriver.com
Mon Oct 7 14:44:00 UTC 2019


Hi Folks,

The last piece of the StarlingX Backup & Restore (B&R) feature was merged on Friday.

This feature provides a last-resort disaster recovery option for cases where the StarlingX software and/or data are compromised. It includes a backup utility that creates a tarball containing a snapshot of the deployment state; this tarball contains everything needed to restore the deployment to a previously known good working state.

This feature provides two B&R options:
I. Platform B&R - backs up and restores the platform configuration and data. This step takes care of the platform data: the nodes, their configuration and user data.
II. OpenStack Application B&R - backs up and restores the stx-openstack application

The Platform B&R supports two modes:
I. Keep all Ceph cluster data (to recover from platform software or data corruption)
II. Wipe all Ceph cluster data (to perform a full system restore for software and/or application corruption)
In both cases, the restore will rebuild all required platform data. Applications will require additional steps. If the user decides to keep the Ceph data (i.e. application data is preserved), some effort may be required by the applications to re-use the preserved data. If the user decides to start with a clean Ceph cluster, application data will need to be recovered through other means (e.g. snapshot and export block devices with the rbd commands, as sketched below).
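For example, a minimal sketch of saving a block device before a restore that wipes Ceph, assuming a hypothetical pool/image name (rbd snap create, rbd export and rbd import are standard Ceph CLI commands):

# create a point-in-time snapshot of the image (pool and image names are placeholders)
rbd snap create cinder-volumes/my-volume@pre-restore
# export the snapshot to a file that can be stored off-cluster
rbd export cinder-volumes/my-volume@pre-restore /opt/backups/my-volume.img
# after the restore, the data can be brought back with:
# rbd import /opt/backups/my-volume.img cinder-volumes/my-volume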

The backup and restore are done through Ansible, and the restore process involves reinstalling each node in the system. The backup is run from the active controller; the restore is run from controller-0 (local play) or remotely by running Ansible and pointing it at controller-0 (remote play).

This email describes the Platform B&R procedure. The OpenStack procedure will be described in a later email.

 Backing up the platform
==========================

 Local play method
~~~~~~~~~~~~~~~~~~~
Run:
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e "ansible_become_pass=<sysadmin password> admin_password=<sysadmin password>"

The <admin_password> and <ansible_become_pass> need to be set correctly via the “-e” option on the command line, in an override file, or in the Ansible secret file.
This will output a file named in this format: <inventory_hostname>_platform_backup_<timestamp>.tgz. The prefixes <platform_backup_filename_prefix> and <openstack_backup_filename_prefix> can be overridden via the “-e” option on the command line or in an override file.
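As an illustration, a sketch of passing these values through an override file instead of inline (the file name backup-overrides.yml is hypothetical; -e @<file> is standard ansible-playbook syntax):

# backup-overrides.yml (hypothetical file name)
ansible_become_pass: <sysadmin password>
admin_password: <sysadmin password>
platform_backup_filename_prefix: mylab_platform_backup

ansible-playbook /usr/share/ansible/stx-ansible/playbooks/backup.yml -e @backup-overrides.yml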

The generated backup tar files will look like this: localhost_platform_backup_2019_08_08_15_25_36.tgz and localhost_openstack_backup_2019_08_08_15_25_36.tgz. They are located in /opt/backups directory on controller-0.
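Since a restore starts from a freshly installed controller-0, the tarball should be copied off-box and stored externally; for example (the destination host and path are placeholders):

scp /opt/backups/localhost_platform_backup_2019_08_08_15_25_36.tgz <user>@<backup-server>:/path/to/backups/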

 Remote play method
~~~~~~~~~~~~~~~~~~~~
1. Log in to the host where Ansible is installed and clone the playbook code from OpenDev at https://opendev.org/starlingx/ansible-playbooks.git
2. Provide an inventory file, either a customized one specified via the ‘-i’ option or the default one in the Ansible configuration directory (i.e. /etc/ansible/hosts). It must specify the IP of the controller host. For example, if the host name is my_vbox, the inventory file should have an entry called my_vbox:
    ---
    all:
      hosts:
        wc68:
          ansible_host: 128.222.100.02
        my_vbox:
          ansible_host: 128.224.141.74

3. Run ansible:
ansible-playbook <path-to-backup-playbook-entry-file> --limit host-name -i <inventory-file> -e <optional-extra-vars>
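Optionally, you can first check that Ansible can reach the controller defined in step 2 with a standard ad-hoc ping (password-based SSH requires sshpass on the Ansible host; the password value is a placeholder):

ansible my_vbox -i <inventory-file> -m ping -e "ansible_ssh_pass=<sysadmin password>"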

The generated backup tar files can be found in <host_backup_dir> which is $HOME by default. It can be overridden by “-e” option on the command line or in an override file.

The generated backup tar files follow the same naming convention as in a local play.

Example:
ansible-playbook /localdisk/designer/repo/cgcs-root/stx/stx-ansible-playbooks/playbookconfig/src/playbooks/backup-restore/backup.yml --limit my_vbox -i $HOME/br_test/hosts -e "host_backup_dir=$HOME/br_test ansible_become_pass=Li69nux* admin_password=Li69nux* ansible_ssh_pass=Li69nux*"

 Detailed information of the contents of the backup
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
- PostgreSQL config: roles, table spaces and schemas for the databases
- PostgreSQL data:
    o template1, sysinv, barbican and fm db data
    o keystone db for the primary region
    o dcmanager db for the DC controller
    o dcorch db for the DC controller
- LDAP db
- Ceph crushmap
- DNS server list
- System Inventory network overrides. These are needed at restore to correctly set up the OS configuration:
    o addrpool
    o pxeboot_subnet
    o management_subnet
    o management_start_address
    o cluster_host_subnet
    o cluster_pod_subnet
    o cluster_service_subnet
    o external_oam_subnet
    o external_oam_gateway_address
    o external_oam_floating_address
- Docker registries on controller
- Docker no_proxy
- Backed up data:
    o OS configuration
        ok: [localhost] => (item=/etc) - note that although everything here is backed up, not all of the content will be restored.
    o Home directory of the ‘sysadmin’ user and all LDAP user accounts
        ok: [localhost] => (item=/home)
    o Generated platform configuration
        ok: [localhost] => (item=/opt/platform/config/19.09)
        ok: [localhost] => (item=/opt/platform/puppet/19.09/hieradata) - All the hieradata under this directory is backed up. However, only the static hieradata (static.yaml and secure_static.yaml) will be restored to bootstrap controller-0.
    o Keyring
        ok: [localhost] => (item=/opt/platform/.keyring/19.09)
    o Patching and package repositories
        ok: [localhost] => (item=/opt/patching)
        ok: [localhost] => (item=/www/pages/updates)
    o Extension filesystem
        ok: [localhost] => (item=/opt/extension)
    o Patch-vault filesystem for the distributed cloud system controller
        ok: [localhost] => (item=/opt/patch-vault)
    o Armada manifests
        ok: [localhost] => (item=/opt/platform/armada/19.09)
    o Helm charts
        ok: [localhost] => (item=/opt/platform/helm_charts)
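To see exactly what a given backup archive contains, you can list the tarball contents with standard tar (the file name below is from the earlier example):

tar -tzf /opt/backups/localhost_platform_backup_2019_08_08_15_25_36.tgz | less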


 Restoring
=============

 I. Platform Restore
----------------------
Platform restore can be done in two modes:
A. By keeping the Ceph cluster data intact - after the restore, the previous Ceph data will remain in place.
B. By wiping the Ceph cluster entirely - the Ceph cluster has to be recreated.

Warning:
- The StarlingX system data backup can only be used to restore the system from which the backup was made. You cannot use the system data backup to restore the system on different hardware.
- The restore has to use the exact same version of the boot image (ISO) that was used at the time of the original installation.

Prerequisites: a backup file stored externally and the StarlingX ISO (the same version used for the original installation).

Prepare for the procedure:
- Power down all nodes. If you have a storage setup and want to keep the Ceph data, power down all the nodes except the storage ones; the Ceph cluster has to remain functional during the restore.
- Install the ISO on controller-0 (boot from the ISO and reinstall it)

The restore procedure has multiple steps, depending on the system type. The first step is identical for all: run the Ansible playbook with the backup file as input. For this step, similar to the backup procedure, there are two options: local and remote play.

 Local play
~~~~~~~~~~~~
First, download the backup to the controller (you can also use an external storage device, e.g. a USB drive).
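For example, a sketch of pulling the tarball back onto the freshly installed controller-0 (the source host and path are placeholders):

scp <user>@<backup-server>:/path/to/backups/localhost_platform_backup_2019_09_27_07_48_48.tgz /home/sysadmin/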

Then run the command:

ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=<location_of_tarball> ansible_become_pass=<admin_password> admin_password=<admin_password> backup_filename=<backup_filename>"

Optional:
o <wipe_ceph_osds> set to wipe_ceph_osds=true or wipe_ceph_osds=false (defaults to false). This selects one of the two restore modes: A. keep Ceph data intact (false) or B. start with an empty Ceph cluster (true).

Example for a backup file in /home/sysadmin:
ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Li69nux* admin_password=Li69nux* backup_filename=localhost_platform_backup_2019_09_27_07_48_48.tgz"
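And the same restore starting with an empty Ceph cluster, i.e. the example above with wipe_ceph_osds added:

ansible-playbook /usr/share/ansible/stx-ansible/playbooks/restore_platform.yml -e "initial_backup_dir=/home/sysadmin ansible_become_pass=Li69nux* admin_password=Li69nux* backup_filename=localhost_platform_backup_2019_09_27_07_48_48.tgz wipe_ceph_osds=true"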

 Remote play
~~~~~~~~~~~~~~
1. Log in to the host where Ansible is installed and clone the playbook code from OpenDev at https://opendev.org/starlingx/ansible-playbooks.git
2. Provide an inventory file, either a customized one specified via the ‘-i’ option or the default one in the Ansible configuration directory (i.e. /etc/ansible/hosts). It must specify the IP of the controller host. For example, if the host name is my_vbox, the inventory file should have an entry called my_vbox:
    ---
    all:
      hosts:
        wc68:
          ansible_host: 128.222.100.02
        my_vbox:
          ansible_host: 128.224.141.74

3. Run ansible:
ansible-playbook <path-to-backup-playbook-entry-file> --limit host-name -i <inventory-file> -e <optional-extra-vars>

Where optional-extra-vars can be:
 o <wipe_ceph_osds> set to wipe_ceph_osds=true or wipe_ceph_osds=false (defaults to false). This selects whether to keep Ceph data intact (false) or start with an empty Ceph cluster (true).
 o The <backup_filename> is the platform backup tar file. It must be provided via the “-e” option on the command line, e.g. -e “backup_filename=localhost_platform_backup_2019_07_15_14_46_37.tgz”
 o The <initial_backup_dir> is the location on the Ansible control machine where the platform backup tar file is placed. It must be provided via the “-e” option on the command line.
 o The <admin_password>, <ansible_become_pass> and <ansible_ssh_pass> need to be set correctly via the “-e” option on the command line or in the Ansible secret file. <ansible_ssh_pass> is the password of the sysadmin user on controller-0.
 o The <ansible_remote_tmp> should be set to a new directory (no need to create it ahead of time) under /home/sysadmin on controller-0 via the “-e” option on the command line.

Example:
ansible-playbook /localdisk/designer/jenkins/tis-stx-dev/cgcs-root/stx/ansible-playbooks/playbookconfig/src/playbooks/restore_platform.yml --limit my_vbox -i $HOME/br_test/hosts -e "ansible_become_pass=Li69nux* admin_password=Li69nux* ansible_ssh_pass=Li69nux* initial_backup_dir=$HOME/br_test backup_filename=my_vbox_system_backup_2019_08_08_15_25_36.tgz ansible_remote_tmp=/home/sysadmin/ansible-restore"

After Ansible has executed, the remaining steps depend on the deployment mode:

 AIO-SX
~~~~~~~~~
1. Unlock controller-0 & wait for it to boot
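For reference, a sketch of the unlock step using the StarlingX system CLI (assuming credentials are loaded from the standard /etc/platform/openrc); the same pattern applies to the other nodes in the deployment modes below:

source /etc/platform/openrc
system host-unlock controller-0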

 AIO-DX
~~~~~~~~~
1. Unlock controller-0 & wait for it to boot
2. Reinstall controller-1 (boot it from PXE, wait for it to become 'online')
3. Unlock controller-1

 Standard with controller storage
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
1. Unlock controller-0 & wait for it to boot
2. Reinstall controller-1 and all computes (boot them from PXE, wait for them to become 'online')
3. Unlock controller-1 and wait for it to be available
4. Unlock compute nodes
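The node states referenced in these steps ('online', 'available') can be monitored from controller-0 with the system CLI, e.g.:

system host-list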

 Standard with storage nodes
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
This depends on the wipe_ceph_osds configuration.

A. If wipe_ceph_osds is false (default option):
1. Unlock controller-0 & wait for it to boot. After unlock you will see the storage nodes as available and Ceph operational.
2. Reinstall controller-1 and all computes (boot them from PXE, wait for them to become 'online')
3. Unlock controller-1 and wait for it to be available
4. Unlock compute nodes

Storage nodes do not need reinstall.
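To confirm that the preserved Ceph cluster is healthy once controller-0 is unlocked, a standard check (plain Ceph CLI) is:

ceph -s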

B. If wipe_ceph_osds is true:
1. Unlock controller-0 & wait for it to boot. After unlock you will see all nodes, including the storage nodes, as offline.
2. Reinstall controller-1, all storage and compute nodes (boot them from PXE, wait for them to become 'online')
3. Unlock controller-1 and wait for it to be available
4. Unlock storage nodes and wait for them to be available
5. Unlock compute nodes and wait for them to be available

Regards,
Ovidiu Poncea