Hi Eric,

 

I was able to figure out the issue. Thanks for the pointers.

 

While bringing up the data network, I had used the command below for both the workers, as I wanted SR-IOV interfaces.

I'm not sure if this is the reason why the worker nodes were in the disabled and offline state.

 

When I changed the class from pci-sriov to “data”, the worker nodes became enabled and their availability status is available.
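
For reference, the two commands were roughly of this form (the MTU, interface name and VF count here are just placeholders, not my exact values):

  system host-if-modify -m 1500 -n sriov0 -c pci-sriov -N 8 worker-0 <interface>
  system host-if-modify -m 1500 -n data0 -c data worker-0 <interface>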

 

Do you see any problem with the first command?

 

Regards,

Sriram

 

 

From: MacDonald, Eric <Eric.MacDonald@windriver.com>
Sent: Tuesday, October 6, 2020 12:36 AM
To: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Subject: Re: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded

 

 

Sriram,

 

For the worker node configuration failures, look for Error/Warn logs in /var/log/puppet/*.

 

Those logs should help you understand what config failed.
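
For example, something along these lines should surface the relevant failures (the exact log layout under /var/log/puppet may vary by release):

  grep -rE "Error|Warn" /var/log/puppet/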

 

Eric.


From: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Sent: Monday, October 5, 2020 2:57 PM
To: MacDonald, Eric <Eric.MacDonald@windriver.com>
Subject: RE: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded

 

Hi Eric,

 

Thanks for the explanation.

 

Initially I was planning to bring up 2 controllers (All-in-One standalone) + 1 worker node. For that kind of configuration we would need high-capacity hardware for all the nodes, which may not be available. That's the reason I deployed the 1 standard controller (with storage) + 2 worker nodes configuration.

The controller node came up without any issues, and the worker nodes also came up after 'system host-unlock worker-0' and 'system host-unlock worker-1'. But I could see these alarms:

 

  • worker-1 experienced a service-affecting failure. Auto-recovery in progress. Manual Lock and Unlock may be required if auto-recovery is unsuccessful. | host=worker-1 | critical | 2020-10-05T |

  • worker-1 experienced a configuration failure.

  • worker-0 experienced a service-affecting failure. Auto-recovery in progress. Manual Lock and Unlock may be required if auto-recovery is unsuccessful. | host=worker-0 | critical | 2020-10-05T |

  • worker-0 experienced a configuration failure.

 

I did a lock and unlock of the nodes. After the nodes came up, I still see the same issue.

How do I check for configuration failures?

Also, there is a taint on both the worker nodes (“Taints: services=disabled:NoExecute”), which I could see using 'kubectl describe node worker-0' and 'kubectl describe node worker-1'.

So I don't see the SR-IOV CNI and SR-IOV device plugin being deployed on the worker nodes.
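
For reference, this is roughly how I checked (the grep patterns are just what I used; exact pod names may differ):

  kubectl describe node worker-0 | grep -i taints
  kubectl describe node worker-1 | grep -i taints
  kubectl get pods -n kube-system -o wide | grep -i sriov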

Regards,

Sriram

 

 

From: MacDonald, Eric <Eric.MacDonald@windriver.com>
Sent: Monday, October 5, 2020 9:14 PM
To: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Subject: Re: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded

 

 

Sriram,

 

The AIO controller has a lot more work to do during startup as it contains both control and compute functions.

As a result, you may temporarily see CPU resource utilization logs during the latter stages of the provisioning.

 

Now that you went to standard config, you might see another degrade for a short period of time following the unlock of the second controller during initial filesystem sync.

 

Again, degrade is general, but if you see a degrade there should be an alarm that represents the reason for the degrade.

 

Eric.


From: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Sent: Monday, October 5, 2020 10:44 AM
To: MacDonald, Eric <Eric.MacDonald@windriver.com>
Subject: RE: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded

 

Thanks Eric.

I reinstalled the system from the ISO again. I didn't see this error with the different configuration (I selected standard controller instead of All-in-one Duplex).

I will check for alarms in case of any further issues.

 

Regards,

Sriram

 

From: MacDonald, Eric <Eric.MacDonald@windriver.com>
Sent: Monday, October 5, 2020 5:55 PM
To: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Subject: Re: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded

 

 

What does 'fm alarm-list' show ?

 

As a general rule, if a host is degraded there should be an alarm raised for that degraded condition.
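
For example, to drill into the reason behind a degrade (the UUID below is a placeholder taken from the alarm-list output):

  fm alarm-list
  fm alarm-show <alarm-uuid>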

 

Eric MacDonald

StarlingX Maintenance


From: Dharwadkar, Sriram <Sriram.Dharwadkar@commscope.com>
Sent: Monday, October 5, 2020 4:00 AM
To: starlingx-discuss@lists.starlingx.io <starlingx-discuss@lists.starlingx.io>
Subject: [Starlingx-discuss] [Distributed StarlingX-4.0] - Controller-0 in edge cloud shows availability as degraded

 

Hi,

 

I have installed Distributed StarlingX 4.0 in "All-in-one Duplex" mode. There are two nodes in the central cloud and two in the edge cloud. The central cloud is up and running.

For the edge cloud configuration, I have configured the private registry in the bootstrap override file. From the central cloud, I was able to add the edge cloud. The images required for the StarlingX installation are downloaded from the private registry, and the installation goes through without any issues.
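
For reference, the subcloud was added from the central cloud with a command roughly of this form (the address and file name below are placeholders; the private registry entries live in the bootstrap values file):

  dcmanager subcloud add --bootstrap-address <edge-oam-ip> --bootstrap-values <edge-bootstrap-values.yml>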

 

[sysadmin@controller-0 ~(keystone_admin)]$ dcmanager subcloud list

+----+------+------------+--------------+---------------+---------+

| id | name | management | availability | deploy status | sync    |

+----+------+------------+--------------+---------------+---------+

| 47 | edge | unmanaged  | offline      | complete      | unknown |

+----+------+------------+--------------+---------------+---------+

[sysadmin@controller-0 ~(keystone_admin)]$

 

Then I followed the steps mentioned in the document https://docs.starlingx.io/deploy_install_guides/r4_release/bare_metal/aio_duplex_install_kubernetes.html#configure-controller-0

And finally I did an unlock of controller-0. The system went for a reboot and came up successfully. Afterwards, I see the availability as “degraded”:

 

[sysadmin@controller-0 log(keystone_admin)]$ system host-list

+----+--------------+-------------+----------------+-------------+--------------+

| id | hostname     | personality | administrative | operational | availability |

+----+--------------+-------------+----------------+-------------+--------------+

| 1  | controller-0 | controller  | unlocked       | enabled     | degraded     |

+----+--------------+-------------+----------------+-------------+--------------+

 

tail -f /var/log/sysinv.log shows “Prerequisites not met”:

ysinv 2020-10-05 07:57:12.271 96246 INFO ceph_client [-] Result: {u'waiting': [], u'has_failed': False, u'state': u'success', u'is_waiting': False, u'running': [], u'failed': [], u'finished': [{u'outb': u'{"fsid":"50634828-68b2-43c4-aaa0-ebf53f6e675a","health":{"checks":{},"status":"HEALTH_OK","overall_status":"HEALTH_WARN"},"election_epoch":7,"quorum":[0],"quorum_names":["controller"],"monmap":{"epoch":1,"fsid":"50634828-68b2-43c4-aaa0-ebf53f6e675a","modified":"2020-10-05 06:53:11.461060","created":"2020-10-05 06:53:11.461060","features":{"persistent":["kraken","luminous","mimic","osdmap-prune"],"optional":[]},"mons":[{"rank":0,"name":"controller","addr":"192.168.22.101:6789/0","public_addr":"192.168.22.101:6789/0"}]},"osdmap":{"osdmap":{"epoch":10,"num_osds":1,"num_up_osds":1,"num_in_osds":1,"full":false,"nearfull":false,"num_remapped_pgs":0}},"pgmap":{"pgs_by_state":[],"num_pgs":0,"num_pools":0,"num_objects":0,"data_bytes":0,"bytes_used":112181248,"bytes_avail":1197865828352,"bytes_total":1197978009600},"fsmap":{"epoch":1,"by_rank":[]},"mgrmap":{"epoch":48,"active_gid":24132,"active_name":"controller-0","active_addr":"192.168.22.102:6804/93283","available":true,"standbys":[],"modules":["restful"],"available_modules":[{"name":"balancer","can_run":true,"error_string":""},{"name":"dashboard","can_run":false,"error_string":"Frontend assets not found: incomplete build?"},{"name":"hello","can_run":true,"error_string":""},{"name":"iostat","can_run":true,"error_string":""},{"name":"localpool","can_run":true,"error_string":""},{"name":"prometheus","can_run":true,"error_string":""},{"name":"restful","can_run":true,"error_string":""},{"name":"selftest","can_run":true,"error_string":""},{"name":"smart","can_run":true,"error_string":""},{"name":"status","can_run":true,"error_string":""},{"name":"telegraf","can_run":true,"error_string":""},{"name":"telemetry","can_run":true,"error_string":""},{"name":"zabbix","can_run":true,"error_string":""}],"services":{"restful":"https://controller-0:7999/"}},"servicemap":{"epoch":1,"modified":"0.000000","services":{}}}\n', u'outs': u'', u'command': u'status format=json'}], u'is_finished': True, u'id': u'140404196232080'}

sysinv 2020-10-05 07:57:12.284 96246 INFO sysinv.conductor.manager [-] Platform managed application platform-integ-apps: Prerequisites not met.

sysinv 2020-10-05 07:57:12.286 96246 INFO sysinv.conductor.manager [-] Platform managed application oidc-auth-apps: Prerequisites not met.

sysinv 2020-10-05 07:57:12.291 96246 INFO sysinv.api.controllers.v1.rest_api [-] GET cmd:http://localhost:30001/nfvi-plugins/v1/sw-update hdr:{'Content-type': 'application/json', 'User-Agent': 'sysinv/1.0'} payload:None

sysinv 2020-10-05 07:57:12.293 96246 INFO sysinv.ap

 

I’m not sure why availability is shown as degraded. Any help would be appreciated. Let me know if any logs are required.

 

Regards,

Sriram