[Starlingx-discuss] Compute node exceeded memory threshold and went into degraded or offline state.

parkeryan(闫志杰) parkeryan at tencent.com
Tue Dec 24 07:26:30 UTC 2019


Hi, folks

I have deployed StarlingX 2.0 with 2 controller nodes and 12 compute nodes, and I have also installed the stx-openstack application. I have some questions about deploying virtual servers with OpenStack.

Here are the virtual servers that I have deployed (hypervisor summary from the Horizon dashboard). Actually, this is only part of what I wanted to deploy, because some compute nodes entered the degraded state after these virtual servers were deployed.

+------------+------+--------------+---------------+---------------+----------------+----------------------+-----------------------+-----------+
| Hostname   | Type | VCPUs (used) | VCPUs (total) | Memory (used) | Memory (total) | Local storage (used) | Local storage (total) | Instances |
+------------+------+--------------+---------------+---------------+----------------+----------------------+-----------------------+-----------+
| compute-0  | QEMU | 10           | 46            | 19.8GB        | 127.9GB        | 50GB                 | 265GB                 | 5         |
| compute-1  | QEMU | 10           | 46            | 19.8GB        | 127.9GB        | 50GB                 | 265GB                 | 5         |
| compute-2  | QEMU | 12           | 54            | 21.8GB        | 127.9GB        | 60GB                 | 265GB                 | 6         |
| compute-3  | QEMU | 8            | 54            | 17.8GB        | 127.9GB        | 40GB                 | 265GB                 | 4         |
| compute-4  | QEMU | 10           | 54            | 19.8GB        | 127.9GB        | 50GB                 | 265GB                 | 5         |
| compute-5  | QEMU | 8            | 54            | 17.8GB        | 127.9GB        | 40GB                 | 265GB                 | 4         |
| compute-6  | QEMU | 12           | 46            | 21.8GB        | 127.9GB        | 60GB                 | 265GB                 | 6         |
| compute-7  | QEMU | 14           | 54            | 23.8GB        | 127.9GB        | 70GB                 | 265GB                 | 7         |
| compute-8  | QEMU | 8            | 54            | 17.8GB        | 127.9GB        | 40GB                 | 265GB                 | 4         |
| compute-10 | QEMU | 6            | 54            | 15.8GB        | 127.9GB        | 30GB                 | 265GB                 | 3         |
| compute-11 | QEMU | 12           | 54            | 21.8GB        | 127.9GB        | 60GB                 | 265GB                 | 6         |
+------------+------+--------------+---------------+---------------+----------------+----------------------+-----------------------+-----------+
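For completeness, I believe the same summary can be pulled from the CLI instead of the Horizon dashboard; a minimal sketch with standard python-openstackclient commands (not verified against this exact release):

    openstack hypervisor stats show        # aggregate vCPU / memory / disk usage across all hypervisors
    openstack hypervisor list --long       # per-hypervisor vCPUs and memory, used vs. total
    openstack hypervisor show compute-0    # details for a single hypervisor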



[sysadmin@controller-0 ~(keystone_admin)]$ system host-list
+----+--------------+-------------+----------------+-------------+--------------+
| id | hostname     | personality | administrative | operational | availability |
+----+--------------+-------------+----------------+-------------+--------------+
| 1  | controller-0 | controller  | unlocked       | enabled     | available    |
| 18 | controller-1 | controller  | unlocked       | enabled     | available    |
| 19 | compute-0    | worker      | unlocked       | enabled     | degraded     |
| 20 | compute-1    | worker      | unlocked       | enabled     | degraded     |
| 21 | compute-2    | worker      | unlocked       | enabled     | degraded     |
| 22 | compute-3    | worker      | unlocked       | enabled     | available    |
| 23 | compute-4    | worker      | unlocked       | enabled     | available    |
| 24 | compute-5    | worker      | unlocked       | enabled     | available    |
| 25 | compute-6    | worker      | unlocked       | enabled     | degraded     |
| 26 | compute-7    | worker      | unlocked       | enabled     | degraded     |
| 27 | compute-8    | worker      | unlocked       | enabled     | available    |
| 28 | compute-9    | worker      | locked         | disabled    | online       |
| 29 | compute-10   | worker      | unlocked       | enabled     | available    |
| 30 | compute-11   | worker      | unlocked       | enabled     | available    |
+----+--------------+-------------+----------------+-------------+--------------+
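For reference, the per-host state and the overall alarm counts can also be checked from the controller; a minimal sketch, assuming the standard sysinv and fm CLIs:

    source /etc/platform/openrc     # load keystone_admin credentials
    system host-show compute-1      # administrative/operational/availability state and current task
    fm alarm-summary                # count of active alarms by severity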


From the fm alarm list, I can see that this happened because host platform memory exceeded its threshold.

[sysadmin@controller-0 ~(keystone_admin)]$ fm alarm-list
+----------+----------------------------------------------------------------------------------------------------------------------------------+-------------------------------------+----------+-------------------------+
| Alarm ID | Reason Text                                                                                                                      | Entity ID                           | Severity | Time Stamp              |
+----------+----------------------------------------------------------------------------------------------------------------------------------+-------------------------------------+----------+-------------------------+
| 100.103  | Platform Memory threshold exceeded ; threshold 90.00%, actual 98.55%                                                             | host=compute-1.numa=node1           | critical | 2019-12-24T06:56:44.    |
|          |                                                                                                                                  |                                     |          | 392673                  |
|          |                                                                                                                                  |                                     |          |                         |
| 100.103  | Platform Memory threshold exceeded ; threshold 90.00%, actual 96.63%                                                             | host=compute-0.numa=node1           | critical | 2019-12-24T06:56:39.    |
|          |                                                                                                                                  |                                     |          | 336593                  |
|          |                                                                                                                                  |                                     |          |                         |
| 100.103  | Platform Memory threshold exceeded ; threshold 80.00%, actual 80.70%                                                             | host=compute-4.numa=node1           | major    | 2019-12-24T06:52:38.    |
|          |                                                                                                                                  |                                     |          | 287378                  |
|          |                                                                                                                                  |                                     |          |                         |
| 100.103  | Platform Memory threshold exceeded ; threshold 90.00%, actual 98.76%                                                             | host=compute-7.numa=node1           | critical | 2019-12-24T06:39:41.    |
|          |                                                                                                                                  |                                     |          | 186485                  |
|          |                                                                                                                                  |                                     |          |                         |
| 100.103  | Platform Memory threshold exceeded ; threshold 90.00%, actual 97.07%                                                             | host=compute-2.numa=node1           | critical | 2019-12-24T06:39:29.    |
|          |                                                                                                                                  |                                     |          | 700993                  |
|          |                                                                                                                                  |                                     |          |                         |
| 100.103  | Platform Memory threshold exceeded ; threshold 90.00%, actual 98.77%                                                             | host=compute-6.numa=node1           | critical | 2019-12-24T06:39:15.    |
|          |                                                                                                                                  |                                     |          | 864868                  |
|          |                                                                                                                                  |                                     |          |                         |
| 200.006  | compute-9 is degraded due to the failure of its 'pci-irq-affinity-agent' process. Auto recovery of this major process is in      | host=compute-9.process=pci-irq-     | major    | 2019-12-23T07:00:11.    |
|          | progress.                                                                                                                        | affinity-agent                      |          | 942609                  |
|          |                                                                                                                                  |                                     |          |                         |
| 200.006  | compute-9 critical 'kubelet' process has failed and could not be auto-recovered gracefully. Auto-recovery progression by host    | host=compute-9.process=kubelet      | critical | 2019-12-23T03:50:02.    |
|          | reboot is required and in progress. Manual Lock and Unlock may be required if auto-recovery is unsuccessful.                     |                                     |          | 934247                  |
|          |                                                                                                                                  |                                     |          |                         |
| 200.001  | compute-9 was administratively locked to take it out-of-service.                                                                 | host=compute-9                      | warning  | 2019-12-23T03:46:04.    |
|          |                                                                                                                                  |                                     |          | 527208                  |
|          |                                                                                                                                  |                                     |          |                         |
+----------+----------------------------------------------------------------------------------------------------------------------------------+-------------------------------------+----------+-------------------------+
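As far as I understand, alarm 100.103 is raised against platform memory on a specific NUMA node (numa=node1 in the entries above), so the per-NUMA counters can be checked directly on the affected compute host; a minimal sketch using plain Linux interfaces:

    # on the affected compute node (e.g. via ssh from the controller)
    grep -E 'MemTotal|MemFree|MemUsed' /sys/devices/system/node/node1/meminfo
    numastat -m     # per-NUMA memory breakdown, if numactl/numastat is installed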

My question is: what exhausted the memory? According to the openstack and system commands below, there is still plenty of available memory that can be allocated.

controller-0:~/sow$ openstack host show compute-0
+-----------+----------------------------------+-----+-----------+---------+
| Host      | Project                          | CPU | Memory MB | Disk GB |
+-----------+----------------------------------+-----+-----------+---------+
| compute-0 | (total)                          |  46 |    130978 |     265 |
| compute-0 | (used_now)                       |  10 |     20240 |      50 |
| compute-0 | (used_max)                       |  10 |     10240 |      50 |
| compute-0 | 943ada3993eb4e9bada5e9eac3aadeb0 |  10 |     10240 |      50 |
+-----------+----------------------------------+-----+-----------+---------+
controller-0:~/sow$ openstack host show compute-1
+-----------+----------------------------------+-----+-----------+---------+
| Host      | Project                          | CPU | Memory MB | Disk GB |
+-----------+----------------------------------+-----+-----------+---------+
| compute-1 | (total)                          |  46 |    130978 |     265 |
| compute-1 | (used_now)                       |  10 |     20240 |      50 |
| compute-1 | (used_max)                       |  10 |     10240 |      50 |
| compute-1 | 943ada3993eb4e9bada5e9eac3aadeb0 |  10 |     10240 |      50 |
+-----------+----------------------------------+-----+-----------+---------+

[sysadmin@controller-0 ~(keystone_admin)]$ system host-memory-list compute-0
+-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+
| processor | mem_tot | mem_platfo | mem_ava | hugepages(hp)_ | vs_hp_ | vs_hp_ | vs_hp_ | vs_hp | app_total_ | app_hp_total_2M | app_hp_avail_2M | app_hp_pending_2M | app_hp_total_1G | app_hp_avail_1G | app_hp_pending_1G | app_hp_use_1G |
|           | al(MiB) | rm(MiB)    | il(MiB) | configured     | size(M | total  | avail  | _reqd | 4K         |                 |                 |                   |                 |                 |                   |               |
|           |         |            |         |                | iB)    |        |        |       |            |                 |                 |                   |                 |                 |                   |               |
+-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+
| 0         | 57442   | 8000       | 57442   | True           | 1024   | 0      | 0      | None  | 1470976    | 25848           | 25848           | None              | 0               | 0               | None              | True          |
| 1         | 63536   | 2000       | 63536   | True           | 1024   | 0      | 0      | None  | 1626624    | 28591           | 28591           | None              | 0               | 0               | None              | True          |
+-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+
[sysadmin@controller-0 ~(keystone_admin)]$ system host-memory-list compute-1
+-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+
| processor | mem_tot | mem_platfo | mem_ava | hugepages(hp)_ | vs_hp_ | vs_hp_ | vs_hp_ | vs_hp | app_total_ | app_hp_total_2M | app_hp_avail_2M | app_hp_pending_2M | app_hp_total_1G | app_hp_avail_1G | app_hp_pending_1G | app_hp_use_1G |
|           | al(MiB) | rm(MiB)    | il(MiB) | configured     | size(M | total  | avail  | _reqd | 4K         |                 |                 |                   |                 |                 |                   |               |
|           |         |            |         |                | iB)    |        |        |       |            |                 |                 |                   |                 |                 |                   |               |
+-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+
| 0         | 57442   | 8000       | 57442   | True           | 1024   | 0      | 0      | None  | 1470976    | 25848           | 25848           | None              | 0               | 0               | None              | True          |
| 1         | 63536   | 2000       | 63536   | True           | 1024   | 0      | 0      | None  | 1626624    | 28591           | 28591           | None              | 0               | 0               | None              | True          |
+-----------+---------+------------+---------+----------------+--------+--------+--------+-------+------------+-----------------+-----------------+-------------------+-----------------+-----------------+-------------------+---------------+
[sysadmin@controller-0 ~(keystone_admin)]$
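Since the openstack and system accounting above still shows free memory, my guess is that something on the host (platform) side, rather than the VMs, is consuming it. A minimal sketch of what could be checked on a degraded node with plain Linux tools (the kubectl line assumes kubectl is configured on the controller):

    free -m                                                      # overall used/free/available memory on the host
    ps aux --sort=-rss | head -n 20                              # processes ranked by resident memory
    grep -E 'MemAvailable|HugePages_Total|HugePages_Free' /proc/meminfo
    kubectl describe node compute-1 | grep -A 10 'Allocated resources'   # pod memory requests/limits on that node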


If I keep deploying more virtual servers, the compute nodes go to the offline state and then to the booting state. Such a compute node can become corrupted, and sometimes I have to reinstall the stx-openstack application to bring it back.
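If the platform processes genuinely need more memory, my understanding from the sysinv CLI help is that the platform reserved memory per NUMA node can be raised with host-memory-modify while the host is locked; a rough sketch only, since the -m flag and the example value of 12000 MiB are my assumptions and I have not verified them:

    system host-lock compute-1
    system host-memory-modify -m 12000 compute-1 1   # raise platform reserved memory on NUMA node 1 (example value)
    system host-unlock compute-1

Please correct me if this is not the right way to adjust the platform memory reservation.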


Best regards
parkeryan
