[Starlingx-discuss] STX AIO Duplex - CEPH OSD failures

Scott V. Kamp kamp.scott at gmail.com
Wed Oct 27 08:12:30 UTC 2021


Running STX AIO Duplex (pre-6.x), I'm seeing an issue with OSDs failing to initialize.

[sysadmin@controller-0 ~(keystone_admin)]$ system host-stor-list controller-0
+--------------------------------------+----------+-------+----------------------+--------------------------------------+-------------------------------------------------------+--------------+------------------+-----------+
| uuid                                 | function | osdid | state                | idisk_uuid                           | journal_path                                          | journal_node | journal_size_gib | tier_name |
+--------------------------------------+----------+-------+----------------------+--------------------------------------+-------------------------------------------------------+--------------+------------------+-----------+
| 0dccc3f3-0b94-4dbd-b624-cbe6f58c92e5 | osd      | 4     | configuration-failed | b963cdf4-30bf-4de8-bc1b-b5c70c9aa857 | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:5:0-part2 | /dev/sdf2    | 1                | storage   |
| 1fc2980b-7a46-494e-a6c8-ed9cac528194 | osd      | 1     | configured           | f78215be-3831-4902-8b10-9fba40d959a3 | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:2:0-part2 | /dev/sdc2    | 1                | storage   |
| 2e9f21fd-329b-4d07-9b45-7a784f235435 | osd      | 0     | configured           | 2a40e763-4704-4df7-b1df-763e4bef29f7 | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:1:0-part2 | /dev/sdb2    | 1                | storage   |
| 50e74e0a-04b8-48d1-898d-9fefda97bcfc | osd      | 2     | configuration-failed | 27b18bec-4586-4fc9-86aa-f9b9b2522c9e | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:3:0-part2 | /dev/sdd2    | 1                | storage   |
| b1336318-0a8a-41f8-b3b7-737d01373e36 | osd      | 3     | configuration-failed | b61c26b5-c580-4a87-b74f-a84ebfe978d7 | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:4:0-part2 | /dev/sde2    | 1                | storage   |
+--------------------------------------+----------+-------+----------------------+--------------------------------------+-------------------------------------------------------+--------------+------------------+-----------+

[sysadmin@controller-0 ~(keystone_admin)]$ system host-stor-list controller-1
+--------------------------------------+----------+-------+----------------------+--------------------------------------+-------------------------------------------------------+--------------+------------------+-----------+
| uuid                                 | function | osdid | state                | idisk_uuid                           | journal_path                                          | journal_node | journal_size_gib | tier_name |
+--------------------------------------+----------+-------+----------------------+--------------------------------------+-------------------------------------------------------+--------------+------------------+-----------+
| 0def8209-fec6-46af-8f3a-1f3b8c952436 | osd      | 6     | configuration-failed | 94a3909b-787d-4986-902d-df1ca02e96cc | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:2:0-part2 | /dev/sdc2    | 1                | storage   |
| 687e6921-1d3a-41a5-93ee-ca483f591fff | osd      | 7     | configuration-failed | 46ac4b95-e00d-4618-ab40-ff8752fb480b | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:3:0-part2 | /dev/sdd2    | 1                | storage   |
| 7e14cfae-d165-47e7-a453-5ec444630276 | osd      | 8     | configuration-failed | 27a35eee-6186-46d5-b72f-6e010b66265d | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:4:0-part2 | /dev/sde2    | 1                | storage   |
| 985e14de-04cd-4f6f-84b7-69a73a408ea6 | osd      | 5     | configured           | 944eec29-037c-46ed-8b6d-283d4c9cb2cb | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:1:0-part2 | /dev/sdb2    | 1                | storage   |
| d6547ffc-5724-486d-a2ac-9becea58b7b7 | osd      | 9     | configuration-failed | 607ad8b1-6134-428c-b7ee-b1e926a4ce42 | /dev/disk/by-path/pci-0000:02:00.0-scsi-0:2:5:0-part2 | /dev/sdf2    | 1                | storage   |
+--------------------------------------+----------+-------+----------------------+--------------------------------------+-------------------------------------------------------+--------------+------------------+-----------+
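
The configuration-failed state presumably comes from sysinv, so my plan was to dump one of the failed stor records and grep the sysinv log for its uuid. This is just what I intend to try, taking the uuid for osd 4 from the controller-0 table above, and assuming host-stor-show takes a bare stor uuid and that the log lives at the default /var/log/sysinv.log:

[sysadmin@controller-0 ~(keystone_admin)]$ system host-stor-show 0dccc3f3-0b94-4dbd-b624-cbe6f58c92e5   # full record for failed osd 4
[sysadmin@controller-0 ~(keystone_admin)]$ sudo grep -i 0dccc3f3 /var/log/sysinv.log                    # any sysinv errors mentioning that stor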


Then both nodes go degraded, and fm alarm-list reports:

+----------+--------------------------------------------------------------------------------+-----------------------+----------+----------------+
| Alarm ID | Reason Text                                                                    | Entity ID             | Severity | Time Stamp     |
+----------+--------------------------------------------------------------------------------+-----------------------+----------+----------------+
| 200.006  | controller-1 is degraded due to the failure of its 'ceph (osd.6, osd.7, osd.8 | host=controller-1.    | major    | 2021-10-27T01: |
|          | , )' process. Auto recovery of this major process is in progress.             | process=ceph (osd.6,  |          | 55:30.227065   |
|          |                                                                                | osd.7, osd.8, )       |          |                |
| 200.006  | controller-0 is degraded due to the failure of its 'ceph (osd.2, osd.3, osd.4 | host=controller-0.    | major    | 2021-10-27T01: |
|          | , )' process. Auto recovery of this major process is in progress.             | process=ceph (osd.2,  |          | 53:42.734229   |
|          |                                                                                | osd.3, osd.4, )       |          |                |
+----------+--------------------------------------------------------------------------------+-----------------------+----------+----------------+


So how can I see what's actually happening, watch the relevant log, or fix/recover the failed OSDs?
And oddly, ceph seems to believe it has more OSDs in a good state than STX does: STX reports six of the ten as configuration-failed, yet ceph shows 8 of 10 up and in.

There are 5 OSDs per node, 10 total.

[sysadmin@controller-0 ~(keystone_admin)]$ ceph status
   cluster:
     id:     1bb58c2a-6c22-422d-a9b7-e5deea698a28
     health: HEALTH_OK
  
   services:
     mon: 1 daemons, quorum controller
     mgr: controller-0(active), standbys: controller-1
     mds: kube-cephfs-1/1/1 up  {0=controller-0=up:active}, 1 up:standby
     osd: 10 osds: 8 up, 8 in
  
   data:
     pools:   3 pools, 192 pgs
     objects: 22  objects, 2.2 KiB
     usage:   862 MiB used, 7.3 TiB / 7.3 TiB avail
     pgs:     192 active+clean
  
[sysadmin@controller-0 ~(keystone_admin)]$ ceph osd status
+----+--------------+-------+-------+--------+---------+--------+---------+------------+
| id |     host     |  used | avail | wr ops | wr data | rd ops | rd data |   state    |
+----+--------------+-------+-------+--------+---------+--------+---------+------------+
| 0  | controller-0 |  107M |  929G |    0   |     0   |    0   |     0   | exists,up  |
| 1  | controller-0 |  107M |  929G |    0   |     0   |    0   |     0   | exists,up  |
| 2  |              |    0  |    0  |    0   |     0   |    0   |     0   | exists,new |
| 3  | controller-0 |  107M |  929G |    0   |     0   |    0   |     0   | exists,up  |
| 4  | controller-0 |  107M |  929G |    0   |     0   |    0   |     0   | exists,up  |
| 5  | controller-1 |  107M |  929G |    0   |     0   |    0   |     0   | exists,up  |
| 6  | controller-1 |  107M |  929G |    0   |     0   |    0   |     0   | exists,up  |
| 7  | controller-1 |  107M |  929G |    0   |     0   |    0   |     0   | exists,up  |
| 8  |              |    0  |    0  |    0   |     0   |    0   |     0   | exists,new |
| 9  | controller-1 |  107M |  929G |    0   |     0   |    0   |     0   | exists,up  |
+----+--------------+-------+-------+--------+---------+--------+---------+------------+
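
In case it helps, here is what I was planning to check next on the ceph side; the log paths are assumed from a default StarlingX/Ceph layout, so correct me if they are wrong:

[sysadmin@controller-0 ~(keystone_admin)]$ ceph osd tree                                   # which OSDs are down/new and where CRUSH placed them
[sysadmin@controller-0 ~(keystone_admin)]$ sudo tail -n 100 /var/log/ceph/ceph-osd.2.log   # per-daemon log for one of the failed osdids
[sysadmin@controller-0 ~(keystone_admin)]$ sudo grep -ri osd /var/log/puppet/ | tail       # puppet is what applies the OSD config here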





