<div dir="ltr">Hi! I have my controller now unlocked but i see a problem with ceph OSD that is not UP and i see this error in the logs.<div><br></div><div><br><div>controller-0:/var/log/ceph$ tailf ceph-osd.0.log<br></div><div>2019-09-16 21:04:35.099 7fae54cc51c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 191746<br>2019-09-16 21:04:35.126 7fae54cc51c0 -1  ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory<br>2019-09-16 21:05:06.616 7f29425401c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 194950<br>2019-09-16 21:05:06.645 7f29425401c0 -1  ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory<br>2019-09-16 21:05:37.588 7f60d58081c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 198899<br>2019-09-16 21:05:37.615 7f60d58081c0 -1  ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory<br>2019-09-16 21:06:09.540 7fb94c4771c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 202282<br>2019-09-16 21:06:09.568 7fb94c4771c0 -1  ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory<br>2019-09-16 21:06:41.656 7f33b6a111c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 206090<br>2019-09-16 21:06:41.681 7f33b6a111c0 -1  ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory<br>2019-09-16 21:07:13.623 7f3fe3c031c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 209145<br>2019-09-16 21:07:13.651 7f3fe3c031c0 -1  ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory<br></div></div><div><br></div><div>The output of ceph -s</div><div><br></div><div>controller-0:/var/log/ceph$ ceph -s<br>  cluster:<br>    id:     6cbe0ddd-f791-4226-8530-7a8347f12437<br> <b>   health: HEALTH_WARN</b><br>            Reduced data availability: 64 pgs inactive<br> <br>  services:<br>    mon: 1 daemons, quorum controller-0<br>    mgr: controller-0(active)<br> <b>   osd: 1 osds: 0 up, 0 in</b><br> <br>  data:<br>    pools:   1 pools, 64 pgs<br>    objects: 0  objects, 0 B<br>    usage:   0 B used, 0 B / 0 B avail<br>    pgs:     100.000% pgs unknown<br>             64 unknown<br> <br></div><div>I have no osd UP.</div><div><br></div><div>[sysadmin@controller-0 ceph(keystone_admin)]$ system host-stor-list controller-0<br><br>+--------------------------------------+----------+-------+------------+--------------------------------------+----------------------------------+--------------+------------------+-----------+<br>| uuid                                 | function | osdid | state      | idisk_uuid                           | journal_path                     | journal_node | journal_size_gib | tier_name |<br>+--------------------------------------+----------+-------+------------+--------------------------------------+----------------------------------+--------------+------------------+-----------+<br>| c04938b2-cb80-411b-af2a-1a6b82d13df4 | osd      | 0     | configured | c99b1eb9-789e-4f55-ae65-96d3bb147224 | /dev/disk/by-path/pci-0000:03:00 | /dev/sdb2    | 1                | storage   |<br>|                                    
The output of ceph -s:

controller-0:/var/log/ceph$ ceph -s
  cluster:
    id:     6cbe0ddd-f791-4226-8530-7a8347f12437
    health: HEALTH_WARN
            Reduced data availability: 64 pgs inactive

  services:
    mon: 1 daemons, quorum controller-0
    mgr: controller-0(active)
    osd: 1 osds: 0 up, 0 in

  data:
    pools:   1 pools, 64 pgs
    objects: 0  objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             64 unknown

I have no OSD up.

[sysadmin@controller-0 ceph(keystone_admin)]$ system host-stor-list controller-0
+--------------------------------------+----------+-------+------------+--------------------------------------+-------------------------------------------------------+--------------+------------------+-----------+
| uuid                                 | function | osdid | state      | idisk_uuid                           | journal_path                                          | journal_node | journal_size_gib | tier_name |
+--------------------------------------+----------+-------+------------+--------------------------------------+-------------------------------------------------------+--------------+------------------+-----------+
| c04938b2-cb80-411b-af2a-1a6b82d13df4 | osd      | 0     | configured | c99b1eb9-789e-4f55-ae65-96d3bb147224 | /dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1-part2 | /dev/sdb2    | 1                | storage   |
+--------------------------------------+----------+-------+------------+--------------------------------------+-------------------------------------------------------+--------------+------------------+-----------+

[sysadmin@controller-0 ceph(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+----------------+-------------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id      | device_path                                     |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+----------------+-------------------------------------------------+
| d04d36cf-abc2-4b0b-b911-028b9eaebf82 | /dev/sda    | 2048       | HDD         | 300.0    | 16.977        | Undetermined | PCQVU0CRH4J1HN | /dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0 |
| c99b1eb9-789e-4f55-ae65-96d3bb147224 | /dev/sdb    | 2064       | HDD         | 538.33   | 0.0           | Undetermined | PCQVU0CRH4J1HN | /dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+----------------+-------------------------------------------------+

I am out of ideas at this point. The unlock did not go cleanly either: I had to reboot the server and only then did it finish.

Regards,
Mariano