Hi Martin! I created this Launchpad issue: https://bugs.launchpad.net/starlingx/+bug/1844332 and uploaded a brief description and the logs. Thank you very much in advance for the support!
Regards,
Mariano
On Tue, Sep 17, 2019 at 10:31, Chen, Haochuan Z (<haochuan.z.chen@intel.com>) wrote:
Hi Mariano
Which image are you using? You can submit a Launchpad issue and upload a tarball containing these two folders (a packing sketch follows below):
/etc/
/var/log
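A minimal sketch of packing those two folders for the upload (the tarball name here is just an example):

controller-0:~$ sudo tar -czf /tmp/ceph-osd-logs.tar.gz /etc /var/log   # collect both folders
controller-0:~$ ls -lh /tmp/ceph-osd-logs.tar.gz                        # attach this file to the Launchpad issue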
BR!
Martin, Chen
SSP, Software Engineer
021-61164330
From: Xie, Cindy
Sent: Tuesday, September 17, 2019 6:31 PM
To: Mariano Ucha <lw2dht@gmail.com>; starlingx-discuss@lists.starlingx.io; Chen, Tingjie <tingjie.chen@intel.com>; Chen, Haochuan Z <haochuan.z.chen@intel.com>
Subject: RE: [Starlingx-discuss] Error with Ceph OSD
+ Tingjie, who may be able to provide help.
From: Mariano Ucha <lw2dht@gmail.com>
Sent: Tuesday, September 17, 2019 8:15 AM
To: starlingx-discuss@lists.starlingx.io
Subject: [Starlingx-discuss] Error with Ceph OSD
Hi! My controller is now unlocked, but I see a problem with the Ceph OSD: it is not up, and I see this error in the logs.
controller-0:/var/log/ceph$ tailf ceph-osd.0.log
2019-09-16 21:04:35.099 7fae54cc51c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 191746
2019-09-16 21:04:35.126 7fae54cc51c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2019-09-16 21:05:06.616 7f29425401c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 194950
2019-09-16 21:05:06.645 7f29425401c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2019-09-16 21:05:37.588 7f60d58081c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 198899
2019-09-16 21:05:37.615 7f60d58081c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2019-09-16 21:06:09.540 7fb94c4771c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 202282
2019-09-16 21:06:09.568 7fb94c4771c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2019-09-16 21:06:41.656 7f33b6a111c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 206090
2019-09-16 21:06:41.681 7f33b6a111c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2019-09-16 21:07:13.623 7f3fe3c031c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 209145
2019-09-16 21:07:13.651 7f3fe3c031c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
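That error usually means nothing is mounted at /var/lib/ceph/osd/ceph-0, so the OSD cannot find its superblock. A quick check sketch, using standard Linux tools and the paths from the log above:

controller-0:~$ ls /var/lib/ceph/osd/ceph-0            # empty if the data partition never mounted
controller-0:~$ mountpoint /var/lib/ceph/osd/ceph-0    # reports whether a filesystem is mounted there
controller-0:~$ lsblk /dev/sdb                         # lists the partitions on the OSD disk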
The output of ceph -s:
controller-0:/var/log/ceph$ ceph -s
  cluster:
    id:     6cbe0ddd-f791-4226-8530-7a8347f12437
    health: HEALTH_WARN
            Reduced data availability: 64 pgs inactive

  services:
    mon: 1 daemons, quorum controller-0
    mgr: controller-0(active)
    osd: 1 osds: 0 up, 0 in

  data:
    pools:   1 pools, 64 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             64 unknown
I have no OSD up.
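The monitor's view of osd.0 can also be checked with the standard Ceph CLI, e.g.:

controller-0:~$ ceph osd tree               # shows osd.0 and its up/down, in/out state
controller-0:~$ ceph osd dump | grep osd.0  # flags and weight recorded for osd.0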
[sysadmin@controller-0 ceph(keystone_admin)]$ system host-stor-list controller-0
+--------------------------------------+----------+-------+------------+--------------------------------------+--------------------------------------------------------+--------------+------------------+-----------+
| uuid                                 | function | osdid | state      | idisk_uuid                           | journal_path                                           | journal_node | journal_size_gib | tier_name |
+--------------------------------------+----------+-------+------------+--------------------------------------+--------------------------------------------------------+--------------+------------------+-----------+
| c04938b2-cb80-411b-af2a-1a6b82d13df4 | osd      | 0     | configured | c99b1eb9-789e-4f55-ae65-96d3bb147224 | /dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1-part2  | /dev/sdb2    | 1                | storage   |
+--------------------------------------+----------+-------+------------+--------------------------------------+--------------------------------------------------------+--------------+------------------+-----------+
[sysadmin@controller-0 ceph(keystone_admin)]$
[sysadmin@controller-0 ceph(keystone_admin)]$ system host-disk-list controller-0
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+----------------+-------------------------------------------------+
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id      | device_path                                     |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+----------------+-------------------------------------------------+
| d04d36cf-abc2-4b0b-b911-028b9eaebf82 | /dev/sda    | 2048       | HDD         | 300.0    | 16.977        | Undetermined | PCQVU0CRH4J1HN | /dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0 |
| c99b1eb9-789e-4f55-ae65-96d3bb147224 | /dev/sdb    | 2064       | HDD         | 538.33   | 0.0           | Undetermined | PCQVU0CRH4J1HN | /dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1 |
+--------------------------------------+-------------+------------+-------------+----------+---------------+--------------+----------------+-------------------------------------------------+
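Since the stor-list above shows osd.0 configured on /dev/sdb with its journal on /dev/sdb2, one way to cross-check that the partitions were actually created (a sketch with standard tools):

controller-0:~$ sudo parted /dev/sdb print                 # partition table on the OSD disk
controller-0:~$ ls -l /dev/disk/by-path/ | grep 0:1:0:1    # confirms the by-path links resolve to sdb partitions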
I have run out of ideas. The unlock was not clean either: I had to reboot the server, and only then did it finish.
Regards,
Mariano