[Starlingx-discuss] Error with Ceph OSD
Hi! I have my controller unlocked now, but I see a problem with the Ceph OSD: it is not UP, and I see this error in the logs.

controller-0:/var/log/ceph$ tailf ceph-osd.0.log
2019-09-16 21:04:35.099 7fae54cc51c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 191746
2019-09-16 21:04:35.126 7fae54cc51c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2019-09-16 21:05:06.616 7f29425401c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 194950
2019-09-16 21:05:06.645 7f29425401c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2019-09-16 21:05:37.588 7f60d58081c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 198899
2019-09-16 21:05:37.615 7f60d58081c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2019-09-16 21:06:09.540 7fb94c4771c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 202282
2019-09-16 21:06:09.568 7fb94c4771c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2019-09-16 21:06:41.656 7f33b6a111c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 206090
2019-09-16 21:06:41.681 7f33b6a111c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory
2019-09-16 21:07:13.623 7f3fe3c031c0  0 ceph version 13.2.2 (00071a2d9e839b95f9439daaccd4677c5d15eaa6) mimic (stable), process ceph-osd, pid 209145
2019-09-16 21:07:13.651 7f3fe3c031c0 -1 ** ERROR: unable to open OSD superblock on /var/lib/ceph/osd/ceph-0: (2) No such file or directory

The output of ceph -s:

controller-0:/var/log/ceph$ ceph -s
  cluster:
    id:     6cbe0ddd-f791-4226-8530-7a8347f12437
    health: HEALTH_WARN
            Reduced data availability: 64 pgs inactive

  services:
    mon: 1 daemons, quorum controller-0
    mgr: controller-0(active)
    osd: 1 osds: 0 up, 0 in

  data:
    pools:   1 pools, 64 pgs
    objects: 0 objects, 0 B
    usage:   0 B used, 0 B / 0 B avail
    pgs:     100.000% pgs unknown
             64 unknown

I have no OSD UP.
[sysadmin@controller-0 ceph(keystone_admin)]$ system host-stor-list controller-0
| uuid                                 | function | osdid | state      | idisk_uuid                           | journal_path                                          | journal_node | journal_size_gib | tier_name |
| c04938b2-cb80-411b-af2a-1a6b82d13df4 | osd      | 0     | configured | c99b1eb9-789e-4f55-ae65-96d3bb147224 | /dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1-part2 | /dev/sdb2    | 1                | storage   |

[sysadmin@controller-0 ceph(keystone_admin)]$ system host-disk-list controller-0
| uuid                                 | device_node | device_num | device_type | size_gib | available_gib | rpm          | serial_id      | device_path                                     |
| d04d36cf-abc2-4b0b-b911-028b9eaebf82 | /dev/sda    | 2048       | HDD         | 300.0    | 16.977        | Undetermined | PCQVU0CRH4J1HN | /dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:0 |
| c99b1eb9-789e-4f55-ae65-96d3bb147224 | /dev/sdb    | 2064       | HDD         | 538.33   | 0.0           | Undetermined | PCQVU0CRH4J1HN | /dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1 |

I am out of ideas. The unlock was not clean; I had to reboot the server and then it finished.

Regards,
Mariano
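For reference, a few read-only checks that can be run alongside the log tail when an OSD stays down (a sketch only; it assumes the admin keyring works on controller-0, as it does for the ceph -s output above):

# ceph osd tree                                  (which OSDs the cluster knows about, and whether each is up/in)
# ls -l /var/lib/ceph/osd/ceph-0                 (the data directory the daemon cannot read)
# lsblk -o NAME,FSTYPE,MOUNTPOINT /dev/sdb       (whether the OSD partitions are mounted anywhere)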
+ Tingjie, who may be able to provide help.
Hi Mariano,

What's your image? You can submit a Launchpad issue, uploading a tarball with these two folders:

/etc/
/var/log

BR!

Martin, Chen
SSP, Software Engineer
021-61164330
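A minimal sketch of collecting those two directories into one tarball for the Launchpad attachment (the output filename is only an example):

# tar czf controller-0-etc-varlog.tar.gz /etc /var/log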
Hi Martin!

I created this Launchpad issue:

https://bugs.launchpad.net/starlingx/+bug/1844332

I uploaded a brief description and the logs. Thank you very much in advance for the support!

Regards,
Mariano
In the log file you uploaded:

log/puppet/2019-09-16-23-33-46_controller/puppet.log

2019-09-16T23:35:57.103 Notice: 2019-09-16 20:35:57 -0300 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1]/Exec[ceph-osd-activate-/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1]/returns: mount_activate: Failed to activate
2019-09-16T23:35:57.105 Notice: 2019-09-16 20:35:57 -0300 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1]/Exec[ceph-osd-activate-/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1]/returns: '['ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd', '--keyring', '/var/lib/ceph/bootstrap-osd/ceph.keyring', '-i', '-', 'osd', 'new', u'c04938b2-cb80-411b-af2a-1a6b82d13df4', u'0']' failed with status code 17

Could you help to check these two commands?

# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
# ls -l /var/lib/ceph/osd/ceph-0

I am still going through your uploaded log.

Thanks!

Martin, Chen
SSP, Software Engineer
021-61164330
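For context, status code 17 is EEXIST, which usually indicates that "ceph osd new" found the monitor already holds a conflicting entry for osd.0. A read-only way to check that from the monitor side (sketch, using the OSD uuid from the puppet log above):

# ceph osd dump | grep c04938b2-cb80-411b-af2a-1a6b82d13df4     (is that OSD uuid already registered?)
# ceph osd tree                                                 (does osd.0 already show up, even if down?)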
And one more command to check:

# /usr/sbin/ceph-disk list | grep -v 'unknown cluster' | grep " *$(readlink -f /dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1).*ceph data" | grep -v unprepared | grep 'osd uuid c04938b2-cb80-411b-af2a-1a6b82d13df4'
# echo $?

I think it will be 0, which means your /dev/sdb disk must have been used for Ceph before. You should erase all data on /dev/sdb1 with "dd if=/dev/zero of=/dev/sdb1", delete /dev/sdb1 and /dev/sdb2 with fdisk, and then run host-stor-add again, or shut down and reinstall to check.

BR!

Martin, Chen
SSP, Software Engineer
021-61164330
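A minimal sketch of one way to carry out that cleanup (destructive; the device names are the ones from the host-disk-list output above, so double-check them before running anything):

# dd if=/dev/zero of=/dev/sdb1 bs=1M count=100     (quickly clear the start of the old OSD data partition; writing the whole partition as suggested above also works, just slower)
# wipefs -a /dev/sdb1 /dev/sdb2                    (drop any remaining filesystem/Ceph signatures)
# sgdisk --zap-all /dev/sdb                        (remove the partition table, equivalent to deleting sdb1/sdb2 in fdisk)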
Hi Martin,

I added the outputs of all the commands to the Launchpad issue.

Yes, I was thinking the same thing: I used this server to test several StarlingX images before. I thought every reinstall would wipe the partitions and create them again.

I'll wipe the partitions, delete them and test; if that doesn't work, I'll do a fresh install.

Again, thank you very much for your help!

Regards,
Mariano
We are happy that you like StarlingX. Please check again after wiping the partitions, and contact us without any hesitation.

BR!

Martin, Chen
SSP, Software Engineer
021-61164330
Hi Martin!

I like StarlingX, but I'm going mad trying to install it, hehe. I have had no luck.

I did a dd if=/dev/zero of=/dev/sdb1 and then removed all the partitions with fdisk, but unfortunately Ceph is still not working properly. Before the unlock the server restarted about two times and had no containers running. I did another reboot and then things are "working".

I added the logs from /etc and /var/log again, plus the output of the last commands, to the Launchpad issue.

Thank you for your patience.

Regards,
Mariano
We are so happy, you like starlingx. And you check with wipe partition, and check with us with any hesitation.
BR!
Martin, Chen
SSP, Software Engineer
021-61164330
*From:* Mariano Ucha [mailto:lw2dht@gmail.com] *Sent:* Wednesday, September 18, 2019 8:01 PM *To:* Chen, Haochuan Z <haochuan.z.chen@intel.com> *Cc:* Xie, Cindy <cindy.xie@intel.com>; starlingx-discuss@lists.starlingx.io; Chen, Tingjie < tingjie.chen@intel.com> *Subject:* RE: [Starlingx-discuss] Error with Ceph OSD
Hi Martin
I added in the launchpad the outputs of all commands.
Yes I was thinking the same thing, I used this server to test several StarlingX images before. I think the every reinstall Will wipe the partition and create again.
I’ll wipe the partition and delete them and test, if not do a fresh install.
Agaian thanks you very much for your help!
Regards,
Mariano
*De: *Chen, Haochuan Z <haochuan.z.chen@intel.com> *Enviado: *miércoles, 18 de septiembre de 2019 04:13 *Para: *Mariano Ucha <lw2dht@gmail.com> *CC: *Xie, Cindy <cindy.xie@intel.com>; starlingx-discuss@lists.starlingx.io; Chen, Tingjie <tingjie.chen@intel.com> *Asunto: *RE: [Starlingx-discuss] Error with Ceph OSD
And one more command to check.
# /usr/sbin/ceph-disk list | grep -v 'unknown cluster' | grep " *$(readlink -f /dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1).*ceph data" | grep -v unprepared | grep 'osd uuid c04938b2-cb80-411b-af2a-1a6b82d13df4'
# echo $?
I think it will be 0, which means your /dev/sdb disk must be used for ceph before.
You should erase all data on /dev/sdb1 with “dd if=/dev/zero of=/dev/sdb1” and delete /dev/sdb1 and /dev/sdb2 with fdisk. And host-stor-add again or shutdown and reinstall to check.
BR!
Martin, Chen
SSP, Software Engineer
021-61164330
*From:* Chen, Haochuan Z *Sent:* Wednesday, September 18, 2019 2:40 PM *To:* 'Mariano Ucha' <lw2dht@gmail.com> *Cc:* Xie, Cindy <cindy.xie@intel.com>; starlingx-discuss@lists.starlingx.io; Chen, Tingjie < tingjie.chen@intel.com> *Subject:* RE: [Starlingx-discuss] Error with Ceph OSD
In the log file, you uploaded.
log/puppet/2019-09-16-23-33-46_controller/puppet.log
2019-09-16T23:35:57.103 ^[[mNotice: 2019-09-16 20:35:57 -0300 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1]/Exec[ceph-osd-activate-/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1]/returns: *mount_activate: Failed to activate*^[[0m
2019-09-16T23:35:57.105 ^[[mNotice: 2019-09-16 20:35:57 -0300 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1]/Exec[ceph-osd-activate-/dev/disk/by-path/pci-0000:03:00.0-scsi-0:1:0:1]/returns: '['ceph', '--cluster', 'ceph', '--name', 'client.bootstrap-osd', '--keyring', '/var/lib/ceph/bootstrap-osd/ceph.keyring', '-i', '-', 'osd', 'new', u'c04938b2-cb80-411b-af2a-1a6b82d13df4', u'0']*' failed with status code* 17^[[0m
Could you help to check these two commands?
# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
# ls /var/lib/ceph/osd/ceph-0 -l
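An aside, not from the original mail: before mounting, it may be worth confirming that /dev/sdb1 still carries a filesystem at all. A minimal check, assuming the same device and mount point as above:
# blkid /dev/sdb1                           (a ceph-disk prepared data partition typically reports an xfs filesystem)
# mount /dev/sdb1 /var/lib/ceph/osd/ceph-0
# ls -l /var/lib/ceph/osd/ceph-0            (a prepared OSD directory typically holds files such as fsid, whoami and superblock)
An empty or unmountable partition would match the “unable to open OSD superblock” error above.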
I will keep studying your uploaded log.
Thanks!
Martin, Chen
SSP, Software Engineer
021-61164330
*From:* Mariano Ucha [mailto:lw2dht@gmail.com <lw2dht@gmail.com>] *Sent:* Tuesday, September 17, 2019 10:05 PM *To:* Chen, Haochuan Z <haochuan.z.chen@intel.com> *Cc:* Xie, Cindy <cindy.xie@intel.com>; starlingx-discuss@lists.starlingx.io; Chen, Tingjie < tingjie.chen@intel.com> *Subject:* Re: [Starlingx-discuss] Error with Ceph OSD
Hi Martin!
I created this launchpad
https://bugs.launchpad.net/starlingx/+bug/1844332
I uploaded a brief description and the logs.
Thank you very much in advance for the support!
Regards,
Mariano
On Tue, Sep 17, 2019 at 10:31, Chen, Haochuan Z (<haochuan.z.chen@intel.com>) wrote:
Hi Mariano
What’s your image? You can submit a Launchpad issue, uploading a tarball with these two folders:
/etc/
/var/log
BR!
Martin, Chen
SSP, Software Engineer
021-61164330
*From:* Xie, Cindy *Sent:* Tuesday, September 17, 2019 6:31 PM *To:* Mariano Ucha <lw2dht@gmail.com>; starlingx-discuss@lists.starlingx.io; Chen, Tingjie < tingjie.chen@intel.com>; Chen, Haochuan Z <haochuan.z.chen@intel.com> *Subject:* RE: [Starlingx-discuss] Error with Ceph OSD
+ Tingjie, who may be able to provide help.
*From:* Mariano Ucha <lw2dht@gmail.com> *Sent:* Tuesday, September 17, 2019 8:15 AM *To:* starlingx-discuss@lists.starlingx.io *Subject:* [Starlingx-discuss] Error with Ceph OSD
Hi Mariano
Every time you reboot, puppet will apply the ceph osd configuration. So another way to ensure the deployment succeeds: system host-stor-delete, and use dd if=/dev/zero of=/dev/sdb. After that, press the power button to shut down directly and reinstall. Once you have installed successfully, you will be more proficient.
Thank you for your patience!
Martin, Chen
SSP, Software Engineer
021-61164330
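A minimal sketch of that suggestion (not from the original mail), using the stor uuid from the earlier host-stor-list output; sourcing /etc/platform/openrc is an assumption about the usual StarlingX admin shell setup:
# source /etc/platform/openrc                                     (load the keystone_admin credentials, if not already loaded)
# system host-stor-delete c04938b2-cb80-411b-af2a-1a6b82d13df4    (remove the OSD record from the host inventory)
# dd if=/dev/zero of=/dev/sdb bs=1M status=progress               (destroys everything on /dev/sdb)
Then power off and reinstall, as suggested above.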
Hi Martin!
Yes, I see in the puppet logs the error from trying to configure the OSD. When I execute system host-stor-delete plus the uuid I get this error:
system host-stor-delete 20640458-c15b-45cc-8ca1-1fa1203f2261
Deleting a Storage Function other than 'journal' and 'osd' in state 'configuring-on-unlock' is not supported on this setup.
Now I'm running dd if=/dev/zero of=/dev/sdb with the 2 partitions deleted. I don't know why the system keeps rebooting after the first unlock. It seems to be a component not installed correctly.
Regards,
Mariano
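As an aside (commands taken from earlier in this thread, not from this mail): the state the error message refers to is the one shown in the state column of host-stor-list, so it can be re-checked before retrying the delete:
# system host-stor-list controller-0       (check the state column for the osd entry)
# system host-stor-delete <stor uuid>      (retry once the state is no longer 'configuring-on-unlock')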
Actually, dd to erase the disk is an abrupt way. So you can also check by erasing the partitions, then "# shutdown -P 0" and reinstall.
I could help you to check the "configuring-on-unlock" issue.
Martin, Chen
SSP, Software Engineer
021-61164330
Martin,
First I'll try with this dd command and a fresh reinstallation.
Regards,
Mariano
No way, the same problem again. The server keeps rebooting a couple of times, then I have to reboot manually. After that I check Ceph and there is no osd running, and the same problem ☹
Regards,
Mariano
Hi! I found a workaround and solved this problem. I ran dd if=/dev/zero of=/dev/sdb with the filesystem unmounted, then rebooted the server. When it came back up I had to lock and unlock the host, and now it is running OK. It is a strange behaviour and I do not know why the same thing always happens.

Regards and have a good weekend,
Mariano
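Written out as a rough shell sequence, the workaround above looks something like this (a sketch only, assuming the OSD data disk is /dev/sdb and the host is controller-0 as in this thread):

# Unmount the OSD partition if it is still mounted
umount /dev/sdb1 2>/dev/null || true

# Wipe the disk so stale Ceph metadata from earlier installs is gone
# (destroys all data on /dev/sdb and can take a long time)
dd if=/dev/zero of=/dev/sdb bs=1M

reboot

# After the node is back up, with keystone_admin credentials sourced,
# lock and unlock the host so the OSD configuration is re-applied on unlock:
system host-lock controller-0
system host-unlock controller-0

# Confirm the OSD comes up
ceph -s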
Hi Mariano

Sorry, I should have told you to lock the host first. To make a change to a host, lock the host. And when you debug any issue, such as constant reboots, lock the host first and check the logs.

$ system host-lock <host name or id>

BR!

Martin, Chen
SSP, Software Engineer
021-61164330
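The debugging pattern Martin describes, written out (a sketch; the host name is assumed to be controller-0, and the puppet log path is taken from the excerpts in this thread):

# Lock the host before making changes or digging into the problem
system host-lock controller-0

# Inspect state and the most recent puppet apply while locked
system host-show controller-0
tail -n 100 /var/log/puppet/*_controller/puppet.log

# Unlock once done; configuration is re-applied on unlock
system host-unlock controller-0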
participants (3)
- Chen, Haochuan Z
- Mariano Ucha
- Xie, Cindy