[Starlingx-discuss] Drdb fails for AIO Simplex after unlock controller-0

胡天昊 hu.tianhao at 99cloud.net
Sun Feb 16 15:19:10 UTC 2020


 I didn't reboot the node until the unlock.
And this is my disk information:
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 0 600G 0 disk
├─sda1 8:1 0 1M 0 part
├─sda2 8:2 0 500M 0 part /boot
├─sda3 8:3 0 19.5G 0 part /
├─sda4 8:4 0 229G 0 part
│ ├─cgts--vg-scratch--lv 253:0 0 8G 0 lvm /scratch
│ ├─cgts--vg-log--lv 253:1 0 7.8G 0 lvm /var/log
│ ├─cgts--vg-extension--lv 253:2 0 1G 0 lvm
│ │ └─drbd5 147:5 0 1024M 1 disk
│ ├─cgts--vg-pgsql--lv 253:3 0 20G 0 lvm
│ ├─cgts--vg-docker--lv 253:4 0 30G 0 lvm /var/lib/docker
│ ├─cgts--vg-kubelet--lv 253:5 0 10G 0 lvm /var/lib/kubelet
│ ├─cgts--vg-etcd--lv 253:6 0 5G 0 lvm
│ │ └─drbd7 147:7 0 5G 1 disk
│ ├─cgts--vg-backup--lv 253:7 0 25G 0 lvm /opt/backups
│ ├─cgts--vg-dockerdistribution--lv 253:8 0 16G 0 lvm
│ │ └─drbd8 147:8 0 16G 1 disk
│ ├─cgts--vg-rabbit--lv 253:9 0 2G 0 lvm
│ │ └─drbd1 147:1 0 2G 1 disk
│ ├─cgts--vg-platform--lv 253:10 0 10G 0 lvm
│ │ └─drbd2 147:2 0 10G 1 disk
│ └─cgts--vg-ceph--mon--lv 253:11 0 20G 0 lvm /var/lib/ceph/mon
└─sda5 8:5 0 34G 0 part
sdb 8:16 0 200G 0 disk
├─sdb1 8:17 0 199G 0 part /var/lib/ceph/osd/ceph-0
└─sdb2 8:18 0 1G 0 part
sdc 8:32 0 200G 0 disk
sr0 11:0 1 1.9G 0 rom
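
In this layout the DRBD resources are backed by logical volumes in cgts-vg, so a quick cross-check is to see which backing device drbd-pgsql is configured with and whether that LV already carries a filesystem. The commands below are a minimal sketch; the LV path is taken from the drbdmeta error later in this thread, so treat it as an illustration rather than an official StarlingX procedure:

# show the backing device and metadata mode configured for the drbd-pgsql resource
drbdadm dump drbd-pgsql

# check whether the backing LV already has a filesystem signature on it
blkid /dev/cgts-vg/pgsql-lv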

From: "Penney, Don" <Don.Penney at windriver.com>
Date: 2020-02-14 01:56:48
To: "胡天昊" <hu.tianhao at 99cloud.net>, starlingx-discuss <starlingx-discuss at lists.starlingx.io>
Subject: RE: [Starlingx-discuss] Drdb fails for AIO Simplex after unlock controller-0
You’re not rebooting the node after the initial ansible playbooks before you do the config and unlock, are you?
 
How big are your disks?
 
 
From: 胡天昊 [mailto:hu.tianhao at 99cloud.net]
 Sent: Monday, February 10, 2020 3:30 AM
 To: starlingx-discuss
 Subject: [Starlingx-discuss] Drdb fails for AIO Simplex after unlock controller-0
 
Hi all,

 

When I install AIO Simplex following this guide (https://docs.starlingx.io/deploy_install_guides/r3_release/virtual/aio_simplex_install_kubernetes.html), DRBD fails after unlocking controller-0. The following is part of the log (/var/log/puppet/latest/puppet.log):

 

2020-01-16T09:35:07.689 Debug: 2020-01-16 09:35:07 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-prepare-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/unless: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5707: UserWarning:

2020-01-16T09:35:07.732 Debug: 2020-01-16 09:35:07 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-prepare-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/unless: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5739: UserWarning:

2020-01-16T09:36:15.967 Notice: 2020-01-16 09:36:15 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-prepare-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/returns: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5707: UserWarning:

2020-01-16T09:36:16.507 Notice: 2020-01-16 09:36:16 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-prepare-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/returns: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5739: UserWarning:

2020-01-16T09:36:20.485 Notice: 2020-01-16 09:36:20 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-activate-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/returns: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5707: UserWarning:

2020-01-16T09:36:20.568 Notice: 2020-01-16 09:36:20 +0000 /Stage[main]/Platform::Ceph::Osds/Platform_ceph_osd[stor-1]/Ceph::Osd[/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/Exec[ceph-osd-activate-/dev/disk/by-path/pci-0000:00:1f.2-ata-2.0]/returns: /usr/lib/python2.7/site-packages/ceph_disk/main.py:5739: UserWarning:

2020-01-16T09:36:34.615 Error: 2020-01-16 09:36:34 +0000 yes yes | drbdadm create-md drbd-pgsql -W--peer-max-bio-size=128k returned 40 instead of one of [0]

2020-01-16T09:36:34.843 Error: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Drbd::Pgsql/Platform::Drbd::Filesystem[drbd-pgsql]/Drbd::Resource[drbd-pgsql]/Drbd::Resource::Enable[drbd-pgsql]/Drbd::Resource::Up[drbd-pgsql]/Exec[initialize DRBD metadata for drbd-pgsql]/returns: change from notrun to 0 failed: yes yes | drbdadm create-md drbd-pgsql -W--peer-max-bio-size=128k returned 40 instead of one of [0]

2020-01-16T09:36:34.852 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Drbd::Pgsql/Platform::Drbd::Filesystem[drbd-pgsql]/Drbd::Resource[drbd-pgsql]/Drbd::Resource::Enable[drbd-pgsql]/Drbd::Resource::Up[drbd-pgsql]/Exec[enable DRBD resource drbd-pgsql]: Skipping because of failed dependencies

2020-01-16T09:36:35.052 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Drbd::Service/Service[drbd]: Skipping because of failed dependencies

2020-01-16T09:36:35.094 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Anchors/Anchor[platform::services]: Skipping because of failed dependencies

2020-01-16T09:36:35.122 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Helm/File[/opt/platform/helm_charts]: Skipping because of failed dependencies

2020-01-16T09:36:35.135 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Helm/Exec[restart lighttpd for helm]: Skipping because of failed dependencies

2020-01-16T09:36:35.164 Warning: 2020-01-16 09:36:34 +0000 /Stage[main]/Platform::Helm::Repositories/Platform::Helm::Repository[starlingx]/File[/www/pages/helm_charts/starlingx]: Skipping because of failed dependencies


 

It seems that the DRBD config fails. When I run "drbdadm create-md drbd-pgsql -W--peer-max-bio-size=128k" manually, I get the following result:

 

Move internal meta data from last-known position?
[need to type 'yes' to confirm] yes

md_offset 21474832384
al_offset 0
bm_offset 0

Found ext3 filesystem
    20971520 kB data area apparently used
           0 kB left usable by current configuration

Device size would be truncated, which
would corrupt data and result in
'access beyond end of device' errors.
You need to either
   * use external meta data (recommended)
   * shrink that filesystem first
   * zero out the device (destroy the filesystem)
Operation refused.

Command 'drbdmeta 0 v08 /dev/cgts-vg/pgsql-lv internal create-md --peer-max-bio-size=128k' terminated with exit code 40
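
Exit code 40 matches the refusal above: pgsql-lv already holds an ext3 filesystem that fills the whole 20G LV, so no space is left for DRBD's internal metadata. Of the three options drbdmeta lists, destroying the filesystem is only reasonable on a fresh install where the PostgreSQL data can be discarded; a minimal sketch of that path (an illustration, not an official StarlingX recovery procedure) would be:

# DESTROYS the ext3 signature and any data on the LV - only on a freshly installed, never-provisioned controller
wipefs -a /dev/cgts-vg/pgsql-lv

# retry metadata creation with the same options puppet used
yes yes | drbdadm create-md drbd-pgsql -W--peer-max-bio-size=128k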


 

I'd like to know how to reconfigure DRBD, or whether I set some config wrong.

 

Thanks,

Tianhao