[Starlingx-discuss] [Containers] Set Ceph pool replication on Simplex
Perez Carranza, Jose
jose.perez.carranza at intel.com
Tue Feb 26 21:27:59 UTC 2019
Hi Bob
I'm using an ISO built from the f/stein branch:
http://mirror.starlingx.cengn.ca/mirror/starlingx/f/stein/centos/20190225T191350Z/
Regards,
José
> -----Original Message-----
> From: Church, Robert [mailto:Robert.Church at windriver.com]
> Sent: Tuesday, February 26, 2019 2:39 PM
> To: Perez Carranza, Jose <jose.perez.carranza at intel.com>; starlingx-discuss at lists.starlingx.io
> Subject: Re: [Starlingx-discuss] [Containers] Set Ceph pool replication on Simplex
>
> Hi José,
>
> I believe that this is the current behavior on the master branch. Those pools
> are created by the rados-gw.
>
> In f/stein, there is a CronJob installed in the cluster to audit and fix this
> condition.
>
> See https://git.starlingx.io/cgit/stx-config/commit/?h=f/stein&id=754f49a3575b78517327c3a0a7556cc25de6a18b
>
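> A minimal sketch of what such an audit amounts to (hypothetical logic;
> the actual CronJob is defined in the commit above), assuming an AIO-SX
> target replication of 1:
>
>     for pool in $(ceph osd pool ls); do
>         size=$(ceph osd pool get "$pool" size | awk '{print $2}')
>         [ "$size" -ne 1 ] && ceph osd pool set "$pool" size 1
>     done
>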
> Feel free to make the wiki change but please add a note that this is for the
> master branch only. After cut-over (when we merge f/stein into master), we'll
> need to update it again.
>
> Bob
>
> On 2/26/19, 2:08 PM, "Perez Carranza, Jose" <jose.perez.carranza at intel.com> wrote:
>
> Hi
>
> Today while I was setting up a Simplex configuration with support for
> containers [1], I noticed that when running the section "Set Ceph pool
> replication (AIO-SX only)" only one pool is listed, but after unlocking the
> controller another 4 pools are listed, and hence `ceph -s` shows a warning.
> This is solved by running the osd pool set command again. My question is
> whether this behavior is expected, so that I can update the wiki accordingly.
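>
> For reference, the manual fix amounts to re-applying the replication
> setting to every listed pool, e.g. (a sketch, assuming the AIO-SX
> replication factor of 1 from the wiki):
>
>     ceph osd pool ls | xargs -i ceph osd pool set {} size 1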
>
>
> ===================================
> - Before Unlock
>
> controller-0:~$ ceph osd pool ls
> rbd
>
>
> - After Unlock
>
> [wrsroot@controller-0 ~(keystone_admin)]$ ceph osd pool ls
> rbd
> .rgw.root
> default.rgw.control
> default.rgw.data.root
> default.rgw.gc
> default.rgw.log
>
> [wrsroot@controller-0 ~(keystone_admin)]$ ceph -s
>     cluster 783007cd-8d65-4e79-8905-b22262326a7c
>      health HEALTH_WARN
>             320 pgs degraded
>             64 pgs stuck unclean
>             320 pgs undersized
>             recovery 1116/2232 objects degraded (50.000%)
>      monmap e1: 1 mons at {controller-0=192.168.204.3:6789/0}
>             election epoch 4, quorum 0 controller-0
>      osdmap e16: 1 osds: 1 up, 1 in
>             flags sortbitwise,require_jewel_osds
>       pgmap v19: 384 pgs, 6 pools, 1588 bytes data, 1116 objects
>             43308 kB used, 101283 MB / 101325 MB avail
>             1116/2232 objects degraded (50.000%)
>                  320 active+undersized+degraded
>                   64 active+clean
>
> ===================================
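>
> (The 50.000% figure is consistent with the four new rgw pools keeping a
> replicated size of 2 on a single OSD, so half of the desired object
> replicas cannot be placed. The per-pool replication can be confirmed
> with:
>
>     ceph osd pool ls detail
>
> which lists the replicated size of each pool.)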
>
>
>
> [1] https://wiki.openstack.org/wiki/StarlingX/Containers/Installation
>
> Regards,
> José
>
>