hi,
my Ceph cluster has a health warning:
[sysadmin@controller-0 ~(keystone_admin)]$ ceph -s
  cluster:
    id:     2d6cb518-420e-4a97-919a-f42b808f2049
    health: HEALTH_WARN
            2 pools have many more objects per pg than average

  services:
    mon: 3 daemons, quorum controller-0,controller-1,storage-0
    mgr: controller-0(active), standbys: controller-1
    osd: 30 osds: 30 up, 30 in

  data:
    pools:   5 pools, 600 pgs
    objects: 97.12 k objects, 596 GiB
    usage:   1.7 TiB used, 107 TiB / 109 TiB avail
    pgs:     600 active+clean

  io:
    client:  25 MiB/s rd, 36 MiB/s wr, 169 op/s rd, 104 op/s wr
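
in case it is relevant: i assume the following standard ceph commands would show which two pools are meant and the pg_num per pool (i have not pasted that output here):

# show which pools trigger the "many more objects per pg than average" warning
ceph health detail
# per-pool details, including pg_num, to compare against the object counts below
ceph osd pool ls detail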
after checking "ceph df", I can see a huge number of objects:
[sysadmin@controller-0 ~(keystone_admin)]$ ceph df
GLOBAL:
    SIZE        AVAIL       RAW USED     %RAW USED
    109 TiB     107 TiB      1.7 TiB          1.58
POOLS:
    NAME               ID     USED        %USED     MAX AVAIL     OBJECTS
    kube-rbd            1     1.5 GiB         0        33 TiB         451
    images              2     437 GiB      1.28        33 TiB       55950
    cinder.backups      3         0 B         0        33 TiB           0
    cinder-volumes      4     151 GiB      0.45        33 TiB       39780
    ephemeral           5         0 B         0        33 TiB           0
is this a normal value for roughly 12 images and 8 volumes? maybe there is some trash inside these pools; how can I run a cleanup?
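
for reference, these are roughly the commands i would use to cross-check the numbers (standard rbd/rados CLI, pool names taken from the output above, output not included here):

# count the rbd images / volumes behind those pools
rbd ls -p images | wc -l
rbd ls -p cinder-volumes | wc -l
# count / peek at the raw objects in a pool
rados -p images ls | wc -l
rados -p images ls | head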
greez & thx,
volker.