[Starlingx-discuss] Discussion about StarlingX test cases in CEPH

Chen, Tingjie tingjie.chen at intel.com
Thu Apr 4 03:23:31 UTC 2019


For the StarlingX CEPH upgrade test cases we discussed, here is the file link for review: https://ethercalc.openstack.org/orb83xruwmo8
+ starlingx-discuss to collect feedback from the community...

Thanks,
Tingjie

From: Badea, Daniel [mailto:Daniel.Badea at windriver.com]
Sent: Wednesday, March 27, 2019 5:52 PM
To: Chen, Tingjie <tingjie.chen at intel.com>
Cc: Perez, Ricardo O <ricardo.o.perez at intel.com>; Cabrales, Ada <ada.cabrales at intel.com>; Xie, Cindy <cindy.xie at intel.com>; Zhu, Vivian <vivian.zhu at intel.com>; Miller, Frank <Frank.Miller at windriver.com>; Poncea, Ovidiu <Ovidiu.Poncea at windriver.com>; Jones, Bruce E <bruce.e.jones at intel.com>; Lara, Cesar <cesar.lara at intel.com>
Subject: RE: Discussion about StarlingX test cases in CEPH

Hi Tingjie,

Please note that CEPH_STOR_TIER_04 ("associate services with a new storage tier") will fail because support for multiple tiers is currently broken. There is a review in progress to fix it: https://review.openstack.org/#/c/632346/3

Best regards,
Daniel B.

From: Chen, Tingjie
Sent: Tuesday, March 26, 2019 4:10 PM
To: Perez, Ricardo O <ricardo.o.perez at intel.com>; Cabrales, Ada <ada.cabrales at intel.com>; Xie, Cindy <cindy.xie at intel.com>; Zhu, Vivian <vivian.zhu at intel.com>
Subject: RE: Discussion about StarlingX test cases in CEPH

Hi Ricardo,

For the IO-path test case, I have set up my environment and done a dry run; it needs network configuration in the VM or BM.

Steps:
1/ Make sure your VM/BM can access the external network; if it already can, skip the remaining commands in step 1.
This is also needed in the containerized configuration. Taking my VM setup as an example, suppose the IP of controller-0 (the active controller) is 10.10.10.3.
On the host:
echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
sudo iptables -t nat -A POSTROUTING -s 10.10.10.0/24 -j MASQUERADE
Also, if you use a proxy in the VM/BM, please don't forget to set it; a sketch follows below.
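A minimal proxy-setup sketch, in case it helps (the proxy address is hypothetical; replace it with your own):

# Hypothetical proxy; adjust the address/port to your environment.
export http_proxy=http://proxy.example.com:8080
export https_proxy=http://proxy.example.com:8080
export no_proxy=localhost,127.0.0.1,controller-0
# Make yum use the same proxy:
echo "proxy=http://proxy.example.com:8080" | sudo tee -a /etc/yum.conf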

2/ Install FIO and related libraries.
[wrsroot at controller-0 ~(keystone_admin)]$ sudo yum update
If you have no base repo list, just find one online and put it into /etc/yum.repos.d/ (a sketch follows after the install command).
[wrsroot at controller-0 ~(keystone_admin)]$ sudo yum install fio
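In case no repo file is present at all, a minimal sketch of one (the mirror URL and release are illustrative; match them to the CentOS release your StarlingX image is based on):

sudo tee /etc/yum.repos.d/base.repo << 'EOF'
[base]
name=CentOS Base
baseurl=http://mirror.centos.org/centos/7/os/x86_64/
gpgcheck=0
enabled=1
EOF
sudo yum clean all && sudo yum makecache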

3/ Create the pool and rbd, then run fio.
In the fio config file (rbd.fio), reference one pool and one rbd image that you created manually beforehand (creation commands are sketched after the config).
[wrsroot at controller-0 ~(keystone_admin)]$ cat rbd.fio
[global]
ioengine=rbd
clientname=admin
pool=test_pool # create a pool named test_pool before running fio
rbdname=test_rbd # create an rbd named test_rbd (1G in my example) in test_pool before running fio
invalidate=0
rw=randwrite
bs=4k

[rbd_iodepth32]
iodepth=32
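Before running fio, the pool and image referenced above must exist. A minimal sketch (PG count and image size are illustrative; older rbd clients may need the size in MB, e.g. --size 1024):

ceph osd pool create test_pool 64 64        # 64 placement groups
rbd pool init test_pool                     # tag the pool for rbd use (Luminous/Mimic); skip if unavailable
rbd create --size 1G test_pool/test_rbd     # 1 GiB image used by rbd.fio
rbd info test_pool/test_rbd                 # verify the image exists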

Then run the fio:
[wrsroot at controller-0 ~(keystone_admin)]$ fio rbd.fio
rbd_iodepth32: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=rbd, iodepth=32
fio-3.1
Starting 1 process
Jobs: 1 (f=1): [w(1)][99.6%][r=0KiB/s,w=4624KiB/s][r=0,w=1156 IOPS][eta 00m:01s]
rbd_iodepth32: (groupid=0, jobs=1): err= 0: pid=1694134: Tue Mar 26 00:22:04 2019
  write: IOPS=1112, BW=4448KiB/s (4555kB/s)(1024MiB/235722msec)
    slat (nsec): min=987, max=13491k, avg=4692.70, stdev=63898.52
    clat (usec): min=1688, max=242157, avg=28660.64, stdev=14844.91
     lat (usec): min=1705, max=242160, avg=28665.34, stdev=14845.14
    clat percentiles (msec):
     |  1.00th=[   10],  5.00th=[   13], 10.00th=[   15], 20.00th=[   18],
     | 30.00th=[   21], 40.00th=[   24], 50.00th=[   27], 60.00th=[   29],
     | 70.00th=[   33], 80.00th=[   38], 90.00th=[   45], 95.00th=[   53],
     | 99.00th=[   79], 99.50th=[   97], 99.90th=[  169], 99.95th=[  178],
     | 99.99th=[  213]
   bw (  KiB/s): min= 1416, max= 7016, per=99.96%, avg=4446.03, stdev=765.16, samples=471
   iops        : min=  354, max= 1754, avg=1111.41, stdev=191.28, samples=471
  lat (msec)   : 2=0.01%, 4=0.01%, 10=1.56%, 20=26.82%, 50=65.29%
  lat (msec)   : 100=5.89%, 250=0.44%
  cpu          : usr=0.63%, sys=0.29%, ctx=14440, majf=0, minf=8434
  IO depths    : 1=1.6%, 2=3.9%, 4=9.6%, 8=23.8%, 16=57.1%, 32=4.1%, >=64=0.0%
     submit    : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
     complete  : 0=0.0%, 4=96.3%, 8=0.2%, 16=0.3%, 32=3.1%, 64=0.0%, >=64=0.0%
     issued rwt: total=0,262144,0, short=0,0,0, dropped=0,0,0
     latency   : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
  WRITE: bw=4448KiB/s (4555kB/s), 4448KiB/s-4448KiB/s (4555kB/s-4555kB/s), io=1024MiB (1074MB), run=235722-235722msec

Disk stats (read/write):
  sda: ios=2664/10592, merge=0/2020, ticks=2333/40725, in_queue=42859, util=11.94%

And refer to the read/write flow of the CEPH internal process in the attached diagram.
[Attached diagram (image002.jpg): CEPH read/write internal I/O flow]

So it seems we are mostly aligned on the TCs; what is the next step to finally confirm them? :)
@Ricardo, will you list the new TCs and dry-run them on a 2+2+2 configuration first?

Thanks,
Tingjie

From: Perez, Ricardo O
Sent: Tuesday, March 26, 2019 5:57 AM
To: Chen, Tingjie <tingjie.chen at intel.com>; Cabrales, Ada <ada.cabrales at intel.com>; Xie, Cindy <cindy.xie at intel.com>; Zhu, Vivian <vivian.zhu at intel.com>
Subject: RE: Discussion about StarlingX test cases in CEPH

Hi Tingjie,

Thanks a lot for the shared details; please see my embedded answers below.

Regards
-Richo
From: Chen, Tingjie
Sent: Monday, March 25, 2019 8:32 AM
To: Perez, Ricardo O <ricardo.o.perez at intel.com>; Cabrales, Ada <ada.cabrales at intel.com>; Xie, Cindy <cindy.xie at intel.com>; Zhu, Vivian <vivian.zhu at intel.com>
Subject: RE: Discussion about StarlingX test cases in CEPH

Hi Ricardo,

For the test cases we proposed, it seems there are 2 items that need clarification per your comments.
1/ RESTful plugin:
This case verifies ceph-mgr via its restful API (python client). We have no automation script, and yes, the REST API is complex to exercise with manual commands.
I can provide a list of what can be verified; as a first step we can go through some GET operations, and the case passes if the information is shown normally and correctly.

[Attached diagram (image004.jpg)]
// get the user and keys.
[wrsroot at controller-0 ~(keystone_admin)]$ ceph restful list-keys
{
  "admin": "579b6a27-e019-4887-b0ce-ee2fee6c4134"
}
// get ceph-mgr service endpoint and port
[wrsroot at controller-0 ~(keystone_admin)]$ ceph mgr services
{
    "restful": "https://controller-0:5001/"
}
// get the available service link list
[wrsroot at controller-0 ~(keystone_admin)]$ curl -k https://controller-0:5001/doc
...

// for example, get detailed monitor information
[wrsroot at controller-0 ~(keystone_admin)]$ curl -k -u admin:579b6a27-e019-4887-b0ce-ee2fee6c4134 https://controller-0:5001/mon -X GET
[
    {
        "addr": "192.168.204.3:6789/0",
        "in_quorum": true,
        "leader": true,
        "name": "controller-0",
        "public_addr": "192.168.204.3:6789/0",
        "rank": 0,
        "server": "controller-0"
    },
    {
        "addr": "192.168.204.4:6789/0",
        "in_quorum": true,
        "leader": false,
        "name": "controller-1",
        "public_addr": "192.168.204.4:6789/0",
        "rank": 1,
        "server": "controller-1"
    },
    {
        "addr": "192.168.204.95:6789/0",
        "in_quorum": true,
        "leader": false,
        "name": "storage-0",
        "public_addr": "192.168.204.95:6789/0",
        "rank": 2,
        "server": "storage-0"
    }
]
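Since there is no automation script yet, a minimal sketch that loops over a few GET endpoints and checks the HTTP status (the endpoint names beyond /mon are assumptions; take the authoritative list from /doc above):

KEY=$(ceph restful list-keys | python -c 'import sys,json; print(json.load(sys.stdin)["admin"])')
for ep in mon osd pool server; do
    code=$(curl -k -s -o /dev/null -w '%{http_code}' -u admin:$KEY https://controller-0:5001/$ep)
    echo "$ep -> HTTP $code"
done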

[Perez, Ricardo O] Thanks for sharing the commands and the expected results.
2/ IO path:
The read/write test with fio using the rbd engine goes through the full CEPH stack, including librados, rbd, osd, mon and messaging. This is a comprehensive case and can also be used for performance measurement if needed in the future.
I am preparing the FIO environment in StarlingX, since the current setup does not support fio by default; more details will be provided as I make progress.
[Perez, Ricardo O] Then I believe you are going to share with us when the FIO environment is ready, as well as the steps to be executed, right?

BTW: may I ask about your plan for the StarlingX deployment?
[Perez, Ricardo O] Sure. For now, when we have to test something related to CEPH, we normally use a 2+2+2 configuration (2 controllers, 2 computes and 2 storage nodes). In this 2+2+2 config we are able to use a mix of network cards (Mellanox / Intel); if more nodes are required, we just use the cards attached to the servers by default.


Thanks,
Tingjie

From: Perez, Ricardo O
Sent: Saturday, March 23, 2019 5:49 AM
To: Chen, Tingjie <tingjie.chen at intel.com>; Cabrales, Ada <ada.cabrales at intel.com>; Xie, Cindy <cindy.xie at intel.com>; Zhu, Vivian <vivian.zhu at intel.com>
Subject: RE: Discussion about StarlingX test cases in CEPH

Hi Tingjie,

See my embedded answers below

From: Chen, Tingjie
Sent: Friday, March 22, 2019 1:58 AM
To: Cabrales, Ada <ada.cabrales at intel.com>; Perez, Ricardo O <ricardo.o.perez at intel.com>; Xie, Cindy <cindy.xie at intel.com>; Zhu, Vivian <vivian.zhu at intel.com>
Subject: Discussion about StarlingX test cases in CEPH

Hi,

Just kicking off a new thread for the discussion about the CEPH test cases.

1/ Previously Ada shared the link:
https://docs.google.com/spreadsheets/d/1O2zWn-R83Wj1SqmeUtxCP59_DsSM0UNZW0gRALnDn6w/edit?usp=sharing
We have added detailed comments on some cases (marked in purple in the spreadsheet, and with "(Tingjie)" below).
There are also questions about the test plan:
a/ When we execute the test suites, do you have a test framework/scripts, or do you use commands directly?
[Perez, Ricardo O] Currently we execute commands directly.
b/ In case of failure, how do we decide whether it is a blocking issue? Maybe we can define priorities for the test cases; what do you think?
[Perez, Ricardo O] As part of the validation conventions, a blocking issue is normally one of these things: it blocks you from enabling/disabling some feature of your software, it fails in a way that leaves no way to recover the original state, or it makes it impossible to perform some step required to enable a further feature. About priorities for the tests, sure, we can define them.

(Format below: Test ID. Test Name, then Objective / Expected behavior / Comments.)

1. CEPH_STOR_TIER_01
Objective: Ensure that a new storage tier can be created.
Expected: An additional storage tier is successfully created.
Comments: The storage-tier scenario is not used frequently, especially when using SSDs, since there is no need for a cache pool. (Tingjie)
[Perez, Ricardo O] I agree that it might not be a widely used scenario, but as the feature is there, we should test it.

2. CEPH_STOR_TIER_02
Objective: Ensure that a new storage tier can be associated with an OSD.
Expected: Storage tier is successfully associated with an OSD.

3. CEPH_STOR_TIER_03
Objective: Ensure that a new storage tier can be associated with a backend.
Expected: Storage tier can be successfully associated with a backend.

4. CEPH_STOR_TIER_04
Objective: Ensure that services can be associated with a new storage tier.
Expected: The new storage tier can be used.

5. CEPH_STOR_REP_05
Objective: Ensure that the system can be provisioned with replication factor 3.
Expected: After replication factor 3 is enabled, there are 3 copies of the data present on the system.
Comments: Performance can vary wildly among different Ceph clusters depending on the replication factor. With a replication factor of 2 you will see roughly half the write performance compared to a replication factor of 1, and the drop between factor 2 and 3 is also pretty dramatic. This is not surprising, since replication takes time and you must wait for multiple OSDs to complete a write instead of just one. How do we verify the data present: write and wait for the sync to complete between OSDs? (Tingjie)
[Perez, Ricardo O] At this point, I believe the intention of the test isn't to verify the data, just to see that the system is still functional no matter which replication factor you use. This will for sure impact performance, but that is out of the scope of the test.
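On Tingjie's question about verifying the copies, a minimal sketch of one way to check it (pool and object names are illustrative):

ceph osd pool get test_pool size                    # expect: size: 3
echo verify > /tmp/verify_obj
rados -p test_pool put verify_obj /tmp/verify_obj   # write a small test object
ceph osd map test_pool verify_obj                   # the acting set should list 3 different OSDs
rados -p test_pool rm verify_obj                    # clean up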

6. CEPH_STOR_SWI_06
Objective: Ensure that Swift can be enabled on the system.
Expected: Swift should be successfully enabled at the end of this test.

7. CEPH_STOR_PROC_07
Objective: Repeatedly kill the ceph monitor process and ensure it is restarted by the system.
Expected: The ceph monitor processes should alarm when expected, and should recover when killed.

8. CEPH_STOR_OSD_08
Objective: Repeatedly kill the ceph osd process and ensure it is restarted by the system.
Expected: The ceph osd processes should alarm when expected, and should recover when killed.
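A minimal sketch of how the kill/recover check for these two cases could be driven from the active controller (the alarm grep is illustrative; exact alarm IDs are not assumed):

pgrep -l ceph-mon                  # find the monitor process (use ceph-osd on storage nodes)
sudo pkill -9 ceph-mon             # kill it
sleep 30; ceph -s                  # health should degrade and then recover
pgrep -l ceph-mon                  # the process should have been respawned
fm alarm-list | grep -i ceph       # check that the expected alarm was raised/cleared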



9. CEPH_STOR_SCALABILITY_09
Objective: Test the basic provisioning procedure for 8-storage-node ceph systems.
Expected: The system is properly configured and functioning as expected at the end of the test.
Comments: Since 2-9x storage-node Ceph clusters are currently supported this is fine, but we have not deployed that many nodes yet; maybe dry-run first. (Tingjie)
[Perez, Ricardo O] This is how the original test was defined; for sure we can adjust. Normally we use 2 storage nodes for BM.

10. CEPH_STOR_CORE_10
Objective: Ensure that host reinstall of nodes running ceph-mon works properly on all supported configs.
Expected: Ceph should be healthy at the end of the test.
Comments: The robustness test should strip out the influence of residual data/config; propose one precondition: a clean deployment for each test. (Tingjie)
[Perez, Ricardo O] This test is just to ensure that a host can be re-installed as many times as required. If you believe such a precondition is required we can add it; however, in the real world this should work flawlessly regardless of the state of the system.

11. CEPH_STOR_CORE_11
Objective: Ensure that host delete and reprovision of nodes running ceph-mon works properly on all supported configs.
Expected: Ceph should be healthy at the end of the test.

12. CEPH_STOR_CORE_12
Objective: Ensure that semantic checks with respect to node lock work properly on nodes running ceph monitors.
Expected: Semantic checks should work as expected.
Comments: Not sure about the meaning of semantic checks around node lock. (Tingjie)
[Perez, Ricardo O] Semantic check basically means that if you lock/unlock or perform any action on a specific node, the "system" checks its current state (semantically) and allows/denies the operation depending on the defined state. Please check the detailed steps for the tests here:
https://review.openstack.org/#/c/640546/1/manual-tests/storage/storage_regression_test_plan.rst
The original test name is STOR_CORE_016.
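A minimal sketch of how the semantic checks could be exercised (the host name is illustrative; the exact rejection message is not assumed):

system host-lock storage-0     # should be rejected/warned if it would break ceph replication or mon quorum
system host-list               # node state should be unchanged after a rejected lock
system host-unlock storage-0   # if the lock was allowed, unlock again and confirm ceph recovers
ceph -s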

13. CEPH_STOR_CORE_13
Objective: Ensure that the user can provision SSD journals.
Expected: It should be possible to modify the journal configuration on the SSD disks.
Comments: Journals apply to Filestore only and were removed in Bluestore. Ceph Mimic supports Bluestore by default, but StarlingX still uses Filestore because of puppet issues; it may switch to Bluestore in the future. Just sharing the information. (Tingjie)

14. CEPH_STOR_HW_14
Objective: Ensure that the hardware disk replacement procedure for OSDs is accurate.
Expected: The system should be functional and healthy after hardware disk replacement.

15. CEPH_STOR_HW_15
Objective: Ensure that the hardware disk replacement procedure for journal disks is accurate.
Expected: The system should be functional and healthy after hardware disk replacement.

16. CEPH_STOR_DOR_16
Objective: Verify that the system recovers after a DOR (dead-office-recovery) test.
Expected: Storage system recovers after the DOR test.
Comments: Not sure about the context of the DOR test; what does dead-office-recovery mean? (Tingjie)
[Perez, Ricardo O] The context is: all nodes shut down (by a power outage or any other issue) while VMs are running on compute nodes; after everything comes back, the VMs are still able to resume ping and their attached storage is still functional.

17. CEPH_STOR_FAULT_17
Objective: Verify that the system can recover when there is a cable pull on the cluster network.
Expected: Storage system recovers after the cable pull.
Comments: Are any operations defined while the cable is pulled out? (Tingjie)
[Perez, Ricardo O] I'm not quite sure what you mean by "operations", but what we want here is to see whether, after a physical disconnection and reconnection, the system is still working in a stable way and able to continue providing services.

18. CEPH_STOR_FS_18
Objective: Verify that the sizes of the ceph pools can be modified.
Expected: It should be possible for the user to change the size of the ceph pools.
Comments: Do you mean the max size of ceph pools (setting a quota), or PGs, or a warning-log threshold? (Tingjie)
[Perez, Ricardo O] 'ceph osd pool get-quota <poolname>'
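For example, a short sketch of setting and reading a pool quota (pool name and limits are illustrative):

ceph osd pool set-quota test_pool max_bytes 10737418240   # 10 GiB
ceph osd pool set-quota test_pool max_objects 100000
ceph osd pool get-quota test_pool                          # verify both values
ceph osd pool set-quota test_pool max_bytes 0              # 0 removes the quota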

19. CEPH_STOR_OPROF_19
Objective: Validate the creation and application of storage profiles on a system.
Expected: It should be possible to apply an existing storage profile to a new node.
Comments: What is the form of a storage profile? Do you have any script or tool to create and apply the profile? (Tingjie)
[Perez, Ricardo O] A storage profile is a configuration file that is saved using Horizon, so no script is required. You can use such a profile to set up a new node with the same configuration (if required).

20. CEPH_STOR_PART_20
Objective: Validate that multiple partitions can be created and that the partition modification/deletion behaviour is correct.
Expected: Partition creation, deletion and semantic checks should work as expected.
Comments: Partitions on a disk? Do you mean deployed on bare metal, or a disk in a virtual machine? (Tingjie)
[Perez, Ricardo O] By now, BM.

21. CEPH_STOR_FS_21
Objective: Ensure that the size of ceph-mon can be increased.
Expected: The size should be increased on both controllers.
Comments: Actually we have no interface to resize ceph-mon; or do you mean the warning threshold percentage? (Tingjie)
[Perez, Ricardo O] 'system ceph-mon-modify <node> ceph_mon_gib=<value>'
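A possible usage sketch, assuming controller-0 and a 30 GiB target (the values are illustrative, and the list command may not be available in every build):

system ceph-mon-list                                  # current ceph_mon_gib per monitor, if available
system ceph-mon-modify controller-0 ceph_mon_gib=30   # the host may need a lock/unlock for the change to apply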



2/ Besides we have proposed cases with CEPH functionality interface coverage.

(Format below: TC name, Module, Commands, Description.)

ceph_status (Module: TOTAL)
Commands: ceph -s
Description: Ceph overall status and health check.
[Perez, Ricardo O] I believe this is already included in the list above.

io_path (Module: TOTAL)
Commands: fio xxx.conf
Description: Read/write test.
[Perez, Ricardo O] This is a good one; however, this will show us the IO of the disk, not CEPH by itself. I would like to hear the details about this test.

osd_add/remove/tree (Module: OSD)
Commands: ceph osd add/destroy/tree
Description: Common OSD operations, with verification after each command (see the sketch below).
[Perez, Ricardo O] These are already included in the list above.
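A minimal sketch of the kind of OSD operations and checks meant here (osd.0 is illustrative; since 'ceph osd add' is not an exact command name, out/in/tree are shown instead):

ceph osd tree          # view the OSD/host hierarchy
ceph osd out 0         # take osd.0 out; data rebalances away from it
ceph -s                # watch recovery/backfill progress
ceph osd in 0          # bring it back and confirm HEALTH_OK returns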

pool_create/modify/list (Module: Pool)
Commands: ceph osd pool create test_pool 64
          ceph osd lspools
Description: Common ceph pool operations, with verification after each command; a pool's PG count can be increased (see the sketch below).
[Perez, Ricardo O] This looks ok.
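For example, a short sketch of creating a pool, increasing its PG count and verifying (numbers are illustrative; pgp_num should follow pg_num):

ceph osd pool create test_pool 64 64
ceph osd lspools
ceph osd pool set test_pool size 3       # replication factor
ceph osd pool set test_pool pg_num 128
ceph osd pool set test_pool pgp_num 128
ceph osd pool get test_pool pg_num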

mon_operation/status (Module: MON)
Description: ceph mon operations: increase (add) / decrease (kill), and status check.
[Perez, Ricardo O] Already included.
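A minimal sketch of the status checks that could back this case:

ceph mon stat                            # quorum summary: leader and members
ceph quorum_status --format json-pretty  # detailed quorum view
ceph -s                                  # overall health while a monitor is stopped/restarted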

radosgw_status (Module: RADOSGW)
Description: Merge with Ada's Swift case (case 6).
[Perez, Ricardo O] No comment here.

restful_plugin operations (Module: MGR)
Description: Common operations through the restful interface.
[Perez, Ricardo O] Do you have scripts or tools to do this? REST API tests are quite complex.

rbd_create/delete/resize (Module: RBD)
Commands: rbd create --size 10G test_pool/test_rbd
          rbd ls test_pool
Description: Create and delete operations, and status verification (see the sketch below).
[Perez, Ricardo O] Looks ok to me.
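For the resize and delete parts, a short sketch (sizes are illustrative; shrinking additionally needs --allow-shrink):

rbd resize --size 20G test_pool/test_rbd   # grow the image
rbd info test_pool/test_rbd                # verify the new size
rbd rm test_pool/test_rbd                  # delete it
rbd ls test_pool                           # verify it is gone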

rbd_snapshot
Commands: rbd snap create test_pool/test_rbd@test_rbd_snap
Description: Snapshot operations (see the sketch below).
[Perez, Ricardo O] Looks ok to me.
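A short sketch of the snapshot operations and their verification (names follow the example above):

rbd snap create test_pool/test_rbd@test_rbd_snap
rbd snap ls test_pool/test_rbd                      # the snapshot should be listed
rbd snap rollback test_pool/test_rbd@test_rbd_snap  # roll the image back to the snapshot
rbd snap rm test_pool/test_rbd@test_rbd_snap        # remove it and verify the list is empty again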



Thanks,
Tingjie

SSG OTC NST Storage
Tel: +86(21)88216699
Mobile: 15901876439

-------------- next part --------------
A non-text attachment was scrubbed...
Name: image002.jpg
Type: image/jpeg
Size: 46875 bytes
Desc: image002.jpg
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20190404/a41f9db5/attachment-0002.jpg>
-------------- next part --------------
A non-text attachment was scrubbed...
Name: image004.jpg
Type: image/jpeg
Size: 23782 bytes
Desc: image004.jpg
URL: <http://lists.starlingx.io/pipermail/starlingx-discuss/attachments/20190404/a41f9db5/attachment-0003.jpg>

