Hi Brent
Thanks for your comment. This is my idea.
1)
We have to keep the existing ceph implementation to allow existing users to transition post stx.4.0. The existing implementation would then be removed in stx.5.0. Therefore I do not understand what
you are proposing in point 1 below.
I have already written an application, which is not submitted yet as I am waiting for Scott to create a new project for me. The helm plugin for this application is
https://review.opendev.org/#/c/713084/.
$ system application-list --nowrap
+---------------------+---------+-------------------------------+---------------+----------+-----------+
| application | version | manifest name | manifest file | status | progress |
+---------------------+---------+-------------------------------+---------------+----------+-----------+
| oidc-auth-apps | 1.0-0 | oidc-auth-manifest | manifest.yaml | uploaded | completed |
| platform-integ-apps | 1.0-8 | platform-integration-manifest | manifest.yaml | uploaded | completed |
| rook-ceph-apps | 1.0-1 | rook-ceph-manifest | manifest.yaml | uploaded | completed |
+---------------------+---------+-------------------------------+---------------+----------+-----------+
$ system helm-override-update rook-ceph-apps rook-ceph kube-system --value storageValue.yaml
$ system application-apply rook-ceph-apps
This is the values yaml file with which rook will provision /dev/sdd, for the newly deployed case.
cluster:
  storage:
    nodes:
    - devices:
      - config:
          journalSizeMB: 1024
          storeType: bluestore
        name: sdd
      name: controller-0
This is my current implementation for rook.
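As a quick check after application-apply, the OSD pods and the cluster status can be inspected roughly as below (a sketch only; the app=rook-ceph-osd label is the usual rook convention, the kube-system namespace matches the override above, and the tools pod name is illustrative):
$ kubectl -n kube-system get pods -l app=rook-ceph-osd
$ kubectl -n kube-system exec -it <rook-ceph-tools-pod> -- ceph status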
2)
We did not want to couple the rook/ceph implementation to sysinv. Therefore why would we have it manage the volume groups for rook?
1, rook-ceph will create a volume group whose name is prefixed with "ceph" (e.g. "ceph-xxxxx")
rook provisions OSDs with ceph-volume, the new ceph provisioning tool that replaces ceph-disk, which is the current StarlingX deployment tool. ceph-disk is deprecated; there is a log in /var/log/puppet saying "This tool is now
deprecated in favor of ceph-volume".
When provisioning with ceph-volume, it looks for any volume group prefixed with "ceph", adds the physical volume to that volume group, and then creates a logical volume for the ceph OSD.
This rook-created volume group will be filtered out by a setting in /etc/lvm/lvm.conf, as it is not managed by sysinv (see the sketch after point 2 below).
2, Even without rook, since ceph-disk is deprecated, when puppet-ceph is upgraded it will also switch to ceph-volume for ceph provisioning. The same issue arises there: a volume group with a name prefixed with "ceph" is created
by ceph-volume.
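For illustration, the ceph-volume results and the lvm.conf filter could look roughly like this (a sketch only; /dev/sdd is the example disk from the values file above, and the real filter pattern would be generated by puppet/sysinv rather than written by hand):
# inspect the volume group and logical volume created by ceph-volume
$ vgs | grep ceph
$ lvs -o lv_name,vg_name,lv_tags | grep ceph
# /etc/lvm/lvm.conf: reject the disk consumed by rook so LVM scanning and
# sysinv inventory skip the "ceph-*" volume group, and accept everything else
devices {
    global_filter = [ "r|^/dev/sdd$|", "a|.*|" ]
}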
BR!
Martin, Chen
IOTG, Software Engineer
021-61164330
From: Rowsell, Brent <Brent.Rowsell@windriver.com>
Sent: Monday, March 16, 2020 9:31 PM
To: Chen, Haochuan Z <haochuan.z.chen@intel.com>; Miller, Frank <Frank.Miller@windriver.com>; 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>; Waines, Greg <Greg.Waines@windriver.com>
Cc: Chen, Tingjie <tingjie.chen@intel.com>; Sun, Austin <austin.sun@intel.com>; Qi, Mingyuan <mingyuan.qi@intel.com>; Church, Robert <Robert.Church@windriver.com>
Subject: RE: ceph containerization patch review
Martin,
At the mid cycle meeting we established:
Brent
From: Chen, Haochuan Z [mailto:haochuan.z.chen@intel.com]
Sent: Thursday, March 12, 2020 8:06 PM
To: Miller, Frank <Frank.Miller@windriver.com>; 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>; Rowsell, Brent <Brent.Rowsell@windriver.com>;
Waines, Greg <Greg.Waines@windriver.com>
Cc: Chen, Tingjie <tingjie.chen@intel.com>; Sun, Austin <austin.sun@intel.com>; Qi, Mingyuan <mingyuan.qi@intel.com>; Church,
Robert <Robert.Church@windriver.com>
Subject: ceph containerization patch review
Hi Brent & Frank
I have already finished my new application and helm plugin for rook-ceph. But there are still two concerns to discuss.
1, decouple the ceph backend from sysinv.
Now ceph has been removed as the default storage backend, and I plan to add rook-ceph as another storage backend which the user could enable during provisioning.
So what about decoupling the ceph backend from sysinv-conductor, which would make it easier to enable more storage backends?
2, change the restriction on volume group creation in sysinv
Currently sysinv has a restriction on volume group creation: it only permits creating a volume group named "nova-local", "cinder-volumes", or "cgts-vg".
What about removing this restriction? For containerized ceph cluster deployment, the OSD is provisioned by ceph-volume, which is based on logical volumes. It will create a volume group whose name is prefixed with "ceph".
I would prefer that this newly created volume group could also be managed by sysinv.
Currently the ceph cluster is deployed by ceph-disk in puppet-ceph. This tool is deprecated.
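For reference, the volume groups sysinv manages on a host can be listed with the first command below; today a ceph-volume created "ceph-*" group would not show up there, and adding it by hand would be rejected by the name restriction (a sketch only; the ceph-<uuid> name is illustrative):
$ system host-lvg-list controller-0
$ system host-lvg-add controller-0 ceph-<uuid>    # currently rejected by the name restriction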
Thanks!
Martin, Chen
IOTG, Software Engineer
021-61164330
From: Chen, Haochuan Z
Sent: Friday, February 21, 2020 11:14 PM
To: 'Miller, Frank' <Frank.Miller@windriver.com>; 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>; 'Rowsell, Brent'
<Brent.Rowsell@windriver.com>; 'Greg.Waines@windriver.com' <Greg.Waines@windriver.com>
Cc: Chen, Tingjie <tingjie.chen@intel.com>; Sun, Austin <austin.sun@intel.com>; Qi, Mingyuan <mingyuan.qi@intel.com>; 'Church,
Robert' <Robert.Church@windriver.com>
Subject: RE: ceph containerization patch review
Hi Frank & Bob & Greg & Brent
To deploy, it is the same as any other helm chart. Waiting for your opinion on the current solution.
Thanks!
Martin, Chen
IOTG, Software Engineer
021-61164330
From: Chen, Haochuan Z
Sent: Wednesday, February 19, 2020 12:53 PM
To: 'Miller, Frank' <Frank.Miller@windriver.com>; 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>; Rowsell, Brent
<Brent.Rowsell@windriver.com>; 'Greg.Waines@windriver.com' <Greg.Waines@windriver.com>
Cc: Chen, Tingjie <tingjie.chen@intel.com>; Sun, Austin <austin.sun@intel.com>; Qi, Mingyuan <mingyuan.qi@intel.com>; Church,
Robert <Robert.Church@windriver.com>
Subject: RE: ceph containerization patch review
Hi folks
To answer the question of how to deploy rook with only k8s:
$ helm-upload stx-platform rook-ceph-0.1.0.tgz
$ helm install stx-platform/rook-ceph -n rook-ceph-cluster --namespace kube-system -f value.yaml
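After install, a quick check (assuming the chart is deployed into kube-system as above):
$ kubectl -n kube-system get pods | grep rook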
BR!
Martin, Chen
IOTG, Software Engineer
021-61164330
From: Miller, Frank <Frank.Miller@windriver.com>
Sent: Tuesday, February 18, 2020 10:06 PM
To: Chen, Haochuan Z <haochuan.z.chen@intel.com>; 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>; Rowsell, Brent <Brent.Rowsell@windriver.com>
Cc: Chen, Tingjie <tingjie.chen@intel.com>; Sun, Austin <austin.sun@intel.com>; Qi, Mingyuan <mingyuan.qi@intel.com>; Church,
Robert <Robert.Church@windriver.com>
Subject: RE: ceph containerization patch review
+Bob who is the containers TL
From: Chen, Haochuan Z [mailto:haochuan.z.chen@intel.com]
Sent: Monday, February 17, 2020 8:46 AM
To: Miller, Frank <Frank.Miller@windriver.com>; 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>; Rowsell, Brent <Brent.Rowsell@windriver.com>
Cc: Chen, Tingjie <tingjie.chen@intel.com>; Sun, Austin <austin.sun@intel.com>; Qi, Mingyuan <mingyuan.qi@intel.com>
Subject: RE: ceph containerization patch review
Hi Frank & Brent
Regarding rook managing the configuration: after introducing rook-ceph, will the current ceph cluster deployed by puppet be kept or removed?
If removed, the containerized ceph cluster deployed by rook-ceph will take the role of provisioner for stx-openstack.
If kept, the rbd-provisioner helm chart in platform-integ-apps is also kept and takes the role of provisioner for stx-openstack, and the containerized ceph cluster deployed by rook-ceph only serves as a CSI provider for k8s.
BR!
Martin, Chen
IOTG, Software Engineer
021-61164330
From: Chen, Haochuan Z
Sent: Monday, February 10, 2020 11:47 PM
To: Miller, Frank <Frank.Miller@windriver.com>; 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>
Cc: Chen, Tingjie <tingjie.chen@intel.com>; Sun, Austin <austin.sun@intel.com>; Qi, Mingyuan <mingyuan.qi@intel.com>
Subject: RE: ceph containerization patch review
Hi Frank:
As synced with Tingjie, this is my understanding of how Rook would manage the configuration instead of sysinv.
1, Remove ceph config and status queries from sysinv
2, Create another tool, like rook-client, to launch and provision the ceph cluster
3, Add a helm plugin like rbd-provisioner, which depends on the newly created tool
BR!
Martin, Chen
IOTG, Software Engineer
021-61164330
From: Miller, Frank <Frank.Miller@windriver.com>
Sent: Saturday, February 8, 2020 4:16 AM
To: Chen, Haochuan Z <haochuan.z.chen@intel.com>; 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>
Subject: RE: ceph containerization patch review
Martin:
Thanks for posting your reviews and the update on the current status for this feature.
I was unable to attend the containerization meeting this week so wasn’t able to have Tingjie or you give an update on the open actions for the feature. From the Jan 14 minutes [1] one of the actions is to determine
“how can you have Rook do all the configuration and not have sysinv involved”
For now I suggest you add an additional item to your Tasks to do list to identify a design that does not require sysinv for the configuration and instead has Rook manage the configuration. I would like to ask
that we discuss further at the next containerization meeting and if possible review a proposal from yourself and Austin for this item.
Frank
[1] https://etherpad.openstack.org/p/stx-containerization
From: Chen, Haochuan Z [mailto:haochuan.z.chen@intel.com]
Sent: Thursday, February 06, 2020 11:02 PM
To: 'starlingx-discuss@lists.starlingx.io' <starlingx-discuss@lists.starlingx.io>
Subject: [Starlingx-discuss] ceph containerization patch review
Hi folks
I have enabled a containerized ceph cluster on simplex. You can begin to review my patches. I propose building an image and deploying a simplex system with these patches to check.
https://review.opendev.org/#/c/681457/
https://review.opendev.org/#/c/687340/
https://review.opendev.org/#/c/706256/
https://review.opendev.org/#/c/699557/
https://review.opendev.org/#/c/699556/
And I know there are other stories, like removing ceph as the default storage backend; there may be some conflicts, which we can discuss together.
Tasks Done:
1, disable native ceph cluster in ceph.pp
2, disable ceph daemon monitoring in service manager
3, add rook-ceph helm chart to launch ceph cluster
4, add override in stx-config to generate override with starlingx system config
5, sysinv adds labels in the provisioning stage, to place the containerized ceph mon and ceph mgr on the designated hosts (see the label sketch after this list)
6, add rook-ceph-provisioner helm chart to generate storage class, secret, config and pool for stx-application
7, enabled stx-openstack with containerized ceph
8, update ceph wrapper in stx-config to set or get containerized ceph cluster
All these completed tasks are enabled for simplex only.
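For task 5, the placement labels are assigned to hosts roughly as follows (a sketch only; the exact label keys consumed by the helm plugin are assumptions here, not confirmed names):
$ system host-label-assign controller-0 ceph-mon-placement=enabled
$ system host-label-assign controller-0 ceph-mgr-placement=enabled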
Tasks to do:
1, enable adding OSDs at runtime, after the system is provisioned
2, fix known issue: if the system reboots, the ceph cluster fails to launch
3, enable both bluestore and filestore; currently only bluestore is supported
4, enable multi-node and duplex
5, enable swift with containerized ceph
6, enable fm alarm for containerized ceph
7, check backup and restore for containerized ceph
8, check system upgrade, or how to transition from the native ceph cluster to the containerized ceph cluster
9, code cleanup
10, update unit test in stx-config
BR!
Martin, Chen
IOTG, Software Engineer
021-61164330