[Starlingx-discuss] Code review of cinder raw cache in stx-config

Fang, Liang A liang.a.fang at intel.com
Thu Mar 14 11:35:19 UTC 2019


Hi Bob

Thanks for your reply. So the cinder raw cache feature can in fact already be enabled in the current containerized StarlingX; users just need to follow the instructions you gave below.
The code review https://review.openstack.org/#/c/633400/ is no longer needed.

Regarding adding project_id and user_id to cinder.conf during the initial application install, it now seems like a small enhancement, because this can be done easily via “system helm-override-update”.
I checked with the cores on the openstack-helm IRC channel; some of them feel that project_id and user_id should be populated by whatever is orchestrating the deployment of the charts.
An approach I’m considering (which may no longer be needed) is:

1. Submit a patch to cinder so that it accepts internal_project_name and internal_user_name in cinder.conf, instead of the IDs (a sketch of the resulting cinder.conf follows this list).

2. Hardcode the internal_project_name and internal_user_name in values.yaml.

3. Extend job-ks-user.yaml, or create a new job, to create the hardcoded project and user.
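
As a rough illustration, the cinder.conf section this would produce might look like the following; the option names are assumptions that the cinder patch in step 1 would have to define:

[DEFAULT]
# Hypothetical options added by the proposed cinder patch; cinder would
# resolve these names to project/user IDs via Keystone at startup.
internal_project_name = cinder-internal
internal_user_name = cinder-internal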

Regards
Liang

From: Church, Robert [mailto:Robert.Church at windriver.com]
Sent: Wednesday, March 13, 2019 10:44 PM
To: Fang, Liang A <liang.a.fang at intel.com>; Rowsell, Brent <Brent.Rowsell at windriver.com>
Cc: Poncea, Ovidiu <Ovidiu.Poncea at windriver.com>; Zhu, Vivian <vivian.zhu at intel.com>; Jones, Bruce E <bruce.e.jones at intel.com>; starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] Code review of cinder raw cache in stx-config

Hi Liang,

I took a look at this yesterday to understand what you are up against here.

Ideally we would want some combination of a system helm override/Armada manifest change to automatically provision the Cinder image-volume cache when the stx-openstack application is installed. We do not want to use any platform-specific interface changes (i.e. system backend-modify or a new API), since we want to decouple the openstack k8s application dependencies from the general k8s platform provisioning.

Using the current content on master, here are the steps required for an end user to enable this feature:


# Initial application install

system application-apply stx-openstack

# After install, the feature requires a project/user to manage the cache

openstack project create --enable --description "Block Storage Internal Tenant" cinder-internal

openstack user create --project cinder-internal cinder-internal



TENANTID=$(openstack project list | awk '/cinder-internal/ {print $2}')

USERID=$(openstack user list | awk '/cinder-internal/ {print $2}')
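
A sketch of an equivalent lookup that avoids the awk pattern match, using the CLI's standard output filters (assuming the project/user names created above):

TENANTID=$(openstack project show cinder-internal -f value -c id)

USERID=$(openstack user show cinder-internal -f value -c id)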



# The created project/user are needed in cinder.conf, along with enabling the cache for the specific backend (we define ceph-store as part of the original application apply). Reapply the application to trigger the Armada upgrade of cinder:

system helm-override-update cinder openstack --reuse-values --set conf.cinder.DEFAULT.cinder_internal_tenant_project_id=$TENANTID

system helm-override-update cinder openstack --reuse-values --set conf.cinder.DEFAULT.cinder_internal_tenant_user_id=$USERID

system helm-override-update cinder openstack --reuse-values --set conf.backends.ceph-store.image_volume_cache_enabled=true

system helm-override-update cinder openstack --reuse-values --set conf.backends.ceph-store.image_volume_cache_max_size_gb=10

system helm-override-update cinder openstack --reuse-values --set conf.backends.ceph-store.image_volume_cache_max_count=5

system application-apply stx-openstack
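
To confirm the overrides landed as expected, the merged values can be inspected before and after the reapply (a convenience check; I believe helm-override-show reflects the user-set overrides):

system helm-override-show cinder openstack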

Now, back to the desired approach. The challenge here is that we need a way to create a project/user as part of the initial application install and to look up the UUIDs to embed in an override, all in a declarative manner. I’m not sure this is possible in a single application apply.

1) We could enable creating the project/user as part of the Cinder bootstrap script (sketched below), but these would not be created/available at the time the overrides are declared for cinder’s configuration.
2) We could use an existing project/user (i.e. admin) and avoid creating a dedicated cinder project/user, but again, on the initial application install they are not available at the time the overrides are declared for cinder’s configuration.
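
For illustration, a minimal sketch of what option 1 might run from a bootstrap hook; the idempotency guards are my own addition, and note this does nothing to solve the ordering problem above:

# Hypothetical bootstrap fragment: create the cache project/user if absent.
# By the time this runs, cinder.conf has already been rendered, so the
# resulting IDs still cannot be injected declaratively in the same apply.
openstack project show cinder-internal >/dev/null 2>&1 || \
    openstack project create --enable --description "Block Storage Internal Tenant" cinder-internal
openstack user show cinder-internal >/dev/null 2>&1 || \
    openstack user create --project cinder-internal cinder-internal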

Looking across the charts and the helm-toolkit in the OSH projects, I have yet to see an easy way to accomplish this.

I think you should check with the OSH project cores to see if they have any suggestions on the best way to tackle this as part of the first install of the Cinder chart, OR if a two-pass approach is required (install, then upgrade).

Bob

From: "Fang, Liang A" <liang.a.fang at intel.com<mailto:liang.a.fang at intel.com>>
Date: Wednesday, March 13, 2019 at 8:22 AM
To: "Rowsell, Brent" <Brent.Rowsell at windriver.com<mailto:Brent.Rowsell at windriver.com>>
Cc: Ovidiu Poncea <Ovidiu.Poncea at windriver.com<mailto:Ovidiu.Poncea at windriver.com>>, "Zhu, Vivian" <vivian.zhu at intel.com<mailto:vivian.zhu at intel.com>>, "Jones, Bruce E" <bruce.e.jones at intel.com<mailto:bruce.e.jones at intel.com>>, "starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>" <starlingx-discuss at lists.starlingx.io<mailto:starlingx-discuss at lists.starlingx.io>>
Subject: [Starlingx-discuss] Code review of cinder raw cache in stx-config

Hi Brent

Regarding the raw cache review:
https://review.openstack.org/#/c/633400/
Currently the code is implemented according to the earlier discussion between Ovidiu and Lisa: http://lists.starlingx.io/pipermail/starlingx-discuss/2019-January/002548.html
The raw cache is configured via “system storage-backend-modify”, e.g.: system storage-backend-modify ceph-store cinder_raw_cache_gib=10

Your opinion is that openstack should be decoupled from the storage backend. Thanks for your comment.

I took a look at the system subcommands; there is currently no subcommand for openstack, so at this point “system storage-backend-modify” seems to be the best choice. Should we add an “openstack” subcommand? Something like: system openstack cinder_raw_cache_gib=10
Or should we configure openstack in some other way? Could you please give a suggestion on this? Thank you very much.

Regards
Liang
