[Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Li, Xiaoyan xiaoyan.li at intel.com
Thu Dec 6 06:05:42 UTC 2018


Hi  Brent,

Please give your suggestions.

And thanks to Ovidiu for the detailed summary!
One correction here:
With Cinder image cache, image_volume_cache_max_size_gb and image_volume_cache_max_count can both be set to 0, which means unlimited cache capacity and an unlimited number of cached images.
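For reference, a minimal cinder.conf sketch of that setting (the backend section name "ceph-store" is only an example):

  [ceph-store]
  image_volume_cache_enabled = True
  # 0 means unlimited for both limits
  image_volume_cache_max_size_gb = 0
  image_volume_cache_max_count = 0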

Best wishes
Lisa

From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com]
Sent: Wednesday, December 5, 2018 3:42 PM
To: Li, Xiaoyan <xiaoyan.li at intel.com>; Rowsell, Brent <Brent.Rowsell at windriver.com>; 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io>
Cc: Miller, Frank <Frank.Miller at windriver.com>; Church, Robert <Robert.Church at windriver.com>
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Hi Li,

Thanks for providing clarifications! So, for our use cases, the main problem is that glance's raw caching is more controllable than cinder's. If that's not enough we need to improve it; if we can live with it, then at a minimum it needs to be enabled through sysinv configuration and the raw caching removed from glance.

See inline comments plus the summary and proposal below; we need Brent's input on this:

I see two main solutions to the problem:

A.      Always enable the cache, for any backend, but only cache glance images that have a certain attribute – this needs a cinder upstream change. The cache limit has to be removed (another cinder upstream change). We may also need a way to kick-start the caching in cinder & clean up the cache (periodically and/or user-triggered should be enough; see the warm-up sketch below).

B.      Make enabling cache storage backend specific and configurable (through sysinv). Once cinder’s cache is enabled for a backend, cache everything. Size of the cache should be configurable.

I would go for B. as it, most likely, doesn’t need upstream changes.
[Li, Xiaoyan] Agree with B.
But it doesn't conflict with the requirement to set an image property like disable_cache, so that Cinder won't cache that image. I am wondering what kind of scenario/image that would be suitable for?
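Regarding the kick-start mentioned in option A: one way to warm the cache without upstream changes is simply to create (and then delete) a throw-away volume from the image. An untested sketch with illustrative names, relying on the cached image-volume living in the shadow tenant and therefore surviving the delete:

  # first creation from the image populates the cache entry
  openstack volume create --image <image-name> --size <size-gb> cache-warmup
  # the user volume is no longer needed; the cached image-volume stays
  openstack volume delete cache-warmup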

Summary of problems, TBD if we can live with them:

·         Images are not cached on creation – if we can’t live with it we may need a trigger to cinder on image creation or a way to manually kick-start the caching process.

·         Since the first volume creation is slow for larger volumes, this may time out (keystone token expiration) – we had a customer using 200GB qcow2 windows images that would time out on conversion. I don't see a workaround for it other than asking them to do the conversion manually when importing very large images to glance.

·         we can't provide a 100% guarantee that, once converted, successive creations won't need the image converted again due to cache exhaustion. Can we live with it? Users may intermittently see slowdowns and wonder what's going on.
[Li, Xiaoyan] How about adding a property to this image/volume so that Cinder evicts the cached image last when the cache is exhausted? This needs a cinder upstream change to respect the property.

·         the cache will waste space: if the original image no longer exists there is no automated way to remove it from the cache – the admin can clean up the cache manually if so desired. We can either:

1.       Live with it – assume that the space allocated to the cache is for the cache only, or let users clean up the cache themselves.

2.       Clean up cache through a cron job (although this is a cache, some caches are supposed to clean themselves up if cached data is no longer present).

3.       Implement another mechanism to clean the cache when an image is deleted, not at a later time (this is way too complex to upstream).

·         What happens with images that users don’t want to cache? Should we add a filter (glance property)?
[Li, Xiaoyan] Allow users to add a property to the image. This needs a cinder upstream change to respect the property.

I vote for #2 as it does not seem too hard to implement. A once-a-day cron task can free up wasted space.
[Li, Xiaoyan] This cron task probably can't be included in Cinder. Is that OK?
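A rough sketch of such a task, assuming the cached image-volumes in the shadow tenant are named image-<glance-image-uuid> (naming may vary by release) and that the shadow project is called cinder-internal:

  #!/bin/bash
  # Delete cached image-volumes whose source image is gone from Glance.
  openstack volume list --project cinder-internal -f value -c ID -c Name |
  while read -r vol_id vol_name; do
      image_id=${vol_name#image-}
      [ "$image_id" = "$vol_name" ] && continue   # not an image-volume
      if ! openstack image show "$image_id" >/dev/null 2>&1; then
          openstack volume delete "$vol_id"
      fi
  done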

Summary of TODOs (assuming B. is chosen) before removing raw-caching (open for discussions & dependent on resolution to above issues):

·         Enable caching per backend through sysinv system storage-backend-add/modify commands through a capabilities field (this seems the simplest solution; a hypothetical example follows this list)

·         Add sysinv configuration option per storage backend to set cache size. [Clean up images in cache when size is decreased]

·         When first enabling: create shadow tenant (no need to remove it when disabling cache)

·         Support disabling cache for a backend (clean up residual images)
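For illustration only, the per-backend enablement could end up looking something like this – the capability names below are hypothetical, not existing syntax:

  # hypothetical capabilities, exact names TBD
  system storage-backend-modify ceph-store image_volume_cache_enabled=true image_volume_cache_size_gb=50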

Regards,
Ovidiu

From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com]
Sent: Tuesday, November 27, 2018 4:30 AM
To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io'
Cc: Miller, Frank
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Hi Ovidiu,

As far as I'm concerned, Cinder image cache is a cache mechanism, so overall users don't need to clean it manually.
Currently, when the cache capacity is full, it removes cached image volumes with an LRU policy. For more detail please see the following comments.

Best wishes
Lisa

From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com]
Sent: Monday, November 26, 2018 11:15 PM
To: Li, Xiaoyan <xiaoyan.li at intel.com>; Rowsell, Brent <Brent.Rowsell at windriver.com>; 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io>
Cc: Miller, Frank <Frank.Miller at windriver.com>
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Hi Lisa,

Yeah, even if we refactor raw caching, it's most likely going to be rejected upstream for replicating existing functionality in cinder. Yet, imho, we should have a working replacement before retiring raw caching and we should have some agreed mitigations in place for cinder's disadvantages (if we can't live with them; Brent, please help here). See my questions below & inline. Also, please correct the text below if I made wrong assumptions, as you know cinder's caching better than me.

Short comparison of the two:

Raw caching

Uses the --raw-cache CLI option in Glance to trigger a background process that converts the image. Once cached, new volumes get created on Ceph instantly by leveraging Ceph's copy-on-write. The cache is allocated from the "images" RBD pool.

Advantages:
- user can select the images it wants to cache
- user can monitor the progress and can check used space for each image (cli + dashboard).
- on image delete the cache is also cleared if there is no volume using it; otherwise it is cleared together with the last volume that keeps the cache data in use.
- no wasted space
- complete control by user

Disadvantages:
- There is almost no way this is going to be accepted upstream. Maybe, though with slim hopes, if we refactor everything as a 3rd-party glance feature, but we may need to push some hooks upstream to make it work.
- Ceph only

Cinder's caching

Uses a "shadow" tenant to store shadow volumes. The cache entry is created with the first volume from that image. The next volume will be created instantly by leveraging copy-on-write if the backend supports it (e.g. Ceph). Space for the cache is allocated on one of the cinder backends and has a configurable threshold.

Advantages:
- already upstream
- works with all backends
- all cached images are visible to the "admin" after switching to the shadow tenant and listing its volumes.
- admin (not user, only admin) can free cache by deleting volumes of the shadow tenant (need confirmation)

Disadvantages:
1. it's either globally enabled or disabled => needs sysinv configuration option
2. it caches every image. No way to select which images to cache nor with what backend (question below) => space waste
3. cached images are not removed on image deletion; they are only removed when the provisioned space limit is hit, and then the oldest (least recently used) one is evicted, even though that cached image may still be important.
4. less control: images are cached on first use and removed when the provisioned space hits the threshold. This means that users have no control over which images are converted and which images are in the cache. So, sometimes volume creation works fast, other times it's slow. This can be a problem especially for parallel volume creation through helm charts as, if the image did not have a cache, stack creation may time out. Another problem may be if the cache is small and images get rotated in the cache => we need alarms when the threshold is hit.
5. needs the shadow tenant created before use => puppet / helm chart update (for Kubernetes)

Mitigations of disadvantages above - possible solutions and alternatives:
#1: Customers may not want to enable it, so we should allow them to choose when to enable it (it can be added as a custom capabilities parameter to "system storage-backend-add/system storage-backend-modify")
[Li, Xiaoyan] Currently image cache can be enabled/disabled per backend storage. https://docs.openstack.org/cinder/latest/admin/blockstorage-image-volume-cache.html
I think it is enough.
[Ovi] Nice, we need a configuration option per backend in sysinv to enable it (most likely in the capabilities field of the storage-backends table; see 'system storage-backend-*' commands).
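For reference, the per-backend enablement described in the admin guide above looks roughly like this in cinder.conf (section name and values are illustrative):

  [DEFAULT]
  cinder_internal_tenant_project_id = <shadow-project-uuid>
  cinder_internal_tenant_user_id = <shadow-user-uuid>

  [ceph-store]
  image_volume_cache_enabled = True
  image_volume_cache_max_size_gb = 100
  image_volume_cache_max_count = 20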

#2: No workaround comes to my mind - we can probably live with it

#3: A simple solution would be to implement a cron job that cleans the cache periodically; a more elaborate solution would be to remove the cache together with the last volume that used that image (needs a cinder upstream feature).
[Li, Xiaoyan] From the docs, Cinder currently evicts cached images from least recently used to most recently used. Every time Cinder uses a cached image volume, it updates its last_used field. This is a normal data eviction policy.
As it is a cache and should be transparent to users, why do we need users to evict data?
[Ovi] If we conclude that this is enough from a data usage perspective then we are OK with it.

#4: Two options comes to mind:
    1. To get some control we should not limit the cache size, given that we do proper cleanup in #3.
[Li, Xiaoyan] Even if we do cleanup, the limit can't be removed.
[Ovi] We may need to enhance this.

    2. If we limit the cache, we have to make the limit configurable and raise an alarm once the cache gets near full, so that the admin takes preventive measures and either increases the provisioned space or frees up the cache.

#5: This is mandatory, otherwise cinder's caching won't work at all.
[Li, Xiaoyan] cinder_internal_tenant_project_id and cinder_internal_tenant_user_id have to be set before enabling image caching, as this user manages the cached image volumes. Why can't it work with Kubernetes?
[Ovi] I did not say it won't work with Kubernetes ☺ What I said is that we need to provision the shadow tenant automatically when the feature is enabled.
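For completeness, a minimal sketch of provisioning such a shadow tenant by hand (names are illustrative; the resulting IDs go into cinder.conf as cinder_internal_tenant_project_id / cinder_internal_tenant_user_id):

  openstack project create --enable cinder-internal
  openstack user create --project cinder-internal --password <secret> cinder-internal
  # use the project and user IDs returned above in cinder.conf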


Questions (maybe if you get time to play with cinder's caching to get a better understanding):
1. How does cinder's caching behave when multiple volumes are created in parallel from a newly created image? Will it wait for the cache to be created before creating the volumes or just start all volume creations in parallel?
[Li, Xiaoyan] Within a single volume service, volume creation tasks run sequentially, but with HA there are multiple services. For the image cache, Cinder first creates an entry in its database and then creates the volume. The primary key is not image_id+backend_storage,
so it is possible that several entries or volumes will be created in the same backend storage.
[Ovi] So, only the first volume creation is going to be slow? If that's the case then parallel volume creation will work OK, as only the first creation will be slow.

2. What is the cinder backend that stores the cache? If it is the one used by the volume, will this lead to multiple cached volumes of the same image? Can we choose the backend?
[Li, Xiaoyan] We can enable the cache per backend. If a user creates a volume in the Ceph backend from an image, a cached image volume will be created in Ceph if the cache is enabled there. If a user later creates a volume in IBM storage from the same image, another cached image volume will be created in IBM storage if the cache is enabled there.
[Ovi] Then we need to enable it and configure cache size per backend, I guess.

3. How is cache space provisioned? Do we need to restart cinder-volume for changes to take effect?
[Li, Xiaoyan] These settings live in the config file, so the cinder-volume services need to be restarted once the config is changed.
[Ovi] So after we make the changes, we re-apply the manifests and restart the services (reload the helm charts for k8s deployments)

4. Is admin able to clean up individual cached images in the shadow tenant? Maybe also user?
[Li, Xiaoyan] Admin and shadow tenants can both do cleanup.

Ovidiu

________________________________
From: Li, Xiaoyan [xiaoyan.li at intel.com]
Sent: Thursday, November 22, 2018 2:41 AM
To: Poncea, Ovidiu; Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io'
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi Brent and Ovidiu,

As this email has a long history, I re-summarize the raw cache in StarlingX and Cinder upstream image cache.
Please vote whether we can abandon raw cache in StarlingX.

StarlingX
Creates an image cache in Ceph when Glance creates an image.
Deletes the cached image in Ceph when the original image is deleted from Glance.

Cinder:
When a volume is created from an image in a backend storage for the first time, Cinder creates a volume from this image and uses it as the image cache.
The next time a user creates another volume from this image in the same backend storage, Cinder first finds the cached image volume and clones a new volume from it.
Cinder allows configuring the capacity for cached images. If the space is used up, Cinder evicts cached image volumes.
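The user-visible effect, with illustrative image names and sizes:

  # first creation from the image in this backend: slow (image download/convert, cache populated)
  openstack volume create --image win2016 --size 200 vol1
  # later creations from the same image in the same backend: fast (clone of the cached image-volume)
  openstack volume create --image win2016 --size 200 vol2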

From my viewpoint,
Cinder image cache can achieve the same functionality as raw cache in StarlingX, with more enhancements. It works with all Cinder-supported backend storage, not just Ceph.

Best wishes
Lisa

From: Li, Xiaoyan
Sent: Monday, November 19, 2018 9:44 AM
To: Poncea, Ovidiu <Ovidiu.Poncea at windriver.com>; Rowsell, Brent <Brent.Rowsell at windriver.com>; 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io>
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Hi Ovidiu,

A cached image (a new volume created from the image) is created on a storage backend when Cinder first creates a volume from that image in that backend storage.
All the information is stored in Cinder, including volume id, image id, etc. https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1368
https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L82


A cached image is deleted when the configured cache space is used up. So currently Cinder doesn't delete cached image volumes even if the original image is deleted, but this could be an enhancement to the current cinder image cache.
https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L117
https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/manager.py#L1351

Best wishes
Lisa

From: Poncea, Ovidiu [mailto:Ovidiu.Poncea at windriver.com]
Sent: Friday, November 16, 2018 4:57 PM
To: Li, Xiaoyan <xiaoyan.li at intel.com>; Rowsell, Brent <Brent.Rowsell at windriver.com>; 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io>
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Hi Li,

Quick question: Is cache going to be freed when an image is deleted from glance? It would be a waste to cache images that are no longer needed.

Thanks,
Ovidiu
________________________________
From: Li, Xiaoyan [xiaoyan.li at intel.com]
Sent: Tuesday, November 13, 2018 9:19 AM
To: Rowsell, Brent; 'starlingx-discuss at lists.starlingx.io'
Subject: Re: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching
Hi,

About the raw cache function in StarlingX Cinder and Glance, I would like to remove it as Cinder has a similar function. Please see the following details.
If we remove the function from StarlingX, there are two methods:

1.       Submit a patch to revert the changes in Glance and Cinder.

2.       Drop these patches when upgrading StarlingX/Cinder to a new Cinder release.
Which way do we prefer?

Best wishes
Lisa

From: Li, Xiaoyan
Sent: Thursday, September 20, 2018 10:17 AM
To: Rowsell, Brent <Brent.Rowsell at windriver.com>; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching


Hi, Brent



The following is the mechanism of the Cinder volume cache.



Creation of cached volume:

Cinder creates a cached volume in the backend storage when creating a volume from an image.

1.       Create_from_image:

https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L890

2.       Return the image cache entry; if it does not exist, create a new entry.

https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L746

3.       Create a new image-volume and cache entry for it:

https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L872



Use a cached volume when creating a volume:

https://github.com/openstack/cinder/blob/stable/pike/cinder/volume/flows/manager/create_volume.py#L723-L735



Delete the cached volume: when the capacity or the number of cache entries exceeds the specified limit, Cinder deletes cache entries (cached volumes).

https://github.com/openstack/cinder/blob/stable/pike/cinder/image/cache.py#L164

Best wishes
Lisa

From: Rowsell, Brent [mailto:Brent.Rowsell at windriver.com]
Sent: Thursday, September 6, 2018 10:02 AM
To: Li, Xiaoyan <xiaoyan.li at intel.com>; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

We would need to review this feature to ensure it provides equivalent functionality first.
If it does, great, we can look at reverting and enabling this cinder functionality.

Brent

From: Li, Xiaoyan [mailto:xiaoyan.li at intel.com]
Sent: Wednesday, September 5, 2018 9:59 PM
To: starlingx-discuss at lists.starlingx.io
Subject: [Starlingx-discuss] [StarlingX] Use Cinder generic image cache to replace raw caching

Hi all,

This email is about the raw caching function in StarlingX. This feature caches an image in a backend storage such as Ceph when we first create a volume in that backend storage.

In fact, Cinder upstream has already had a similar function since the Liberty release. https://specs.openstack.org/openstack/cinder-specs/specs/liberty/image-volume-cache.html
So I want to revert Raw caching function in StarlingX, and use Cinder generic image cache instead.
The problem is that we need to update Cinder config in StarlingX. Any comments?

Best wishes
Lisa


