[Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing!
Liu, ZhipengS
zhipengs.liu at intel.com
Mon Jun 8 09:03:01 UTC 2020
Hi Scott,
After discussing with Chengde, we concluded that, in order not to introduce these package version conflicts into the local mirror, we'd better revert the
commit 44a8a1d798dc98d4f6ffcd200237c94585b31c40<https://review.opendev.org/#/q/44a8a1d798dc98d4f6ffcd200237c94585b31c40>
with https://review.opendev.org/#/c/734035/
Please help to update the CENGN build script with the two additional repos below.
build-stx-base.sh
--repo local-stx-build,... \
--repo stx-distro,... \
--repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/ \
--repo ussuri-wsgi,http://mirror.centos.org/centos/7/sclo/x86_64/rh/
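Before kicking off the build, a quick sanity check that both public repos are reachable and serving yum metadata can save a failed run. This is just plain curl against the standard repodata location; an HTTP 200 from each means build-stx-base.sh should be able to consume them as yum repos:
curl -sI http://download.ceph.com/rpm-mimic/el7/x86_64/repodata/repomd.xml
curl -sI http://mirror.centos.org/centos/7/sclo/x86_64/rh/repodata/repomd.xml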
Thanks!
Zhipeng
From: Liu, ZhipengS
Sent: June 6, 2020 9:30
To: 'Scott Little' <scott.little at windriver.com>; 'starlingx-discuss at lists.starlingx.io' <starlingx-discuss at lists.starlingx.io>; 'YuChengDe' <yu.chengde at 99cloud.net>
Subject: RE: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing!
Hi Scott,
We have updated the patch below, as you can see, and fixed your comment as well, thanks!
https://review.opendev.org/#/c/733426/
It has been verified by Chengde! Many thanks!!
After this patch gets merged, could you do me a favor and cherry-pick the patches below, to check whether the OpenStack image builds (glance, cinder, nova, horizon) can be triggered successfully by the CENGN script?
https://review.opendev.org/#/c/712880/ Modify build-tools and stable-wheels for Ussuri upgrading
https://review.opendev.org/#/c/712862/ Update openstack docker images for stable/ussuri
You might need to add the repo below in your build script.
--repo ussuri-ceph,http://download.ceph.com/rpm-mimic/el7/x86_64/
Thanks a lot!
Zhipeng
From: Liu, ZhipengS
Sent: June 4, 2020 22:36
To: Scott Little <scott.little at windriver.com>; starlingx-discuss at lists.starlingx.io
Subject: RE: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing!
Hi Scott,
For our OpenStack upgrade case, we may have one more option: do not add this ceph 13.2.10 repo to the local build repo folder. Instead, we add the ceph repo as a parameter when we run build-stx-base.sh, so the repo is only used by the OpenStack build. We will verify it tomorrow.
Thanks!
Zhipeng
From: Scott Little <scott.little at windriver.com>
Sent: June 4, 2020 22:19
To: starlingx-discuss at lists.starlingx.io
Subject: Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing!
I see https://review.opendev.org/#/c/733426/9 has been posted.
With this update, layered builds should pass, and would look like this ...
* Flock and iso builds will use 13.2.2.
* All container builds use 13.2.10.
* Do we want 13.2.10 in ALL containers?
* Any ceph-dependent rpms from distro/flock builds that make it into a container (if any) will have been compiled against 13.2.2, but will run against 13.2.10. I'm more comfortable with an increment to the patch level than a decrement. I think we can live with this until we can move to 13.2.10 universally.
Monolithic will continue to build, but will remain confused ...
All lst files, including container layer lsts, are downloaded before any package is built. Most if not all packages that depend on ceph will build against 13.2.10, as mock/yum does not understand 'prefer local'. build-iso will use 'prefer local' and ship with 13.2.2. The implications of this are unclear. One hopes that the interface is stable when the version diff is only at the patch level, but I never like to see the shipped version LOWER than the compiled-against version.
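As a rough check of that exposure (not a guarantee, since versioned symbols would not show up this way): RPM records automatic library dependencies at soname granularity, so a flock-built package remains satisfied by the newer librados2 as long as the soname is unchanged. For example, against any ceph-dependent rpm (the file name here is a placeholder):
rpm -qp --requires some-flock-package.rpm | grep -i rados
# expect a soname-level dependency such as librados.so.2()(64bit),
# which carries no patch level and is satisfied by 13.2.2 and 13.2.10 alike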
On 2020-06-03 6:08 p.m., Saul Wold wrote:
On 6/3/20 2:01 PM, Scott Little wrote:
No, I don't think that would work. We can't have two versions of the same package competing for dominance within the mock build environments; i.e., one time pkg X builds against 13.2.2, the next time against 13.2.10, with the outcome dependent on the vagaries of job scheduling, build speeds, and any number of other factors. If you compile against 13.2.10, will you run OK against 13.2.2? I wouldn't want to bet on it.
The build layering solution might be to throw it in its own layer.
Until we are 100% committed to build layering, we need to converge on ONE version of ceph.
Ok, so one option is to move to Ceph 13.2.10, or drop the existing package list update that brings in the python3 and related Ceph packages.
Do we need to at least revert that commit in order to get the build working again?
We might need to spend a few minutes to hash this out tomorrow morning at the PTG.
Sau!
Scott
On 2020-06-03 10:52 a.m., Saul Wold wrote:
On 6/3/20 1:47 AM, Liu, ZhipengS wrote:
Hi Scott,
For question #1:
When we built the OpenStack Ussuri image, which is Python 3 only, it needed python3-rbd and its related dependencies, so we added librados2-13.2.10 and related packages.
The locally built librados2-13.2.2-0.el7.tis.25.x86_64.rpm is for Python 2.
Shouldn't we let the build choose the locally built package first?
Following up on this: we need to be careful about which we choose. As I said in the other email, is this a one-off issue or something we will see more of? So maybe an audit tool would help.
Another option is moving these packages to the container layer, i.e. adding rpms_centos.lst in config/centos/flock/?
I understand this option better after chatting with Zhipeng. I think this might be the best option: adding the updated Ceph / RBD related packages to the container list, which will be used for the Ussuri container builds but not by the platform OS.
This would mean that the containers would have the Ceph 13.2.10 related packages while the platform OS stays at 13.2.2. Would that cause problems or stability issues?
Sau!
Thanks!
Zhipeng
*From:* Scott Little <scott.little at windriver.com>
*Sent:* June 3, 2020 15:57
*To:* starlingx-discuss at lists.starlingx.io
*Subject:* Re: [Starlingx-discuss] [build-report] STX_build_layer_flock_master_master - Build # 132 - Still Failing!
This was an interesting one.
We have been building librados2-13.2.2-0.el7.tis.25.x86_64.rpm as part of the distro layer for some time.
A recent update added librados2-13.2.10-0.el7.x86_64.rpm to the lst of the flock layer.
Now, build-iso prefers locally built packages over downloaded ones, even if the downloaded one is of a higher version. That policy is open for debate, but that is what it does.
The monolithic build uses the lst files of all layers, but having built librados2-13.2.2-0.el7.tis.25.x86_64.rpm, it selects that rpm over librados2-13.2.10-0.el7.x86_64.rpm when building the iso.
The flock layer build downloads librados2-13.2.2-0.el7.tis.25.x86_64.rpm from the distro layer build; it doesn't build it itself. The downloads from the two sources are lumped into a common repo, so it has no reason to prefer the lower-versioned rpm. It selects librados2-13.2.10-0.el7.x86_64.rpm.
The final piece of the puzzle is the transitive list of requires for librados2-13.2.10-0.el7.x86_64.rpm. It has a new dependency that pulls in lttng-ust-2.10.0-1.el7.x86_64.rpm, which in turn needs userspace-rcu-0.10.0-3.el7.x86_64.rpm, which is not present. It wasn't included in the recent lst file changes that added librados2-13.2.10-0.el7.x86_64.rpm.
A flock layer build-iso should have caught this. I suspect build-iso was only performed on a monolithic build.
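A lighter-weight way to catch this class of omission, assuming yum-utils (for repoquery) is installed and the layer repos are configured, is to walk the resolved requires of any newly added package and compare them against the lst files:
# direct dependencies of the new package, resolved to provider packages
repoquery --requires --resolve librados2
# chase one level further for anything unfamiliar, e.g. the lttng-ust noted above
repoquery --requires --resolve lttng-ust
Anything that shows up here (userspace-rcu in this case) but is missing from the lst files is exactly what build-iso would later trip over.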
Open questions.
1) Is there a need to move to librados2-13.2.10 from librados2-13.2.2? If yes, do we still need whatever modifications were applied to librados2-13.2.2? Do they need to be ported to librados2-13.2.10, or can we drop librados2 from the set of packages we have patches against?
2) For build-iso... should we prefer locally built packages even though a higher version is named in an lst? If yes, then layered build needs to apply the local-first policy across layers. Alternatively, perhaps drop the local-first policy, but add an audit tool to detect when a locally built package is being masked in this way.
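A minimal sketch of such an audit tool, assuming rpmdevtools (for rpmdev-vercmp) and yum-utils are available; the local rpm directory is site-specific and left as a placeholder:
#!/bin/bash
# Flag locally built packages that a higher-versioned upstream rpm would mask.
LOCAL_RPM_DIR=...   # placeholder: directory holding locally built rpms
for f in "$LOCAL_RPM_DIR"/*.rpm; do
    name=$(rpm -qp --qf '%{NAME}' "$f")
    local_vr=$(rpm -qp --qf '%{VERSION}-%{RELEASE}' "$f")
    # highest version-release offered by the configured download repos
    repo_vr=$(repoquery --qf '%{VERSION}-%{RELEASE}' "$name" | sort -V | tail -1)
    [ -z "$repo_vr" ] && continue
    rpmdev-vercmp "$local_vr" "$repo_vr" > /dev/null
    # rpmdev-vercmp exit status 12 means the second argument (the repo version) is newer
    if [ $? -eq 12 ]; then
        echo "MASKED: $name local=$local_vr repo=$repo_vr"
    fi
done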
Scott
On 2020-06-02 10:30 p.m., build.starlingx at gmail.com wrote:
Project: STX_build_layer_flock_master_master
Build #: 132
Status: Still Failing
Timestamp: 20200603T020359Z
Check logs at:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/flock/20200603T020359Z/logs
--------------------------------------------------------------------------------
Parameters
FULL_BUILD: false
FORCE_BUILD: false
_______________________________________________
Starlingx-discuss mailing list
Starlingx-discuss at lists.starlingx.io
http://lists.starlingx.io/cgi-bin/mailman/listinfo/starlingx-discuss