Re: [Starlingx-discuss] build-pkgs cannot complete std build
Not sure if it's a valid fix, but changing the two aclocal commands from 'aclocal -I m4 --install' to 'aclocal -I m4' avoids copying the macros installed by autoconf-archive into the ceph environment.

-Erich
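In context, the change would look something like the sketch below. The file name and the surrounding commands are assumptions about where ceph's build regenerates its autotools files, not verified against the ceph tree:

    # hypothetical excerpt from ceph's autogen.sh (exact location assumed)
    aclocal -I m4            # was: aclocal -I m4 --install
    autoconf                 # assumed neighbouring autotools steps
    automake --add-missing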
On Fri, 2018-09-28 at 16:39 -0400, Scott Little wrote:

Ok, we've seen 3 ceph failures in our last 6 builds.
The common factor: tpm2-tools builds on 'b0' before ceph builds.
Our theory: the BuildRequires of tpm2-tools causes autoconf-archive to be installed, which installs a number of .m4 files in /usr/share/aclocal, which in turn causes ceph grief when it calls aclocal.
I don't really know automake or aclocal all that well. I'm assuming /usr/share/aclocal acts something like a cache, but it's a cache whose contents are incompatible with ceph.
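A few standard aclocal commands make the cache-like behaviour concrete. The options shown (--print-ac-dir, --install, -I) are documented aclocal options; the ax_*.m4 names are the macro files the autoconf-archive package installs:

    # where aclocal looks for system-wide macros (typically /usr/share/aclocal)
    aclocal --print-ac-dir

    # the macros autoconf-archive drops into that directory
    ls /usr/share/aclocal/ax_*.m4

    # with --install, any of those macros referenced from configure.ac is
    # copied into the package's local m4/ directory
    aclocal -I m4 --install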
Do we have any autotools / aclocal / m4 experts in the house?
Possible fixes:
- ceph: can we tell it not to use the aclocal cache, either explicitly (a flag to aclocal? see the sketch below) or implicitly (update ceph's m4 files so they look 'newer' than the cached copies)?
- tpm2-tools: can we remove the dependency on autoconf-archive? No other package we build seems to need it.
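As a sketch of the explicit option: aclocal in automake 1.12 and later accepts --system-acdir, so pointing it at an empty directory keeps /usr/share/aclocal out of the picture entirely. Whether ceph's build scripts pass extra flags through to aclocal is an open question:

    # hypothetical: regenerate ceph's build files while ignoring system macros
    mkdir -p /tmp/empty-acdir
    aclocal -I m4 --system-acdir=/tmp/empty-acdir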
Scott
On 18-09-27 04:45 PM, Saul Wold wrote:
And of course it worked the third time!
So, I lost the good logs.
Sau!
On 09/27/2018 12:56 PM, Scott Little wrote:
On 18-09-27 03:53 PM, Scott Little wrote:
Our latest build, based on code synced at 2018-09-27T15:28:00 UTC, built successfully.
It took three attempts to get ceph built. The first two passes aborted quickly due to missing packages. The final pass did not exhibit the 'aclocal: too many loops' issue.
The only build I have that exhibited the 'too many loops' error was a snapshot from 2018-09-20T15:50:40 UTC.
I do have a designer with an older snapshot that seems to hit it regularly, so I'll work with him and see if we can learn more.
I think we need more data from the community:
- Whose build is failing on ceph with 'aclocal: too many loops'?
- Who is building successfully?
- Who can build only intermittently?
Info to collect for failed builds:
- repo sync timestamp
- build command used
- was it a new workspace, a cleaned workspace, or a previously used workspace?
- $MY_WORKSPACE/CONTEXT
- $MY_WORKSPACE/build-std.log
- $MY_WORKSPACE/std/results/*/ceph-*/*.log
For successful builds, same info. Rather than full build logs, I can settle for:
- grep '\(Success building\|iteration\|building ceph\)' $MY_WORKSPACE/build-std.log
- grep compute_resources: build-std.log
On 18-09-27 02:21 PM, Saul Wold wrote:
On 09/26/2018 09:16 AM, Scott Little wrote:
aclocal 'too many loops' has been popping up sporadically for a week or two now. Possibly 7.5 related.
I suspect that there is a build order and/or race condition element to this. It often goes away if you just run build-pkgs a second time.
I am seeing this failure also, but it does not go away after a second rebuild. I have the latest stx-root (build-tools) with the recent patches.
Is this directly related to the fuzz issue, or is there something else we need to address in ceph itself?
This is blocking my local build.
Sau!
On 09/28/2018 01:39 PM, Scott Little wrote:
> Possible fixes:
> - ceph: can we tell it to not use the aclocal cache... explicitly (a flag to aclocal?) ... or implicitly (update ceph's m4 files so they look 'newer' than the cache)?

Not sure about that; I would have to dig deeper into aclocal, and it's been a while since I last did.

> - tpm2-tools: Can we remove the dependence on autoconf-archive? No other package we build seems to need it.

A quick scan shows that autoconf-archive was put in there for Travis support, and it went away this past March upstream when they converted to using a container for Travis. If we could use a newer version of tpm2-tools, that might solve this.

Maybe Erich's solution can work.

Sau!
participants (3)
- Cordoba Malibran, Erich
- Saul Wold
- Scott Little