[Starlingx-discuss] Broken build due to removal of irqbalance
Hi,

Today's build failed in the build-iso stage because irqbalance is missing. The package was removed here [0]. The fix is easy: just remove the irqbalance package from the image.inc file. Before doing that, though, I want to confirm that this removal is OK.

Thanks,
-Erich

[0] https://review.openstack.org/#/c/587832/
It will be restored by https://review.openstack.org/588043. It seems to be hung up in Zuul due to a faulty verification script. Working it now ...

Scott

On 18-08-03 01:43 AM, Cordoba Malibran, Erich wrote:
> Hi,
>
> Today's build failed in the build-iso stage because irqbalance is missing. The package was removed here [0]. The fix is easy: just remove the irqbalance package from the image.inc file. Before doing that, though, I want to confirm that this removal is OK.
>
> Thanks,
> -Erich
>
> [0] https://review.openstack.org/#/c/587832/
On Fri, Aug 3, 2018 at 8:31 AM, Scott Little <scott.little@windriver.com> wrote:
> It will be restored by https://review.openstack.org/588043.
That appears to be at the bottom of the stack, while your fix https://review.openstack.org/588565 is at the top. Either 588565 (rebased on master) or https://review.openstack.org/588534 (making the Zuul job non-voting) needs to merge first; then the rest of the stack that is blocked needs to be rebased.

Also, https://review.openstack.org/588566 should not be necessary, as that job is already non-voting and everything in that queue merged.

dt

--
Dean Troyer
dtroyer@gmail.com
OK, let's go with that.

I'm still not clear on how to debug these blockages. Is there a centralized place to view the work queue and what each job is waiting on?

Scott

On 18-08-03 09:51 AM, Dean Troyer wrote:
> On Fri, Aug 3, 2018 at 8:31 AM, Scott Little <scott.little@windriver.com> wrote:
>> It will be restored by https://review.openstack.org/588043.
> That appears to be at the bottom of the stack, while your fix https://review.openstack.org/588565 is at the top. Either 588565 (rebased on master) or https://review.openstack.org/588534 (making the Zuul job non-voting) needs to merge first; then the rest of the stack that is blocked needs to be rebased.
>
> Also, https://review.openstack.org/588566 should not be necessary, as that job is already non-voting and everything in that queue merged.
>
> dt
On Fri, Aug 3, 2018 at 8:57 AM, Scott Little <scott.little@windriver.com> wrote:
> I'm still not clear on how to debug these blockages. Is there a centralized place to view the work queue and what each job is waiting on?
http://zuul.openstack.org/ is the starting point; that shows _everything_ Zuul is doing. So the first thing is to put 'stx' or some other sub-string of the repo names into the filter. That is where you see the specific jobs.

For example, right now you can see 588565 in the check queue with the 5 reviews below it (from a stack perspective like Gerrit displays, that graph shows oldest at the top). That was my first clue to look at the review order in Gerrit. Then it was just following both the right-most pane in the Gerrit review screen and looking at parent commits to confirm.

I've found that, more often than not, blockage in the queues is due to things not being in the order you expect.

dt

--
Dean Troyer
dtroyer@gmail.com
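The same status feed the dashboard renders is available as JSON, so the filter trick can be scripted. A minimal sketch, assuming the Zuul v3 status endpoint at /api/status and its usual pipelines -> change_queues -> heads layout (the endpoint and field names may differ between Zuul versions):

    import json
    import urllib.request

    STATUS_URL = "http://zuul.openstack.org/api/status"  # assumed v3 endpoint

    def dump_matching(substring):
        """Print every queued change whose repo name contains substring."""
        with urllib.request.urlopen(STATUS_URL) as resp:
            status = json.loads(resp.read().decode())
        for pipeline in status.get("pipelines", []):
            for queue in pipeline.get("change_queues", []):
                # Each head is an ordered run of changes; anything behind
                # the head of its queue waits on everything ahead of it.
                for head in queue.get("heads", []):
                    for change in head:
                        if substring in change.get("project", ""):
                            print(pipeline["name"], change.get("id"),
                                  change.get("project"))

    dump_matching("stx")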
On Fri, Aug 3, 2018 at 9:25 AM, Dean Troyer <dtroyer@gmail.com> wrote:
> On Fri, Aug 3, 2018 at 8:57 AM, Scott Little <scott.little@windriver.com> wrote:
>> I'm still not clear on how to debug these blockages. Is there a centralized place to view the work queue and what each job is waiting on?
> http://zuul.openstack.org/ is the starting point; that shows _everything_ Zuul is doing. So the first thing is to put 'stx' or some other sub-string of the repo names into the filter. That is where you see the specific jobs.
One other bit about the Zuul status screen: click on the review box and it expands to the list of jobs being run. Clicking on those once they have started will take you to the live log screen for a running job (like watching paint dry sometimes!) or to the same log directory you get from the Gerrit review screen.

Our jobs are very uninteresting right now; to get more of a feel for this, put 'neutron' or another project you are familiar with into the filter and click around a bit.

dt

--
Dean Troyer
dtroyer@gmail.com
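Each change entry in that same JSON carries a "jobs" list, which is what the expanded box shows. A small extension of the sketch above (the "result", "url", and "report_url" field names are assumptions based on the v3 status feed):

    def dump_jobs(change):
        """Expand a status-feed change dict into its jobs, like clicking the box."""
        for job in change.get("jobs", []):
            # "result" stays None while a job is still running; "url" points
            # at the live console, "report_url" at the archived logs.
            print("  ", job.get("name"),
                  job.get("result") or "running",
                  job.get("report_url") or job.get("url"))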
http://zuul.openstack.org/

So it seems there can be multiple queues per git. Within a queue there are dependencies, but different queues can make progress independently? Or is there a queue of queues for the git?

On top of that, it seems like there are a finite number of execution engines. If no execution engines are available, none of your queues will progress.

Is that about right?

Scott

On 18-08-03 09:57 AM, Scott Little wrote:
> OK, let's go with that.
>
> I'm still not clear on how to debug these blockages. Is there a centralized place to view the work queue and what each job is waiting on?
>
> Scott
>
> On 18-08-03 09:51 AM, Dean Troyer wrote:
>> On Fri, Aug 3, 2018 at 8:31 AM, Scott Little <scott.little@windriver.com> wrote:
>>> It will be restored by https://review.openstack.org/588043.
>> That appears to be at the bottom of the stack, while your fix https://review.openstack.org/588565 is at the top. Either 588565 (rebased on master) or https://review.openstack.org/588534 (making the Zuul job non-voting) needs to merge first; then the rest of the stack that is blocked needs to be rebased.
>>
>> Also, https://review.openstack.org/588566 should not be necessary, as that job is already non-voting and everything in that queue merged.
>>
>> dt
On Fri, Aug 3, 2018 at 9:31 AM, Scott Little <scott.little@windriver.com> wrote:
> So it seems there can be multiple queues per git. Within a queue there are dependencies, but different queues can make progress independently? Or is there a queue of queues for the git?
[0] is the Zuul concept page, but basically it has a set of pipelines for different job types; the two we care the most about are check and gate. Check jobs run on every Gerrit submission; gate jobs run after Workflow +1 is set. There is overlap between those job sets (check may have run last week), but some things like non-voting jobs don't run in the gate.

The other pipeline we'll be using is experimental, for on-demand runs. That's where I plan to put the initial py3 jobs, for example, so we can see where we are but not waste resources running them all the time.
> On top of that, it seems like there are a finite number of execution engines. If no execution engines are available, none of your queues will progress. Is that about right?
Not just about, that is it exactly. An older Zuul status page showed the VM allocation graphs at the bottom; I'm not sure where those went after the Zuul v3 upgrade...

All of OpenStack CI is run on donated cloud resources (mostly single-use VMs) from places like Rackspace, Vexxhost, OVH, Dreamcloud and about 7 more that I don't remember offhand. We have a quota on each cloud, and the load during North American working hours usually puts us way over. This is a big part of why Zuul was born: to dynamically manage that ever-changing pool of test resources (VMs). It turns out that hosting OpenStack CI is an excellent cloud load test; we've found more than a few scaling problems this way.

dt

[0] https://zuul-ci.org/docs/zuul/user/concepts.html

--
Dean Troyer
dtroyer@gmail.com
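To see that pipeline split, and how loaded each pipeline is, in the raw status feed, here is a standalone variant of the earlier sketch (same assumed endpoint and JSON layout as above):

    import json
    import urllib.request
    from collections import Counter

    STATUS_URL = "http://zuul.openstack.org/api/status"  # assumed v3 endpoint

    def changes_per_pipeline():
        """Tally queued changes per pipeline (check, gate, experimental...)."""
        with urllib.request.urlopen(STATUS_URL) as resp:
            status = json.loads(resp.read().decode())
        counts = Counter()
        for pipeline in status.get("pipelines", []):
            for queue in pipeline.get("change_queues", []):
                for head in queue.get("heads", []):
                    counts[pipeline["name"]] += len(head)
        return counts

    print(changes_per_pipeline().most_common())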
On Fri, Aug 3, 2018 at 8:57 AM, Scott Little <scott.little@windriver.com> wrote:
> OK, let's go with that.
OK, https://review.openstack.org/588534 has merged, so starting with https://review.openstack.org/#/c/588043/ you should be able to rebase each of the stuck reviews directly in Gerrit on that, and they should merge (if 588534 actually fixes the problem). There is currently a job for 588043 in the check queue; doing a rebase will kill that and start it over.

Also, another way to kick a review to run the jobs again is to put 'recheck' as a review comment. Anyone can do this to try a review again, not just those who can W+1.

dt

--
Dean Troyer
dtroyer@gmail.com
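The 'recheck' comment can also be posted programmatically. A sketch against the standard Gerrit REST review endpoint; the username, HTTP password, and change number are placeholders, and older Gerrit servers may want digest rather than basic auth on the /a/ paths:

    import json
    import urllib.request

    GERRIT = "https://review.openstack.org"

    def recheck(change_number, username, http_password):
        """Post 'recheck' on the current revision to re-run the check jobs."""
        url = "{}/a/changes/{}/revisions/current/review".format(
            GERRIT, change_number)
        payload = json.dumps({"message": "recheck"}).encode()
        # Gerrit authenticates /a/ requests with the per-user HTTP password
        # from the account settings page.
        mgr = urllib.request.HTTPPasswordMgrWithDefaultRealm()
        mgr.add_password(None, GERRIT, username, http_password)
        opener = urllib.request.build_opener(
            urllib.request.HTTPBasicAuthHandler(mgr))
        req = urllib.request.Request(
            url, data=payload, method="POST",
            headers={"Content-Type": "application/json"})
        with opener.open(req) as resp:
            print(resp.status)  # 200 means the comment was posted

    recheck(588043, "myuser", "my-http-password")  # placeholder credentials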
On Fri, Aug 3, 2018 at 9:50 AM, Dean Troyer <dtroyer@gmail.com> wrote:
> OK, https://review.openstack.org/588534 has merged, so starting with https://review.openstack.org/#/c/588043/ you should be able to rebase each of the stuck reviews directly in Gerrit on that, and they should merge (if 588534 actually fixes the problem). There is currently a job for 588043 in the check queue; doing a rebase will kill that and start it over.
I'm pleading Friday post-flight brain fog... I need to correct myself here: the rebases are not necessary, as there is no conflicting code with the fix review. Just a recheck, or, as in the case of 588043, as long as the job starts _after_ the fix merges, life is good...

dt

--
Dean Troyer
dtroyer@gmail.com
Participants (3):

- Cordoba Malibran, Erich
- Dean Troyer
- Scott Little