On 2019-03-15 09:26:55 -0400 (-0400), Andy Ning wrote: [...]
> Good to know that we can somehow control the job by the .zuul.yaml file. I would think adding a nodeset to the job should be a temporary workaround.
Yes, I think this is what Dean was going to propose for both the jobs you noted.
> Overall I'm not sure we want to specify where a particular job is running (will that be a load balancing issue for Zuul for example?).
The nodeset doesn't specify a location, just what sort of environment should be booted for the system in which the job will be run. Aside from some highly specialized nodesets we have which are provider-specific or specific to non-x86-64 processors, our generic $distro-$release nodesets can be booted in any of our Nodepool providers.
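For example, a variant in the repository's .zuul.yaml can pin a job to one of those generic nodesets. This is purely an illustrative sketch; the job name below is a placeholder for whichever job you actually need to adjust, and ubuntu-xenial is just one of the generic nodeset names:

  # .zuul.yaml (illustrative only)
  - project:
      check:
        jobs:
          - openstack-tox-py27:        # placeholder job name, use yours
              nodeset: ubuntu-xenial   # a generic $distro-$release nodeset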
> Plus most of us want to focus on the production code, so hiding Zuul job details may not be a bad idea for developers (maybe that's the reason why .zuul.yaml is a hidden file?)
Zuul will load[0] zuul.yaml or zuul.d/*.yaml with or without a leading '.', so making those files hidden is not an architectural choice; they can be renamed to drop the leading '.' from the file or directory name with no change in behavior. The goal with Zuul is that your job definitions are part of your repository, so they're available for anyone to inspect and alter (and, with a few security-related exceptions, Zuul will even run jobs speculatively against proposed alterations to those configuration files so they can be proven to work before they're merged).
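Concretely, any of these in-repository locations gets picked up (see the configuration-loading documentation at [0]); if I remember the loader's order correctly, the first one found wins:

  zuul.yaml
  zuul.d/*.yaml
  .zuul.yaml
  .zuul.d/*.yaml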
> In terms of self-testing, I usually run tox locally on our build machine and that works fairly well. Is there a way we can trigger Zuul jobs on our change before we submit the review? The idea is that developers run tox in the same environment as Zuul runs it.
As the errors you raised demonstrate, the details/dependencies of some tests rely strongly on the characteristics of the system on which they're run. You can of course download[1] the images we build for our test systems and boot one in a virtual machine context, or run a script[2] we provide to build one yourself with or without modifications. However, it's not just the images themselves which affect job characteristics but also the underlying machine, so we provide a breakdown[3] of the most relevant known (and unknown) properties of the providers/flavors we use.

At present, exactly replicating every detail of a Zuul job without running Zuul itself is nontrivial, since job definitions are often distributed across, and inherited from, multiple Git repositories. There is some work underway to provide tooling which makes this much easier, but most of the time it's sufficient to emulate a tox-based job on an appropriate system (one of the images described above) by checking out the repository in question, installing any system packages bindep says are missing for its "test" profile, running any additional tools/test-setup.sh script the project provides, and then invoking tox with the desired parameters (a rough shell sketch of that sequence follows after the footnotes).

That said, if you're looking to have the Zuul service we operate test your changes before you push them to Gerrit for review, I don't see much point. We standardize on making a "work in progress" option available to all change owners (currently implemented as a -1 vote on the Workflow label) so they can communicate to reviewers that a change is not yet ready to be reviewed. Zuul will still run all configured jobs on such changes and report the results back in a review comment, just like for any other proposed change.

[0] https://zuul-ci.org/docs/zuul/user/config.html#configuration-loading
[1] https://nb01.openstack.org/images/
[2] https://opendev.org/openstack-infra/project-config/src/branch/master/tools/b...
[3] https://docs.openstack.org/infra/manual/testing.html
--
Jeremy Stanley
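P.S. Here's the rough shell sketch of that emulation sequence I mentioned above. It's only an approximation of what the job does; the repository path and tox environment are placeholders, and it assumes bindep and tox are already installed on a system built from one of the images mentioned earlier:

  # clone the project you want to test (placeholder path)
  git clone https://opendev.org/<namespace>/<project>
  cd <project>

  # install whatever distro packages the "test" bindep profile reports missing
  sudo apt-get install -y $(bindep -b test)   # or the dnf/zypper equivalent

  # run the project's additional setup script, if it provides one
  [ -x tools/test-setup.sh ] && ./tools/test-setup.sh

  # invoke the same tox environment the job would (placeholder name)
  tox -e <envname>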