[Starlingx-discuss] [multios][build] Build flock services with plain mock
Hello team/Scott,

Last week during the build meeting I took the AR to experiment with, and if possible fix, all the missing build requirements for the flock services.

Why am I interested in this? To enable the community to be able to build the core technology of StarlingX using the build system for specs/SRPMs they prefer. The one I used in this case is plain mock. As we know, mock is a tool for building RPM packages. You can use mock to build packages for many different versions of CentOS/Red Hat and Fedora. The main advantage of using mock to build RPMs instead of rpmbuild is that mock builds RPMs in a clean-room environment: it does this by creating a chroot and performing the RPM build inside the chroot.

In my case, I don't have a really powerful workstation at home, so I decided to create a solution for my HW limitations. Here is a simple solution to build the SRPMs from the flock services using containers:

https://github.com/VictorRodriguez/stx-packaging/tree/build_w_docker_centos/...

The docker image provided is a plain vanilla CentOS 7 with the necessary packages for mock and rpmbuild. It also adds local-centos-7-x86_64.cfg, which points to the regular vanilla CentOS 7 yum repo [0] as well as the stx yum input/output repos [1][2].

I am testing this on my regular laptop with docker and it works fine. The docker image builds one flock service at a time with the command (using an example):

$ make upstream-pkg SRPM=mtce-1.0-154.tis.src.rpm MOCK_CONFIG=local-centos-7-x86_64

Here is an update on the flock services that I have tested so far and the errors I have found:

https://docs.google.com/spreadsheets/d/1kWrV3A28tTc3xgKiYtbir3ymcI4pew3VosE0...

Scott, I have one question: on the IRC channel I asked about why sometimes the http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build... link shows as forbidden or down. Is this because I catch it in the middle of an image creation?

[0] http://mirror.centos.org/centos/7/extras/x86_64/
[1] http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build...
[2] http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build...

TODO: Enable the Makefile to build a flock service when I provide the tarball and the spec file instead of the SRPM. The problem I have is handling the flock package version when I create the tarballs myself.

I hope this is useful for someone else.

Regards

Victor Rodriguez
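For anyone who prefers to skip the Makefile, the upstream-pkg target is essentially a thin wrapper around mock itself. A minimal sketch of an equivalent direct invocation (the result directory is a placeholder, and copying the config into /etc/mock/ is my assumption, not necessarily what the Makefile does):

$ sudo cp local-centos-7-x86_64.cfg /etc/mock/    # make the config visible to mock by name
$ mock -r local-centos-7-x86_64 --rebuild mtce-1.0-154.tis.src.rpm --resultdir=./results/mtce

The built RPMs, plus build.log and root.log, end up under the --resultdir path.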
I've never seen a 404 or 403 myself, outside of the 3 or 4 extended outages attributed to known issues at CENGN.

In the file system, latest_build is a symbolic link to one of the timestamped build directories. I only change it at the end of a successful build, when the timestamped build directory is fully populated. During a build, the symlink should be pointing you at the previous build. Deleting the old link and creating the new one should only take a fraction of a second (a sketch of that swap follows the quoted message below).

How many folks have seen this? What was the time of the event? How long did it persist? Please report events in UTC.

Scott

On 2019-08-14 2:24 p.m., Victor Rodriguez wrote:
Scott, I have one question: on the IRC channel I asked about why sometimes the http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build... link shows as forbidden or down. Is this because I catch it in the middle of an image creation?
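To illustrate the kind of atomic swap described above (a sketch only, not the actual CENGN publish script; the directory name is a placeholder):

$ ln -s <new_timestamped_dir> latest_build.new    # stage the new link under a temporary name
$ mv -T latest_build.new latest_build             # rename over the old symlink in one atomic step

With this pattern a client following latest_build sees either the old target or the new one, never a missing link.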
On Wed, Aug 14, 2019 at 2:16 PM Scott Little <scott.little@windriver.com> wrote:
I've never seen a 404 or 403 myself, outside of the 3 or 4 extended outages attributed to known issues at CENGN.
In the file system latest_build is a symbolic link to one of the timestamped build directories. I only change it at the end of a successful build when the timestamped build directory is fully populated. During a build, the symlink should be pointing you at the previous build. Deleting the old link and creating the new one should only take a fraction of a second.
Ok, the failure I had was:

failure: repodata/repomd.xml from stx-cengn: [Errno 256] No more mirrors to try.
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build...: [Errno 14] HTTP Error 403 - Forbidden
How many folks have seen this?
So far, only me, and I tested it twice to confirm.
What was the time of the event?
Around 6 PM UTC
How long did it persist?
Less than 10 min
Please report events in UTC.
Got it, I will do that next time. In the meantime, I will leave my script running to try to build all the output packages and see if every one of them builds or if this error comes back (see the loop sketch after this message).

Thanks for the help

Victor R
Scott
On 2019-08-14 2:24 p.m., Victor Rodriguez wrote:
Scott, I have one question: on the IRC channel I asked about why sometimes the http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build... link shows as forbidden or down. Is this because I catch it in the middle of an image creation?
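A minimal sketch of the kind of loop such a script could run (the glob and the failures.log name are placeholders, not the actual script):

$ for s in *.src.rpm; do make upstream-pkg SRPM=$s MOCK_CONFIG=local-centos-7-x86_64 || echo "FAILED: $s" >> failures.log; done

Anything that does not build ends up listed in failures.log for later triage.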
On Wed, Aug 14, 2019 at 2:18 PM Scott Little <scott.little@windriver.com> wrote:
I've never seen a 404 or 403 myself, outside of the 3 or 4 extended outages attributed to known issues at CENGN. [...] How many folks have seen this? What was the time of the event? How long did it persist? Please report events in UTC.
So I've been poking at this for the last few minutes, so around 2200-2230 UTC.

These links work:

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190811T053...
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190813T033...

These do not:

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033...
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190814T053...

Until I tried them again to write this email, then they swapped.

Is there perchance a load balancer in front of multiple web servers and one of the backends is having trouble? Even if that isn't the case that seems to describe the observed behaviour well enough.

dt

--
Dean Troyer
dtroyer@gmail.com
I can see it also and it's easily reproducible with this line:

$ while true; do curl -I -q http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/ && sleep 1; done

HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:46 GMT
Content-Type: text/html
Vary: Accept-Encoding
Via: 1.1 jfdmzpr03, 1.1 jfintpr01
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

HTTP/1.1 403 Forbidden
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:48 GMT
Content-Type: text/html
Content-Length: 153
Via: 1.1 jfdmzpr04, 1.1 jfintpr02
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:49 GMT
Content-Type: text/html
Vary: Accept-Encoding
Via: 1.1 jfdmzpr04, 1.1 jfintpr02
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

HTTP/1.1 403 Forbidden
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:51 GMT
Content-Type: text/html
Content-Length: 153
Via: 1.1 jfdmzpr04, 1.1 jfintpr02
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

HTTP/1.1 200 OK
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:52 GMT
Content-Type: text/html
Vary: Accept-Encoding
Via: 1.1 jfdmzpr04, 1.1 jfintpr02
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

HTTP/1.1 403 Forbidden
Server: nginx/1.15.8
Date: Thu, 15 Aug 2019 02:03:53 GMT
Content-Type: text/html
Content-Length: 153
Via: 1.1 jfdmzpr04, 1.1 jfintpr02
Proxy-Connection: Keep-Alive
Connection: Keep-Alive

On 8/14/19, 5:43 PM, "Dean Troyer" <dtroyer@gmail.com> wrote:

On Wed, Aug 14, 2019 at 2:18 PM Scott Little <scott.little@windriver.com> wrote:
> I've never seen a 404 or 403 myself, outside of the 3 or 4 extended
> outages attributed to known issues at CENGN.
[...]
> How many folks have seen this? What was the time of the event? How
> long did it persist? Please report events in UTC.

So I've been poking at this for the last few minutes, so around 2200-2230 UTC

These links work:

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190811T053...
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190813T033...

These do not:

http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033...
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190814T053...

Until I tried them again to write this email, then they swapped.

Is there perchance a load balancer in front of multiple web servers and one of the backends is having trouble? Even if that isn't the case that seems to describe the observed behaviour well enough.

dt

--
Dean Troyer
dtroyer@gmail.com
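A slightly tighter variant of the same loop, keeping only the status line and the Via header, makes the alternating backend easier to spot (a sketch; same URL as above):

$ while true; do curl -sI http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/ | grep -E '^(HTTP|Via)'; sleep 1; done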
Thanks a lot to everyone that helped us test and verify this issue. During the build meeting, Scott agreed to help us talk with CENGN to fix it.

In the meantime, a local repo with the RPMs from

[1] http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build...
[2] http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/latest_build...

is the workaround. If you download them, you just need to run createrepo <path to your repo> (a minimal sketch follows below the quoted message). This is just a temporary solution, since the idea is that any of us can build without the need for heavy workstations.

Thanks a lot Scott

Regards

Victor R

On Wed, Aug 14, 2019 at 10:08 PM Cordoba Malibran, Erich <erich.cordoba.malibran@intel.com> wrote:
I can see it also and it's easily reproducible with this line:
$ while true; do curl -I -q http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/ && sleep 1; done
[...]
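A minimal sketch of the local-repo workaround mentioned above (the paths, the repo id, and the edit to the mock config are my assumptions, not a tested recipe):

$ mkdir -p ~/stx-local-repo
$ cp ~/Downloads/stx-rpms/*.rpm ~/stx-local-repo/    # RPMs previously downloaded from [1] and [2]
$ createrepo ~/stx-local-repo                        # generates the repodata/ directory yum and mock need

Then add a repo section for it inside the yum.conf block of local-centos-7-x86_64.cfg, for example:

[stx-local]
name=stx-local
baseurl=file:///home/<user>/stx-local-repo/
enabled=1
gpgcheck=0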
The server is multi-threaded, and only one server thread had lost connectivity to the ceph back end. It's fixed now.

Scott

On 2019-08-14 6:42 p.m., Dean Troyer wrote:
On Wed, Aug 14, 2019 at 2:18 PM Scott Little <scott.little@windriver.com> wrote:
I've never seen a 404 or 403 myself, outside of the 3 or 4 extended outages attributed to known issues at CENGN. [...] How many folks have seen this? What was the time of the event? How long did it persist? Please report events in UTC.

So I've been poking at this for the last few minutes, so around 2200-2230 UTC
These links work:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190811T053... http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190813T033...
These do not:
http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190812T033... http://mirror.starlingx.cengn.ca/mirror/starlingx/master/centos/20190814T053...
Until I tried them again to write this email, then they swapped.
Is there perchance a load balancer in front of multiple web servers and one of the backends is having trouble? Even if that isn't the case that seems to describe the observed behaviour well enough.
dt
Awesome, thanks!

On Mon, Aug 19, 2019 at 9:35 AM Scott Little <scott.little@windriver.com> wrote:
The server is multi-threaded, and only one server thread had lost connectivity to the ceph back end. It's fixed now.
Scott
[...]
Hi team/Marcela,

Following this experiment, here are the results of building the stx SRPMs with the plain mock build system:

https://docs.google.com/spreadsheets/d/1kWrV3A28tTc3xgKiYtbir3ymcI4pew3VosE0...

Marcela, we can work on fixing them one by one. If I am missing something on the list of SRPMs that need to be built, please let me know.

I also updated the script and Makefile based on the patch from Dean (thanks).

Regards

Victor Rodriguez

On Mon, Aug 19, 2019 at 12:03 PM Victor Rodriguez <vm.rod25@gmail.com> wrote:
Awesome, thanks!
[...]
participants (4)

- Cordoba Malibran, Erich
- Dean Troyer
- Scott Little
- Victor Rodriguez