Hi Bin,
Worker nodes can access the external network through the controller nodes. In our deployment we use a local registry that is accessible via OAM, and the worker nodes can pull images from that local registry. So I suggest you check:
1. whether your worker nodes can access the external network through OAM;
2. the docker configuration "/etc/docker/daemon.json" on your worker nodes (a minimal sketch follows below).
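For reference, a minimal sketch of what /etc/docker/daemon.json could contain when pointing at a local registry. The registry address registry.local:9001 and the insecure-registries entry are only illustrative assumptions; use whatever your deployment actually exposes:

{
    "insecure-registries": ["registry.local:9001"]
}

After editing the file, docker on the worker needs to be restarted (e.g. "sudo systemctl restart docker") for the change to take effect.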
Thanks.
From: Yang, Bin [mailto:Bin.Yang@windriver.com]
Sent: Wednesday, July 24, 2019 8:21 AM
To: Xie, Cindy <cindy.xie@intel.com>; starlingx-discuss@lists.starlingx.io
Subject: Re: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node
Hi Cindy,
Thanks for the response. I am indeed located in China, and I have set up a proxy to work around the connectivity issue. That setup was verified to work while installing the controllers and by running "docker pull" on the controller nodes.
The issue I am experiencing is that a worker node, which has no access to the OAM network, is trying to pull docker images while I deploy OpenStack Helm with the "system application-apply stx-openstack" command.
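For context, I monitor the apply with the standard application commands (a rough sketch; exact output columns may differ by release):

[sysadmin@controller-0 ~(keystone_admin)]$ system application-list
[sysadmin@controller-0 ~(keystone_admin)]$ system application-show stx-openstack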
Thanks
Best Regards,
Bin Yang, Solution Engineering Team,
Wind River
ONAP Multi-VIM/Cloud PTL
Direct +86 10 84777126 Mobile +86 13811391682 Fax +86 10 64398189
Skype: yangbincs993
From: Xie, Cindy [mailto:cindy.xie@intel.com]
Sent: Tuesday, July 23, 2019 10:00 PM
To: Yang, Bin; starlingx-discuss@lists.starlingx.io
Subject: RE: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node
Hi, Bin
I guess you’re located in China and have a firewall blocking some docker images, right? You may have to set up your own local registry in your lab.
Thx. - cindy
From: Yang, Bin [mailto:Bin.Yang@windriver.com]
Sent: Tuesday, July 23, 2019 8:53 AM
To: starlingx-discuss@lists.starlingx.io
Subject: [Starlingx-discuss] [Containers] Unable to pull docker image from a worker node
Dear experts,
I have been trying to install the stx milestone3 build on VirtualBox in standard mode. I managed to install 2 controller nodes and 1 worker node and provisioned them according to the instructions on the wiki. I then uploaded the OpenStack Helm charts and applied them, but the apply operation failed.
While investigating the root cause, I found that the worker node is not in Ready status:
[sysadmin@controller-0 ~(keystone_admin)]$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
compute-0 NotReady <none> 3h32m v1.13.5
controller-0 Ready master 14d v1.13.5
controller-1 Ready master 13d v1.13.5
The root cause is that the worker node has no direct access to the external network and therefore cannot pull docker images via the proxy:
[sysadmin@controller-0 ~(keystone_admin)]$ kubectl -n kube-system describe pods kube-proxy-pftk8
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedCreatePodSandBox 38s (x462 over 3h36m) kubelet, compute-0 Failed create pod sandbox: rpc error: code = Unknown desc = failed pulling image "k8s.gcr.io/pause:3.1": Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
compute-0:~# docker pull k8s.gcr.io/pause:3.1
Error response from daemon: Get https://k8s.gcr.io/v2/: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
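As a side note, the proxy itself can also be probed directly from the worker (assuming curl is installed there), for example:

compute-0:~# curl -x http://128.224.230.5:9090 -I https://k8s.gcr.io/v2/

A timeout here would indicate that the proxy host is simply unreachable from the worker, rather than a docker-specific problem.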
The docker proxy configuration on the worker, and a traceroute to the proxy host:
compute-0:~# cat /etc/systemd/system/docker.service.d/http-proxy.conf
[Service]
Environment="HTTP_PROXY=http://128.224.230.5:9090"
Environment="HTTPS_PROXY=http://128.224.230.5:9090"
Environment="NO_PROXY=localhost,127.0.0.1,registry.local,192.168.204.2,192.168.204.3,10.0.2.25,10.0.2.26,192.168.204.4,10.0.2.27"
compute-0:~# traceroute 128.224.230.5
traceroute to 128.224.230.5 (128.224.230.5), 30 hops max, 60 byte packets
1 controller-0 (192.168.204.3) 0.200 ms 0.256 ms 0.208 ms
2 * * *
3 * * *
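For completeness, the worker's routing toward the proxy host can be checked with generic iproute2 commands, e.g.:

compute-0:~# ip route
compute-0:~# ip route get 128.224.230.5

which should confirm that traffic to 128.224.230.5 is forwarded via controller-0 (192.168.204.3), as the first traceroute hop above suggests.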
Can anybody explain how a worker node without access to the external network is supposed to pull docker images? How can I work around this issue?
Thanks
Best Regards,
Bin Yang, Solution Engineering Team,
Wind River
ONAP Multi-VIM/Cloud PTL
Direct +86 10 84777126 Mobile +86 13811391682 Fax +86 10 64398189
Skype: yangbincs993
From: Perez Ibarra, Maria G [mailto:maria.g.perez.ibarra@intel.com]
Sent: Tuesday, July 23, 2019 7:49 AM
To: starlingx-discuss@lists.starlingx.io
Subject: [Starlingx-discuss] [Containers] Sanity Test - ISO 20190722
Status of the Sanity Test for last CENGN ISO: bootimage.iso from 2019-Jul-22 (link)

Status: RED

===========================================
Sanity Test is executed in a Containers – Bare Metal Environment

AIO – Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO – Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Dedicated Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS] | 24 TCs FAIL
Sanity Platform   09 TCs [PASS]
TOTAL: [ 66 TCs PASS ]

===========================================
Sanity Test is executed in a Containers – Virtual Environment

AIO – Simplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  49 TCs [PASS]
Sanity Platform   07 TCs [PASS]
TOTAL: [ 61 TCs PASS ]

AIO – Duplex
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS] | 24 TCs FAIL
Sanity Platform   07 TCs [PASS]
TOTAL: [ 64 TCs PASS ]

Standard - Local Storage (2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS]
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]

Standard - External Storage (2+2+2)
Setup             04 TCs [PASS]
Provisioning      01 TCs [PASS]
Sanity OpenStack  52 TCs [PASS] | 24 TCs FAIL
Sanity Platform   08 TCs [PASS]
TOTAL: [ 65 TCs PASS ]
-----------------------------------------------------------------------------------
Create instance from Image or from Volume fails
https://bugs.launchpad.net/starlingx/+bug/1837241
Regards
Maria G.