Hi, Austin:

Following your suggestion, I ran the command "kubectl describe po -n kube-system $(kubectl get po -n kube-system | grep ic-nginx-ingress | awk '{print $1}')", and the images of the failing pods are calico/kube-controllers:v3.12.0 and registry.local:9001/k8s.gcr.io/coredns:1.6.7. What should I do now to solve this problem?


The following are the error messages in the returned result:
Events:
  Type     Reason            Age                From                   Message
  ----     ------            ----               ----                   -------
  Warning  FailedScheduling  22m (x3 over 22m)  default-scheduler      0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
  Normal   Scheduled         22m                default-scheduler      Successfully assigned kube-system/calico-kube-controllers-5cd4695574-8nmcj to controller-0
  Normal   Pulled            22m                kubelet, controller-0  Container image "registry.local:9001/quay.io/calico/kube-controllers:v3.12.0" already present on machine
  Normal   Created           22m                kubelet, controller-0  Created container calico-kube-controllers
  Normal   Started           22m                kubelet, controller-0  Started container calico-kube-controllers

Events:
  Type     Reason            Age                 From               Message
  ----     ------            ----                ----               -------
  Warning  FailedScheduling  22m (x3 over 22m)   default-scheduler  0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
  Warning  FailedScheduling  83s (x17 over 22m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod affinity/anti-affinity, 1 node(s) didn't satisfy existing pods anti-affinity rules.

Events:
  Type     Reason            Age                From                   Message
  ----     ------            ----               ----                   -------
  Warning  FailedScheduling  22m (x3 over 22m)  default-scheduler      0/1 nodes are available: 1 node(s) had taint {node.kubernetes.io/not-ready: }, that the pod didn't tolerate.
  Normal   Scheduled         22m                default-scheduler      Successfully assigned kube-system/coredns-78d9fd7cb9-r289w to controller-0
  Normal   Pulled            22m                kubelet, controller-0  Container image "registry.local:9001/k8s.gcr.io/coredns:1.6.7" already present on machine
  Normal   Created           22m                kubelet, controller-0  Created container coredns
  Normal   Started           22m                kubelet, controller-0  Started container coredns
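
For reference, a minimal set of follow-up checks (assuming controller-0 is the only node in this setup, as the events above suggest) would be:

        kubectl get nodes -o wide                             # is controller-0 Ready or still NotReady?
        kubectl describe node controller-0 | grep -i taints   # is the node.kubernetes.io/not-ready taint still set?
        kubectl get po -n kube-system -o wide                 # which kube-system pods are still Pending?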

===============================================================

Best regards,

Longqian



At 2021-04-20 20:59:43, "Sun, Austin" <austin.sun@intel.com> wrote:

Hi Longqian:

      Would you like to check the result of "kubectl describe po -n kube-system $(kubectl get po -n kube-system | grep ic-nginx-ingress | awk '{print $1}')"?

 

Is it caused by a Docker image pull failure or some other issue?
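
One quick way to tell (assuming the pods in question run on controller-0 and kubectl is available there) would be something like:

        kubectl get po -n kube-system | grep -E 'ErrImagePull|ImagePullBackOff'   # any pods failing on image pull?
        kubectl get events -n kube-system | grep -iE 'pull|image'                 # image-pull related events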

 

BR
Austin Sun.

 

From: Longqian Zhao <zhaolongqian456@163.com>
Sent: Tuesday, April 20, 2021 7:01 PM
To: starlingx-discuss@lists.starlingx.io; allain.legacy@windriver.com; thorsten.steuer@acdp.at
Subject: [Starlingx-discuss] Bootstrap controller-0 failed as task [bootstrap/bringup-bootstrap-applications] failed

 

Hi,

 

I followed https://docs.starlingx.io/deploy_install_guides/r4_release/virtual/controller_storage_install_kubernetes.html, but the task [bootstrap/bringup-bootstrap-applications : Wait until application is in the applied state] failed. Could you please help me? Thanks.

=====================================

My Environment:

1. I am installing StarlingX in a virtual environment.

2. StarlingX version: R4.0

3. stx-tools: master

4. Network latency to docker.elastic.co: about 230 ms (measured with ping docker.elastic.co)

5. Steps:

             a. Set the system password successfully.

             b. Ran the following commands in the console:

                        export CONTROLLER0_OAM_CIDR=10.10.10.3/24

                        export DEFAULT_OAM_GATEWAY=10.10.10.1

                        sudo ip address add $CONTROLLER0_OAM_CIDR dev eth1000

                        sudo ip link set up dev eth1000

                        sudo ip route add default via $DEFAULT_OAM_GATEWAY dev eth1000

             c. cp /usr/share/ansible/stx-ansible/playbooks/host_vars/bootstrap/default.yml $HOME/localhost.yml

             d. Pushed the required images to Huawei Container Image Service (SWR) in advance. For details, refer to https://www.huaweicloud.com/product/swr.html

             e. Modified and added the following information in $HOME/localhost.yml:

                        system_mode: duplex

                        external_oam_node_0_address: 10.10.10.3

                        external_oam_node_1_address: 10.10.10.4

                        admin_password: Password123$%^

                        ansible_become_pass: Password123$%^

                        docker_registries:

                            k8s.gcr.io:

                            gcr.io:

                            quay.io:

                            docker.io:

                            docker.elastic.co:

                            defaults:

                                   url: <my SWR registry endpoint, from https://www.huaweicloud.com/product/swr.html>

                                   username: <my SWR username>

                                   password: <my SWR password>

             f. ansible-playbook /usr/share/ansible/stx-ansible/playbooks/bootstrap.yml

      •  All images downloaded and pushed to the local registry in 1191.2941618 seconds
      • Some steps were successfully executed
      • TASK [bootstrap/bringup-bootstrap-applications : Wait until application is in the uploaded state] ***

                               FAILED - RETRYING: Wait until application is in the uploaded state (3 retries left).

                               changed: [localhost]

                               TASK [bootstrap/bringup-bootstrap-applications : Apply overrides for application] ***

                               TASK [bootstrap/bringup-bootstrap-applications : Apply application] ************

                               changed: [localhost]

                               TASK [bootstrap/bringup-bootstrap-applications : Wait until application is in the applied state] ***

                               FAILED - RETRYING: Wait until application is in the applied state (30 retries left).

                               ....

                               fatal: [localhost]: FAILED! => {"attempts": 30, "changed": true, "cmd": "source /etc/platform/openrc; system application-show nginx-ingress-controller --column status --format value", "delta": "0:00:01.670146", "end": "2021-04-20 09:03:21.549258", "rc": 0, "start": "2021-04-20 09:03:19.879112", "stderr": "", "stderr_lines": [], "stdout": "apply-failed", "stdout_lines": ["apply-failed"]}

                               PLAY RECAP *********************************************************************

                               localhost                  : ok=330  changed=167  unreachable=0    failed=1
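
For reference, these are the StarlingX commands I understand can be used to inspect the failed application and, once the root cause is fixed, retry the apply (the retry step is my assumption, not something the playbook ran):

        source /etc/platform/openrc
        system application-list                               # overall status of the platform applications
        system application-show nginx-ingress-controller      # detailed status of the failed application
        system application-apply nginx-ingress-controller     # retry the apply after fixing the underlying issue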

 

Issues:

1. The task [bootstrap/bringup-bootstrap-applications : Wait until application is in the uploaded state] hit "FAILED - RETRYING" (3 retries left), and the task [bootstrap/bringup-bootstrap-applications : Wait until application is in the applied state] ultimately failed.

2. Do I have to set up a proxy for Docker to solve this problem?
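
If a proxy does turn out to be necessary, my understanding is that localhost.yml accepts Docker proxy overrides along the following lines (the proxy host/port below are placeholders, and I have not verified these values in my environment):

        docker_http_proxy: http://<proxy-host>:<proxy-port>
        docker_https_proxy: http://<proxy-host>:<proxy-port>
        docker_no_proxy:
          - localhost
          - 127.0.0.1
          - registry.local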

===============================================================

Best regards,

Longqian