After `helm install`, the liveness and readiness probes on the etcd pods fail with a timeout:

```
Liveness probe failed: command "/opt/bitnami/scripts/etcd/healthcheck.sh" timed out
Readiness probe failed: command "/opt/bitnami/scripts/etcd/healthcheck.sh" timed out
```
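Both probes run the Bitnami healthcheck script with a 5-second timeout (`timeout=5s` in the probe spec below), so a slow backing volume can make the script exceed the limit even when etcd itself is healthy. A minimal sketch of a values override that lengthens the timeouts, assuming the apisix chart forwards these keys to its Bitnami etcd subchart (the file name is hypothetical; parameter paths should be verified with `helm show values apisix/apisix`):

```yaml
# etcd-probe-values.yaml (hypothetical file name)
# Parameter paths assume the Bitnami etcd subchart under the `etcd` key;
# verify against `helm show values apisix/apisix` before use.
etcd:
  livenessProbe:
    timeoutSeconds: 10   # chart default is 5s, which the script exceeded
  readinessProbe:
    timeoutSeconds: 10
```

This could be applied at install time with `-f etcd-probe-values.yaml`.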
`kubectl get pods` output:
```
$ kubectl get pods -n apisix
NAME                                             READY   STATUS             RESTARTS      AGE
apisix-6c677bb5d9-n574n                          0/1     Init:0/1           0             72s
apisix-dashboard-5db89db87-qgxpc                 0/1     CrashLoopBackOff   2 (66s ago)   72s
apisix-etcd-0                                    0/1     Running            0             72s
apisix-etcd-1                                    0/1     Running            0             72s
apisix-etcd-2                                    0/1     Running            0             72s
apisix-ingress-controller-b74979dbd-c844d        0/1     Init:0/1           0             72s
nfs-subdir-external-provisioner-cb58bf75-8pc2j   1/1     Running            0             71m
```
`kubectl describe pod` output:
```
$ kubectl describe pod apisix-etcd-0 -n apisix
Name:             apisix-etcd-0
Namespace:        apisix
Priority:         0
Service Account:  default
Node:             limbo-master/192.168.50.102
Start Time:       Mon, 03 Apr 2023 15:16:47 +0800
Labels:           app.kubernetes.io/instance=apisix
                  app.kubernetes.io/managed-by=Helm
                  app.kubernetes.io/name=etcd
                  controller-revision-hash=apisix-etcd-867d4bdb9b
                  helm.sh/chart=etcd-8.7.7
                  statefulset.kubernetes.io/pod-name=apisix-etcd-0
Annotations:      checksum/token-secret: 0bc5b9cfba0b1b614f534a305f2d7cebd1a26d6cf0dd85fd6f6fd116ecb44452
Status:           Running
IP:               172.16.16.46
IPs:
  IP:  172.16.16.46
Controlled By:  StatefulSet/apisix-etcd
Containers:
  etcd:
    Container ID:   containerd://e86d6c2bf713c0955bd442b3314c79eda7d24060d0fa11bdf6bfa1af7b0e4e48
    Image:          docker.io/bitnami/etcd:3.5.7-debian-11-r14
    Image ID:       docker.io/bitnami/etcd@sha256:0825cafa1c5f0c97d86009f3af8c0f5a9d4279fcfdeb0a2a09b84a1eb7893a13
    Ports:          2379/TCP, 2380/TCP
    Host Ports:     0/TCP, 0/TCP
    State:          Running
      Started:      Mon, 03 Apr 2023 15:16:48 +0800
    Ready:          False
    Restart Count:  0
    Liveness:       exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=30s #success=1 #failure=5
    Readiness:      exec [/opt/bitnami/scripts/etcd/healthcheck.sh] delay=60s timeout=5s period=10s #success=1 #failure=5
    Environment:
      BITNAMI_DEBUG:                     false
      MY_POD_IP:                         (v1:status.podIP)
      MY_POD_NAME:                       apisix-etcd-0 (v1:metadata.name)
      MY_STS_NAME:                       apisix-etcd
      ETCDCTL_API:                       3
      ETCD_ON_K8S:                       yes
      ETCD_START_FROM_SNAPSHOT:          no
      ETCD_DISASTER_RECOVERY:            no
      ETCD_NAME:                         $(MY_POD_NAME)
      ETCD_DATA_DIR:                     /bitnami/etcd/data
      ETCD_LOG_LEVEL:                    info
      ALLOW_NONE_AUTHENTICATION:         yes
      ETCD_AUTH_TOKEN:                   jwt,priv-key=/opt/bitnami/etcd/certs/token/jwt-token.pem,sign-method=RS256,ttl=10m
      ETCD_ADVERTISE_CLIENT_URLS:        http://$(MY_POD_NAME).apisix-etcd-headless.apisix.svc.cluster.local:2379,http://apisix-etcd.apisix.svc.cluster.local:2379
      ETCD_LISTEN_CLIENT_URLS:           http://0.0.0.0:2379
      ETCD_INITIAL_ADVERTISE_PEER_URLS:  http://$(MY_POD_NAME).apisix-etcd-headless.apisix.svc.cluster.local:2380
      ETCD_LISTEN_PEER_URLS:             http://0.0.0.0:2380
      ETCD_INITIAL_CLUSTER_TOKEN:        etcd-cluster-k8s
      ETCD_INITIAL_CLUSTER_STATE:        new
      ETCD_INITIAL_CLUSTER:              apisix-etcd-0=http://apisix-etcd-0.apisix-etcd-headless.apisix.svc.cluster.local:2380,apisix-etcd-1=http://apisix-etcd-1.apisix-etcd-headless.apisix.svc.cluster.local:2380,apisix-etcd-2=http://apisix-etcd-2.apisix-etcd-headless.apisix.svc.cluster.local:2380
      ETCD_CLUSTER_DOMAIN:               apisix-etcd-headless.apisix.svc.cluster.local
    Mounts:
      /bitnami/etcd from data (rw)
      /opt/bitnami/etcd/certs/token/ from etcd-jwt-token (ro)
      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ggqsd (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  data:
    Type:       PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
    ClaimName:  data-apisix-etcd-0
    ReadOnly:   false
  etcd-jwt-token:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  apisix-etcd-jwt-token
    Optional:    false
  kube-api-access-ggqsd:
    Type:                    Projected (a volume that contains injected data from multiple sources)
    TokenExpirationSeconds:  3607
    ConfigMapName:           kube-root-ca.crt
    ConfigMapOptional:       <nil>
    DownwardAPI:             true
QoS Class:                   BestEffort
Node-Selectors:              <none>
Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
  Type     Reason     Age               From               Message
  ----     ------     ----              ----               -------
  Normal   Scheduled  2m9s              default-scheduler  Successfully assigned apisix/apisix-etcd-0 to limbo-master
  Normal   Pulled     2m8s              kubelet            Container image "docker.io/bitnami/etcd:3.5.7-debian-11-r14" already present on machine
  Normal   Created    2m8s              kubelet            Created container etcd
  Normal   Started    2m8s              kubelet            Started container etcd
  Warning  Unhealthy  3s (x7 over 53s)  kubelet            Readiness probe failed: command "/opt/bitnami/scripts/etcd/healthcheck.sh" timed out
  Warning  Unhealthy  3s (x2 over 33s)  kubelet            Liveness probe failed: command "/opt/bitnami/scripts/etcd/healthcheck.sh" timed out
```
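To confirm whether the script itself is slow (rather than etcd being unhealthy), one could time the healthcheck by hand inside a pod and query the member directly. A sketch against this cluster, assuming the Bitnami image's bundled `etcdctl` and a POSIX shell in the container:

```
# Time one run of the probe script; compare wall time against the 5s probe timeout
kubectl exec -n apisix apisix-etcd-0 -- \
  sh -c 'time /opt/bitnami/scripts/etcd/healthcheck.sh'

# Ask etcd directly whether this member is healthy
kubectl exec -n apisix apisix-etcd-0 -- \
  etcdctl --endpoints=http://127.0.0.1:2379 endpoint health
```

If the script takes longer than 5 seconds while `endpoint health` returns quickly, the probe timeout (not the cluster) is the likely culprit.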
`kubectl get svc` output:

```
$ kubectl get svc -n apisix -o wide
NAME                        TYPE        CLUSTER-IP       EXTERNAL-IP   PORT(S)             AGE     SELECTOR
apisix-admin                ClusterIP   10.105.55.222    <none>        9180/TCP            8m24s   app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix
apisix-dashboard            ClusterIP   10.100.159.52    <none>        80/TCP              8m24s   app.kubernetes.io/instance=apisix,app.kubernetes.io/name=dashboard
apisix-etcd                 ClusterIP   10.101.231.29    <none>        2379/TCP,2380/TCP   8m24s   app.kubernetes.io/instance=apisix,app.kubernetes.io/name=etcd
apisix-etcd-headless        ClusterIP   None             <none>        2379/TCP,2380/TCP   8m24s   app.kubernetes.io/instance=apisix,app.kubernetes.io/name=etcd
apisix-gateway              NodePort    10.109.193.104   <none>        80:31754/TCP        8m24s   app.kubernetes.io/instance=apisix,app.kubernetes.io/name=apisix
apisix-ingress-controller   ClusterIP   10.109.254.157   <none>        80/TCP              8m24s   app.kubernetes.io/instance=apisix,app.kubernetes.io/name=ingress-controller
```
Install command:

```
helm install apisix apisix/apisix \
  --set allow.ipList="{0.0.0.0/0}" \
  --set gateway.type=NodePort \
  --set ingress-controller.enabled=true \
  --namespace apisix \
  --set ingress-controller.config.apisix.serviceNamespace=apisix
```
Environment:

```
$ uname -a
Linux limbo-master 5.10.0-21-amd64 #1 SMP Debian 5.10.162-1 (2023-01-21) x86_64 GNU/Linux

$ kubectl version --output=yaml
clientVersion:
  buildDate: "2023-02-22T13:39:03Z"
  compiler: gc
  gitCommit: fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b
  gitTreeState: clean
  gitVersion: v1.26.2
  goVersion: go1.19.6
  major: "1"
  minor: "26"
  platform: linux/amd64
kustomizeVersion: v4.5.7
serverVersion:
  buildDate: "2023-02-22T13:32:22Z"
  compiler: gc
  gitCommit: fc04e732bb3e7198d2fa44efa5457c7c6f8c0f5b
  gitTreeState: clean
  gitVersion: v1.26.2
  goVersion: go1.19.6
  major: "1"
  minor: "26"
  platform: linux/amd64
```