sys-auth/sssd: Add missing /var/log/sssd tmpfiles entry #949

Merged — pothos merged 1 commit into main from kai/sssd-var-log on Jun 29, 2023

Conversation

@pothos (Member) commented on Jun 28, 2023

The folders are not created through "keepdir" (which would result in generated tmpfiles rules) but through an explicit tmpfiles file. This is error-prone and we should try to move to "keepdir" instead, but for the backport, just add the missing line.
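For illustration, the fix amounts to a single tmpfiles.d line that creates the log directory at boot. The sketch below only shows the general shape of such an entry; the file path, mode, and ownership are assumptions, not taken from the actual package:

```
# /usr/lib/tmpfiles.d/sssd.conf (illustrative sketch)
# Type  Path           Mode  User  Group  Age  Argument
d       /var/log/sssd  0750  root  root   -    -
```

With the preferred "keepdir" approach, the ebuild would instead call `keepdir /var/log/sssd` during installation and rely on the build tooling to turn that into the corresponding tmpfiles rule, as the description above notes.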

How to use

Backport to Beta

Testing done

Checked that the folder is there. With the default config, sssd.service still can't start, but it now fails with a different error message.
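A minimal way to check this by hand on a booted image (illustrative commands, not taken from the test logs):

```sh
# Re-apply the tmpfiles rules and confirm the directory exists
sudo systemd-tmpfiles --create
ls -ld /var/log/sssd

# With the stock (incomplete) config sssd still fails to start,
# but the failure should no longer be caused by the missing log directory
sudo systemctl start sssd.service || systemctl status sssd.service
```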

  • Changelog entries added in the respective changelog/ directory (user-facing change, bug fix, security fix, update)
  • Inspected CI output for image differences: /boot and /usr size, packages, list files for any missing binaries, kernel modules, config files, etc.

@pothos temporarily deployed to development June 28, 2023 12:49 with GitHub Actions
@pothos self-assigned this Jun 28, 2023
@github-actions commented:

Test report for 3648.0.0+nightly-20230627-2100 / amd64 arm64

Platforms tested : qemu_uefi-amd64 qemu_update-amd64 qemu_uefi-arm64 qemu_update-arm64

ok bpf.execsnoop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok bpf.local-gadget 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cgroupv1 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.basic 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.multipart-mime 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.cloudinit.script 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid0.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.data 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.disk.raid1.root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.discovery 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.etcdctlv3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.etcd-member.v2-backup-restore 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.filesystem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.flannel.udp 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.flannel.vxlan 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.instantiated.enable-unit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.kargs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.luks 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.indirect.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.regular.new 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.reuse 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.oem.wipe 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.symlink 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.translation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v1.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.btrfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.ext4root 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.users 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2.xfsroot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.ext4checkexisting 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.swap 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.ignition.v2_1.vfat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.install.cloudinit 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.internet 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.locksmith.cluster 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.misc.falco 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.network.initramfs.second-boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.listeners 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.network.wireguard 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.omaha.ping 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.osreset.ignition-rerun 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.overlay.cleanup 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.swap_activation 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.boot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.sysext.fallbackdownload # SKIP 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.toolbox.dnf-install 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.badverity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.update.grubnop 🟢 Succeeded: qemu_uefi-amd64 (1)

ok cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok cl.update.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.users.shells 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok cl.verity 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.auth.verify 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.groups 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.once 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.local 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.resource.s3.versioned 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.security.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.sethostname 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.ignition.systemd.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.reboot 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.locksmith.tls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.boolean 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.selinux.enforce 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.tls.fetch-urls 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok coreos.update.badusr 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok devcontainer.systemd-nspawn 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.btrfs-storage 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.containerd-restart 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.lib-coreos-dockerd-compat 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.network 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.selinux 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.torcx-manifest-pkgs 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok docker.userns 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok extra-test.[first_dual].cl.update.payload 🟢 Succeeded: qemu_update-amd64 (1); qemu_update-arm64 (1)

ok kubeadm.v1.24.14.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (4) ❌ Failed: qemu_uefi-arm64 (1, 2, 3)

                Diagnostic output for qemu_uefi-arm64, run 3
    L1: " Error: _kubeadm.go:279: unable to setup cluster: unable to create master node: machine __8e824bb6-938d-40f2-9d65-d5d325777fe2__ failed to start: ssh journalctl failed: time limit exceeded: dial tcp 10?.0.0.3:22: connect: no route to host_"
    L2: " "
                Diagnostic output for qemu_uefi-arm64, run 2
    L1: " Error: _cluster.go:117: I0628 16:22:40.332853    1491 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L2: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.24.15"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.24.15"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.24.15"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.24.15"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.7"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.8.6"
    L9: "cluster.go:117: I0628 16:22:51.101004    1656 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L10: "cluster.go:117: [init] Using Kubernetes version: v1.24.15"
    L11: "cluster.go:117: [preflight] Running pre-flight checks"
    L12: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.3?]"
    L19: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:117: [apiclient] All control plane components are healthy after 7.504313 seconds"
    L42: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:117: [bootstrap-token] Using token: 5jcl5f.06ob0ccbitx0ym5j"
    L48: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:117: "
    L58: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:117: "
    L60: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:117: "
    L62: "cluster.go:117:   mkdir -p $HOME/.kube"
    L63: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:117: "
    L66: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:117: "
    L68: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:117: "
    L70: "cluster.go:117: You should now deploy a pod network to the cluster."
    L71: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:117: "
    L74: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:117: "
    L76: "cluster.go:117: kubeadm join 10.0.0.3:6443 --token 5jcl5f.06ob0ccbitx0ym5j _"
    L77: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:b0b6678bd80df1b941954955be6b30fbaebaefee22584c1bcacc7372838377b2 "
    L78: "cluster.go:117: namespace/tigera-operator created"
    L79: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L80: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L81: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L82: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L83: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L84: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L85: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L86: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L87: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L88: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L89: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L90: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L91: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L92: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L93: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L94: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L95: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L96: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L97: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L98: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L99: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L100: "cluster.go:117: serviceaccount/tigera-operator created"
    L101: "cluster.go:117: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L102: "cluster.go:117: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:117: deployment.apps/tigera-operator created"
    L104: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L105: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L106: "cluster.go:117: installation.operator.tigera.io/default created"
    L107: "cluster.go:117: apiserver.operator.tigera.io/default created"
    L108: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.24.14.calico.base/nginx_deployment (93.72s)"
    L110: "kubeadm.go:313: nginx is not deployed: ready replicas should be equal to 1: null_"
    L111: " "
                Diagnostic output for qemu_uefi-arm64, run 1
    L1: "  "
    L2: " Error: _cluster.go:117: I0628 16:11:51.357314    1497 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.24.15"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.24.15"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.24.15"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.24.15"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.7"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L9: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.8.6"
    L10: "cluster.go:117: I0628 16:12:02.882769    1664 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.24"
    L11: "cluster.go:117: [init] Using Kubernetes version: v1.24.15"
    L12: "cluster.go:117: [preflight] Running pre-flight checks"
    L13: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L14: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L15: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L16: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L17: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L18: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L19: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.6?5]"
    L20: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L22: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L27: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L28: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L29: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L30: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L35: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L36: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L37: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L40: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L41: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L42: "cluster.go:117: [apiclient] All control plane components are healthy after 8.502734 seconds"
    L43: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L44: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L45: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L47: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule node-role.kubernetes.io/control-plane:NoSchedule]"
    L48: "cluster.go:117: [bootstrap-token] Using token: xxs72s.5b9dagqez4td3bca"
    L49: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L53: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L54: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L55: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L56: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L57: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L58: "cluster.go:117: "
    L59: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L60: "cluster.go:117: "
    L61: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L62: "cluster.go:117: "
    L63: "cluster.go:117:   mkdir -p $HOME/.kube"
    L64: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L65: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L66: "cluster.go:117: "
    L67: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L68: "cluster.go:117: "
    L69: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L70: "cluster.go:117: "
    L71: "cluster.go:117: You should now deploy a pod network to the cluster."
    L72: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L73: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L74: "cluster.go:117: "
    L75: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L76: "cluster.go:117: "
    L77: "cluster.go:117: kubeadm join 10.0.0.65:6443 --token xxs72s.5b9dagqez4td3bca _"
    L78: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:81caba2e99a65d37a4f33f4adf58b9ee72e3103e2cd4a7c665953ac84735e983 "
    L79: "cluster.go:117: namespace/tigera-operator created"
    L80: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L81: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L82: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L83: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L84: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L85: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L86: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L87: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L88: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L89: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L90: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L91: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L92: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L93: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L94: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L95: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L96: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L97: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L98: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L99: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L100: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L101: "cluster.go:117: serviceaccount/tigera-operator created"
    L102: "cluster.go:117: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:117: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L104: "cluster.go:117: deployment.apps/tigera-operator created"
    L105: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L106: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L107: "cluster.go:117: installation.operator.tigera.io/default created"
    L108: "cluster.go:117: apiserver.operator.tigera.io/default created"
    L109: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L110: "--- FAIL: kubeadm.v1.24.14.calico.base/nginx_deployment (94.11s)"
    L111: "kubeadm.go:313: nginx is not deployed: ready replicas should be equal to 1: null_"
    L112: " "

ok kubeadm.v1.24.14.calico.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.24.14.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.24.14.cilium.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.24.14.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.24.14.flannel.cgroupv1.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.25.10.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _cluster.go:117: I0628 16:05:55.066712    1499 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.25"
    L2: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.25.11"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.25.11"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.25.11"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.25.11"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.8"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.6-0"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.9.3"
    L9: "cluster.go:117: I0628 16:06:06.231966    1666 version.go:256] remote version is much newer: v1.27.3; falling back to: stable-1.25"
    L10: "cluster.go:117: [init] Using Kubernetes version: v1.25.11"
    L11: "cluster.go:117: [preflight] Running pre-flight checks"
    L12: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L13: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L14: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L15: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?6]"
    L19: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:117: [apiclient] All control plane components are healthy after 5.502801 seconds"
    L42: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:117: [bootstrap-token] Using token: psj18b.stwjktrcha0z3nn4"
    L48: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:117: "
    L58: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:117: "
    L60: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:117: "
    L62: "cluster.go:117:   mkdir -p $HOME/.kube"
    L63: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:117: "
    L66: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:117: "
    L68: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:117: "
    L70: "cluster.go:117: You should now deploy a pod network to the cluster."
    L71: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:117: "
    L74: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:117: "
    L76: "cluster.go:117: kubeadm join 10.0.0.16:6443 --token psj18b.stwjktrcha0z3nn4 _"
    L77: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:15763b49beac77d1f6ffe4e5d07515f373a753e576993425109fa0ce9db631f5 "
    L78: "cluster.go:117: i  Using Cilium version 1.12.1"
    L79: "cluster.go:117: ? Auto-detected cluster name: kubernetes"
    L80: "cluster.go:117: ? Auto-detected datapath mode: tunnel"
    L81: "cluster.go:117: ? Auto-detected kube-proxy has been installed"
    L82: "cluster.go:117: i  helm template --namespace kube-system cilium cilium/cilium --version 1.12.1 --set cluster.id=0,cluster.name=kubernetes,encryption.nodeEncryption=false,extraConfig.cluster-pool-ipv4-?cidr=192.168.0.0/17,extraConfig.enable-endpoint-routes=true,kubeProxyReplacement=disabled,operator.replicas=1,serviceAccounts.cilium.name=cilium,serviceAccounts.operator.name=cilium-operator,tunnel=vx?lan"
    L83: "cluster.go:117: i  Storing helm values file in kube-system/cilium-cli-helm-values Secret"
    L84: "cluster.go:117: ? Created CA in secret cilium-ca"
    L85: "cluster.go:117: ? Generating certificates for Hubble..."
    L86: "cluster.go:117: ? Creating Service accounts..."
    L87: "cluster.go:117: ? Creating Cluster roles..."
    L88: "cluster.go:117: ? Creating ConfigMap for Cilium version 1.12.1..."
    L89: "cluster.go:117: i Manual overwrite in ConfigMap: enable-endpoint-routes=true"
    L90: "cluster.go:117: i Manual overwrite in ConfigMap: cluster-pool-ipv4-cidr=192.168.0.0/17"
    L91: "cluster.go:117: ? Creating Agent DaemonSet..."
    L92: "cluster.go:117: ? Creating Operator Deployment..."
    L93: "cluster.go:117: ? Waiting for Cilium to be installed and ready..."
    L94: "cluster.go:117: ? Cilium was successfully installed! Run _cilium status_ to view installation health"
    L95: "cluster.go:117: daemonset.apps/cilium patched"
    L96: "cluster.go:117: ?[33m    /??_"
    L97: "cluster.go:117: ?[36m /???[33m___/?[32m??_?[0m    Cilium:         ?[32mOK?[0m"
    L98: "cluster.go:117: ?[36m ___?[31m/??_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L99: "cluster.go:117: ?[32m /???[31m___/?[35m??_?[0m    Hubble:         ?[36mdisabled?[0m"
    L100: "cluster.go:117: ?[32m ___?[34m/??_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L101: "cluster.go:117: ?[34m    ___/"
    L102: "cluster.go:117: ?[0m"
    L103: "cluster.go:117: Deployment       cilium-operator    "
    L104: "cluster.go:117: DaemonSet        cilium             "
    L105: "cluster.go:117: Containers:      cilium             "
    L106: "cluster.go:117:                  cilium-operator    "
    L107: "cluster.go:117: Cluster Pods:    0/0 managed by Cilium"
    L108: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.25.10.cilium.base/node_readiness (92.03s)"
    L110: "kubeadm.go:295: nodes are not ready: ready nodes should be equal to 2: 1"
    L111: "--- FAIL: kubeadm.v1.25.10.cilium.base/IPSec_encryption (66.14s)"
    L112: "cluster.go:117: Error: Unable to determine status:  timeout while waiting for status to become successful: context deadline exceeded"
    L113: "cluster.go:130: __/opt/bin/cilium status --wait --wait-duration 1m__ failed: output ?[33m    /????_"
    L114: "?[36m /?????[33m___/?[32m????_?[0m    Cilium:         ?[31m1 errors?[0m, ?[33m1 warnings?[0m"
    L115: "?[36m ___?[31m/????_?[32m__/?[0m    Operator:       ?[32mOK?[0m"
    L116: "?[32m /?????[31m___/?[35m????_?[0m    Hubble:         ?[36mdisabled?[0m"
    L117: "?[32m ___?[34m/????_?[35m__/?[0m    ClusterMesh:    ?[36mdisabled?[0m"
    L118: "?[34m    ___/"
    L119: "?[0m"
    L120: "DaemonSet         cilium             Desired: 2, Ready: ?[33m1/2?[0m, Available: ?[33m1/2?[0m, Unavailable: ?[31m1/2?[0m"
    L121: "Deployment        cilium-operator    Desired: 1, Ready: ?[32m1/1?[0m, Available: ?[32m1/1?[0m"
    L122: "Containers:       cilium             Pending: ?[32m1?[0m, Running: ?[32m1?[0m"
    L123: "cilium-operator    Running: ?[32m1?[0m"
    L124: "Cluster Pods:     3/3 managed by Cilium"
    L125: "Image versions    cilium             quay.io/cilium/cilium:v1.12.1@sha256:ea2db1ee21b88127b5c18a96ad155c25485d0815a667ef77c2b7c7f31cab601b: 2"
    L126: "cilium-operator    quay.io/cilium/operator-generic:v1.12.1@sha256:93d5aaeda37d59e6c4325ff05030d7b48fabde6576478e3fdbfb9bb4a68ec4a1: 1"
    L127: "Errors:           cilium             cilium          1 pods of DaemonSet cilium are not ready"
    L128: "Warnings:         cilium             cilium-68lnx    pod is pending, status Process exited with status 1_"
    L129: " "
    L130: "  "

ok kubeadm.v1.25.10.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.26.5.calico.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.26.5.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.26.5.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.calico.base 🟢 Succeeded: qemu_uefi-amd64 (2); qemu_uefi-arm64 (1) ❌ Failed: qemu_uefi-amd64 (1)

                Diagnostic output for qemu_uefi-amd64, run 1
    L1: " Error: _cluster.go:117: W0628 16:17:02.289430    1535 images.go:80] could not find officially supported version of etcd for Kubernetes v1.27.3, falling back to the nearest etcd version (3.5.7-0)"
    L2: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-apiserver:v1.27.3"
    L3: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-controller-manager:v1.27.3"
    L4: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-scheduler:v1.27.3"
    L5: "cluster.go:117: [config/images] Pulled registry.k8s.io/kube-proxy:v1.27.3"
    L6: "cluster.go:117: [config/images] Pulled registry.k8s.io/pause:3.9"
    L7: "cluster.go:117: [config/images] Pulled registry.k8s.io/etcd:3.5.7-0"
    L8: "cluster.go:117: [config/images] Pulled registry.k8s.io/coredns/coredns:v1.10.1"
    L9: "cluster.go:117: [init] Using Kubernetes version: v1.27.3"
    L10: "cluster.go:117: [preflight] Running pre-flight checks"
    L11: "cluster.go:117: [preflight] Pulling images required for setting up a Kubernetes cluster"
    L12: "cluster.go:117: [preflight] This might take a minute or two, depending on the speed of your internet connection"
    L13: "cluster.go:117: [preflight] You can also perform this action in beforehand using _kubeadm config images pull_"
    L14: "cluster.go:117: W0628 16:17:12.781434    1695 checks.go:835] detected that the sandbox image __registry.k8s.io/pause:3.6__ of the container runtime is inconsistent with that used by kubeadm. It is rec?ommended that using __registry.k8s.io/pause:3.9__ as the CRI sandbox image."
    L15: "cluster.go:117: [certs] Using certificateDir folder __/etc/kubernetes/pki__"
    L16: "cluster.go:117: [certs] Generating __ca__ certificate and key"
    L17: "cluster.go:117: [certs] Generating __apiserver__ certificate and key"
    L18: "cluster.go:117: [certs] apiserver serving cert is signed for DNS names [kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local localhost] and IPs [10.96.0.1 10.0.0.1?49]"
    L19: "cluster.go:117: [certs] Generating __apiserver-kubelet-client__ certificate and key"
    L20: "cluster.go:117: [certs] Generating __front-proxy-ca__ certificate and key"
    L21: "cluster.go:117: [certs] Generating __front-proxy-client__ certificate and key"
    L22: "cluster.go:117: [certs] External etcd mode: Skipping etcd/ca certificate authority generation"
    L23: "cluster.go:117: [certs] External etcd mode: Skipping etcd/server certificate generation"
    L24: "cluster.go:117: [certs] External etcd mode: Skipping etcd/peer certificate generation"
    L25: "cluster.go:117: [certs] External etcd mode: Skipping etcd/healthcheck-client certificate generation"
    L26: "cluster.go:117: [certs] External etcd mode: Skipping apiserver-etcd-client certificate generation"
    L27: "cluster.go:117: [certs] Generating __sa__ key and public key"
    L28: "cluster.go:117: [kubeconfig] Using kubeconfig folder __/etc/kubernetes__"
    L29: "cluster.go:117: [kubeconfig] Writing __admin.conf__ kubeconfig file"
    L30: "cluster.go:117: [kubeconfig] Writing __kubelet.conf__ kubeconfig file"
    L31: "cluster.go:117: [kubeconfig] Writing __controller-manager.conf__ kubeconfig file"
    L32: "cluster.go:117: [kubeconfig] Writing __scheduler.conf__ kubeconfig file"
    L33: "cluster.go:117: [kubelet-start] Writing kubelet environment file with flags to file __/var/lib/kubelet/kubeadm-flags.env__"
    L34: "cluster.go:117: [kubelet-start] Writing kubelet configuration to file __/var/lib/kubelet/config.yaml__"
    L35: "cluster.go:117: [kubelet-start] Starting the kubelet"
    L36: "cluster.go:117: [control-plane] Using manifest folder __/etc/kubernetes/manifests__"
    L37: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-apiserver__"
    L38: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-controller-manager__"
    L39: "cluster.go:117: [control-plane] Creating static Pod manifest for __kube-scheduler__"
    L40: "cluster.go:117: [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory __/etc/kubernetes/manifests__. This can take up to 30m0s"
    L41: "cluster.go:117: [apiclient] All control plane components are healthy after 4.501180 seconds"
    L42: "cluster.go:117: [upload-config] Storing the configuration used in ConfigMap __kubeadm-config__ in the __kube-system__ Namespace"
    L43: "cluster.go:117: [kubelet] Creating a ConfigMap __kubelet-config__ in namespace kube-system with the configuration for the kubelets in the cluster"
    L44: "cluster.go:117: [upload-certs] Skipping phase. Please see --upload-certs"
    L45: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]"
    L46: "cluster.go:117: [mark-control-plane] Marking the node localhost as control-plane by adding the taints [node-role.kubernetes.io/control-plane:NoSchedule]"
    L47: "cluster.go:117: [bootstrap-token] Using token: ehsk72.1yqhr5r9pktqjruw"
    L48: "cluster.go:117: [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles"
    L49: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes"
    L50: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials"
    L51: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token"
    L52: "cluster.go:117: [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster"
    L53: "cluster.go:117: [bootstrap-token] Creating the __cluster-info__ ConfigMap in the __kube-public__ namespace"
    L54: "cluster.go:117: [kubelet-finalize] Updating __/etc/kubernetes/kubelet.conf__ to point to a rotatable kubelet client certificate and key"
    L55: "cluster.go:117: [addons] Applied essential addon: CoreDNS"
    L56: "cluster.go:117: [addons] Applied essential addon: kube-proxy"
    L57: "cluster.go:117: "
    L58: "cluster.go:117: Your Kubernetes control-plane has initialized successfully!"
    L59: "cluster.go:117: "
    L60: "cluster.go:117: To start using your cluster, you need to run the following as a regular user:"
    L61: "cluster.go:117: "
    L62: "cluster.go:117:   mkdir -p $HOME/.kube"
    L63: "cluster.go:117:   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config"
    L64: "cluster.go:117:   sudo chown $(id -u):$(id -g) $HOME/.kube/config"
    L65: "cluster.go:117: "
    L66: "cluster.go:117: Alternatively, if you are the root user, you can run:"
    L67: "cluster.go:117: "
    L68: "cluster.go:117:   export KUBECONFIG=/etc/kubernetes/admin.conf"
    L69: "cluster.go:117: "
    L70: "cluster.go:117: You should now deploy a pod network to the cluster."
    L71: "cluster.go:117: Run __kubectl apply -f [podnetwork].yaml__ with one of the options listed at:"
    L72: "cluster.go:117:   https://kubernetes.io/docs/concepts/cluster-administration/addons/"
    L73: "cluster.go:117: "
    L74: "cluster.go:117: Then you can join any number of worker nodes by running the following on each as root:"
    L75: "cluster.go:117: "
    L76: "cluster.go:117: kubeadm join 10.0.0.149:6443 --token ehsk72.1yqhr5r9pktqjruw _"
    L77: "cluster.go:117:  --discovery-token-ca-cert-hash sha256:8963490175f7249e98473b82135ee401b67980d037321b78785c6e1bfb335088 "
    L78: "cluster.go:117: namespace/tigera-operator created"
    L79: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created"
    L80: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created"
    L81: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created"
    L82: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/caliconodestatuses.crd.projectcalico.org created"
    L83: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created"
    L84: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created"
    L85: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created"
    L86: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created"
    L87: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created"
    L88: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created"
    L89: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created"
    L90: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created"
    L91: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created"
    L92: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/ipreservations.crd.projectcalico.org created"
    L93: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/kubecontrollersconfigurations.crd.projectcalico.org created"
    L94: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created"
    L95: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created"
    L96: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io created"
    L97: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/imagesets.operator.tigera.io created"
    L98: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io created"
    L99: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/tigerastatuses.operator.tigera.io created"
    L100: "cluster.go:117: serviceaccount/tigera-operator created"
    L101: "cluster.go:117: clusterrole.rbac.authorization.k8s.io/tigera-operator created"
    L102: "cluster.go:117: clusterrolebinding.rbac.authorization.k8s.io/tigera-operator created"
    L103: "cluster.go:117: deployment.apps/tigera-operator created"
    L104: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/installations.operator.tigera.io condition met"
    L105: "cluster.go:117: customresourcedefinition.apiextensions.k8s.io/apiservers.operator.tigera.io condition met"
    L106: "cluster.go:117: installation.operator.tigera.io/default created"
    L107: "cluster.go:117: apiserver.operator.tigera.io/default created"
    L108: "cluster.go:117: Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service ??? /etc/systemd/system/kubelet.service."
    L109: "--- FAIL: kubeadm.v1.27.2.calico.base/nginx_deployment (92.47s)"
    L110: "kubeadm.go:313: nginx is not deployed: ready replicas should be equal to 1: null_"
    L111: " "
    L112: "  "

ok kubeadm.v1.27.2.cilium.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok kubeadm.v1.27.2.flannel.base 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v3 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.nfs.v4 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok linux.ntp 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok packages 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.remote 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.journal.user 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysext.custom-docker 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysext.custom-oem 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysext.simple 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok systemd.sysusers.gshadow 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

ok torcx.enable-service 🟢 Succeeded: qemu_uefi-amd64 (1); qemu_uefi-arm64 (1)

@pothos requested a review from a team June 29, 2023 09:59
@pothos merged commit 2372874 into main Jun 29, 2023
@pothos deleted the kai/sssd-var-log branch June 29, 2023 12:12
pothos added a commit that referenced this pull request Jun 29, 2023
sys-auth/sssd: Add missing /var/log/sssd tmpfiles entry
pothos added a commit that referenced this pull request Jun 29, 2023
sys-auth/sssd: Add missing /var/log/sssd tmpfiles entry