kubeadm init --config=kubeadm.yml --experimental-upload-certs | tee kubeadm-init.log
# On success, the output looks like the following
[init] Using Kubernetes version: v1.14.1
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [kubernetes-master kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 192.168.141.130]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.141.130 127.0.0.1 ::1]
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [kubernetes-master localhost] and IPs [192.168.141.130 127.0.0.1 ::1]
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 20.003326 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Storing the certificates in ConfigMap "kubeadm-certs" in the "kube-system" Namespace
[upload-certs] Using certificate key: 2cd5b86c4905c54d68cc7dfecc2bf87195e9d5d90b4fff9832d9b22fc5e73f96
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node kubernetes-master as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: abcdef.0123456789abcdef
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
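# The three standard kubeconfig setup commands that kubeadm prints at this point
# are not included in the captured output above; they are normally:
  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config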
You should now deploy a pod network to the cluster. Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at: https://kubernetes.io/docs/concepts/cluster-administration/addons/
Then you can join any number of worker nodes by running the following on each as root:
root@kubernetes-slave1:~# kubeadm join 192.168.41.132:6443 --token abcdef.0123456789abcdef \
>     --discovery-token-ca-cert-hash sha256:f4afc656c3beb88b5d8949c10b1ac1237b45d15b5d5285b441efe569e0eb0889
[preflight] Running pre-flight checks
	[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
	[WARNING Hostname]: hostname "kubernetes-slave1" could not be reached
	[WARNING Hostname]: hostname "kubernetes-slave1": lookup kubernetes-slave1 on 192.168.41.2:53: no such host
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
[kubelet-start] Downloading configuration for the kubelet from the "kubelet-config-1.15" ConfigMap in the kube-system namespace
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Activating the kubelet service
[kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap...

This node has joined the cluster:
* Certificate signing request was sent to apiserver and a response was received.
* The Kubelet was informed of the new secure connection details.
Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
root@kubernetes-slave1:~# cd /etc/network
root@kubernetes-slave1:/etc/network# vi interfaces
Change the file contents to:
# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto ens33
iface ens33 inet static
address 192.168.41.134
netmask 255.255.255.0
gateway 192.168.41.2
dns-nameserver 192.168.42.2
After that, reboot the server for the changes to take effect.
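If you prefer not to reboot, restarting the networking service usually re-reads /etc/network/interfaces as well; a sketch assuming the classic ifupdown setup edited above:

# Apply the new static configuration without a full reboot (ifupdown)
systemctl restart networking
# or bounce only the interface that changed
ifdown ens33 && ifup ens33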
Test the network
Testing the network connection with ping www.baidu.com returns no data.
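To confirm that connectivity itself is fine and only name resolution is broken, ping an IP address directly and check the resolver configuration; a quick check (the test IP is arbitrary):

# If this succeeds while "ping www.baidu.com" does not, the problem is DNS, not routing
ping -c 3 114.114.114.114
# See which nameserver the resolver is actually configured to use
cat /etc/resolv.conf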
root@kubernetes-slave1:/etc/network# cd /etc
root@kubernetes-slave1:/etc# vi resolv.conf
Change the nameserver:
# Dynamic resolv.conf(5) file for glibc resolver(3) generated by resolvconf(8)
#     DO NOT EDIT THIS FILE BY HAND -- YOUR CHANGES WILL BE OVERWRITTEN
nameserver 114.114.114.114
root@kubernetes-slave1:/etc# ping www.baidu.com
PING www.a.shifen.com (39.156.66.14) 56(84) bytes of data.
64 bytes from 39.156.66.14: icmp_seq=1 ttl=128 time=18.2 ms
64 bytes from 39.156.66.14: icmp_seq=2 ttl=128 time=18.3 ms
64 bytes from 39.156.66.14: icmp_seq=3 ttl=128 time=18.3 ms
64 bytes from 39.156.66.14: icmp_seq=4 ttl=128 time=17.8 ms
64 bytes from 39.156.66.14: icmp_seq=5 ttl=128 time=17.5 ms
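Note the warning in the file header: resolvconf regenerates /etc/resolv.conf, so a hand edit like this can be overwritten again later. To make the nameserver persistent, feed it to resolvconf itself; a sketch assuming the stock Debian/Ubuntu resolvconf package:

# Record the nameserver in resolvconf's base configuration and regenerate resolv.conf
echo "nameserver 114.114.114.114" >> /etc/resolvconf/resolv.conf.d/base
resolvconf -u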
CNI was created to provide a framework for dynamically configuring the appropriate network settings and resources when containers are created or destroyed. A plugin is responsible for configuring and managing IP addresses for interfaces, and typically provides functionality related to IP management, per-container IP allocation, and multi-host connectivity. The container runtime calls the network plugin to allocate an IP address and configure the network when a container starts, and calls it again when the container is deleted to clean up those resources.

The runtime or orchestrator decides which network a container should join and which plugin it needs to call. The plugin then adds an interface into the container's network namespace as one end of a veth pair. Next, it makes changes on the host, including attaching the other end of the veth pair to a bridge. Finally, it allocates an IP address and sets up routes by calling a separate IPAM (IP Address Management) plugin.
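The Calico output below comes from applying its installation manifest with kubectl apply. The exact command is not shown in this excerpt; for the v3.x manifests of that era it was typically something like the following (URL and version are assumptions, so check the Calico documentation for the release you actually want):

# Assumed manifest URL/version; adjust to the Calico release you are installing
kubectl apply -f https://docs.projectcalico.org/v3.7/manifests/calico.yaml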
# The installation prints output like the following
configmap/calico-config created
customresourcedefinition.apiextensions.k8s.io/felixconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamblocks.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/blockaffinities.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamhandles.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ipamconfigs.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgppeers.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/bgpconfigurations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/ippools.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/hostendpoints.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/clusterinformations.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/globalnetworksets.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networkpolicies.crd.projectcalico.org created
customresourcedefinition.apiextensions.k8s.io/networksets.crd.projectcalico.org created
clusterrole.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrolebinding.rbac.authorization.k8s.io/calico-kube-controllers created
clusterrole.rbac.authorization.k8s.io/calico-node created
clusterrolebinding.rbac.authorization.k8s.io/calico-node created
daemonset.extensions/calico-node created
serviceaccount/calico-node created
deployment.extensions/calico-kube-controllers created
serviceaccount/calico-kube-controllers created
Confirm that the installation succeeded
watch kubectl get pods --all-namespaces
# Wait until all Pods are Running; note that this can take a while, roughly 3-5 minutes
Every 2.0s: kubectl get pods --all-namespaces              kubernetes-master: Fri May 10 18:16:51 2019
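The component health listing below is produced by kubectl get cs (short for componentstatuses); the command itself is missing from the captured excerpt:

kubectl get cs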
# Output as follows
NAME                 STATUS    MESSAGE             ERROR
# Scheduler: assigns Pods to Nodes
scheduler            Healthy   ok
# Controller manager: keeps the cluster in its desired state, e.g. recovering workloads after a Node goes down
controller-manager   Healthy   ok
# etcd: the key-value store holding the cluster state (service registration and discovery)
etcd-0               Healthy   {"health":"true"}
Check Master status
kubectl cluster-info
# Output as follows
# Control-plane (master) status
Kubernetes master is running at https://192.168.141.130:6443
# DNS status
KubeDNS is running at https://192.168.141.130:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Check Node status
kubectl get nodes
# Output as follows; a STATUS of Ready means the node is working normally
NAME                STATUS   ROLES    AGE     VERSION
kubernetes-master   Ready    master   44h     v1.14.1
kubernetes-slave1   Ready    <none>   3h38m   v1.14.1
kubernetes-slave2   Ready    <none>   3h37m   v1.14.1
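The deprecation warning and the created Deployment shown below come from a kubectl run command that is not included in this excerpt; with the old Deployment generator in kubectl 1.14 it would have looked roughly like this (image name, replica count, and port are assumptions inferred from the later output):

# Hypothetical reconstruction of the command that produced the output below
kubectl run nginx --image=nginx --replicas=2 --port=80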
# Output as follows
kubectl run --generator=deployment/apps.v1 is DEPRECATED and will be removed in a future version. Use kubectl run --generator=run-pod/v1 or kubectl create instead.
deployment.apps/nginx created
View the status of all Pods
kubectl get pods
# Output as follows; after a short wait, a STATUS of Running means the Pods started successfully
NAME                     READY   STATUS    RESTARTS   AGE
nginx-755464dd6c-qnmwp   1/1     Running   0          90m
nginx-755464dd6c-shqrp   1/1     Running   0          90m
View the deployed Deployments
kubectl get deployment
# Output as follows
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   2/2     2            2           91m