Setting Up a k8s Test Environment

I. Virtual Machine Setup:

1. I recommend VirtualBox. Create a new VM in VirtualBox and name it master.k8s (any name works; just pick something easy to tell apart from the node machines later). Set the type to Linux and the version to Red Hat (64-bit). Size the memory according to your host; two more node VMs will be created later, so plan the allocation accordingly. Here, master, node1 and node2 each get 2 GB.

In the next step, choose to create a virtual hard disk now, then pick its location and size; 8 GB is enough for testing. Click the Create button at the bottom to finish creating the VM;

2. Right-click the newly created VM, choose Settings, and under Settings > Storage attach the CentOS 7 ISO (alternatively, just boot the VM and select the image on the startup screen). Click OK, then click Start and proceed with the CentOS installation.

3. During installation, note the Installation Destination option. Open Installation Destination, make no changes, and simply click Done in the top-left corner.

Click the network configuration option and flip the switch in the top-right corner to ON. Set the hostname in the bottom-left to master.k8s. For the network itself you can either use DHCP or click Configure and set the IP by hand; for easier management I configured the IP information manually (if you go manual, remember to change the node IPs in Section III, step 3). Click Done to return to the installation screen, then click Begin Installation. During installation you can set a root password or create a user. When it finishes, click Reboot;

II. Environment Configuration:

Log in to the freshly installed VM, either over SSH (recommended) or through the VirtualBox console, and configure the environment as follows.

1. Disable SELinux and the firewall

Run vi /etc/selinux/config and change the setting to SELINUX=permissive (Esc, then :wq to save and quit). Apply it to the running system with setenforce 0, then disable the firewall with systemctl disable firewalld && systemctl stop firewalld.
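All of the above in one copy-paste block (the sed line is just a non-interactive way to make the same edit to /etc/selinux/config):

setenforce 0                                                            # permissive for the current boot
sed -i 's/^SELINUX=enforcing/SELINUX=permissive/' /etc/selinux/config   # persist the change across reboots
systemctl disable firewalld && systemctl stop firewalld                 # no firewall on the test cluster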

2. Configure the Kubernetes yum repository

Installing Kubernetes requires the kubelet, kubeadm and related packages, but the yum repository the k8s site points to, packages.cloud.google.com, is unreachable from mainland China. Use the Aliyun mirror instead:

cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://mirrors.aliyun.com/kubernetes/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=0
repo_gpgcheck=0
gpgkey=http://mirrors.aliyun.com/kubernetes/yum/doc/yum-key.gpg
       http://mirrors.aliyun.com/kubernetes/yum/doc/rpm-package-key.gpg
EOF

Then run yum makecache to rebuild the yum cache.

3. Disable swap

Run swapoff -a && sed -i '/ swap / s/^/#/' /etc/fstab (the sed comments out the swap entry so it stays off after a reboot; free -h should now report zero swap). By default, the kubelet refuses to start while swap is enabled.

4. Install docker, kubelet, kubeadm, kubectl and kubernetes-cni

yum install docker kubelet kubeadm kubectl kubernetes-cni 

Note: do not install the kubernetes package before running the command above; if it is already present the installation errors out, and you have to remove kubernetes first.

After installation, enable and start docker and kubelet at boot, and turn on bridged traffic filtering:

systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
sysctl -w net.bridge.bridge-nf-call-iptables=1                          # apply immediately
echo "net.bridge.bridge-nf-call-iptables=1" > /etc/sysctl.d/k8s.conf    # persist across reboots

III. Cloning the Virtual Machines:

1. Shut down the VM you just finished setting up: shutdown now

2. Right-click the powered-off VM and clone it twice (be sure to check the option to reinitialize the MAC addresses of all network cards). Name the clones node1 and node2 and choose Full clone;

3. Start the two cloned VMs in turn and change each one's IP address and hostname; the hostname is set with the command below, and a sketch of the IP change follows it.

hostnamectl --static set-hostname ***
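For the IP change, a minimal sketch with nmcli (the connection name enp0s3, the addresses and the gateway are assumptions; check yours with nmcli con show and ip addr, and pick addresses that fit the subnet you chose during installation):

nmcli con mod enp0s3 ipv4.method manual ipv4.addresses 10.10.0.82/24 ipv4.gateway 10.10.0.1   # e.g. for node1
nmcli con up enp0s3                                                                           # re-activate the connection
hostnamectl --static set-hostname node1.k8s                                                   # and set the hostname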

IV. Configuring the Master Node:

1. Initialize the master with kubeadm init

Running kubeadm init directly usually fails, because the images it needs cannot be pulled from inside China. It also fails a preflight check if the VM was created with fewer than 2 CPUs.

First, list the required images and download them:

kubeadm config images list

Pull the images in either of two ways. The first pipes the list into docker pull against the mirrorgooglecontainers mirror on Docker Hub (see the retag note after the tag list below):

kubeadm config images list | sed -e 's/^/docker pull /g' -e 's#k8s.gcr.io#mirrorgooglecontainers#g' | sh -x

The second pulls from the Aliyun registry:

docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.2
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10
docker pull registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1
Retag the images to the k8s.gcr.io names kubeadm expects:
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-scheduler:v1.14.2 k8s.gcr.io/kube-scheduler:v1.14.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-controller-manager:v1.14.2 k8s.gcr.io/kube-controller-manager:v1.14.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-apiserver:v1.14.2 k8s.gcr.io/kube-apiserver:v1.14.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/kube-proxy:v1.14.2 k8s.gcr.io/kube-proxy:v1.14.2
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/pause:3.1 k8s.gcr.io/pause:3.1
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/etcd:3.3.10 k8s.gcr.io/etcd:3.3.10
docker tag registry.cn-hangzhou.aliyuncs.com/google_containers/coredns:1.3.1 k8s.gcr.io/coredns:1.3.1

docker images | grep mirrorgooglecontainers | awk '{print "docker rmi " $1":"$2}' | sh -x   # remove the now-redundant mirror images
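If you pulled via the first (mirrorgooglecontainers) route, those images likewise need retagging to k8s.gcr.io names before the cleanup above; a sketch (note that coredns may be missing from that mirror, and if you used the Aliyun route, point the cleanup's grep at registry.cn-hangzhou.aliyuncs.com instead):

docker images | grep mirrorgooglecontainers | awk '{n=$1; sub("mirrorgooglecontainers/","k8s.gcr.io/",n); print "docker tag " $1 ":" $2 " " n ":" $2}' | sh -x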

One more caveat: the worker nodes need these images pulled (and retagged) as well, not just the master; otherwise the joined nodes may sit in NotReady indefinitely (kubectl describe node shows why).

Now initialize the master:

[root@master yum.repos.d]# kubeadm init --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU

I0518 22:14:19.884492   23328 version.go:96] could not fetch a Kubernetes version from the internet: unable to get URL "https://dl.k8s.io/release/stable-1.txt": Get https://dl.k8s.io/release/stable-1.txt: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)
I0518 22:14:19.884808   23328 version.go:97] falling back to the local client version: v1.14.2
[init] Using Kubernetes version: v1.14.2
[preflight] Running pre-flight checks
        [WARNING NumCPU]: the number of available CPUs 1 is less than the required 2
[preflight] Pulling images required for setting up a Kubernetes cluster
[preflight] This might take a minute or two, depending on the speed of your internet connection
[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
[kubelet-start] Activating the kubelet service
[certs] Using certificateDir folder "/etc/kubernetes/pki"
[certs] Generating "ca" certificate and key
[certs] Generating "apiserver" certificate and key
[certs] apiserver serving cert is signed for DNS names [master.k8s kubernetes kubernetes.default kubernetes.default.svc kubernetes.default.svc.cluster.local] and IPs [10.96.0.1 10.10.0.81]
[certs] Generating "apiserver-kubelet-client" certificate and key
[certs] Generating "front-proxy-ca" certificate and key
[certs] Generating "front-proxy-client" certificate and key
[certs] Generating "etcd/ca" certificate and key
[certs] Generating "etcd/healthcheck-client" certificate and key
[certs] Generating "etcd/server" certificate and key
[certs] etcd/server serving cert is signed for DNS names [master.k8s localhost] and IPs [10.10.0.81 127.0.0.1 ::1]
[certs] Generating "etcd/peer" certificate and key
[certs] etcd/peer serving cert is signed for DNS names [master.k8s localhost] and IPs [10.10.0.81 127.0.0.1 ::1]
[certs] Generating "apiserver-etcd-client" certificate and key
[certs] Generating "sa" key and public key
[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
[kubeconfig] Writing "admin.conf" kubeconfig file
[kubeconfig] Writing "kubelet.conf" kubeconfig file
[kubeconfig] Writing "controller-manager.conf" kubeconfig file
[kubeconfig] Writing "scheduler.conf" kubeconfig file
[control-plane] Using manifest folder "/etc/kubernetes/manifests"
[control-plane] Creating static Pod manifest for "kube-apiserver"
[control-plane] Creating static Pod manifest for "kube-controller-manager"
[control-plane] Creating static Pod manifest for "kube-scheduler"
[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[apiclient] All control plane components are healthy after 23.572196 seconds
[upload-config] storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
[kubelet] Creating a ConfigMap "kubelet-config-1.14" in namespace kube-system with the configuration for the kubelets in the cluster
[upload-certs] Skipping phase. Please see --experimental-upload-certs
[mark-control-plane] Marking the node master.k8s as control-plane by adding the label "node-role.kubernetes.io/master=''"
[mark-control-plane] Marking the node master.k8s as control-plane by adding the taints [node-role.kubernetes.io/master:NoSchedule]
[bootstrap-token] Using token: sapn49.*****
[bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
[bootstrap-token] configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
[bootstrap-token] configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
[bootstrap-token] configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
[bootstrap-token] creating the "cluster-info" ConfigMap in the "kube-public" namespace
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy

Your Kubernetes control-plane has initialized successfully!

To start using your cluster, you need to run the following as a regular user:

  mkdir -p $HOME/.kube
  sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
  sudo chown $(id -u):$(id -g) $HOME/.kube/config

You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
  https://kubernetes.io/docs/concepts/cluster-administration/addons/

Then you can join any number of worker nodes by running the following on each as root:

kubeadm join 10.10.0.81:6443 --token ***** \
    --discovery-token-ca-cert-hash ***** 

Write down the token from the init output for later use; it can also be retrieved again afterwards. Tokens expire after 24 hours by default, after which a new one must be generated:

kubeadm token list
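A fresh token, along with a ready-made join command, can be generated on the master at any time:

kubeadm token create --print-join-command   # prints a complete 'kubeadm join ...' line with a new token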

2. Verify the initialization

Run export KUBECONFIG=/etc/kubernetes/admin.conf

Run kubectl get po -n kube-system and check that the system pods are listed

Run kubectl get node to list the nodes; the master's state should show up
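Put together, with what to expect before a network plugin is installed:

export KUBECONFIG=/etc/kubernetes/admin.conf
kubectl get po -n kube-system   # the coredns pods stay Pending until a network plugin is deployed
kubectl get node                # the master shows NotReady for the same reason (see Section V)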

V. Joining the Worker Nodes with kubeadm:

Take the kubeadm join command printed at the end of the master initialization above and run it on each worker node to join the cluster.

After a short wait, run kubectl get nodes back on the master and the worker nodes all show up.
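The output at this point looks roughly like the following (illustrative only; the node hostnames are assumptions based on the naming above, and ages will differ):

NAME         STATUS     ROLES    AGE   VERSION
master.k8s   NotReady   master   40m   v1.14.2
node1.k8s    NotReady   <none>   2m    v1.14.2
node2.k8s    NotReady   <none>   2m    v1.14.2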

All of them, however, are in the NotReady state; kubectl describe node master.k8s reveals the specific reason.

[root@master kubernetes]# kubectl describe node master.k8s
Name:               master.k8s
Roles:              master
Labels:             beta.kubernetes.io/arch=amd64
                    beta.kubernetes.io/os=linux
                    kubernetes.io/arch=amd64
                    kubernetes.io/hostname=master.k8s
                    kubernetes.io/os=linux
                    node-role.kubernetes.io/master=
Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: /var/run/dockershim.sock
                    node.alpha.kubernetes.io/ttl: 0
                    volumes.kubernetes.io/controller-managed-attach-detach: true
CreationTimestamp:  Sat, 18 May 2019 22:14:44 +0800
Taints:             node-role.kubernetes.io/master:NoSchedule
                    node.kubernetes.io/not-ready:NoSchedule
Unschedulable:      false
Conditions:
  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
  ----             ------  -----------------                 ------------------                ------                       -------
  MemoryPressure   False   Sat, 18 May 2019 22:52:37 +0800   Sat, 18 May 2019 22:14:38 +0800   KubeletHasSufficientMemory   kubelet has sufficient memory available
  DiskPressure     False   Sat, 18 May 2019 22:52:37 +0800   Sat, 18 May 2019 22:14:38 +0800   KubeletHasNoDiskPressure     kubelet has no disk pressure
  PIDPressure      False   Sat, 18 May 2019 22:52:37 +0800   Sat, 18 May 2019 22:14:38 +0800   KubeletHasSufficientPID      kubelet has sufficient PID available
  Ready            False   Sat, 18 May 2019 22:52:37 +0800   Sat, 18 May 2019 22:14:38 +0800   KubeletNotReady              runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
Addresses:
  InternalIP:  10.10.0.81
  Hostname:    master.k8s

The relevant error is:

message:docker: network plugin is not ready: cni config uninitialized

A network plugin has to be installed. I chose Flannel here; for other options, see
https://kubernetes.io/zh/docs/concepts/cluster-administration/addons/

kubectl apply -f http://dreamdragonlog.com/src_mirror/kube-flannel.yml

Shortly after the network plugin is installed, the master and nodes should transition to Ready.
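To watch it happen (this assumes the 2019-era flannel manifest, which runs a kube-flannel-ds DaemonSet in the kube-system namespace):

kubectl get pods -n kube-system -o wide   # one kube-flannel-ds pod per node should reach Running
kubectl get nodes                         # all three nodes should then show Ready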

If anything is still off, run kubectl describe node <node-name> to find the specific cause and fix it.
