Installing Kubernetes v1.5.2 Step by Step

Kubernetes is Google's open-source container cluster management system. Built on top of Docker, it provides a complete feature set for containerized applications: resource scheduling, deployment, service discovery, and scaling up and down. It draws on Google's fifteen years of experience running containers in production, combined with proven ideas and practices from the community.

I recently had a new project to deploy and was short on server resources. Compared with OpenStack, container technology makes fuller use of the hardware, so I chose containers; and to manage the container cluster more conveniently, I decided to set up a Kubernetes cluster.

This document was written in a test environment, not production. Some commands were run more than once, so their recorded output may differ slightly from what you see.

 

All physical nodes:

kubemaster    192.168.1.158
kubenode0    192.168.1.116
kubenode1    192.168.1.173

 

Step 1: Prepare the initial environment on all hosts.

[root@all ~]# cat <<EOF >> /etc/hosts
192.168.1.158 kubemaster
192.168.1.116 kubenode0
192.168.1.173 kubenode1
61.91.161.217 google.com
61.91.161.217 gcr.io                                                      #Inside mainland China you may need a host entry for gcr.io; without it you will have to download the google-containers images manually (other guides describe several ways to do that).
61.91.161.217 www.gcr.io
61.91.161.217 console.cloud.google.com
61.91.161.217 storage.googleapis.com
EOF

[root@all ~]# vi /etc/selinux/config
SELINUX=permissive
[root@all ~]# setenforce 0
[root@all ~]# getenforce

Permissive
[root@all ~]# systemctl stop firewalld
[root@all ~]# systemctl disable firewalld
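The /etc/hosts entries above can be sanity-checked on every node. A minimal sketch — it runs against a scratch copy so it works anywhere; on the real machines just grep /etc/hosts directly:

```shell
# Demo copy of the entries appended above; on real nodes read /etc/hosts.
cat > hosts.demo <<'EOF'
192.168.1.158 kubemaster
192.168.1.116 kubenode0
192.168.1.173 kubenode1
EOF
for h in kubemaster kubenode0 kubenode1; do
  grep -q " $h\$" hosts.demo && echo "$h ok"
done
```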

Step 2: Remove Docker.

Docker is installed automatically as a dependency of Kubernetes, so remove any existing Docker installation first to avoid conflicts.

[root@all ~]# yum list installed | grep docker

[root@all ~]# yum remove -y docker-engine.x86_64 docker-engine-selinux.noarch

Step 3: Install Kubernetes.

There are two methods.

Method 1 (this was slow in my environment):

[root@all ~]# cat <<EOF > /etc/yum.repos.d/virt7-docker-common-candidate.repo               #You may notice this differs from the official docs (virt7-docker-common-release); this repo provides newer packages.
[virt7-docker-common-candidate]
name=virt7-docker-common-candidate
baseurl=https://cbs.centos.org/repos/virt7-docker-common-candidate/x86_64/os/
enabled=1
gpgcheck=0
EOF

[root@all ~]# yum -y install kubernetes etcd flannel

 

Method 2 (the one I used):

First, use a download tool to fetch all of the RPM packages from https://cbs.centos.org/repos/virt7-docker-common-candidate/x86_64/os/Packages/ , then run:
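One possible way to do the bulk download — a sketch only; any mirroring tool works. The flags are standard GNU wget: -r recurse, -np don't ascend above Packages/, -nd no directory tree, -l1 one level deep, -A keep only RPMs:

```shell
REPO=https://cbs.centos.org/repos/virt7-docker-common-candidate/x86_64/os/Packages/
# Run on a machine with internet access (shown commented here):
#   mkdir -p /data/softs/localyum
#   wget -r -np -nd -l1 -A '*.rpm' -P /data/softs/localyum "$REPO"
echo "$REPO"
```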

[root@all ~]# rm -rf /etc/yum.repos.d/virt7-docker-common-candidate.repo
[root@all ~]# yum install -y createrepo
[root@all ~]# cd /data/softs/
[root@all softs]# mkdir localyum
[root@all softs]# ll /data/softs/localyum

total 228528
-rw-r--r--. 1 root root   563688 Feb 13 17:50 atomic-1.8-5.gitcc5997a.el7.x86_64.rpm
-rw-r--r--. 1 root root    64584 Feb 13 17:44 atomicapp-0.1.11-1.el7.noarch.rpm
-rw-r--r--. 1 root root     2856 Feb 13 17:44 centos-release-docker-1-2.el7.x86_64.rpm
-rw-r--r--. 1 root root    24012 Feb 13 17:44 container-selinux-2.2-3.el7.noarch.rpm
-rw-r--r--. 1 root root 25600000 Feb 13 19:04 docker-1.12.6-14.gitf499e8b.el7.x86_64.rpm
-rw-r--r--. 1 root root    70700 Feb 13 17:44 docker-common-1.12.6-14.gitf499e8b.el7.x86_64.rpm
-rw-r--r--. 1 root root  2942740 Feb 13 17:53 docker-distribution-2.3.0-2.el7.x86_64.rpm
-rw-r--r--. 1 root root    74904 Feb 13 17:44 docker-fish-completion-1.12.6-14.gitf499e8b.el7.x86_64.rpm
-rw-r--r--. 1 root root 27053040 Feb 13 19:02 docker-latest-1.13-27.git6cd0bbe.el7.x86_64.rpm
-rw-r--r--. 1 root root    70640 Feb 13 17:45 docker-latest-fish-completion-1.13-27.git6cd0bbe.el7.x86_64.rpm
-rw-r--r--. 1 root root    65652 Feb 13 17:45 docker-latest-logrotate-1.13-27.git6cd0bbe.el7.x86_64.rpm
-rw-r--r--. 1 root root    64928 Feb 13 17:44 docker-latest-rhsubscription-1.13-27.git6cd0bbe.el7.x86_64.rpm
-rw-r--r--. 1 root root    65856 Feb 13 17:47 docker-latest-vim-1.13-27.git6cd0bbe.el7.x86_64.rpm
-rw-r--r--. 1 root root    80832 Feb 13 17:44 docker-latest-zsh-completion-1.13-27.git6cd0bbe.el7.x86_64.rpm
-rw-r--r--. 1 root root    69948 Feb 13 17:45 docker-logrotate-1.12.6-14.gitf499e8b.el7.x86_64.rpm
-rw-r--r--. 1 root root  2289868 Feb 13 17:52 docker-lvm-plugin-1.12.6-14.gitf499e8b.el7.x86_64.rpm
-rw-r--r--. 1 root root  2020712 Feb 13 17:49 docker-novolume-plugin-1.12.6-14.gitf499e8b.el7.x86_64.rpm
-rw-r--r--. 1 root root  2111384 Feb 13 17:49 docker-rhel-push-plugin-1.12.6-14.gitf499e8b.el7.x86_64.rpm
-rw-r--r--. 1 root root    69252 Feb 13 17:45 docker-rhsubscription-1.12.6-14.gitf499e8b.el7.x86_64.rpm
-rw-r--r--. 1 root root  3112932 Feb 13 17:52 docker-v1.10-migrator-1.12.6-14.gitf499e8b.el7.x86_64.rpm
-rw-r--r--. 1 root root    70180 Feb 13 17:45 docker-vim-1.12.6-14.gitf499e8b.el7.x86_64.rpm
-rw-r--r--. 1 root root    83640 Feb 13 17:44 docker-zsh-completion-1.12.6-14.gitf499e8b.el7.x86_64.rpm
-rw-r--r--. 1 root root  1765592 Feb 13 17:50 flannel-0.5.1-2.el7.x86_64.rpm
-rw-r--r--. 1 root root   592008 Feb 13 17:46 go-bindata-3.0.7-8.gita0ff256.el7.x86_64.rpm
-rw-r--r--. 1 root root     3108 Feb 13 17:44 go-compilers-golang-compiler-1-3.el7.x86_64.rpm
-rw-r--r--. 1 root root  1887304 Feb 13 17:50 godep-27-3.el7.x86_64.rpm
-rw-r--r--. 1 root root  1209580 Feb 13 17:48 golang-1.7.4-1.el7.x86_64.rpm
-rw-r--r--. 1 root root 45834632 Feb 13 20:11 golang-bin-1.7.4-1.el7.x86_64.rpm
-rw-r--r--. 1 root root  2439656 Feb 13 17:51 golang-docs-1.7.4-1.el7.noarch.rpm
-rw-r--r--. 1 root root   685652 Feb 13 17:46 golang-github-cpuguy83-go-md2man-1.0.4-2.el7.x86_64.rpm
-rw-r--r--. 1 root root    70808 Feb 13 17:45 golang-github-russross-blackfriday-devel-1.2-7.el7.noarch.rpm
-rw-r--r--. 1 root root     5540 Feb 13 17:44 golang-github-shurcooL-sanitized_anchor_name-devel-0-0.3.git8e87604.el7.noarch.rpm
-rw-r--r--. 1 root root   217324 Feb 13 17:46 golang-golangorg-crypto-devel-0-0.13.gitc10c31b.el7.noarch.rpm
-rw-r--r--. 1 root root   217976 Feb 13 17:46 golang-googlecode-go-crypto-devel-0-0.13.gitc10c31b.el7.noarch.rpm
-rw-r--r--. 1 root root   545684 Feb 13 17:46 golang-misc-1.7.4-1.el7.noarch.rpm
-rw-r--r--. 1 root root  4565052 Feb 13 18:03 golang-src-1.7.4-1.el7.noarch.rpm
-rw-r--r--. 1 root root  4595296 Feb 13 17:58 golang-tests-1.7.4-1.el7.noarch.rpm
-rw-r--r--. 1 root root   641904 Feb 13 17:46 gomtree-0.3.0-1.el7.x86_64.rpm
-rw-r--r--. 1 root root     3128 Feb 13 17:45 go-srpm-macros-2-3.el7.noarch.rpm
-rw-r--r--. 1 root root    37796 Feb 13 17:45 kubernetes-1.5.2-2.el7.x86_64.rpm
-rw-r--r--. 1 root root 16498760 Feb 13 18:39 kubernetes-client-1.5.2-2.el7.x86_64.rpm
-rw-r--r--. 1 root root 28891956 Feb 13 19:08 kubernetes-master-1.5.2-2.el7.x86_64.rpm
-rw-r--r--. 1 root root 16274400 Feb 13 18:34 kubernetes-node-1.5.2-2.el7.x86_64.rpm
-rw-r--r--. 1 root root 19881040 Feb 13 18:41 kubernetes-unit-test-1.5.2-2.el7.x86_64.rpm
-rw-r--r--. 1 root root    55636 Feb 13 17:45 libseccomp-2.3.0-1.el7.x86_64.rpm
-rw-r--r--. 1 root root    63568 Feb 13 17:46 libseccomp-devel-2.3.0-1.el7.x86_64.rpm
-rw-r--r--. 1 root root    36172 Feb 13 17:45 libseccomp-static-2.3.0-1.el7.x86_64.rpm
-rw-r--r--. 1 root root 14006536 Feb 13 18:33 ocid-0-0.7.git2e6070f.el7.x86_64.rpm
-rw-r--r--. 1 root root  1060952 Feb 13 17:49 oci-register-machine-0-1.11.gitdd0daef.el7.x86_64.rpm
-rw-r--r--. 1 root root    28536 Feb 13 17:45 oci-systemd-hook-0.1.4-8.git45455fe.el7.x86_64.rpm
-rw-r--r--. 1 root root   782360 Feb 13 17:49 pytest-2.7.2-1.el7.noarch.rpm
-rw-r--r--. 1 root root   212368 Feb 13 17:46 python-coverage-4.0-0.10.b1.el7.x86_64.rpm
-rw-r--r--. 1 root root   104152 Feb 13 17:46 python-docker-py-1.10.6-1.el7.noarch.rpm
-rw-r--r--. 1 root root   189836 Feb 13 17:47 python-py-1.4.30-2.el7.noarch.rpm
-rw-r--r--. 1 root root    57532 Feb 13 17:46 python-websocket-client-0.34.0-3.el7.noarch.rpm
-rw-r--r--. 1 root root  1610980 Feb 13 17:52 runc-1.0.0-3.rc2.gitc91b5be.el7.x86_64.rpm
-rw-r--r--. 1 root root  2233268 Feb 13 17:54 skopeo-0.1.17-1.el7.x86_64.rpm
-rw-r--r--. 1 root root     6504 Feb 13 17:46 skopeo-containers-0.1.17-1.el7.x86_64.rpm
[root@all softs]# createrepo -v localyum

[root@all softs]# cat <<EOF > /etc/yum.repos.d/local.repo
[local]
name=local
baseurl=file:///data/softs/localyum
enabled=1
gpgcheck=0
EOF

[root@all softs]# yum clean all
[root@all softs]# yum makecache

[root@all softs]# yum install -y kubernetes etcd flannel

Dependencies Resolved
=================================================================================================================================================
 Package                                                           Arch                                            Version                                                             Repository                                       Size
=================================================================================================================================================
Installing:
 etcd                                                              x86_64                                          3.0.15-1.el7                                                        extras                                          9.2 M
 flannel                                                           x86_64                                          0.5.5-2.el7                                                         extras                                          2.4 M
 kubernetes                                                        x86_64                                          1.5.2-2.el7                                                         local                                            37 k
Installing for dependencies:
 conntrack-tools                                                   x86_64                                          1.4.3-1.el7                                                         base                                            175 k
 container-selinux                                                 noarch                                          2:2.2-3.el7                                                         local                                            23 k
 docker                                                            x86_64                                          2:1.12.6-14.gitf499e8b.el7                                          local                                            24 M
 docker-common                                                     x86_64                                          2:1.12.6-14.gitf499e8b.el7                                          local                                            69 k
 docker-rhel-push-plugin                                           x86_64                                          2:1.12.6-14.gitf499e8b.el7                                          local                                           2.0 M
 kubernetes-client                                                 x86_64                                          1.5.2-2.el7                                                         local                                            16 M
 kubernetes-master                                                 x86_64                                          1.5.2-2.el7                                                         local                                            28 M
 kubernetes-node                                                   x86_64                                          1.5.2-2.el7                                                         local                                            16 M
 libnetfilter_cthelper                                             x86_64                                          1.0.0-9.el7                                                         base                                             18 k
 libnetfilter_cttimeout                                            x86_64                                          1.0.0-6.el7                                                         base                                             18 k
 libnetfilter_queue                                                x86_64                                          1.0.2-2.el7                                                         epel                                             23 k
 oci-register-machine                                              x86_64                                          1:0-1.11.gitdd0daef.el7                                             extras                                          1.1 M
 oci-systemd-hook                                                  x86_64                                          1:0.1.4-9.git671c428.el7                                            extras                                           29 k
 python-rhsm-certificates                                          x86_64                                          1.17.9-1.el7                                                        base                                             38 k
 skopeo-containers                                                 x86_64                                          1:0.1.17-1.el7                                                      extras                                          7.4 k
 socat                                                             x86_64                                          1.7.2.2-5.el7                                                       base                                            255 k
 yajl                                                              x86_64                                          2.0.4-4.el7                                                         base                                             39 k

Transaction Summary
=================================================================================================================================================
Install  3 Packages (+17 Dependent packages)

Total download size: 99 M
Installed size: 529 M

Installed:
  etcd.x86_64 0:3.0.15-1.el7                                                  flannel.x86_64 0:0.5.5-2.el7                                                  kubernetes.x86_64 0:1.5.2-2.el7

Dependency Installed:
  conntrack-tools.x86_64 0:1.4.3-1.el7                               container-selinux.noarch 2:2.2-3.el7                    docker.x86_64 2:1.12.6-14.gitf499e8b.el7          docker-common.x86_64 2:1.12.6-14.gitf499e8b.el7
  docker-rhel-push-plugin.x86_64 2:1.12.6-14.gitf499e8b.el7          kubernetes-client.x86_64 0:1.5.2-2.el7                  kubernetes-master.x86_64 0:1.5.2-2.el7            kubernetes-node.x86_64 0:1.5.2-2.el7
  libnetfilter_cthelper.x86_64 0:1.0.0-9.el7                         libnetfilter_cttimeout.x86_64 0:1.0.0-6.el7             libnetfilter_queue.x86_64 0:1.0.2-2.el7           oci-register-machine.x86_64 1:0-1.11.gitdd0daef.el7
  oci-systemd-hook.x86_64 1:0.1.4-9.git671c428.el7                   python-rhsm-certificates.x86_64 0:1.17.9-1.el7          skopeo-containers.x86_64 1:0.1.17-1.el7           socat.x86_64 0:1.7.2.2-5.el7
  yajl.x86_64 0:2.0.4-4.el7

Complete!
 

(Optional) Configure Docker.

[root@all ~]# rm -rf /etc/systemd/system/docker.service.d/docker.conf
[root@all ~]# cat /usr/lib/systemd/system/docker.service

[Unit]
Description=Docker Application Container Engine
Documentation=http://docs.docker.com
After=network.target docker-containerd.service
Wants=docker-storage-setup.service
Requires=docker-containerd.service rhel-push-plugin.socket

[Service]
Type=notify
EnvironmentFile=-/etc/sysconfig/docker
EnvironmentFile=-/etc/sysconfig/docker-storage
EnvironmentFile=-/etc/sysconfig/docker-network
Environment=GOTRACEBACK=crash
ExecStart=/usr/bin/dockerd-current \
          --add-runtime oci=/usr/libexec/docker/docker-runc-current \
          --default-runtime=oci \
          --authorization-plugin=rhel-push-plugin \
          --containerd /run/containerd.sock \
          --exec-opt native.cgroupdriver=systemd \
          --userland-proxy-path=/usr/libexec/docker/docker-proxy-current \
          $OPTIONS \
          $DOCKER_STORAGE_OPTIONS \
          $DOCKER_NETWORK_OPTIONS \
          $ADD_REGISTRY \
          $BLOCK_REGISTRY \
          $INSECURE_REGISTRY
ExecReload=/bin/kill -s HUP $MAINPID
LimitNOFILE=1048576
LimitNPROC=1048576
LimitCORE=infinity
TimeoutStartSec=0
Restart=on-abnormal

[Install]
WantedBy=multi-user.target
[root@all ~]# vi /etc/sysconfig/docker
# /etc/sysconfig/docker
# Modify these options if you want to change the way the docker daemon runs
OPTIONS='--selinux-enabled --log-driver=journald --graph=/data/env/docker --insecure-registry kubemaster:5000'
[root@all ~]# mkdir -p /data/env/docker
[root@all ~]# systemctl enable docker && systemctl restart docker
[root@all ~]# systemctl status docker

Step 4: Configure Kubernetes.

#Configure Kubernetes on all hosts.
[root@all ~]# mv /etc/kubernetes/config /etc/kubernetes/config.bak
[root@all ~]# vi /etc/kubernetes/config

KUBE_ETCD_SERVERS="--etcd-servers=http://kubemaster:2379"
KUBE_LOGTOSTDERR="--logtostderr=true"
KUBE_LOG_LEVEL="--v=0"
KUBE_ALLOW_PRIV="--allow-privileged=false"
KUBE_MASTER="--master=http://kubemaster:8080"

 

#Configure Kubernetes on the master node.
[root@kubemaster ~]# mv /etc/etcd/etcd.conf /etc/etcd/etcd.conf.bak
[root@kubemaster ~]# vi /etc/etcd/etcd.conf

# [member]
ETCD_NAME=default
ETCD_DATA_DIR="/var/lib/etcd/default.etcd"
ETCD_LISTEN_CLIENT_URLS="http://0.0.0.0:2379"

#[cluster]
ETCD_ADVERTISE_CLIENT_URLS="http://0.0.0.0:2379"
[root@kubemaster softs]# mv /etc/kubernetes/apiserver /etc/kubernetes/apiserver.bak
[root@kubemaster softs]# vi /etc/kubernetes/apiserver

# The address on the local server to listen to.
KUBE_API_ADDRESS="--address=0.0.0.0"

# The port on the local server to listen on.
KUBE_API_PORT="--port=8080"

# Port kubelets listen on
KUBELET_PORT="--kubelet-port=10250"

# Address range to use for services
KUBE_SERVICE_ADDRESSES="--service-cluster-ip-range=10.254.0.0/16"
KUBE_ETCD_SERVERS="--etcd-servers=http://127.0.0.1:2379"

KUBE_ADMISSION_CONTROL="--admission-control=NamespaceLifecycle,NamespaceExists,LimitRanger,SecurityContextDeny,ServiceAccount,ResourceQuota"

# Add your own!
KUBE_API_ARGS=""
[root@kubemaster ~]# systemctl start etcd
[root@kubemaster ~]# etcdctl mkdir /kube-centos/network
[root@kubemaster ~]# etcdctl mk /kube-centos/network/config "{ \"Network\": \"172.30.0.0/16\", \"SubnetLen\": 24, \"Backend\": { \"Type\": \"vxlan\" } }"

{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
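The value stored with `etcdctl mk` above is plain JSON, and getting the shell escaping right is error-prone. One way to avoid quoting mistakes is to write the JSON to a file and validate it first (a sketch; python's stdlib json.tool is just one convenient validator, and python2's works the same way):

```shell
# Same network config as above, written via a quoted here-doc (no escaping).
cat > flannel-config.json <<'EOF'
{ "Network": "172.30.0.0/16", "SubnetLen": 24, "Backend": { "Type": "vxlan" } }
EOF
python3 -m json.tool flannel-config.json   # exits non-zero on malformed JSON
# Then load it on the master (set overwrites; mk is create-only):
#   etcdctl set /kube-centos/network/config "$(cat flannel-config.json)"
```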
[root@kubemaster softs]# mv /etc/sysconfig/flanneld /etc/sysconfig/flanneld.bak
[root@kubemaster softs]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://kubemaster:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""
[root@kubemaster softs]# for SERVICES in etcd kube-apiserver kube-controller-manager kube-scheduler flanneld; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

[root@kubemaster softs]# ps -ef | grep kube
kube       3101      1  0 13:40 ?        00:00:05 /usr/bin/kube-apiserver --logtostderr=true --v=0 --etcd-servers=http://kubemaster:2379 --address=0.0.0.0 --port=8080 --kubelet-port=10250 --allow-privileged=false --service-cluster-ip-range=10.254.0.0/16
kube       3130      1  1 13:40 ?        00:00:07 /usr/bin/kube-controller-manager --logtostderr=true --v=0 --master=http://kubemaster:8080
kube       3156      1  0 13:40 ?        00:00:01 /usr/bin/kube-scheduler --logtostderr=true --v=0 --master=http://kubemaster:8080
root       3182      1  0 13:40 ?        00:00:00 /usr/bin/flanneld -etcd-endpoints=http://kubemaster:2379 -etcd-prefix=/kube-centos/network
root       3225   2110  0 13:51 pts/0    00:00:00 grep --color=auto kube

 

#Configure Kubernetes on the worker nodes (kubenode0 & kubenode1).

[root@kubenode0 softs]# mv /etc/kubernetes/kubelet /etc/kubernetes/kubelet.bak
[root@kubenode0 softs]# vi /etc/kubernetes/kubelet

# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=kubenode0"

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://kubemaster:8080"

# Add your own!
KUBELET_ARGS=""
[root@kubenode1 softs]# vi /etc/kubernetes/kubelet
# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=kubenode1"

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://kubemaster:8080"

# Add your own!
KUBELET_ARGS=""
[root@kubenode0 softs]# mv /etc/sysconfig/flanneld /etc/sysconfig/flanneld.bak
[root@kubenode0 softs]# vi /etc/sysconfig/flanneld

# Flanneld configuration options

# etcd url location.  Point this to the server where etcd runs
FLANNEL_ETCD_ENDPOINTS="http://kubemaster:2379"

# etcd config key.  This is the configuration key that flannel queries
# For address range assignment
FLANNEL_ETCD_PREFIX="/kube-centos/network"

# Any additional options that you want to pass
#FLANNEL_OPTIONS=""

[root@kubenode0 ~]# for SERVICES in kube-proxy kubelet flanneld docker; do
    systemctl restart $SERVICES
    systemctl enable $SERVICES
    systemctl status $SERVICES
done

[root@kubenode0 ~]# ps -ef | grep kube
root       2976      1  3 15:10 ?        00:00:00 /usr/bin/kube-proxy --logtostderr=true --v=0 --master=http://kubemaster:8080
root       3063      1  0 15:10 ?        00:00:00 /usr/bin/flanneld -etcd-endpoints=http://kubemaster:2379 -etcd-prefix=/kube-centos/network
root       3214      1  5 15:10 ?        00:00:00 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://kubemaster:8080 --address=0.0.0.0 --port=10250 --hostname-override=kubenode0 --allow-privileged=false
root       3357   2569  0 15:10 pts/0    00:00:00 grep --color=auto kube

 

#Configure kubectl
[root@kubenode0 ~]# kubectl config set-cluster default-cluster --server=http://kubemaster:8080
[root@kubenode0 ~]# kubectl config set-context default-context --cluster=default-cluster --user=default-admin
[root@kubenode0 ~]# kubectl config use-context default-context

Step 5: The cluster should now be running!

[root@kubemaster ~]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@kubemaster ~]# kubectl get deployment --namespace=kube-system
No resources found.
[root@kubemaster ~]# kubectl get nodes
NAME        STATUS    AGE
kubenode0   Ready     4m
kubenode1   Ready     2m
[root@kubenode0 ~]# kubectl get nodes
NAME        STATUS    AGE
kubenode0   Ready     4m
kubenode1   Ready     2m
[root@kubenode1 ~]# kubectl get nodes
NAME        STATUS    AGE
kubenode0   Ready     4m
kubenode1   Ready     2m

[root@kubenode0 ~]# kubectl version
Client Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-19T19:39:41Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"5", GitVersion:"v1.5.2", GitCommit:"08e099554f3c31f6e6f07b448ab3ed78d0520507", GitTreeState:"clean", BuildDate:"2017-01-19T19:39:41Z", GoVersion:"go1.7.4", Compiler:"gc", Platform:"linux/amd64"}

 

You can now access http://192.168.1.158:8080/api/ from another machine (e.g. a browser on your Windows desktop):
{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.1.158:6443"
    }
  ]
}

And https://192.168.1.158:6443/api returns:

{
  "kind": "APIVersions",
  "versions": [
    "v1"
  ],
  "serverAddressByClientCIDRs": [
    {
      "clientCIDR": "0.0.0.0/0",
      "serverAddress": "192.168.1.158:6443"
    }
  ]
}

 

Step 6: Deploy the kube-dns add-on.

Kubernetes DNS schedules a DNS Pod and Service on the cluster, and configures the kubelets so that containers use the DNS Service's IP to resolve DNS names. Installing it is recommended.
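Once kube-dns is running, services resolve by a fixed name scheme: <service>.<namespace>.svc.<cluster-domain>. A quick illustration with this cluster's values (the cluster domain is set to cluster.local later in this step):

```shell
# Build the FQDN a pod would use to reach the kube-dns service itself.
DNS_DOMAIN=cluster.local
svc=kube-dns
ns=kube-system
echo "${svc}.${ns}.svc.${DNS_DOMAIN}"
# -> kube-dns.kube-system.svc.cluster.local
```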

[root@kubemaster ~]# mkdir -p /data/k8s/addons/dns
[root@kubemaster ~]# cd /data/k8s/addons/dns
[root@kubemaster dns]# for item in Makefile kubedns-controller.yaml.base kubedns-svc.yaml.base transforms2salt.sed transforms2sed.sed; do
    wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns/$item
done
[root@kubemaster dns]# ll

total 24
-rw-r--r--. 1 root root 5064 Feb 16 13:05 kubedns-controller.yaml.base
-rw-r--r--. 1 root root  990 Feb 16 13:05 kubedns-svc.yaml.base
-rw-r--r--. 1 root root 1138 Feb 16 13:05 Makefile
-rw-r--r--. 1 root root  318 Feb 16 13:05 transforms2salt.sed
-rw-r--r--. 1 root root  251 Feb 16 13:05 transforms2sed.sed

[root@kubemaster dns]# cat transforms2sed.sed
s/__PILLAR__DNS__SERVER__/$DNS_SERVER_IP/g
s/__PILLAR__DNS__DOMAIN__/$DNS_DOMAIN/g
/__PILLAR__FEDERATIONS__DOMAIN__MAP__/d
s/__MACHINE_GENERATED_WARNING__/Warning: This is a file generated from the base underscore template file: __SOURCE_FILENAME__/g

[root@kubemaster dns]# kubectl get svc --all-namespaces
NAMESPACE   NAME         CLUSTER-IP   EXTERNAL-IP   PORT(S)   AGE
default     kubernetes   10.254.0.1   <none>        443/TCP   2d
[root@kubemaster dns]# DNS_SERVER_IP=10.254.0.10
[root@kubemaster dns]# DNS_DOMAIN=cluster.local
[root@kubemaster dns]# cat <<EOF > transforms2sed.sed
s/__PILLAR__DNS__SERVER__/$DNS_SERVER_IP/g
s/__PILLAR__DNS__DOMAIN__/$DNS_DOMAIN/g
/__PILLAR__FEDERATIONS__DOMAIN__MAP__/d
s/__MACHINE_GENERATED_WARNING__/Warning: This is a file generated from the base underscore template file: __SOURCE_FILENAME__/g
EOF

[root@kubemaster dns]# cat transforms2sed.sed
s/__PILLAR__DNS__SERVER__/10.254.0.10/g
s/__PILLAR__DNS__DOMAIN__/cluster.local/g
/__PILLAR__FEDERATIONS__DOMAIN__MAP__/d
s/__MACHINE_GENERATED_WARNING__/Warning: This is a file generated from the base underscore template file: __SOURCE_FILENAME__/g
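What the generated sed script does can be seen on a single sample line; this demo is runnable anywhere, and the substitutions mirror the file contents shown above:

```shell
# The same substitutions as the generated transforms2sed.sed.
cat > transforms2sed.demo <<'EOF'
s/__PILLAR__DNS__SERVER__/10.254.0.10/g
s/__PILLAR__DNS__DOMAIN__/cluster.local/g
/__PILLAR__FEDERATIONS__DOMAIN__MAP__/d
EOF
echo 'clusterIP: __PILLAR__DNS__SERVER__' | sed -f transforms2sed.demo
# -> clusterIP: 10.254.0.10
```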

[root@kubemaster dns]# make
sed -f transforms2salt.sed kubedns-controller.yaml.base | sed s/__SOURCE_FILENAME__/kubedns-controller.yaml.base/g > kubedns-controller.yaml.in
sed -f transforms2salt.sed kubedns-svc.yaml.base | sed s/__SOURCE_FILENAME__/kubedns-svc.yaml.base/g > kubedns-svc.yaml.in
sed -f transforms2sed.sed kubedns-controller.yaml.base  | sed s/__SOURCE_FILENAME__/kubedns-controller.yaml.base/g > kubedns-controller.yaml.sed
sed -f transforms2sed.sed kubedns-svc.yaml.base  | sed s/__SOURCE_FILENAME__/kubedns-svc.yaml.base/g > kubedns-svc.yaml.sed

 

#Now edit kubedns-controller.yaml.sed and delete the volume configuration, to avoid two problems:
#1. The error: error validating "kubedns-controller.yaml.sed.bak": error validating data: found invalid field optional for v1.ConfigMapVolumeSource; if you choose to ignore these errors, turn validation off with --validate=false
#2. After the containers are created, tail /var/log/messages shows the error: configmaps "kube-dns" not found
[root@kubemaster dns]# vi kubedns-controller.yaml.sed

# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Should keep target in cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
# in sync with this file.

# Warning: This is a file generated from the base underscore template file: kubedns-controller.yaml.base

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
spec:
  # replicas: not specified here:
  # 1. In order to make Addon Manager do not reconcile this replicas parameter.
  # 2. Default is 1.
  # 3. Will be tuned in real time if DNS horizontal auto-scaling is turned on.
  strategy:
    rollingUpdate:
      maxSurge: 10%
      maxUnavailable: 0
  selector:
    matchLabels:
      k8s-app: kube-dns
  template:
    metadata:
      labels:
        k8s-app: kube-dns
      annotations:
        scheduler.alpha.kubernetes.io/critical-pod: ''
        scheduler.alpha.kubernetes.io/tolerations: '[{"key":"CriticalAddonsOnly", "operator":"Exists"}]'
    spec:
#      volumes:
#      - name: kube-dns-config
#        configMap:
#          name: kube-dns
#          optional: true

      containers:
      - name: kubedns
        image: gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.12.1
        resources:
          # TODO: Set memory limits when we've profiled the container for large
          # clusters, then set request = limit to keep this container in
          # guaranteed class. Currently, this container falls into the
          # "burstable" category so the kubelet doesn't backoff from restarting it.
          limits:
            memory: 170Mi
          requests:
            cpu: 100m
            memory: 70Mi
        livenessProbe:
          httpGet:
            path: /healthcheck/kubedns
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        readinessProbe:
          httpGet:
            path: /readiness
            port: 8081
            scheme: HTTP
          # we poll on pod startup for the Kubernetes master service and
          # only setup the /readiness HTTP server once that's available.
          initialDelaySeconds: 3
          timeoutSeconds: 5
        args:
        - --domain=cluster.local.
        - --dns-port=10053
        - --config-dir=/kube-dns-config
        - --v=2
        env:
        - name: PROMETHEUS_PORT
          value: "10055"
        ports:
        - containerPort: 10053
          name: dns-local
          protocol: UDP
        - containerPort: 10053
          name: dns-tcp-local
          protocol: TCP
        - containerPort: 10055
          name: metrics
          protocol: TCP
#        volumeMounts:
#        - name: kube-dns-config
#          mountPath: /kube-dns-config

      - name: dnsmasq
        image: gcr.io/google_containers/k8s-dns-dnsmasq-amd64:1.12.1
        livenessProbe:
          httpGet:
            path: /healthcheck/dnsmasq
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --cache-size=1000
        - --server=/cluster.local/127.0.0.1#10053
        - --server=/in-addr.arpa/127.0.0.1#10053
        - --server=/ip6.arpa/127.0.0.1#10053
        - --log-facility=-
        ports:
        - containerPort: 53
          name: dns
          protocol: UDP
        - containerPort: 53
          name: dns-tcp
          protocol: TCP
        # see: https://github.com/kubernetes/kubernetes/issues/29055 for details
        resources:
          requests:
            cpu: 150m
            memory: 10Mi
      - name: sidecar
        image: gcr.io/google_containers/k8s-dns-sidecar-amd64:1.12.1
        livenessProbe:
          httpGet:
            path: /metrics
            port: 10054
            scheme: HTTP
          initialDelaySeconds: 60
          timeoutSeconds: 5
          successThreshold: 1
          failureThreshold: 5
        args:
        - --v=2
        - --logtostderr
        - --probe=kubedns,127.0.0.1:10053,kubernetes.default.svc.cluster.local,5,A
        - --probe=dnsmasq,127.0.0.1:53,kubernetes.default.svc.cluster.local,5,A
        ports:
        - containerPort: 10054
          name: metrics
          protocol: TCP
        resources:
          requests:
            memory: 20Mi
            cpu: 10m
      dnsPolicy: Default  # Don't use cluster DNS.
[root@kubemaster dns]# cat kubedns-svc.yaml.sed
# Copyright 2016 The Kubernetes Authors.
#
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.

# Warning: This is a file generated from the base underscore template file: kubedns-svc.yaml.base

apiVersion: v1
kind: Service
metadata:
  name: kube-dns
  namespace: kube-system
  labels:
    k8s-app: kube-dns
    kubernetes.io/cluster-service: "true"
    kubernetes.io/name: "KubeDNS"
spec:
  selector:
    k8s-app: kube-dns
  clusterIP: 10.254.0.10
  ports:
  - name: dns
    port: 53
    protocol: UDP
  - name: dns-tcp
    port: 53
    protocol: TCP

[root@kubemaster dns]# vi /etc/kubernetes/controller-manager                            # If you forget this change, you may see the error: No API token found for service account "default"
KUBE_CONTROLLER_MANAGER_ARGS="--service-account-private-key-file=/var/run/kubernetes/apiserver.key --root-ca-file=/var/run/kubernetes/apiserver.crt"
[root@kubemaster dns]# systemctl restart kube-controller-manager && systemctl status kube-controller-manager
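If the restart fails, it is usually because the referenced key pair does not exist yet. A quick sanity check (a sketch against a live cluster; it assumes the apiserver self-generated its key pair in the default /var/run/kubernetes location, which kube-apiserver normally does on first start):

```
# Confirm the key pair referenced by the controller-manager exists
ls -l /var/run/kubernetes/apiserver.key /var/run/kubernetes/apiserver.crt

# After the restart, every namespace should get a default service-account token
kubectl get secrets --all-namespaces | grep default-token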
[root@kubemaster dns]# kubectl create -f kubedns-controller.yaml.sed
deployment "kube-dns" created
[root@kubemaster dns]# kubectl create -f kubedns-svc.yaml.sed
service "kube-dns" created
[root@kubemaster ~]# tail -f /var/log/messages
Feb 16 13:31:34 kubemaster kube-apiserver: W0216 13:31:34.202781    2559 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist
Feb 16 13:31:34 kubemaster kube-controller-manager: I0216 13:31:34.205627    2570 event.go:217] Event(api.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-dns", UID:"2dce52e0-f409-11e6-bd2a-00155d01bd15", APIVersion:"extensions", ResourceVersion:"66591", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-dns-4127456819 to 1
Feb 16 13:31:34 kubemaster kube-controller-manager: I0216 13:31:34.314793    2570 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-dns-4127456819", UID:"2dd060ed-f409-11e6-bd2a-00155d01bd15", APIVersion:"extensions", ResourceVersion:"66592", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-dns-4127456819-82lgs
Feb 16 13:31:34 kubemaster kube-scheduler: I0216 13:31:34.369384    2582 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-4127456819-82lgs", UID:"2dd40e52-f409-11e6-bd2a-00155d01bd15", APIVersion:"v1", ResourceVersion:"66595", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-dns-4127456819-82lgs to kubenode1
[root@kubenode0 ~]# tail -f /var/log/messages
Feb 16 16:39:22 kubenode0 journal: I0216 08:39:22.271586       1 dns.go:462] Added SRV record &{Host:kubernetes.default.svc.cluster.local. Port:443 Priority:10 Weight:10 Text: Mail:false Ttl:30 TargetStrip:0 Group: Key:}
Feb 16 16:39:22 kubenode0 journal: I0216 08:39:22.271648       1 dns.go:264] New service: kube-dns
Feb 16 16:39:22 kubenode0 journal: I0216 08:39:22.271694       1 dns.go:462] Added SRV record &{Host:kube-dns.kube-system.svc.cluster.local. Port:53 Priority:10 Weight:10 Text: Mail:false Ttl:30 TargetStrip:0 Group: Key:}
Feb 16 16:39:22 kubenode0 journal: I0216 08:39:22.271743       1 dns.go:462] Added SRV record &{Host:kube-dns.kube-system.svc.cluster.local. Port:53 Priority:10 Weight:10 Text: Mail:false Ttl:30 TargetStrip:0 Group: Key:}
Feb 16 16:39:32 kubenode0 journal: E0216 08:39:32.268986       1 sync_dir.go:68] Error loading config from /kube-dns-config: lstat /kube-dns-config: no such file or directory
Feb 16 16:39:38 kubenode0 kubelet: I0216 16:39:38.034635    2365 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/119e9bc4-f41d-11e6-bd2a-00155d01bd15-default-token-hqkj1" (spec.Name: "default-token-hqkj1") pod "119e9bc4-f41d-11e6-bd2a-00155d01bd15" (UID: "119e9bc4-f41d-11e6-bd2a-00155d01bd15").
Feb 16 16:39:42 kubenode0 journal: E0216 08:39:42.268349       1 sync_dir.go:68] Error loading config from /kube-dns-config: lstat /kube-dns-config: no such file or directory
Feb 16 16:38:42 kubenode0 journal: E0216 08:38:42.268428       1 sync_dir.go:68] Error loading config from /kube-dns-config: lstat /kube-dns-config: no such file or directory                # Known issue, not yet resolved
Feb 16 16:38:52 kubenode0 journal: E0216 08:38:52.268351       1 sync_dir.go:68] Error loading config from /kube-dns-config: lstat /kube-dns-config: no such file or directory
Feb 16 16:39:02 kubenode0 journal: E0216 08:39:02.268340       1 sync_dir.go:68] Error loading config from /kube-dns-config: lstat /kube-dns-config: no such file or directory
[root@kubenode0 ~]# docker images                          # You should see the gcr.io images listed below; if not, pull them manually with docker pull.
REPOSITORY                                        TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/k8s-dns-sidecar-amd64    1.12.1              ee26c4c79910        2 weeks ago         13 MB
gcr.io/google_containers/k8s-dns-kube-dns-amd64   1.12.1              eebb1533941f        2 weeks ago         52.34 MB
gcr.io/google_containers/k8s-dns-dnsmasq-amd64    1.12.1              d54965d35d2f        2 weeks ago         5.15 MB
kubemaster:5000/nginx-https                       latest              4290b082ed77        3 months ago        571.8 MB
gcr.io/google-containers/pause-amd64              3.0                 99e59f495ffa        9 months ago        746.9 kB
gcr.io/google_containers/pause-amd64              3.0                 99e59f495ffa        9 months ago        746.9 kB
[root@kubenode0 ~]# docker ps -a
CONTAINER ID        IMAGE                                                    COMMAND                  CREATED             STATUS              PORTS               NAMES
25716ebb42bc        gcr.io/google_containers/k8s-dns-sidecar-amd64:1.12.1    "/sidecar --v=2 --log"   3 minutes ago       Up 3 minutes        k8s_sidecar.e51859d_kube-dns-3019842428-0x0v9_kube-system_119e9bc4-f41d-11e6-bd2a-00155d01bd15_012ede0f
005670d6e5f4        gcr.io/google_containers/k8s-dns-dnsmasq-amd64:1.12.1    "/usr/sbin/dnsmasq --"   3 minutes ago       Up 3 minutes        k8s_dnsmasq.3ad0a30e_kube-dns-3019842428-0x0v9_kube-system_119e9bc4-f41d-11e6-bd2a-00155d01bd15_e6192381
c97989d0c9b7        gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.12.1   "/kube-dns --domain=c"   4 minutes ago       Up 4 minutes        k8s_kubedns.95f1aa26_kube-dns-3019842428-0x0v9_kube-system_119e9bc4-f41d-11e6-bd2a-00155d01bd15_15e442e5
01c67511d71d        gcr.io/google_containers/pause-amd64:3.0                 "/pause"                 9 minutes ago       Up 9 minutes        k8s_POD.8950c4fd_kube-dns-3019842428-0x0v9_kube-system_119e9bc4-f41d-11e6-bd2a-00155d01bd15_8c713105
[root@kubemaster dns]# kubectl get deploy --all-namespaces
NAMESPACE     NAME       DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-system   kube-dns   1         1         1            1           28m
[root@kubemaster dns]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns

To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@kubemaster dns]# kubectl get services --namespace=kube-system
NAME       CLUSTER-IP    EXTERNAL-IP   PORT(S)         AGE
kube-dns   10.254.0.10   <none>        53/UDP,53/TCP   9m
[root@kubemaster dns]# kubectl get pods --namespace=kube-system
NAME                        READY     STATUS    RESTARTS   AGE
kube-dns-3019842428-0x0v9   3/3       Running   0          43m

# Configure the kubelet on each node
[root@kubenode0 ~]# vi /etc/kubernetes/kubelet

# The address for the info server to serve on
KUBELET_ADDRESS="--address=0.0.0.0"

# The port for the info server to serve on
KUBELET_PORT="--port=10250"

# You may leave this blank to use the actual hostname
# Check the node number!
KUBELET_HOSTNAME="--hostname-override=kubenode0"

# Location of the api-server
KUBELET_API_SERVER="--api-servers=http://kubemaster:8080"

# Add your own!
KUBELET_ARGS="--cluster_dns=10.254.0.10 --cluster_domain=cluster.local"
[root@kubenode0 ~]# systemctl restart kubelet && systemctl status kubelet
● kubelet.service - Kubernetes Kubelet Server
   Loaded: loaded (/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: disabled)
   Active: active (running) since Thu 2017-02-16 17:29:41 CST; 44ms ago
     Docs: https://github.com/GoogleCloudPlatform/kubernetes
 Main PID: 12799 (kubelet)
   Memory: 26.6M
   CGroup: /system.slice/kubelet.service
           ├─ 2410 journalctl -k -f
           └─12799 /usr/bin/kubelet --logtostderr=true --v=0 --api-servers=http://kubemaster:8080 --address=0.0.0.0 --port=10250 --hostname-override=kubenode0 --allow-privileged=false --cluster_dns=10.254.0.10 --cluster_domain=cluster.local...

Feb 16 17:29:41 kubenode0 systemd[1]: Started Kubernetes Kubelet Server.
Feb 16 17:29:41 kubenode0 systemd[1]: Starting Kubernetes Kubelet Server...
[root@kubemaster dns]# kubectl run nginx-first --image=greatbsky/nginx --replicas=2 --port=80
deployment "nginx-first" created
[root@kubenode0 ~]# docker ps
CONTAINER ID        IMAGE                                                    COMMAND                  CREATED              STATUS              NAMES
7f11df9a2b96        kubemaster:5000/nginx-https                              "nginx -g 'daemon off"   About a minute ago   Up About a minute   k8s_nginx-first.841f396a_nginx-first-1279706673-zphw8_default_e3036dfa-f423-11e6-bd2a-00155d01bd15_90da1d26
da0db2779d2a        gcr.io/google_containers/pause-amd64:3.0                 "/pause"                 About a minute ago   Up About a minute   k8s_POD.b2390301_nginx-first-1279706673-zphw8_default_e3036dfa-f423-11e6-bd2a-00155d01bd15_88254877
25716ebb42bc        gcr.io/google_containers/k8s-dns-sidecar-amd64:1.12.1    "/sidecar --v=2 --log"   43 minutes ago       Up 43 minutes       k8s_sidecar.e51859d_kube-dns-3019842428-0x0v9_kube-system_119e9bc4-f41d-11e6-bd2a-00155d01bd15_012ede0f
005670d6e5f4        gcr.io/google_containers/k8s-dns-dnsmasq-amd64:1.12.1    "/usr/sbin/dnsmasq --"   43 minutes ago       Up 43 minutes       k8s_dnsmasq.3ad0a30e_kube-dns-3019842428-0x0v9_kube-system_119e9bc4-f41d-11e6-bd2a-00155d01bd15_e6192381
c97989d0c9b7        gcr.io/google_containers/k8s-dns-kube-dns-amd64:1.12.1   "/kube-dns --domain=c"   44 minutes ago       Up 44 minutes       k8s_kubedns.95f1aa26_kube-dns-3019842428-0x0v9_kube-system_119e9bc4-f41d-11e6-bd2a-00155d01bd15_15e442e5
01c67511d71d        gcr.io/google_containers/pause-amd64:3.0                 "/pause"                 49 minutes ago       Up 49 minutes       k8s_POD.8950c4fd_kube-dns-3019842428-0x0v9_kube-system_119e9bc4-f41d-11e6-bd2a-00155d01bd15_8c713105
[root@kubenode1 ~]# docker ps
CONTAINER ID        IMAGE                                      COMMAND                  CREATED              STATUS              PORTS               NAMES
bcd44ec0cdba        kubemaster:5000/nginx-https                "nginx -g 'daemon off"   47 seconds ago       Up 46 seconds       k8s_nginx-first.841f396a_nginx-first-1279706673-304xc_default_e301d2a0-f423-11e6-bd2a-00155d01bd15_ffcd3be4
304f7aae58b9        gcr.io/google_containers/pause-amd64:3.0   "/pause"                 About a minute ago   Up About a minute   k8s_POD.b2390301_nginx-first-1279706673-304xc_default_e301d2a0-f423-11e6-bd2a-00155d01bd15_2f55df7b
[root@kubemaster dns]# kubectl get deploy
NAME          DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
nginx-first   2         2         2            2           2m
[root@kubemaster dns]# kubectl expose deployment nginx-first --target-port=80
service "nginx-first" exposed
[root@kubemaster dns]# kubectl get services
NAME          CLUSTER-IP      EXTERNAL-IP   PORT(S)   AGE
kubernetes    10.254.0.1      <none>        443/TCP   2d
nginx-first   10.254.96.245   <none>        80/TCP    5s
[root@kubemaster dns]# kubectl get pods
NAME                           READY     STATUS    RESTARTS   AGE
nginx-first-1279706673-304xc   1/1       Running   0          3m
nginx-first-1279706673-zphw8   1/1       Running   0          3m
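The nginx-first service created by `kubectl expose` above is ClusterIP-only, so it is reachable only from inside the cluster. To reach it from outside, a NodePort service works too (a sketch; `nginx-first-np` is a hypothetical service name):

```
# Expose the same deployment on a port of every node (30000-32767 range)
kubectl expose deployment nginx-first --port=80 --target-port=80 \
  --type=NodePort --name=nginx-first-np
kubectl get service nginx-first-np   # note the mapped port, e.g. 80:3XXXX/TCP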
[root@kubenode0 ~]# docker exec -it 7f11df9a2b96 /bin/bash
[root@nginx-first-1279706673-zphw8 /]# yum install bind-utils -y
[root@nginx-first-1279706673-zphw8 /]# nslookup nginx-first

Server:         10.254.0.10
Address:        10.254.0.10#53

Name:   nginx-first.default.svc.cluster.local
Address: 10.254.96.245
[root@nginx-first-1279706673-nm91f /]# nslookup kubernetes.default
Server:         10.254.0.10
Address:        10.254.0.10#53

Non-authoritative answer:
Name:   kubernetes.default.svc.cluster.local
Address: 10.254.0.1
[root@nginx-first-1279706673-zphw8 /]# cat /etc/resolv.conf
search default.svc.cluster.local svc.cluster.local cluster.local
nameserver 10.254.0.10

options ndots:5
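Installing bind-utils into an application container works, but a throwaway test pod keeps the application image clean (a sketch; it assumes the busybox image is pullable in your environment):

```
# One-off DNS check from inside the cluster; the pod is removed afterwards
kubectl run -it --rm dns-test --image=busybox --restart=Never \
  -- nslookup nginx-first.default.svc.cluster.local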
[root@nginx-first-1279706673-kp5kq /]# yum -y install openssh-clients
[root@nginx-first-1279706673-kp5kq /]# ssh 192.168.1.213

root@192.168.1.213's password:
Last login: Thu Feb 16 03:19:13 2017 from 192.168.1.88
[root@cloud2 ~]# exit
logout
Connection to 192.168.1.213 closed.
[root@nginx-first-1279706673-kp5kq /]# ping 192.168.1.88
PING 192.168.1.88 (192.168.1.88) 56(84) bytes of data.
64 bytes from 192.168.1.88: icmp_seq=1 ttl=63 time=1.23 ms
^C
--- 192.168.1.88 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 1.235/1.235/1.235/0.000 ms
[root@kubemaster dns]# wget 172.30.63.3
--2017-02-16 17:38:57--  http://172.30.63.3/
Connecting to 172.30.63.3:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 612 [text/html]
Saving to: ‘index.html’

100%[=====================================================================>] 612         --.-K/s   in 0.01s

2017-02-16 17:38:57 (41.9 KB/s) - ‘index.html’ saved [612/612]

Step 7: Install the dns-horizontal-autoscaler add-on.

This add-on automatically scales the number of DNS server replicas to match the size of the cluster.

[root@kubemaster dns]# cd ..
[root@kubemaster addons]# mkdir dns-horizontal-autoscaler
[root@kubemaster addons]# cd dns-horizontal-autoscaler/
[root@kubemaster dns-horizontal-autoscaler]# wget https://raw.githubusercontent.com/kubernetes/kubernetes/master/cluster/addons/dns-horizontal-autoscaler/dns-horizontal-autoscaler.yaml
[root@kubemaster dns-horizontal-autoscaler]# kubectl create -f dns-horizontal-autoscaler.yaml

deployment "kube-dns-autoscaler" created
[root@kubemaster ~]# tail -f /var/log/messages
Feb 16 18:11:23 kubemaster kube-apiserver: W0216 18:11:23.303307    2559 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist
Feb 16 18:11:23 kubemaster kube-controller-manager: I0216 18:11:23.305539    2570 event.go:217] Event(api.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kube-dns-autoscaler", UID:"44e5391c-f430-11e6-bd2a-00155d01bd15", APIVersion:"extensions", ResourceVersion:"87034", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kube-dns-autoscaler-2715466192 to 1
Feb 16 18:11:23 kubemaster kube-controller-manager: I0216 18:11:23.332477    2570 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kube-dns-autoscaler-2715466192", UID:"44e6c822-f430-11e6-bd2a-00155d01bd15", APIVersion:"extensions", ResourceVersion:"87035", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kube-dns-autoscaler-2715466192-vbb9t
Feb 16 18:11:23 kubemaster kube-scheduler: I0216 18:11:23.349409    2582 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kube-dns-autoscaler-2715466192-vbb9t", UID:"44e95551-f430-11e6-bd2a-00155d01bd15", APIVersion:"v1", ResourceVersion:"87038", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kube-dns-autoscaler-2715466192-vbb9t to kubenode1

[root@kubenode1 ~]# tail -f /var/log/messages
Feb 16 18:18:55 kubenode1 dockerd-current: time="2017-02-16T18:18:55.635423200+08:00" level=info msg="{Action=start, LoginUID=4294967295, PID=10518}"
Feb 16 18:18:55 kubenode1 kernel: overlayfs: upper fs needs to support d_type. This is an invalid configuration.
Feb 16 18:18:55 kubenode1 systemd: Scope libcontainer-43545-systemd-test-default-dependencies.scope has no PIDs. Refusing.
Feb 16 18:18:55 kubenode1 systemd: Scope libcontainer-43545-systemd-test-default-dependencies.scope has no PIDs. Refusing.
Feb 16 18:18:55 kubenode1 systemd: Started docker container b5bbf2368df70f1ae59b1c3fe0a57f5e691de0cdb0a2d913d117053bf24e512f.
Feb 16 18:18:55 kubenode1 systemd: Starting docker container b5bbf2368df70f1ae59b1c3fe0a57f5e691de0cdb0a2d913d117053bf24e512f.
Feb 16 18:18:55 kubenode1 kernel: SELinux: mount invalid.  Same superblock, different security settings for (dev mqueue, type mqueue)
Feb 16 18:18:55 kubenode1 systemd: Scope libcontainer-43563-systemd-test-default-dependencies.scope has no PIDs. Refusing.
Feb 16 18:18:55 kubenode1 systemd: Scope libcontainer-43563-systemd-test-default-dependencies.scope has no PIDs. Refusing.
Feb 16 18:18:55 kubenode1 journal: I0216 10:18:55.828489       1 autoscaler.go:49] Scaling Namespace: kube-system, Target: deployment/kube-dns, Mode: linear
Feb 16 18:18:55 kubenode1 kubelet: E0216 18:18:55.842117   10518 docker_manager.go:1770] Failed to create symbolic link to the log file of pod "kube-dns-autoscaler-2715466192-vbb9t_kube-system(44e95551-f430-11e6-bd2a-00155d01bd15)" container "autoscaler": symlink  /var/log/containers/kube-dns-autoscaler-2715466192-vbb9t_kube-system_autoscaler-b5bbf2368df70f1ae59b1c3fe0a57f5e691de0cdb0a2d913d117053bf24e512f.log: no such file or directory
Feb 16 18:18:55 kubenode1 dockerd-current: time="2017-02-16T18:18:55.848736300+08:00" level=error msg="Handler for GET /containers/83f4b799fbf3f3299c1d9e8bbd69814c2f5a948af1127fb197dad2285ce3e736/json returned error: No such container: 83f4b799fbf3f3299c1d9e8bbd69814c2f5a948af1127fb197dad2285ce3e736"
Feb 16 18:18:55 kubenode1 dockerd-current: time="2017-02-16T18:18:55.849158900+08:00" level=error msg="Handler for GET /containers/83f4b799fbf3f3299c1d9e8bbd69814c2f5a948af1127fb197dad2285ce3e736/json returned error: No such container: 83f4b799fbf3f3299c1d9e8bbd69814c2f5a948af1127fb197dad2285ce3e736"
Feb 16 18:18:55 kubenode1 journal: I0216 10:18:55.924126       1 autoscaler_server.go:142] ConfigMap not found: configmaps "kube-dns-autoscaler" not found, will create one with default params
Feb 16 18:18:55 kubenode1 journal: I0216 10:18:55.989505       1 k8sclient.go:120] Created ConfigMap kube-dns-autoscaler in namespace kube-system
Feb 16 18:18:55 kubenode1 journal: I0216 10:18:55.989551       1 linear_controller.go:53] ConfigMap version change (old:  new: 87614) - rebuilding params
Feb 16 18:18:55 kubenode1 journal: I0216 10:18:55.989564       1 linear_controller.go:54] Params from apiserver:
Feb 16 18:18:55 kubenode1 journal: {"coresPerReplica":256,"min":1,"nodesPerReplica":16}
Feb 16 18:18:56 kubenode1 kubelet: I0216 18:18:56.320899   10518 operation_executor.go:917] MountVolume.SetUp succeeded for volume "kubernetes.io/secret/44e95551-f430-11e6-bd2a-00155d01bd15-default-token-hqkj1" (spec.Name: "default-token-hqkj1") pod "44e95551-f430-11e6-bd2a-00155d01bd15" (UID: "44e95551-f430-11e6-bd2a-00155d01bd15").

[root@kubenode1 ~]# docker images
REPOSITORY                                                       TAG                 IMAGE ID            CREATED             SIZE
gcr.io/google_containers/cluster-proportional-autoscaler-amd64   1.0.0               e183460c484d        3 months ago        48.16 MB
kubemaster:5000/nginx-https                                      latest              4290b082ed77        3 months ago        571.8 MB
gcr.io/google-containers/pause-amd64                             3.0                 99e59f495ffa        9 months ago        746.9 kB
gcr.io/google_containers/pause-amd64                             3.0                 99e59f495ffa        9 months ago        746.9 kB

[root@kubenode1 ~]# docker ps -a
CONTAINER ID        IMAGE                                                                  COMMAND                  CREATED             STATUS           NAMES
b5bbf2368df7     gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.0.0   "/cluster-proportiona"   3 minutes ago       Up 3 minutes     k8s_autoscaler.58a2f52f_kube-dns-autoscaler-2715466192-vbb9t_kube-system_44e95551-f430-11e6-bd2a-00155d01bd15_9d3b0622
72c73b97608d        gcr.io/google_containers/pause-amd64:3.0                               "/pause"                 10 minutes ago      Up 10 minutes    k8s_POD.d8dbe16c_kube-dns-autoscaler-2715466192-vbb9t_kube-system_44e95551-f430-11e6-bd2a-00155d01bd15_30aa2820
9ade465a0d77        kubemaster:5000/nginx-https                                            "nginx -g 'daemon off"   51 minutes ago      Up 51 minutes    k8s_nginx-first.841f396a_nginx-first-1279706673-nm91f_default_915b25f9-f42a-11e6-bd2a-00155d01bd15_7c4bfbaa
196bf8d8b4a8        gcr.io/google_containers/pause-amd64:3.0                               "/pause"                 51 minutes ago      Up 51 minutes    k8s_POD.b2390301_nginx-first-1279706673-nm91f_default_915b25f9-f42a-11e6-bd2a-00155d01bd15_b190f010

[root@kubemaster dns-horizontal-autoscaler]# kubectl get configmap --namespace=kube-system
NAME                  DATA      AGE
kube-dns-autoscaler   1         8m
[root@kubemaster dns-horizontal-autoscaler]# kubectl get deployment --namespace=kube-system
NAME                  DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-dns              1         1         1            1           2h
kube-dns-autoscaler   1         1         1            1           15m
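The `{"coresPerReplica":256,"min":1,"nodesPerReplica":16}` params written to the auto-created ConfigMap drive a simple linear formula: replicas = max(ceil(cores/coresPerReplica), ceil(nodes/nodesPerReplica), min). A small shell sketch of that calculation for a cluster like this one (3 nodes, 1 CPU each):

```shell
# Linear mode of cluster-proportional-autoscaler, reproduced in shell arithmetic
cores=3; nodes=3                      # this cluster: 3 nodes with 1 CPU each
coresPerReplica=256; nodesPerReplica=16; min=1
by_cores=$(( (cores + coresPerReplica - 1) / coresPerReplica ))   # ceil division
by_nodes=$(( (nodes + nodesPerReplica - 1) / nodesPerReplica ))
replicas=$(( by_cores > by_nodes ? by_cores : by_nodes ))
if [ "$replicas" -lt "$min" ]; then replicas=$min; fi
echo "kube-dns replicas: $replicas"   # prints: kube-dns replicas: 1
```

This explains why a small cluster stays at 1 replica: it would take more than 16 nodes (or more than 256 cores) before a second kube-dns replica is scheduled.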
Step 8: Install the Dashboard add-on for web UI access.

Dashboard is a web-based management UI. In my opinion it is less convenient than kubectl, so this step is optional.

[root@kubemaster data]# cd k8s/addons/
[root@kubemaster addons]# kubectl get pods --all-namespaces | grep dashboard
[root@kubemaster addons]# mkdir dashboard
[root@kubemaster addons]# cd dashboard/
[root@kubemaster dashboard]# wget https://rawgit.com/kubernetes/dashboard/master/src/deploy/kubernetes-dashboard.yaml
[root@kubemaster dashboard]# vi kubernetes-dashboard.yaml

    spec:
      containers:
      - name: kubernetes-dashboard
        image: gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: 9090
          protocol: TCP
[root@kubemaster dashboard]# kubectl create -f kubernetes-dashboard.yaml
deployment "kubernetes-dashboard" created
service "kubernetes-dashboard" created
[root@kubemaster addons]# tail -f /var/log/messages
Feb 17 12:13:57 kubemaster etcd: start to snapshot (applied: 260026, lastsnap: 250025)
Feb 17 12:13:57 kubemaster etcd: saved snapshot at index 260026
Feb 17 12:13:57 kubemaster etcd: compacted raft log at 255026
Feb 17 12:13:58 kubemaster etcd: segmented wal file /var/lib/etcd/default.etcd/member/wal/0000000000000003-000000000003f7be.wal is created
Feb 17 12:14:17 kubemaster etcd: purged file /var/lib/etcd/default.etcd/member/snap/000000000000000d-0000000000033465.snap successfully
Feb 17 12:17:40 kubemaster kube-controller-manager: W0217 12:17:40.050318     641 reflector.go:319] pkg/controller/garbagecollector/garbagecollector.go:768: watch of <nil> ended with: 401: The event in requested index is outdated and cleared (the requested history has been cleared [96335/96182]) [97334]
Feb 17 12:27:54 kubemaster kube-apiserver: W0217 12:27:54.454921    2125 listers.go:69] can not retrieve list of objects using index : Index with name namespace does not exist
Feb 17 12:27:54 kubemaster kube-controller-manager: I0217 12:27:54.458476     641 event.go:217] Event(api.ObjectReference{Kind:"Deployment", Namespace:"kube-system", Name:"kubernetes-dashboard", UID:"73785324-f4c9-11e6-b9c4-00155d01bd15", APIVersion:"extensions", ResourceVersion:"98069", FieldPath:""}): type: 'Normal' reason: 'ScalingReplicaSet' Scaled up replica set kubernetes-dashboard-3203831700 to 1
Feb 17 12:27:54 kubemaster kube-controller-manager: I0217 12:27:54.494379     641 event.go:217] Event(api.ObjectReference{Kind:"ReplicaSet", Namespace:"kube-system", Name:"kubernetes-dashboard-3203831700", UID:"737a677d-f4c9-11e6-b9c4-00155d01bd15", APIVersion:"extensions", ResourceVersion:"98070", FieldPath:""}): type: 'Normal' reason: 'SuccessfulCreate' Created pod: kubernetes-dashboard-3203831700-zrhpz
Feb 17 12:27:54 kubemaster kube-scheduler: I0217 12:27:54.556431     651 event.go:217] Event(api.ObjectReference{Kind:"Pod", Namespace:"kube-system", Name:"kubernetes-dashboard-3203831700-zrhpz", UID:"737f957b-f4c9-11e6-b9c4-00155d01bd15", APIVersion:"v1", ResourceVersion:"98072", FieldPath:""}): type: 'Normal' reason: 'Scheduled' Successfully assigned kubernetes-dashboard-3203831700-zrhpz to kubenode1
[root@kubemaster addons]# kubectl get deploy --namespace=kube-system
NAME                   DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
kube-dns               1         1         1            1           20h
kube-dns-autoscaler    1         1         1            1           18h
kubernetes-dashboard   1         1         1            1           4m
[root@kubemaster addons]# kubectl get service --namespace=kube-system
NAME                   CLUSTER-IP     EXTERNAL-IP   PORT(S)         AGE
kube-dns               10.254.0.10    <none>        53/UDP,53/TCP   20h
kubernetes-dashboard   10.254.93.73   <nodes>       80:31958/TCP    4m
[root@kubemaster addons]# kubectl get pod --namespace=kube-system
NAME                                    READY     STATUS    RESTARTS   AGE
kube-dns-3019842428-0x0v9               3/3       Running   3          20h
kube-dns-autoscaler-2715466192-5pc5x    1/1       Running   1          18h
kubernetes-dashboard-3203831700-zrhpz   1/1       Running   0          4m
[root@kubemaster addons]# kubectl cluster-info
Kubernetes master is running at http://localhost:8080
KubeDNS is running at http://localhost:8080/api/v1/proxy/namespaces/kube-system/services/kube-dns
Note: kubernetes-dashboard does not appear in the cluster-info output here; I haven't figured out why. If anyone knows, please tell me.
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
[root@kubenode1 ~]# docker ps
CONTAINER ID        IMAGE                                                                  COMMAND                  CREATED             STATUS             NAMES
140bbe834011     gcr.io/google_containers/kubernetes-dashboard-amd64:v1.5.1   "/dashboard --port=90"   8 minutes ago   Up 8 minutes  k8s_kubernetes-dashboard.70f5d850_kubernetes-dashboard-3203831700-zrhpz_kube-system_737f957b-f4c9-11e6-b9c4-00155d01bd15_e914143d
648d36ecb41a     gcr.io/google_containers/pause-amd64:3.0                     "/pause"                 9 minutes ago   Up 9 minutes  k8s_POD.2225036b_kubernetes-dashboard-3203831700-zrhpz_kube-system_737f957b-f4c9-11e6-b9c4-00155d01bd15_a67454f9
833074293dd4        kubemaster:5000/nginx-https                                            "nginx -g 'daemon off"   About an hour ago   Up About an hour   k8s_nginx-first.841f396a_nginx-first-1279706673-nm91f_default_915b25f9-f42a-11e6-bd2a-00155d01bd15_208c673e
53de5a7520dc        gcr.io/google_containers/cluster-proportional-autoscaler-amd64:1.0.0   "/cluster-proportiona"   About an hour ago   Up About an hour   k8s_autoscaler.58a2f52f_kube-dns-autoscaler-2715466192-5pc5x_kube-system_ff490043-f432-11e6-bd2a-00155d01bd15_2429c170
b2e7ad10aa37        gcr.io/google_containers/pause-amd64:3.0                               "/pause"                 About an hour ago   Up About an hour   k8s_POD.d8dbe16c_kube-dns-autoscaler-2715466192-5pc5x_kube-system_ff490043-f432-11e6-bd2a-00155d01bd15_3e630c7c
c478f1e3f19b        gcr.io/google_containers/pause-amd64:3.0                               "/pause"                 About an hour ago   Up About an hour   k8s_POD.b2390301_nginx-first-1279706673-nm91f_default_915b25f9-f42a-11e6-bd2a-00155d01bd15_9dfda496
[root@kubenode1 ~]# docker logs 140bbe834011
Using HTTP port: 9090
Creating API server client for https://10.254.0.1:443
Successful initial request to the apiserver, version: v1.5.2
Creating in-cluster Heapster client
Using service account token for csrf signing
[root@kubemaster addons]# kubectl edit svc/kubernetes-dashboard --namespace=kube-system
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: v1
kind: Service
metadata:
  creationTimestamp: 2017-02-17T04:27:54Z
  labels:
    app: kubernetes-dashboard
  name: kubernetes-dashboard
  namespace: kube-system
  resourceVersion: "98084"
  selfLink: /api/v1/namespaces/kube-system/services/kubernetes-dashboard
  uid: 7391ffe7-f4c9-11e6-b9c4-00155d01bd15
spec:
  clusterIP: 10.254.93.73
  ports:
  - nodePort: 31958
    port: 80
    protocol: TCP
    targetPort: 9090
  selector:
    app: kubernetes-dashboard
  sessionAffinity: None
  type: NodePort
status:
  loadBalancer: {}
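The interactive `kubectl edit` above just switched `spec.type` to NodePort. The same change can be applied non-interactively (a sketch; the 31958 nodePort shown is whatever this cluster auto-assigned, yours will differ unless you pin it):

```
# Switch the service to NodePort without opening an editor;
# the apiserver picks a free port in 30000-32767 automatically
kubectl patch service kubernetes-dashboard --namespace=kube-system \
  -p '{"spec":{"type":"NodePort"}}'
kubectl get service kubernetes-dashboard --namespace=kube-system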
[root@kubemaster addons]# kubectl describe nodes kubenode1
Name:                   kubenode1
Role:
Labels:                 beta.kubernetes.io/arch=amd64
                        beta.kubernetes.io/os=linux
                        kubernetes.io/hostname=kubenode1
Taints:                 <none>
CreationTimestamp:      Tue, 14 Feb 2017 15:10:44 +0800
Phase:
Conditions:
  Type                  Status  LastHeartbeatTime                       LastTransitionTime                      Reason                          Message
  ----                  ------  -----------------                       ------------------                      ------                          -------
  OutOfDisk             False   Fri, 17 Feb 2017 13:12:51 +0800         Tue, 14 Feb 2017 15:10:44 +0800         KubeletHasSufficientDisk        kubelet has sufficient disk space available
  MemoryPressure        False   Fri, 17 Feb 2017 13:12:51 +0800         Tue, 14 Feb 2017 15:10:44 +0800         KubeletHasSufficientMemory      kubelet has sufficient memory available
  DiskPressure          False   Fri, 17 Feb 2017 13:12:51 +0800         Tue, 14 Feb 2017 15:10:44 +0800         KubeletHasNoDiskPressure        kubelet has no disk pressure
  Ready                 True    Fri, 17 Feb 2017 13:12:51 +0800         Tue, 14 Feb 2017 15:10:44 +0800         KubeletReady                    kubelet is posting ready status
Addresses:              192.168.1.173,192.168.1.173,kubenode1
Capacity:
 alpha.kubernetes.io/nvidia-gpu:        0
 cpu:                                   1
 memory:                                3850324Ki
 pods:                                  110

You can now open the management UI from your Windows machine via any node's address and the NodePort: http://192.168.1.173:31958.


That completes the installation of Kubernetes v1.5.2. If you want to install more add-ons, see https://kubernetes.io/docs/admin/addons/

Done! Good luck~~~