Jenkins Pipeline: automatically build a Spring Boot project and deploy it to Kubernetes



1. Preparation

1.1 Install the Kubernetes cluster

On every Docker host, configure the registry mirror and the insecure Harbor registry:

vi /etc/docker/daemon.json
{
"registry-mirrors" : [
"https://k8spv7nq.mirror.aliyuncs.com"
],
"insecure-registries": ["czharbor.com"]
}
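
Docker only reads daemon.json at startup, so restart the daemon afterwards and confirm the settings were picked up (a minimal check, assuming a systemd-managed Docker):

systemctl restart docker
docker info | grep -A1 "Insecure Registries"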

1.2 Start Harbor

docker-compose -f /data/tools/harbor/docker-compose.yml stop
docker-compose -f /data/tools/harbor/docker-compose.yml start
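
Optionally verify that all Harbor containers came back up (same compose file as above):

docker-compose -f /data/tools/harbor/docker-compose.yml ps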

2. Deploy Jenkins in Kubernetes

2.1 Build the Jenkins image

https://hub.docker.com/r/jenkins/jenkins/tags

docker pull jenkins/jenkins:2.251-alpine
mkdir -p /data/jenkins && cd /data/jenkins
vi Dockerfile

Dockerfile:

FROM jenkins/jenkins:2.251-alpine
USER root
RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories \
&& apk update \
&& apk add -U tzdata \
&& cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
&& echo "Asia/Shanghai" > /etc/timezone \
&& apk add git \
# && apk add maven=3.3.9-r1 \
&& apk add docker

RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers
RUN mkdir -p /opt/maven/repository

Build and push the image:

docker build -t czharbor.com/devops/cz-jenkins:lts-alpine .
docker push czharbor.com/devops/cz-jenkins:lts-alpine


2.2 Deploy NFS shared storage

NFS server setup: https://www.jianshu.com/p/26003390626e
NFS dynamic provisioning reference: https://www.jianshu.com/p/092eb3aacefc

  • centos7-hub (192.168.145.130): the host that provides the exported disk

  • k8s-dn1 (): mounts the disk shared by centos7-hub

  • k8s-dn2 (): mounts the disk shared by centos7-hub

2.2.1 Setup on centos7-hub

1. Disable the firewall:

$ systemctl stop firewalld.service
$ systemctl disable firewalld.service

2. Install NFS:

$ yum -y install nfs-utils rpcbind

3. Create the shared directory and set its permissions:

$ mkdir -p /data/nfs/jenkins/
$ chmod -R 755 /data/nfs/jenkins/

4. Configure NFS. The default configuration file is /etc/exports; add the following entry to it:

$ vi /etc/exports
/data/nfs/jenkins/ *(rw,sync,no_root_squash)

5. What the options mean:
/data/nfs/jenkins/: the directory being shared
*: anyone may connect; this can also be a subnet, a single IP, or a domain name
rw: read and write access
sync: writes are committed to disk as well as to memory
no_root_squash: do not map root to the anonymous user; without this option (i.e. the default root_squash) a root client is mapped to the anonymous UID/GID, usually nobody
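
If you change /etc/exports later while the server is already running, the exports can be reloaded and inspected without a full service restart, for example:

$ exportfs -arv
$ exportfs -v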

The NFS service registers itself with rpcbind. If rpcbind is restarted, its registrations are lost and every service registered with it must be restarted as well.

$ vi /etc/netconfig    # comment out the udp6/tcp6 lines to disable IPv6 for RPC (the same edit is repeated on the client below)


Mind the startup order: start rpcbind first.

$ systemctl start rpcbind.service
$ systemctl enable rpcbind
$ systemctl status rpcbind
● rpcbind.service - RPC bind service
Loaded: loaded (/usr/lib/systemd/system/rpcbind.service; disabled; vendor preset: enabled)
Active: active (running) since Tue 2018-07-10 20:57:29 CST; 1min 54s ago
Process: 17696 ExecStart=/sbin/rpcbind -w $RPCBIND_ARGS (code=exited, status=0/SUCCESS)
Main PID: 17697 (rpcbind)
Tasks: 1
Memory: 1.1M
CGroup: /system.slice/rpcbind.service
└─17697 /sbin/rpcbind -w

Jul 10 20:57:29 master systemd[1]: Starting RPC bind service...
Jul 10 20:57:29 master systemd[1]: Started RPC bind service.

Seeing Started above confirms the service came up successfully.

Then start the NFS service:

$ systemctl start nfs.service
$ systemctl enable nfs
$ systemctl status nfs
● nfs-server.service - NFS server and services
Loaded: loaded (/usr/lib/systemd/system/nfs-server.service; enabled; vendor preset: disabled)
Drop-In: /run/systemd/generator/nfs-server.service.d
└─order-with-mounts.conf
Active: active (exited) since Tue 2018-07-10 21:35:37 CST; 14s ago
Main PID: 32067 (code=exited, status=0/SUCCESS)
CGroup: /system.slice/nfs-server.service

Jul 10 21:35:37 master systemd[1]: Starting NFS server and services...
Jul 10 21:35:37 master systemd[1]: Started NFS server and services.

Again, seeing Started means the NFS server started successfully.

You can also confirm it with the following command:

$ rpcinfo -p|grep nfs
100003 3 tcp 2049 nfs
100003 4 tcp 2049 nfs
100227 3 tcp 2049 nfs_acl
100003 3 udp 2049 nfs
100003 4 udp 2049 nfs
100227 3 udp 2049 nfs_acl

Check the export permissions of the shared directory:

$ cat /var/lib/nfs/etab
/data/nfs/jenkins *(rw,sync,wdelay,hide,nocrossmnt,secure,no_root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,anonuid=65534,anongid=65534,sec=sys,rw,secure,no_root_squash,no_all_squash)

The NFS server is now installed. Next, install the NFS client on node k8s-dn1 to verify it.

2.2.2 Setup on k8s-dn1

Installing the NFS client also requires the firewall to be disabled first:

$ systemctl stop firewalld.service
$ systemctl disable firewalld.service

Then install NFS:

$ yum -y install nfs-utils rpcbind

Disable IPv6:

$ vi /etc/netconfig

After installation, start rpcbind first and then NFS, the same as on the server:

$ systemctl start rpcbind.service 
$ systemctl enable rpcbind.service
$ systemctl status rpcbind.service

$ systemctl start nfs-server
$ systemctl enable nfs-server
$ systemctl status nfs-server

Mount the data directory. Once the client is running, mount the NFS share from the client to test it.
First check whether the server exports the shared directory:

$ showmount -e 192.168.145.130
Export list for 192.168.145.130:
/data/nfs/jenkins *

Then create a mount point on the client:

$ mkdir /data

Mount the NFS share onto that directory:

$ mount -t nfs 192.168.145.130:/data/nfs/jenkins /data

After the mount succeeds, create a file in the mounted directory on the client and check whether it also appears in the shared directory on the NFS server:

$ touch /data/test.txt

Then check on the NFS server:

$ ls -ls /data/nfs/jenkins
total 4
4 -rw-r--r--. 1 root root 4 Jul 10 21:50 test.txt

If test.txt shows up as above, the NFS mount works.
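
Once verified, you may want to clean up the test artifacts, since the client's /data directory will be reused later (optional housekeeping):

$ umount /data                        # on the client, release the test mount
$ rm -f /data/nfs/jenkins/test.txt    # on the NFS server, remove the test file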

2.3 Deploy Jenkins to Kubernetes

jenkins-pv-pvc.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-home-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: 192.168.145.130
    path: "/data/nfs/jenkins"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-home-pvc
  namespace: devops
spec:
  accessModes: ["ReadWriteMany"]
  resources:
    requests:
      storage: 5Gi

jenkins-service-account.yaml

apiVersion: v1
kind: ServiceAccount
metadata:
  name: jenkins
  namespace: devops
  labels:
    app: jenkins
---
kind: Role
apiVersion: rbac.authorization.k8s.io/v1beta1
metadata:
  name: jenkins
  labels:
    app: jenkins
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/exec"]
    verbs: ["create","delete","get","list","patch","update","watch"]
  - apiGroups: [""]
    resources: ["pods/log"]
    verbs: ["get","list","watch"]
  - apiGroups: [""]
    resources: ["secrets"]
    verbs: ["get"]

---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: RoleBinding
metadata:
  name: jenkins
  labels:
    app: jenkins
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: jenkins
subjects:
  - kind: ServiceAccount
    name: jenkins
    namespace: devops

jenkins-statefulset.yaml

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jenkins
  namespace: devops
  labels:
    app: jenkins
spec:
  serviceName: jenkins
  replicas: 1
  selector:
    matchLabels:
      app: jenkins
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      name: jenkins
      labels:
        app: jenkins
    spec:
      terminationGracePeriodSeconds: 10
      serviceAccountName: jenkins
      hostNetwork: true
      containers:
        - name: jenkins
          image: czharbor.com/devops/cz-jenkins:lts-alpine
          imagePullPolicy: Always
          ports:
            - containerPort: 8080
            - containerPort: 50000
          resources:
            limits:
              cpu: 1
              memory: 1Gi
            requests:
              cpu: 0.5
              memory: 500Mi
          env:
            - name: LIMITS_MEMORY
              valueFrom:
                resourceFieldRef:
                  resource: limits.memory
                  divisor: 1Mi
            - name: JAVA_OPTS
              value: -Xmx$(LIMITS_MEMORY)m -XshowSettings:vm -Dhudson.slaves.NodeProvisioner.initialDelay=0 -Dhudson.slaves.NodeProvisioner.MARGIN=50 -Dhudson.slaves.NodeProvisioner.MARGIN0=0.85 -Duser.timezone=GMT+08
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
            - name: docker-sock
              mountPath: /var/run/docker.sock
          livenessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
          readinessProbe:
            httpGet:
              path: /login
              port: 8080
            initialDelaySeconds: 60
            timeoutSeconds: 5
            failureThreshold: 12
      securityContext:
        fsGroup: 1000
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: jenkins-home-pvc
        - name: docker-sock
          hostPath:
            path: /var/run/docker.sock

Issue 1: the Jenkins image already has the Docker CLI installed. Do not mount the host's /usr/bin/docker into the container as well, or docker will fail like this:

bash-4.4# docker ps
bash: /usr/bin/docker: No such file or directory


Issue 2: when a docker container launched from the Jenkinsfile mounts the parent container's volumes, it also inherits the parent container's network. Setting hostNetwork: true on the Jenkins pod makes it use the host's network stack and iptables, which is what allows kubectl deployments to run correctly from the pipeline.
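
A quick way to confirm the pod really uses the host network is to compare its IP with the node IP, for example:

kubectl get pod -n devops -o wide -l app=jenkins
# with hostNetwork: true the pod IP shown here equals the node's IP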

jenkins-service.yaml

apiVersion: v1
kind: Service
metadata:
  labels:
    app: jenkins
  name: jenkins
  namespace: devops
  annotations:
    prometheus.io/scrape: 'true'
spec:
  type: NodePort
  ports:
    - name: jenkins-web
      port: 8080
      targetPort: 8080
      nodePort: 31442
    - name: jenkins-agent
      port: 50000
      targetPort: 50000
      nodePort: 30005
  selector:
    app: jenkins

jenkins-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: jenkins
  labels:
    name: jenkins
  namespace: devops
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/proxy-body-size: 50m
    ingress.kubernetes.io/proxy-request-buffering: "off"
spec:
  rules:
    - host: cz-jenkins.dev
      http:
        paths:
          - path: /
            backend:
              serviceName: jenkins
              servicePort: 8080

Run the deployment:

[root@k8s-dn1 ~]# kubectl create ns devops

kubectl apply -f /data/jenkins/jenkins-pv-pvc.yaml
kubectl apply -f /data/jenkins/jenkins-service-account.yaml
kubectl apply -f /data/jenkins/jenkins-statefulset.yaml
kubectl apply -f /data/jenkins/jenkins-service.yaml
kubectl apply -f /data/jenkins/jenkins-ingress.yaml

kubectl delete -f /data/jenkins/jenkins-ingress.yaml
kubectl delete -f /data/jenkins/jenkins-service.yaml
kubectl delete -f /data/jenkins/jenkins-statefulset.yaml
kubectl delete -f /data/jenkins/jenkins-service-account.yaml
kubectl delete -f /data/jenkins/jenkins-pv-pvc.yaml

kubectl get pv -n devops
kubectl get sa -n devops
kubectl get StatefulSet -n devops
kubectl describe StatefulSet jenkins -n devops
kubectl get Service -n devops
kubectl describe Service jenkins -n devops
kubectl get Ingress -n devops
kubectl describe Ingress jenkins -n devops

Access

Add this entry to the hosts file:

192.168.145.151     cz-jenkins.dev

Open Jenkins:

http://cz-jenkins.dev/

admin/123456

$ cat /data/nfs/jenkins/secrets/initialAdminPassword
e8422ab2e6104faa8cf3a5033f1b0dd2
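
The same password can be read through kubectl instead of the NFS export; jenkins-0 is the first (and only) pod of the StatefulSet:

$ kubectl -n devops exec jenkins-0 -- cat /var/jenkins_home/secrets/initialAdminPassword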


2.4 Jenkins initialization

http://focus-1.wiki/devops/jenkins/jenkins-centos7-setup/


2.5 Jenkins configuration

Manage Jenkins -> Manage Plugins -> Available: install the plugins you need. Plugins that fail to download can be fetched from the official site and uploaded manually.

Install the following plugins:

  • Git

  • Git Parameter

  • Pipeline

  • Kubernetes

  • Kubernetes Continuous Deploy

  • Gitee

3. Build the Jenkins slave image

Reference: https://github.com/jenkinsci/docker-jnlp-slave


1. Prepare the slave image environment

What the Jenkins slave image needs to provide:
Code checkout: git (install the git command)
Unit tests: skipped here; add them if your team runs them
Build: Maven (install the maven package)
Image build: a Dockerfile plus the docker CLI (using the host's Docker daemon via the mounted socket)
Image push: the docker CLI (same mounted daemon)
Acting as an agent after startup: the official slave.jar (download from http://10.40.6.213:30006/jnlpJars/slave.jar)
Launching slave.jar: the jenkins-slave startup script (taken from the reference above)
Maven configuration: settings.xml (configured here with the Aliyun mirror)

Files to prepare:
Dockerfile
jenkins-slave startup script
settings.xml
slave.jar

Create a working directory and download slave.jar from the Jenkins master:

[root@centos7cz jenkin]# pwd
/data/jenkins/
[root@centos7cz jenkins]# mkdir jenkins-slave && cd jenkins-slave
[root@centos7cz jenkins-slave]# wget http://192.168.145.151:31442/jnlpJars/slave.jar

2. The jenkins-slave startup script

[root@centos7cz jenkins-slave]# vi jenkins-slave
# cat jenkins-slave
#!/usr/bin/env sh

if [ $# -eq 1 ]; then

    # if `docker run` only has one arguments, we assume user is running alternate command like `bash` to inspect the image
    exec "$@"

else

    # if -tunnel is not provided try env vars
    case "$@" in
        *"-tunnel "*) ;;
        *)
            if [ ! -z "$JENKINS_TUNNEL" ]; then
                TUNNEL="-tunnel $JENKINS_TUNNEL"
            fi ;;
    esac

    # if -workDir is not provided try env vars
    if [ ! -z "$JENKINS_AGENT_WORKDIR" ]; then
        case "$@" in
            *"-workDir"*) echo "Warning: Work directory is defined twice in command-line arguments and the environment variable" ;;
            *)
                WORKDIR="-workDir $JENKINS_AGENT_WORKDIR" ;;
        esac
    fi

    if [ -n "$JENKINS_URL" ]; then
        URL="-url $JENKINS_URL"
    fi

    if [ -n "$JENKINS_NAME" ]; then
        JENKINS_AGENT_NAME="$JENKINS_NAME"
    fi

    if [ -z "$JNLP_PROTOCOL_OPTS" ]; then
        echo "Warning: JnlpProtocol3 is disabled by default, use JNLP_PROTOCOL_OPTS to alter the behavior"
        JNLP_PROTOCOL_OPTS="-Dorg.jenkinsci.remoting.engine.JnlpProtocol3.disabled=true"
    fi

    # If both required options are defined, do not pass the parameters
    OPT_JENKINS_SECRET=""
    if [ -n "$JENKINS_SECRET" ]; then
        case "$@" in
            *"${JENKINS_SECRET}"*) echo "Warning: SECRET is defined twice in command-line arguments and the environment variable" ;;
            *)
                OPT_JENKINS_SECRET="${JENKINS_SECRET}" ;;
        esac
    fi

    OPT_JENKINS_AGENT_NAME=""
    if [ -n "$JENKINS_AGENT_NAME" ]; then
        case "$@" in
            *"${JENKINS_AGENT_NAME}"*) echo "Warning: AGENT_NAME is defined twice in command-line arguments and the environment variable" ;;
            *)
                OPT_JENKINS_AGENT_NAME="${JENKINS_AGENT_NAME}" ;;
        esac
    fi

    # TODO: Handle the case when the command-line and Environment variable contain different values.
    # It is fine it blows up for now since it should lead to an error anyway.

    exec java $JAVA_OPTS $JNLP_PROTOCOL_OPTS -cp /usr/share/jenkins/slave.jar hudson.remoting.jnlp.Main -headless $TUNNEL $URL $WORKDIR $OPT_JENKINS_SECRET $OPT_JENKINS_AGENT_NAME "$@"
fi
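
Because of the `$# -eq 1` branch at the top of the script, running the image with a single command just execs that command instead of starting the agent, which is handy for inspecting the image once it is built in step 5:

docker run --rm -it czharbor.com/devops/jenkins-slave:2.249 sh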

3. Maven settings.xml

This is the Maven settings.xml, configured to use the Aliyun mirror.

[root@centos7cz jenkins-slave]# vi settings.xml
# cat settings.xml
<?xml version="1.0" encoding="UTF-8"?>

<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0 http://maven.apache.org/xsd/settings-1.0.0.xsd">
  <pluginGroups>
  </pluginGroups>

  <proxies>
  </proxies>

  <servers>
  </servers>

  <mirrors>
    <mirror>
      <id>central</id>
      <mirrorOf>central</mirrorOf>
      <name>aliyun maven</name>
      <url>https://maven.aliyun.com/repository/public</url>
    </mirror>
  </mirrors>

  <profiles>
  </profiles>

</settings>
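
To verify that Maven actually picks up the Aliyun mirror from this file, you can render the effective settings against it (assumes mvn is available locally):

mvn -s settings.xml help:effective-settings | grep -A3 '<mirror>'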

4. Dockerfile

[root@centos7cz jenkins-slave]# vi Dockerfile
FROM alpine:latest
USER root

RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories \
&& apk update
RUN apk add -U tzdata \
&& cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
&& echo "Asia/Shanghai" > /etc/timezone
RUN wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub \
&& wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.31-r0/glibc-2.31-r0.apk \
&& apk add glibc-2.31-r0.apk
RUN apk add openjdk8 \
&& apk add maven \
&& apk add protoc \
&& apk add grpc \
&& apk add git \
&& apk add docker \
&& apk add sshpass

RUN echo "jenkins ALL=NOPASSWD: ALL" >> /etc/sudoers

COPY slave.jar /usr/share/jenkins/slave.jar
COPY jenkins-slave /usr/bin/jenkins-slave
COPY settings.xml /etc/maven/settings.xml
RUN chmod +x /usr/bin/jenkins-slave

ENTRYPOINT ["jenkins-slave"]

5. Build the image and push it to the private registry

[root@centos7cz jenkins-slave]# docker build -t czharbor.com/devops/jenkins-slave:2.249 .

[root@centos7cz jenkins-slave]# docker push czharbor.com/devops/jenkins-slave:2.249
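
An optional smoke test of the pushed image: override the entrypoint and confirm the build tools are all present:

docker run --rm --entrypoint sh czharbor.com/devops/jenkins-slave:2.249 -c "java -version && mvn -v && git --version && docker --version"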


4. Build the Maven image

4.1 Dockerfile

FROM alpine:latest
USER root

ENV LANG C.UTF-8
ENV TZ Asia/Shanghai

RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories \
&& apk update

RUN apk add -U tzdata \
&& cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
&& echo "Asia/Shanghai" > /etc/timezone

RUN wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub \
&& wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.31-r0/glibc-2.31-r0.apk \
&& apk add glibc-2.31-r0.apk

RUN apk add openjdk8 \
&& apk add maven \
&& apk add protoc \
&& apk add grpc \
&& apk add docker

glibc must be installed, otherwise protoc cannot run:

https://github.com/sgerrand/alpine-pkg-glibc

Otherwise you will hit a problem like this one:

https://github.com/xolstice/protobuf-maven-plugin/issues/23

Running JDK 8 on an Alpine image, the JDK still failed to execute. The documentation explains why:
Java is linked against the GNU C library (glibc),
while Alpine is based on musl libc.

So Alpine needs the glibc compatibility package; the official wiki:
https://wiki.alpinelinux.org/wiki/Running_glibc_programs

[ERROR] Failed to execute goal org.xolstice.maven.plugins:protobuf-maven-plugin:0.6.1:compile (default) on project nacos-grpc-iface: An error occurred while invoking protoc: Error while executing process.: Cannot run program "/var/jenkins_home/workspace/nacos-grpc-k8s@2/nacos-grpc-iface/target/protoc-plugins/protoc-3.12.2-linux-x86_64.exe": error=2, No such file or directory -> [Help 1]

4.2 Build and run

docker build -t czharbor.com/devops/cz-maven:3.6.3-alpine .
# --net=host makes the container use the host's network stack and iptables, which matters for docker-in-docker
docker run -it --net=host czharbor.com/devops/cz-maven:3.6.3-alpine
java -version
mvn -v

docker push czharbor.com/devops/cz-maven:3.6.3-alpine

5. Build the kubectl image

Dockerfile:

FROM alpine:latest
USER root

ENV LANG C.UTF-8
ENV TZ Asia/Shanghai

RUN sed -i 's/dl-cdn.alpinelinux.org/mirrors.aliyun.com/g' /etc/apk/repositories \
&& apk update

RUN apk add -U tzdata \
&& cp /usr/share/zoneinfo/Asia/Shanghai /etc/localtime \
&& echo "Asia/Shanghai" > /etc/timezone

RUN wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://alpine-pkgs.sgerrand.com/sgerrand.rsa.pub \
&& wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.31-r0/glibc-2.31-r0.apk \
&& apk add glibc-2.31-r0.apk

# ADD kubectl /usr/local/bin/
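
The ADD line is commented out because kubectl is mounted from the host at run time (see below). If you would rather bake kubectl into the image, download the static binary next to the Dockerfile first and re-enable that line; the URL below is the upstream release path and the version is an assumption, adjust it to match your cluster:

curl -LO https://storage.googleapis.com/kubernetes-release/release/v1.18.6/bin/linux/amd64/kubectl
chmod +x kubectl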
docker build -t czharbor.com/devops/kubectl:1.18.6-alpine .
docker push czharbor.com/devops/kubectl:1.18.6-alpine

# --net=host makes the container use the host's network stack and iptables, which matters for docker-in-docker
docker run -it --net=host \
-v /root/.kube:/root/.kube \
-v /usr/local/bin/kubectl:/usr/local/bin/kubectl \
czharbor.com/devops/kubectl:1.18.6-alpine

docker ps -a | grep kubectl

6. Prepare the Spring Boot project


Key files:

  • the Dockerfile of nacos-grpc-srv itself

  • k8s-deployment.tpl, the deployment.yaml template

  • the Jenkins pipeline definition, Jenkinsfile

k8s-deployment.tpl

apiVersion: apps/v1
kind: Deployment
metadata:
  name: {APP_NAME}-deployment
  labels:
    app: {APP_NAME}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: {APP_NAME}
  template:
    metadata:
      labels:
        app: {APP_NAME}
    spec:
      containers:
        - name: {APP_NAME}
          image: {IMAGE_URL}:{IMAGE_TAG}
          imagePullPolicy: Always
          ports:
            - containerPort: 8010
          env:
            - name: SPRING_PROFILES_ACTIVE
              value: {SPRING_PROFILE}
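
The {APP_NAME}, {IMAGE_URL}, {IMAGE_TAG} and {SPRING_PROFILE} placeholders are filled in by sed in the Jenkinsfile below. You can preview the substitution locally before wiring it into the pipeline; the values here are only examples:

sed -e 's#{IMAGE_URL}#czharbor.com/devops/nacos-grpc-srv#g;s#{IMAGE_TAG}#1.0#g;s#{APP_NAME}#nacos-grpc-srv#g;s#{SPRING_PROFILE}#k8s#g' k8s-deployment.tpl \
  | kubectl apply --dry-run=client -f -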

Jenkinsfile

// Configure the jenkins-harbor-creds and jenkins-kubeconfig entries in Jenkins Credentials first
pipeline {
    agent any
    parameters {
        string(name: 'K8S_NAMESPACE', defaultValue: 'default', description: 'Target Kubernetes namespace')
        string(name: 'HARBOR_HOST', defaultValue: 'czharbor.com/devops', description: 'Harbor registry address')
        // string(name: 'DOCKER_IMAGE', defaultValue: 'czharbor.com/nacos-grpc-srv', description: 'Docker image name')
        // string(name: 'APP_NAME', defaultValue: 'nacos-grpc-srv', description: 'App label in Kubernetes')
        // choice(name: 'APP_NAME_LIST', choices: ['nacos-grpc-srv', 'nacos-grpc-cli'], description: 'app_name')
    }
    environment {
        HARBOR_CREDS = credentials('jenkins-harbor-creds')
        K8S_CONFIG = credentials('jenkins-kubeconfig')
        GIT_TAG = sh(returnStdout: true, script: 'git describe --tags').trim()
    }
    stages {
        stage('Maven Build') {
            agent {
                docker {
                    image 'czharbor.com/devops/cz-maven:3.6.3-alpine'
                    args '-v $HOME/.m2:/root/.m2 -v /var/run/docker.sock:/var/run/docker.sock'
                }
            }
            steps {
                sh 'mvn clean package -Dfile.encoding=UTF-8 -DskipTests=true'
                sh "cd nacos-grpc-srv && mvn docker:build && cd .."
                sh "cd nacos-grpc-srv/target/docker && docker build --build-arg JAR_FILE='nacos-grpc-srv-0.0.1-SNAPSHOT.jar' -t ${params.HARBOR_HOST}/nacos-grpc-srv:1.0 ."
                sh "docker login -u ${HARBOR_CREDS_USR} -p ${HARBOR_CREDS_PSW} ${params.HARBOR_HOST}"
                sh "docker push ${params.HARBOR_HOST}/nacos-grpc-srv:1.0"
                sh "docker rmi -f ${params.HARBOR_HOST}/nacos-grpc-srv:1.0"
            }
        }
        // stage('Docker Build') {
        //     when {
        //         allOf {
        //             expression { env.GIT_TAG != null }
        //         }
        //     }
        //     agent any
        //     steps {
        //         script {
        //             def APP_NAMES = ['nacos-grpc-srv']
        //             for (int i = 0; i < APP_NAMES.size(); ++i) {
        //                 sh "cd ${APP_NAMES[i]}/target/docker"
        //                 sh "docker login -u ${HARBOR_CREDS_USR} -p ${HARBOR_CREDS_PSW} ${params.HARBOR_HOST}"
        //                 sh "docker build --build-arg JAR_FILE=`ls *.jar |cut -d '/' -f1` -t ${params.HARBOR_HOST}/${APP_NAMES[i]}:1.0 ."
        //                 sh "docker push ${params.HARBOR_HOST}/${APP_NAMES[i]}:1.0"
        //                 sh "docker rmi -f ${params.HARBOR_HOST}/${APP_NAMES[i]}:1.0"
        //                 sh "cd .."
        //                 sh "cd .."
        //                 sh "cd .."
        //             }
        //         }
        //     }
        // }
        stage('Deploy') {
            when {
                allOf {
                    expression { env.GIT_TAG != null }
                }
            }
            agent {
                docker {
                    image 'czharbor.com/devops/kubectl:1.18.6-alpine'
                    args '--net=host -v /root/.kube:/root/.kube -v /usr/local/bin/kubectl:/usr/local/bin/kubectl'
                }
            }
            steps {
                script {
                    def APP_NAMES = ['nacos-grpc-srv']
                    for (int i = 0; i < APP_NAMES.size(); ++i) {
                        // sh "mkdir -p ~/.kube"
                        // sh "echo ${K8S_CONFIG} | base64 -d > ~/.kube/config"
                        sh "sed -e 's#{IMAGE_URL}#${params.HARBOR_HOST}/${APP_NAMES[i]}#g;s#{IMAGE_TAG}#1.0#g;s#{APP_NAME}#${APP_NAMES[i]}#g;s#{SPRING_PROFILE}#k8s#g' k8s-deployment.tpl > k8s-deployment.yml"
                        sh "kubectl apply -f k8s-deployment.yml --namespace=${params.K8S_NAMESPACE}"
                    }
                }
            }
        }
    }
}

7. Deploy the project to Kubernetes with Jenkins

Gitee credentials


kubeconfig configuration

[root@k8s-dn1 ~]# base64 ~/.kube/config > kube-config.txt

Then, as in the previous step, add the encoded file as a Jenkins credential: in the credential settings choose the type "Secret text", set the ID to "jenkins-kubeconfig" (it must match the ID used in the Jenkinsfile), and paste the base64-encoded config content as the Secret.


Start the build


Check in Kubernetes


kubectl logs -f pod/nacos-grpc-srv-deployment-66fbb6c749-vmm8s


8. Expose Nacos

[root@k8s-dn1 nacos]# vi nacos-ingress.yaml

apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: nacos
  labels:
    name: nacos
  namespace: default
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "true"
    kubernetes.io/tls-acme: "true"
    nginx.ingress.kubernetes.io/proxy-body-size: 50m
    nginx.ingress.kubernetes.io/proxy-request-buffering: "off"
    ingress.kubernetes.io/ssl-redirect: "true"
    ingress.kubernetes.io/proxy-body-size: 50m
    ingress.kubernetes.io/proxy-request-buffering: "off"
spec:
  rules:
    - host: nacos.dev
      http:
        paths:
          - path: /
            backend:
              serviceName: nacos-headless
              servicePort: 8848
kubectl apply -f /data/nacos-grpc/nacos-ingress.yaml

kubectl describe ingress/nacos
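
As with Jenkins, resolve the Ingress host locally and then hit the console; /nacos is Nacos' default context path, adjust it if yours differs:

echo "192.168.145.151     nacos.dev" >> /etc/hosts
curl -I http://nacos.dev/nacos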

9. Troubleshooting

Unable to connect to the server: x509: certificate signed by unknown authority

This error usually means the kubeconfig used by kubectl inside the container does not contain the cluster's CA certificate; mounting the host's ~/.kube/config into the kubectl container, as the Deploy stage above does, is one way to avoid it.