
Using NFS with PVCs

As an exercise, we use NFS on a Kubernetes cluster to share directories across nodes, then deploy Ollama and Nginx on top of it.

Environment: openEuler 22.03 SP3, Kubernetes 1.23.6, Docker as the container runtime, Flannel for pod networking.

One master (node0, 192.168.195.40) and two workers (node1, 192.168.195.41; node2, 192.168.195.42).

All data is shared via NFS (every node needs the NFS packages installed). The data lives on the master node under /opt/k8s_store/, in /opt/k8s_store/ollama and /opt/k8s_store/nginx respectively.
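Before configuring NFS, the backing directories need to exist on node0; a small sketch, following the layout above:

```shell
# On node0: create the shared directories that will back the two PVs
mkdir -p /opt/k8s_store/ollama /opt/k8s_store/nginx
ls /opt/k8s_store
```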

NFS sharing

Install the NFS service (on all nodes):

sudo dnf install nfs-utils

Edit /etc/exports and add the share configuration (on the storage node, node0):

/opt/k8s_store  *(rw,sync,no_root_squash,no_subtree_check)

This makes /opt/k8s_store readable and writable by every host (* matches any client). Note that no_root_squash lets client root keep root privileges on the share, which is convenient in a lab but unsafe in production.

Restart and enable the NFS service:

sudo systemctl restart nfs-server
sudo systemctl enable nfs-server
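Before moving on, it's worth confirming the export is actually visible; a sketch (run exportfs on node0, showmount from any worker):

```shell
# On node0: list the directories currently exported, with their options
exportfs -v

# From node1 or node2: ask the server which exports it offers
showmount -e node0
```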

Ollama deployment

[root@node0 ollama]# ls
deployment.yaml pvc.yaml pv.yaml service.yaml

Configuration file pv.yaml:

[root@node0 ollama]# cat pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ollama-pv
spec:
  capacity:
    storage: 8Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany                 # allows mounting from multiple nodes
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /opt/k8s_store/ollama     # shared directory
    server: node0                   # NFS server address

Configuration file pvc.yaml:

[root@node0 ollama]# cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ollama-pvc
spec:
  accessModes:
    - ReadWriteMany                 # matches the PV's access mode
  resources:
    requests:
      storage: 8Gi

Configuration file deployment.yaml:

[root@node0 ollama]# cat deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ollama-deployment          # Deployment name
spec:
  replicas: 1                      # replica count (adjust as needed)
  selector:
    matchLabels:
      app: ollama                  # label used to select the Pods
  template:
    metadata:
      labels:
        app: ollama                # Pod label
    spec:
      containers:
      - name: ollama               # container name
        image: ollama/ollama:latest
        ports:
        - containerPort: 11434     # container port
        volumeMounts:
        - name: ollama-storage     # volume backed by the PVC
          mountPath: /root/.ollama # mount path inside the container
      restartPolicy: Always        # restart the container on failure
      volumes:
      - name: ollama-storage
        persistentVolumeClaim:
          claimName: ollama-pvc    # PVC to bind

Configuration file service.yaml:

[root@node0 ollama]# cat service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: ollama-service             # Service name
spec:
  selector:
    app: ollama                    # must match the Deployment's Pod label
  ports:
    - protocol: TCP
      port: 11434                  # Service port
      targetPort: 11434            # container port
  type: NodePort                   # expose the service outside the cluster

(The NodePort opened here is not actually used further in this example.) The service could be hooked up to tools such as Dify to expose the model through an external API; see:

Private Deployment of Ollama + DeepSeek + Dify: Build Your Own AI Assistant | Dify

Deploy with the following commands (apply service.yaml as well if you want the NodePort exposed):

kubectl apply -f pv.yaml
kubectl apply -f pvc.yaml
kubectl apply -f deployment.yaml

The result:

[root@node0 ollama]# kubectl get pods
NAME                                 READY   STATUS    RESTARTS      AGE
k8s-nginx-6d779d947c-mvj99           1/1     Running   2 (76m ago)   46d
k8s-nginx-6d779d947c-qdkvh           1/1     Running   2 (76m ago)   46d
k8s-nginx-6d779d947c-zh8zs           1/1     Running   2 (76m ago)   46d
ollama-deployment-777b9bccb7-sj7wd   1/1     Running   0             3m18s
[root@node0 ollama]#
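It is also worth confirming that the claim actually bound to the volume; a quick check (both objects should show STATUS=Bound):

```shell
# Verify PV/PVC binding; STATUS should read "Bound" for both
kubectl get pv ollama-pv
kubectl get pvc ollama-pvc
```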

Trying out Ollama

Enter the Pod and pull deepseek-r1:1.5b; the full session:

[root@node0 ollama]# kubectl exec -it ollama-deployment-777b9bccb7-sj7wd -- /bin/bash
root@ollama-deployment-777b9bccb7-sj7wd:/# ollama run deepseek-r1:1.5b
pulling manifest
pulling aabd4debf0c8... 100% ▕█████████████████████████████████████████████████████████▏ 1.1 GB
pulling 369ca498f347... 100% ▕█████████████████████████████████████████████████████████▏ 387 B
pulling 6e4c38e1172f... 100% ▕█████████████████████████████████████████████████████████▏ 1.1 KB
pulling f4d24e9138dd... 100% ▕█████████████████████████████████████████████████████████▏ 148 B
pulling a85fe2a2e58e... 100% ▕█████████████████████████████████████████████████████████▏ 487 B
verifying sha256 digest
writing manifest
success
>>> hi
<think>

</think>

Hello! How can I assist you today? 😊

>>> wo are you
<think>

</think>

Hello! It seems like your message is a bit unclear. Could you please provide more context or clarify what you're asking? I'm here to help! 😊

>>> /bye
root@ollama-deployment-777b9bccb7-sj7wd:/# exit
exit
[root@node0 ollama]#

It works as expected; the deployment succeeded.
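The model can also be queried over HTTP without entering the Pod, through the NodePort opened by service.yaml; a sketch (the port is looked up at runtime, and the JSON body follows Ollama's /api/generate API):

```shell
# Look up the NodePort assigned to ollama-service, then call the HTTP API
NODE_PORT=$(kubectl get svc ollama-service -o jsonpath='{.spec.ports[0].nodePort}')
curl -s "http://192.168.195.40:${NODE_PORT}/api/generate" \
  -d '{"model": "deepseek-r1:1.5b", "prompt": "hi", "stream": false}'
```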

Serving HTML with Nginx

Here a local copy of the Wireshark user's guide serves as the content.

WiresharkUser'sGuide.zip    # contains many HTML files

Unzip it into /opt/k8s_store/nginx/index:

unzip "WiresharkUser'sGuide.zip" -d /opt/k8s_store/nginx/index

Write an index.html in the shared directory /opt/k8s_store/nginx:

[root@node0 nginx]# nano index.html
[root@node0 nginx]# pwd
/opt/k8s_store/nginx

<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Index of Files</title>
    <style>
        body {
            font-family: Arial, sans-serif;
            margin: 20px;
        }
        h1 {
            color: #333;
        }
        ul {
            list-style-type: none;
            padding: 0;
        }
        li {
            margin: 8px 0;
        }
        a {
            text-decoration: none;
            color: #007bff;
        }
        a:hover {
            text-decoration: underline;
        }
    </style>
</head>
<body>
    <h1>Index of Files</h1>
    <ul>
        <li><a href="index/index.html">Wireshark User's Guide</a></li>
        <!-- Add more links here as needed -->
    </ul>
</body>
</html>

The configuration files used:

[root@node0 nginx]# ls
nginx-deployment.yaml nginx-service.yaml pvc.yaml pv.yaml

pv.yaml

[root@node0 nginx]# cat pv.yaml 
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nginx-pv
spec:
  capacity:
    storage: 1Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany                 # allows mounting from multiple nodes
  persistentVolumeReclaimPolicy: Retain
  nfs:
    path: /opt/k8s_store/nginx      # shared directory
    server: node0                   # NFS server address

pvc.yaml

[root@node0 nginx]# cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginx-pvc
spec:
  accessModes:
    - ReadWriteMany                 # matches the PV's access mode
  resources:
    requests:
      storage: 1Gi

nginx-deployment.yaml

[root@node0 nginx]# cat nginx-deployment.yaml 
apiVersion: apps/v1
kind: Deployment
metadata:
  name: k8s-nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: k8s-nginx
  template:
    metadata:
      labels:
        app: k8s-nginx
    spec:
      containers:
      - name: nginx
        image: nginx:latest
        ports:
        - containerPort: 80
        volumeMounts:
        - name: nginx-storage
          mountPath: /usr/share/nginx/html   # NFS share mounted over Nginx's default web root
      volumes:
      - name: nginx-storage
        persistentVolumeClaim:
          claimName: nginx-pvc     # PVC to bind

nginx-service.yaml

[root@node0 nginx]# cat nginx-service.yaml 
apiVersion: v1
kind: Service
metadata:
  name: nginx-service              # Service name
spec:
  selector:
    app: k8s-nginx                 # must match the Deployment's Pod label
  ports:
    - protocol: TCP
      port: 80                     # Service port
      targetPort: 80               # container port
  type: NodePort                   # expose the service via a NodePort

[root@node0 nginx]#

Apply each file with kubectl apply -f as before.
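Once applied, the page can be fetched through the NodePort from any machine that can reach the nodes; a sketch:

```shell
# Find the NodePort assigned to nginx-service and fetch the index page
NODE_PORT=$(kubectl get svc nginx-service -o jsonpath='{.spec.ports[0].nodePort}')
curl -s "http://192.168.195.40:${NODE_PORT}/" | head
```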

Troubleshooting

Changing the shared directory's permissions

Make it world-accessible (crude, but fine for a lab):

[root@node0 nginx]# sudo chmod -R 777 /opt/k8s_store

Updating a PV/PVC

If the original manifests were wrong and need redeploying, delete the old objects and re-create them:

# delete the old pvc and pv, then create new ones
[root@node0 ollama]# vi pv.yaml
[root@node0 ollama]# vi pvc.yaml
[root@node0 ollama]# kubectl delete pvc ollama-pvc
persistentvolumeclaim "ollama-pvc" deleted
[root@node0 ollama]# kubectl delete pv ollama-pv
persistentvolume "ollama-pv" deleted
[root@node0 ollama]# kubectl apply -f pv.yaml
persistentvolume/ollama-pv created
[root@node0 ollama]# kubectl apply -f pvc.yaml
persistentvolumeclaim/ollama-pvc created

PVC deletion hangs

The PVC shows:

[root@node0 ~]# kubectl get pvc ollama-pvc
NAME         STATUS        VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS   AGE
ollama-pvc   Terminating   ollama-pv   8Gi        RWO                           27m
[root@node0 ~]#

A PVC stuck in Terminating usually means Kubernetes began the deletion but could not finish it, typically because some resource still references the PVC or cleanup is blocked.
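To see which Pods still mount a PVC, one option (assuming jq is installed) is to filter the Pod specs:

```shell
# List Pods in the current namespace whose volumes reference ollama-pvc
kubectl get pods -o json | jq -r '
  .items[]
  | select(.spec.volumes[]?.persistentVolumeClaim.claimName == "ollama-pvc")
  | .metadata.name'
```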

Inspect the PVC:

kubectl describe pvc ollama-pvc

Here it shows:

[root@node0 ~]# kubectl describe pvc ollama-pvc
Name:          ollama-pvc
Namespace:     default
StorageClass:
Status:        Terminating (lasts 2m16s)
Volume:        ollama-pv
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      8Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       ollama-deployment-777b9bccb7-wq6lh
Events:        <none>
[root@node0 ~]#

The Finalizers line, [kubernetes.io/pvc-protection], shows that the PVC is under Kubernetes' protection mechanism: a PVC will not be removed while it is still in use.

Because the Pod ollama-deployment-777b9bccb7-wq6lh still uses the PVC, the deletion cannot complete.

The fix is to delete the Pod:

kubectl delete pod ollama-deployment-777b9bccb7-wq6lh

The PVC deletion then completes.

Alternatively, the PVC can be force-deleted, though this is not recommended (and was not used here):

kubectl delete pvc ollama-pvc --force --grace-period=0
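Another last-resort option sometimes used for stuck PVCs (also not used here, and risky for the same reason) is to clear the protection finalizer so the object can be removed:

```shell
# Strip the pvc-protection finalizer; this bypasses the in-use safety check
kubectl patch pvc ollama-pvc -p '{"metadata":{"finalizers":null}}'
```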

Pods restarting automatically

We just ran:

kubectl delete pod ollama-deployment-777b9bccb7-wq6lh

But because the Pod is managed by a Deployment, the Deployment immediately creates a new Pod to replace the deleted one.

As shown below:

[root@node0 ~]# kubectl get po
NAME                                 READY   STATUS    RESTARTS      AGE
k8s-nginx-6d779d947c-mvj99           1/1     Running   2 (59m ago)   46d
k8s-nginx-6d779d947c-qdkvh           1/1     Running   2 (59m ago)   46d
k8s-nginx-6d779d947c-zh8zs           1/1     Running   2 (59m ago)   46d
ollama-deployment-777b9bccb7-nb9ct   0/1     Pending   0             10s
[root@node0 ~]#

To stop Pods from being re-created, delete the Deployment itself:

kubectl delete deployment ollama-deployment

Run kubectl get pods again to confirm the ollama Pod is gone and no replacement appears.

Changing the replica count

kubectl scale deployment k8s-nginx --replicas=1

Transferring files with scp (from a Windows client):

PS C:\Users\dd\Desktop> scp '.\WiresharkUser''sGuide.zip' root@192.168.195.40:/opt/k8s_store/nginx

Authorized users only. All activities may be monitored and reported.
root@192.168.195.40's password:
WiresharkUser'sGuide.zip 100% 10MB 129.3MB/s 00:00
PS C:\Users\dd\Desktop>