This may indicate that the storage must be wiped and the GlusterFS nodes must be reset

By default heketi requires at least three nodes; you can skip this error by passing the --single-node flag when running gk-deploy.
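
With a single node, the topology.json passed to gk-deploy then only needs one node entry. A minimal sketch, assuming the node's manage hostname is 106, its storage IP is 192.168.0.106 and the raw device is /dev/sda (adjust all three to your environment):

cat > topology.json <<'EOF'
{
  "clusters": [
    {
      "nodes": [
        {
          "node": {
            "hostnames": {
              "manage": ["106"],
              "storage": ["192.168.0.106"]
            },
            "zone": 1
          },
          "devices": ["/dev/sda"]
        }
      ]
    }
  ]
}
EOF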

Before proceeding, delete the LVM data on the corresponding block device:

[root@200 deploy]# kubectl exec -it -n default glusterfs-drtp7 -- /bin/bash
[root@106 /]# lvm
lvm> lvs
lvm> pvs
  PV         VG                                  Fmt  Attr PSize  PFree 
  /dev/sda   vg_fae256e6b16ea3a62ef1ab1341fb23ed lvm2 a--  99.87g 99.87g
lvm> vgremove vg_fae256e6b16ea3a62ef1ab1341fb23ed
  Volume group "vg_fae256e6b16ea3a62ef1ab1341fb23ed" successfully removed
lvm> pvremove /dev/sda
  Labels on physical volume "/dev/sda" successfully wiped.
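
The same wipe can also be scripted non-interactively. A sketch assuming /dev/sda is the heketi device; the VG name is looked up rather than hard-coded, since it differs per node:

VG=$(pvs --noheadings -o vg_name /dev/sda | tr -d ' ')  # find the VG on the device, if any
[ -n "$VG" ] && vgremove -y "$VG"                       # remove the volume group
pvremove -y /dev/sda                                    # wipe the PV label
wipefs -a /dev/sda                                      # clear any remaining on-disk signatures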

Clean up the leftover data from the previous deployment attempt:

kubectl delete sa heketi-service-account
kubectl delete clusterrolebinding heketi-sa-view
kubectl delete secret heketi-config-secret
kubectl delete svc deploy-heketi
kubectl delete deploy deploy-heketi
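
gk-deploy also has an --abort mode that deletes the resources it created, which can stand in for the manual deletes above. It is also worth clearing the state directories the glusterfs pods keep on each node (the paths below follow the gluster-kubernetes setup guide):

./gk-deploy -g --abort
# on every GlusterFS node:
rm -rf /var/lib/heketi /var/lib/glusterd /etc/glusterfs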

Redeploy:

./gk-deploy -g --admin-key=key --user-key=key --single-node
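
Once the script finishes, a quick sanity check that heketi is up and can see the node (the final service is named heketi; the in-cluster URL below is an assumption):

kubectl get pods -o wide   # the glusterfs-* and heketi-* pods should be Running
export HEKETI_CLI_SERVER=http://$(kubectl get svc heketi -o jsonpath='{.spec.clusterIP}'):8080
heketi-cli --user admin --secret key topology info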

Error: WARNING: This metadata update is NOT backed up.

lvm    not found: device not cleared

Rebuild the image from the Dockerfile in the following repository:

https://github.com/hknarutofk/gluster-containers
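
A minimal sketch of rebuilding from that fork (the CentOS subdirectory and the image tag are assumptions; point the glusterfs DaemonSet at whichever tag you build):

git clone https://github.com/hknarutofk/gluster-containers
cd gluster-containers/CentOS
docker build -t gluster/gluster-centos:fixed .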

unknown filesystem type 'glusterfs'

Install glusterfs-fuse on the host server:

yum install -y glusterfs-fuse
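
A quick check that the FUSE mount helper is now available on the host:

ls -l /sbin/mount.glusterfs
glusterfs --version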
