Asked by: 小点点

Running mongo with a persistent volume throws an error - Kubernetes


I want to create a MongoDB stateful deployment that shares the host's local directory /mnt/nfs/data/myproject/production/permastore/mogno (a Network File System directory) with all MongoDB pods at /data/db. My Kubernetes cluster runs on three virtual machines.

When I do not use a persistent volume claim, mongo starts without any problems! However, when I start MongoDB with a persistent volume claim, I get the following error.

Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :

Does anyone know why mongo fails to start when /data/db is mounted with a persistent volume? How can I fix this?

Because the paths differ, the configuration files below will not work in your environment as-is, but they should give you the idea behind my setup.

Persistent Volume pv.yaml

apiVersion: v1
kind: PersistentVolume
metadata:
  name: phenex-mongo
  labels:
    type: local
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  hostPath:
    path: /mnt/nfs/data/phenex/production/permastore/mongo
  claimRef:
    name: phenex-mongo
  persistentVolumeReclaimPolicy: Retain
  storageClassName: manual
  volumeMode: Filesystem

Persistent Volume Claim pvc.yaml

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: phenex-mongo
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi

Deployment deployment.yaml

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
      - image: mongo:4.2.0-bionic
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: phenex-mongo
          mountPath: /data/db
      volumes:
      - name: phenex-mongo
        persistentVolumeClaim:
          claimName: phenex-mongo

Applying the configuration

$ kubectl apply -f pv.yaml
$ kubectl apply -f pvc.yaml
$ kubectl apply -f deployment.yaml

Checking cluster state

$ kubectl get deploy,po,pv,pvc --output=wide
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES               SELECTOR
deployment.extensions/mongo   1/1     1            1           38m   mongo        mongo:4.2.0-bionic   run=mongo

NAME                         READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
pod/mongo-59f669657d-fpkgv   1/1     Running   0          35m   10.44.0.2   web01   <none>           <none>

NAME                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE    VOLUMEMODE
persistentvolume/phenex-mongo   1Gi        RWO            Retain           Bound    phenex/phenex-mongo   manual                  124m   Filesystem

NAME                                 STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE    VOLUMEMODE
persistentvolumeclaim/phenex-mongo   Bound    phenex-mongo   1Gi        RWO            manual         122m   Filesystem

Running the mongo shell in the pod

$ kubectl exec -it mongo-59f669657d-fpkgv mongo
MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
2019-08-14T14:25:25.452+0000 E  QUERY    [js] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed: SocketException: Error connecting to 127.0.0.1:27017 :: caused by :: Connection refused :
connect@src/mongo/shell/mongo.js:341:17
@(connect):2:6
2019-08-14T14:25:25.453+0000 F  -        [main] exception: connect failed
2019-08-14T14:25:25.453+0000 E  -        [main] exiting with code 1
command terminated with exit code 1

Logs

$ kubectl logs mongo-59f669657d-fpkgv 
2019-08-14T14:00:32.287+0000 I  CONTROL  [main] Automatically disabling TLS 1.0, to force-enable TLS 1.0 specify --sslDisabledProtocols 'none'
2019-08-14T14:00:32.291+0000 I  CONTROL  [initandlisten] MongoDB starting : pid=1 port=27017 dbpath=/data/db 64-bit host=mongo-59f669657d-fpkgv
2019-08-14T14:00:32.291+0000 I  CONTROL  [initandlisten] db version v4.2.0
2019-08-14T14:00:32.291+0000 I  CONTROL  [initandlisten] git version: a4b751dcf51dd249c5865812b390cfd1c0129c30
2019-08-14T14:00:32.291+0000 I  CONTROL  [initandlisten] OpenSSL version: OpenSSL 1.1.1  11 Sep 2018
2019-08-14T14:00:32.291+0000 I  CONTROL  [initandlisten] allocator: tcmalloc
2019-08-14T14:00:32.291+0000 I  CONTROL  [initandlisten] modules: none
2019-08-14T14:00:32.291+0000 I  CONTROL  [initandlisten] build environment:
2019-08-14T14:00:32.291+0000 I  CONTROL  [initandlisten]     distmod: ubuntu1804
2019-08-14T14:00:32.291+0000 I  CONTROL  [initandlisten]     distarch: x86_64
2019-08-14T14:00:32.291+0000 I  CONTROL  [initandlisten]     target_arch: x86_64
2019-08-14T14:00:32.291+0000 I  CONTROL  [initandlisten] options: { net: { bindIp: "*" } }
root@mongo-59f669657d-fpkgv:/# ps aux
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
mongodb      1  0.0  2.7 208324 27920 ?        Dsl  14:00   0:00 mongod --bind_ip_all
root        67  0.0  0.2  18496  2060 pts/1    Ss   15:12   0:00 bash
root        81  0.0  0.1  34388  1536 pts/1    R+   15:13   0:00 ps aux

1 Answer

Anonymous user

I found the cause and the solution! In my setup I use NFS to share a directory over the network, so that all of my cluster nodes (minions) can access a common directory located under /mnt/nfs/data/.

The reason mongo could not start is an invalid persistent volume. Specifically, I was using a persistent volume of the hostPath type - this works for single-node testing, or if you manually create the same directory structure on all cluster nodes, e.g. /tmp/your_pod_data_dir/. However, trying to mount an NFS directory as a hostPath causes problems - as it did for me!
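For completeness, this is roughly what the "manually create the directory structure on all cluster nodes" option would involve - a sketch only, assuming passwordless SSH to the worker nodes listed in /etc/hosts below and the example path above:

# Hypothetical sketch: pre-creating the hostPath directory on every worker node,
# so a hostPath persistent volume would resolve to an existing directory everywhere.
for node in web01 compute01 compute02; do
  ssh "$node" 'sudo mkdir -p /tmp/your_pod_data_dir/'
done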

For directories shared via the Network File System, use the NFS persistent volume type (see the NFS example)! Below you will find my setup and two solutions.

/etc/hosts - my cluster nodes.

# Cluster nodes
192.168.123.130 master
192.168.123.131 web01
192.168.123.132 compute01
192.168.123.133 compute02

List of exported NFS directories.

[vagrant@master]$ showmount -e
Export list for master:
/nfs/data compute*,web*
/nfs/www  compute*,web*
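For reference, the /etc/exports entries on the master that would produce an export list like this might look as follows - the export options are an assumption (rw,sync,no_subtree_check are common defaults), only the paths and host patterns come from the showmount output above:

# Assumed /etc/exports on the master node
/nfs/data compute*(rw,sync,no_subtree_check) web*(rw,sync,no_subtree_check)
/nfs/www  compute*(rw,sync,no_subtree_check) web*(rw,sync,no_subtree_check)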

Solution 1: This solution shows a Deployment that mounts the NFS directory directly via a volume - see the volumes and volumeMounts sections.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
      - image: mongo:4.2.0-bionic
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: phenex-nfs
          mountPath: /data/db
      volumes:
      - name: phenex-nfs
        nfs:
          # IP of master node
          server: 192.168.123.130
          path: /nfs/data/phenex/production/permastore/mongo
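After applying this Deployment, one quick way to confirm that the NFS share is really mounted at /data/db is to check the filesystem from inside the pod (the pod name is illustrative - yours will differ):

kubectl apply -f deployment.yaml
kubectl exec -it <mongo-pod-name> -- df -h /data/db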

Solution 2: This solution shows a Deployment that mounts the NFS directory via a volume claim - see persistentVolumeClaim; the Persistent Volume and Persistent Volume Claim definitions follow below.

apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: mongo
  labels:
    run: mongo
spec:
  selector:
    matchLabels:
      run: mongo
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        run: mongo
    spec:
      containers:
      - image: mongo:4.2.0-bionic
        name: mongo
        ports:
        - containerPort: 27017
          name: mongo
        volumeMounts:
        - name: phenex-nfs
          mountPath: /data/db
      volumes:
      - name: phenex-nfs
        persistentVolumeClaim:
          claimName: phenex-nfs

Persistent Volume - NFS

apiVersion: v1
kind: PersistentVolume
metadata:
  name: phenex-nfs
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 1Gi
  nfs:
    # IP of master node
    server: 192.168.123.130
    path: /nfs/data
  claimRef:
    name: phenex-nfs
  persistentVolumeReclaimPolicy: Retain

Persistent Volume Claim

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: phenex-nfs
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
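To try solution 2, apply the three manifests above and then check that the claim binds, as shown in the cluster state output below. The filenames here are assumptions - use whatever names you saved the manifests under:

kubectl apply -f pv-nfs.yaml
kubectl apply -f pvc-nfs.yaml
kubectl apply -f deployment.yaml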
# Checking cluster state
[vagrant@master ~]$ kubectl get deploy,po,pv,pvc --output=wide
NAME                          READY   UP-TO-DATE   AVAILABLE   AGE   CONTAINERS   IMAGES               SELECTOR
deployment.extensions/mongo   1/1     1            1           18s   mongo        mongo:4.2.0-bionic   run=mongo

NAME                         READY   STATUS    RESTARTS   AGE   IP          NODE    NOMINATED NODE   READINESS GATES
pod/mongo-65b7d6fb9f-mcmvj   1/1     Running   0          18s   10.44.0.2   web01   <none>           <none>

NAME                            CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE     VOLUMEMODE
persistentvolume/phenex-nfs     1Gi        RWO            Retain           Bound    /phenex-nfs                             27s     Filesystem

NAME                                 STATUS   VOLUME         CAPACITY   ACCESS MODES   STORAGECLASS   AGE     VOLUMEMODE
persistentvolumeclaim/phenex-nfs     Bound    phenex-nfs     1Gi        RWO                           27s     Filesystem

# Attaching to pod and checking network bindings
[vagrant@master ~]$ kubectl exec -it mongo-65b7d6fb9f-mcmvj -- bash
root@mongo-65b7d6fb9f-mcmvj:/$ apt update
root@mongo-65b7d6fb9f-mcmvj:/$ apt install net-tools
root@mongo-65b7d6fb9f-mcmvj:/$ netstat -tunlp
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address           Foreign Address         State       PID/Program name    
tcp        0      0 0.0.0.0:27017           0.0.0.0:*               LISTEN      - 

# Running mongo client
root@mongo-65b7d6fb9f-mcmvj:/$ mongo
MongoDB shell version v4.2.0
connecting to: mongodb://127.0.0.1:27017/?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("45287a0e-7d41-4484-a267-5101bd20fad3") }
MongoDB server version: 4.2.0
Server has startup warnings: 
2019-08-14T18:03:29.703+0000 I  CONTROL  [initandlisten] 
2019-08-14T18:03:29.703+0000 I  CONTROL  [initandlisten] ** WARNING: Access control is not enabled for the database.
2019-08-14T18:03:29.703+0000 I  CONTROL  [initandlisten] **          Read and write access to data and configuration is unrestricted.
2019-08-14T18:03:29.703+0000 I  CONTROL  [initandlisten] 
2019-08-14T18:03:29.703+0000 I  CONTROL  [initandlisten] 
2019-08-14T18:03:29.703+0000 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/enabled is 'always'.
2019-08-14T18:03:29.703+0000 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-08-14T18:03:29.703+0000 I  CONTROL  [initandlisten] 
2019-08-14T18:03:29.703+0000 I  CONTROL  [initandlisten] ** WARNING: /sys/kernel/mm/transparent_hugepage/defrag is 'always'.
2019-08-14T18:03:29.703+0000 I  CONTROL  [initandlisten] **        We suggest setting it to 'never'
2019-08-14T18:03:29.703+0000 I  CONTROL  [initandlisten] 
---
Enable MongoDB's free cloud-based monitoring service, which will then receive and display
metrics about your deployment (disk utilization, CPU, operation statistics, etc).

The monitoring data will be available on a MongoDB website with a unique URL accessible to you
and anyone you share the URL with. MongoDB may use this information to make product
improvements and to suggest MongoDB products and deployment options to you.

To enable free monitoring, run the following command: db.enableFreeMonitoring()
To permanently disable this reminder, run the following command: db.disableFreeMonitoring()
---

>