Asked by: 小点点

How do I apply a Kubernetes network policy to restrict access to a namespace from other namespaces?


I am new to Kubernetes and I have a multi-tenant scenario.

1) I have 3 namespaces, as shown below:

 default,
 tenant1-namespace,
 tenant2-namespace

2) The default namespace has two database pods:

tenant1-db - listening on port 5432
tenant2-db - listening on port 5432

The namespace tenant1-ns has one application pod:

tenant1-app - listening on port 8085

The namespace tenant2-ns has one application pod:

tenant2-app - listening on port 8085

3) I have applied 3 network policies in the default namespace:

a) Deny access to both db pods from other namespaces

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-all
  namespace: default
spec:
  podSelector: {}
  policyTypes:
  - Ingress
  - Egress

b) Allow access to the tenant1-db pod only from tenant1-app in tenant1-ns

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-1
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant1-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1-development
    - podSelector:
        matchLabels:
          app: tenant1-app

c) Allow access to the tenant2-db pod only from tenant2-app in tenant2-ns

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant2-development
    - podSelector:
        matchLabels:
          app: tenant2-app
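
(For reference, one way to confirm that all three policies are actually present in the default namespace is, for example:)

kubectl get networkpolicy -n default
kubectl describe networkpolicy deny-all -n default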

I want to restrict access to tenant1-db to tenant1-app only, and tenant2-db to tenant2-app only. But it seems that tenant2-app can access tenant1-db, which should not happen.

Below is the db-config.js of tenant2-app:

module.exports = {
  HOST: "tenant1-db",
  USER: "postgres",
  PASSWORD: "postgres",
  DB: "tenant1db",
  dialect: "postgres",
  pool: {
    max: 5,
    min: 0,
    acquire: 30000,
    idle: 10000
  }
};

As you can see, I have pointed tenant2-app at tenant1-db, but I want access to tenant1-db to be restricted to tenant1-app only. What changes do I need to make in the network policies?
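
(For reference, one way to check whether tenant2-app can really reach tenant1-db over the network, assuming a shell and a TCP client such as nc are available in the tenant2-app image, is something like:)

kubectl exec -n tenant2-namespace deploy/tenant2-app-deployment -- nc -zv tenant1-db.default.svc.cluster.local 5432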

Update:

tenant1-app deployment

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 
kind: Deployment 
metadata: 
  name: tenant1-app-deployment
  namespace: tenant1-namespace 
spec: 
  selector: 
    matchLabels: 
      app: tenant1-app 
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: 
    metadata: 
      labels: 
        app: tenant1-app 
    spec: 
      containers: 
      - name: tenant1-app-container 
        image: tenant1-app-dock-img:v1 
        ports: 
        - containerPort: 8085 
--- 
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service  
kind: Service 
apiVersion: v1 
metadata: 
  name: tenant1-app-service
  namespace: tenant1-namespace  
spec: 
  selector: 
    app: tenant1-app 
  ports: 
  - protocol: TCP 
    port: 8085 
    targetPort: 8085 
    nodePort: 31005 
  type: LoadBalancer 

tenant2-app deployment

apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2 
kind: Deployment 
metadata: 
  name: tenant2-app-deployment
  namespace: tenant2-namespace 
spec: 
  selector: 
    matchLabels: 
      app: tenant2-app 
  replicas: 1 # tells deployment to run 1 pod matching the template
  template: 
    metadata: 
      labels: 
        app: tenant2-app 
    spec: 
      containers: 
      - name: tenant2-app-container 
        image: tenant2-app-dock-img:v1 
        ports: 
        - containerPort: 8085 
--- 
# https://kubernetes.io/docs/concepts/services-networking/service/#defining-a-service  
kind: Service 
apiVersion: v1 
metadata: 
  name: tenant2-app-service
  namespace: tenant2-namespace  
spec: 
  selector: 
    app: tenant2-app 
  ports: 
  - protocol: TCP 
    port: 8085 
    targetPort: 8085 
    nodePort: 31006 
  type: LoadBalancer 

Update 2:

db-pod1.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    k8s-app: tenant1-db
  name: tenant1-db
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: tenant1-db
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: tenant1-db
      name: tenant1-db
    spec:
      volumes:
      - name: tenant1-pv-storage
        persistentVolumeClaim:
          claimName: tenant1-pv-claim
      containers:
      - env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: tenant1db
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        image: postgres:11.5-alpine
        imagePullPolicy: IfNotPresent
        name: tenant1-db
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data/pgdata"
          name: tenant1-pv-storage
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}

db-pod2.yaml

apiVersion: apps/v1
kind: Deployment
metadata:
  annotations:
    deployment.kubernetes.io/revision: "1"
  creationTimestamp: null
  generation: 1
  labels:
    k8s-app: tenant2-db
  name: tenant2-db
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      k8s-app: tenant2-db
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        k8s-app: tenant2-db
      name: tenant2-db
    spec:
      volumes:
      - name: tenant2-pv-storage
        persistentVolumeClaim:
          claimName: tenant2-pv-claim
      containers:
      - env:
        - name: POSTGRES_USER
          value: postgres
        - name: POSTGRES_PASSWORD
          value: postgres
        - name: POSTGRES_DB
          value: tenant2db
        - name: PGDATA
          value: /var/lib/postgresql/data/pgdata
        image: postgres:11.5-alpine
        imagePullPolicy: IfNotPresent
        name: tenant2-db
        volumeMounts:
        - mountPath: "/var/lib/postgresql/data/pgdata"
          name: tenant2-pv-storage
        resources: {}
        securityContext:
          privileged: false
        terminationMessagePath: /dev/termination-log
        terminationMessagePolicy: File
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
status: {}

Update 3:

kubectl get svc -n default
NAME         TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)          AGE
kubernetes   ClusterIP      10.96.0.1        <none>           443/TCP          5d2h
nginx        ClusterIP      10.100.24.46     <none>           80/TCP           5d1h
tenant1-db   LoadBalancer   10.111.165.169   10.111.165.169   5432:30810/TCP   4d22h
tenant2-db   LoadBalancer   10.101.75.77     10.101.75.77     5432:30811/TCP   2d22h

kubectl get svc -n tenant1-namespace
NAME                  TYPE           CLUSTER-IP      EXTERNAL-IP                            PORT(S)          AGE
tenant1-app-service   LoadBalancer   10.111.200.49   10.111.200.49                          8085:31005/TCP   3d
tenant1-db            ExternalName   <none>          tenant1-db.default.svc.cluster.local   5432/TCP         2d23h

kubectl get svc -n tenant2-namespace
NAME                  TYPE           CLUSTER-IP     EXTERNAL-IP                            PORT(S)          AGE
tenant1-db            ExternalName   <none>         tenant1-db.default.svc.cluster.local   5432/TCP         2d23h
tenant2-app-service   LoadBalancer   10.99.139.18   10.99.139.18                           8085:31006/TCP   2d23h

1 Answer

Anonymous user

Referring to the documentation (https://kubernetes.io/docs/concepts/services-networking/network-policies/), let's take a look at your policy for tenant2 below.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant2-development
    - podSelector:
        matchLabels:
          app: tenant2-app

The network policy you defined above contains two elements in the from array. It therefore allows connections either from pods with the label app=tenant2-app in the local (default) namespace, OR from any pod in any namespace that carries the label name=tenant2-development.
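
(To see which namespaces and pods those selectors would actually match, the labels can be listed, for example:)

kubectl get namespaces --show-labels
kubectl get pods -n default --show-labels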

If you merge the two selectors into a single rule, as shown below, it should solve the problem.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-2
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant2-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant2-development
      podSelector:
        matchLabels:
          app: tenant2-app

The network policy above means: allow connections only from pods with the label app=tenant2-app that run in a namespace with the label name=tenant2-development.

Then add the label name=tenant2-development to the tenant2-ns namespace.
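
For example, assuming the namespace is actually named tenant2-namespace as in your deployment manifests (adjust the name if it really is tenant2-ns):

kubectl label namespace tenant2-namespace name=tenant2-development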

Do the same exercise for tenant1 with the following policy:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: deny-from-other-namespaces-except-specific-pod-1
  namespace: default
spec:
  podSelector:
    matchLabels:
      k8s-app: tenant1-db
  ingress:
  - from:
    - namespaceSelector:
        matchLabels:
          name: tenant1-development
      podSelector:
        matchLabels:
          app: tenant1-app

And add the label name=tenant1-development to the tenant1-ns namespace.
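
Again, assuming the namespace is actually named tenant1-namespace:

kubectl label namespace tenant1-namespace name=tenant1-development

Once both namespaces are labeled, connections from tenant2-app to tenant1-db should be blocked, while tenant1-app should still be able to reach it.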