Networking and Security with NSX-T in vSphere with Tanzu

Published by Jimmy Mankowitz

In this post we will explore how networking and security with NSX-T in vSphere with Tanzu work and how they can be utilized by Virtual Infrastructure admins as well as DevOps teams and developers.

A follow-up post on how developers themselves can utilize Antrea, the network CNI that ships as the default with Tanzu, is also coming. Stay tuned for that.

Introduction to vSphere with Tanzu and NSX-T:


With Tanzu in vSphere you can use either the native vSphere networking stack, based on vSphere Distributed Switches (vDS) with an external load balancer, or VMware NSX-T to provide connectivity and services for the Tanzu Kubernetes control plane VMs, container workloads and services (load balancers, Ingress etc.).

In my previous post I described the different components that constitute Tanzu on vSphere, so if you’re not familiar with them, go there and read about it first.

NSX-T in vSphere with Tanzu provides the following features:

– IPAM
– Segmented Networking
– Firewall Isolation
– Load Balancing
– Visibility

This post will go into more detail on firewall isolation and visibility.

Securing Tanzu Kubernetes Networks with NSX-T Firewall Isolation:

North-South traffic:

With NSX-T, traffic going in and out of the different Namespaces is controlled on the T0/T1 Edge Gateways. Administrators create security policies through the NSX-T UI or API to restrict traffic in and out of the Supervisor Clusters.

East-West traffic:

Inter-Namespace: Security isolation between different Supervisor Namespaces is enabled by default. NSX-T creates a default rule for every Namespace that denies network traffic between Namespaces.

Intra-Namespace: By default, traffic within each Namespace is allowed.

To create rules allowing traffic into Namespaces, Kubernetes has something called Network Policies. See the Kubernetes.io documentation for more information: https://kubernetes.io/docs/concepts/services-networking/network-policies/

So whenever you want to create firewall rules for an application or a Namespace, you create Network Policies with Ingress and Egress rules. These define the source, destination and type of service to open up, and they select the applications by matching the labels set on the application pods via the policy's pod selector. NSX-T with Tanzu then realizes these policies as Distributed Firewall (DFW) rules in NSX-T.
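
As a minimal sketch of what such a policy looks like (all names, labels and the port here are hypothetical placeholders, not part of the Guestbook example below):

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: example-allow-web
spec:
  podSelector:
    matchLabels:
      app: my-app          # hypothetical label on the pods being protected
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          role: client     # hypothetical label on the pods allowed to connect
    ports:
    - protocol: TCP
      port: 8080           # hypothetical application port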

Let’s look at how that can be performed:

As an example we will use the Redis PHP Guestbook from Kubernetes.io, see the link here.

The Guestbook demonstrates how to build a multi-tier web application. The tutorial shows how to set up a guestbook web service on an external IP with a load balancer, and how to run a Redis cluster with a single leader and multiple replicas/followers.

The following diagram shows an overview of the application architecture along with the Network Policies in place with NSX-T:

So let’s start building the application: 

Download the PHP Guestbook with GIT:

git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples
cd kubernetes-engine-samples/guestbook

Download the Docker images described in the yaml files to a private image registry; I use the Harbor registry by VMware that can be enabled in vSphere with Tanzu. That way, when the different application components are created later on, the images are pulled from the private registry.

Prerequisites:

Enable the Embedded Harbor Registry on the Supervisor Cluster 

Configure a Docker Client with the Embedded Harbor Registry Certificate (see the sketch after this list) 

Install the vSphere Docker Credential Helper and Connect to the Registry
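
For the certificate step, one common approach on a Linux Docker client (a sketch, assuming you have downloaded the registry CA certificate as ca.crt, for example via the link on the Namespace summary page in the vSphere Client) is to place it in Docker's per-registry certificate directory:

sudo mkdir -p /etc/docker/certs.d/10.30.150.4/
sudo cp ca.crt /etc/docker/certs.d/10.30.150.4/ca.crt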

The IP of the internal Harbor registry is 10.30.150.4.
The Namespace I am working in, which is also the Harbor project the images will be uploaded into, is called: greenbag.
Log in and connect with the vSphere Docker Credential Helper:

docker-credential-vsphere login 10.30.150.4
Username: a_jimmy@int.rtsvl.se
Password:
INFO[0017] Fetched username and password
INFO[0017] Fetched auth token
INFO[0017] Saved auth token

Download all the images:
docker pull docker.io/redis:6.0.5
docker pull gcr.io/google_samples/gb-redis-follower:v2
docker pull gcr.io/google_samples/gb-frontend:v5
Tag Images to Embedded Harbor Registry:
docker tag docker.io/redis:6.0.5 10.30.150.4/greenbag/redis:6.0.5
docker tag gcr.io/google_samples/gb-redis-follower:v2 10.30.150.4/greenbag/gb-redis-follower:v2
docker tag gcr.io/google_samples/gb-frontend:v5 10.30.150.4/greenbag/gb-frontend:v5
Check the images in docker:
docker images
REPOSITORY                                  TAG                IMAGE ID       CREATED         SIZE

10.30.150.4/greenbag/gb-frontend            v5                 3efc9307f034   6 weeks ago     981MB
gcr.io/google_samples/gb-frontend           v5                 3efc9307f034   6 weeks ago     981MB
10.30.150.4/greenbag/gb-redis-follower      v2                 6148f7d504f2   3 months ago    104MB
gcr.io/google_samples/gb-redis-follower     v2                 6148f7d504f2   3 months ago    104MB
10.30.150.4/greenbag/redis                  6.0.5              235592615444   15 months ago   104MB
redis                                       6.0.5              235592615444   15 months ago   104MB
Push Images to Embedded Harbor Registry:
docker push 10.30.150.4/greenbag/redis:6.0.5
docker push 10.30.150.4/greenbag/gb-redis-follower:v2
docker push 10.30.150.4/greenbag/gb-frontend:v5

This is how it looks in Harbor after the images are pushed into the registry.

With all images uploaded into Harbor, it is time to go through the yaml files for the PHP Guestbook and change the image paths to point to the private registry.
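
One way to change the image paths in bulk (a sketch, assuming you are in the guestbook directory cloned earlier and that the exact image strings match the upstream yaml files; you can of course also edit each file by hand as shown below):

sed -i 's|gcr.io/google_samples|10.30.150.4/greenbag|g' *.yaml
sed -i 's|docker.io/redis:6.0.5|10.30.150.4/greenbag/redis:6.0.5|g' *.yaml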

Setting up the Redis leader:

Edit the yaml with the correct path to the Harbor registry:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-leader
  labels:
    app: redis
    role: leader
    tier: backend
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: leader
        tier: backend
    spec:
      containers:
      - name: leader
        image: "10.30.150.4/greenbag/redis:6.0.5"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379
Run the following command to deploy the Redis leader:
kubectl apply -f redis-leader-deployment.yaml
Verify that the Redis leader Pod is running:
kubectl get pods
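
To narrow the output to just the leader pod, you can filter on the labels from the Deployment above:

kubectl get pods -l app=redis,role=leader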

Create the Redis leader service:

Start the Redis leader Service by running:
kubectl apply -f redis-leader-service.yaml
Verify that the Service is created:
kubectl get service
NAME                                    TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)                      AGE
redis-leader                            ClusterIP      10.30.152.190   <none>         6379/TCP                     4h7m

Setting up the Redis followers, again changing the image path to the Harbor registry:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis-follower
  labels:
    app: redis
    role: follower
    tier: backend
spec:
  replicas: 2
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
        role: follower
        tier: backend
    spec:
      containers:
      - name: follower
        image: 10.30.150.4/greenbag/gb-redis-follower:v2
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 6379

To create the Redis follower Deployment, run:
kubectl apply -f redis-follower-deployment.yaml

Verify that the two Redis follower replicas are running by querying the list of Pods:
kubectl get pods

Create the Redis follower service:

kubectl apply -f redis-follower-service.yaml

Verify that the Service is created:
kubectl get service
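
As a quick sanity check you can also confirm that the Service has picked up the two follower pods as endpoints (assuming the Service is named redis-follower as in the upstream yaml):

kubectl get endpoints redis-follower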

Setting up the guestbook web frontend, again changing the image path to the Harbor registry:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: frontend
spec:
  replicas: 3
  selector:
    matchLabels:
        app: guestbook
        tier: frontend
  template:
    metadata:
      labels:
        app: guestbook
        tier: frontend
    spec:
      containers:
      - name: php-redis
        image: 10.30.150.4/greenbag/gb-frontend:v5
        env:
        - name: GET_HOSTS_FROM
          value: "dns"
        resources:
          requests:
            cpu: 100m
            memory: 100Mi
        ports:
        - containerPort: 80

To create the guestbook web frontend Deployment, run:
kubectl apply -f frontend-deployment.yaml
kubectl get pods -l app=guestbook -l tier=frontend

NAME                        READY   STATUS    RESTARTS   AGE
frontend-6cbb49f8df-g5jjg   1/1     Running   0          4h3m
frontend-6cbb49f8df-k86bf   1/1     Running   0          4h3m
frontend-6cbb49f8df-rmggc   1/1     Running   0          4h3m

Expose the frontend on a LoadBalancer Service with an external IP address:

apiVersion: v1
kind: Service
metadata:
  name: frontend
  labels:
    app: guestbook
    tier: frontend
spec:
  # type LoadBalancer creates an external load-balanced IP for the frontend
  # service (provided by NSX-T in vSphere with Tanzu)
  type: LoadBalancer
  ports:
    # the port that this service should serve on
  - port: 80
  selector:
    app: guestbook
    tier: frontend

To create the Service, run the following command:
kubectl apply -f frontend-service.yaml

Visiting the guestbook website:

kubectl get service frontend

NAME       TYPE           CLUSTER-IP      EXTERNAL-IP    PORT(S)        AGE
frontend   LoadBalancer   10.30.152.165   10.30.150.10   80:31681/TCP   4h2m

Copy the IP address from the EXTERNAL-IP column, and load the page in your browser:
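
If you prefer the command line, a quick check against the external IP (the address will differ in your environment) could look like this:

curl -I http://10.30.150.10/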

Create Network Policy rules:

Once all the functionality for the Guestbook is in place, it’s time to build the Network Policies to isolate and control which traffic is allowed and denied.
This is done with Ingress and Egress rules for the 3 different services: frontend, follower and leader.

Frontend Network Policy:

Starting with the frontend: this is where connections from the outside come into the application.
It is exposed on port 80 through the LoadBalancer Service.
So we create a Network Policy with an Ingress rule allowing access from anywhere to TCP port 80, applied to all pods labeled app=guestbook and tier=frontend.

To get the labels on all pods, run the following (with -o wide to also show pod IPs and nodes):
kubectl get pods -o wide --show-labels

frontend-6cbb49f8df-g5jjg         1/1     Running   0          11m   10.30.160.101   esxi01   <none>           <none>            app=guestbook,pod-template-hash=6cbb49f8df,tier=frontend
frontend-6cbb49f8df-k86bf         1/1     Running   0          11m   10.30.160.102   esxi01   <none>           <none>            app=guestbook,pod-template-hash=6cbb49f8df,tier=frontend
frontend-6cbb49f8df-rmggc         1/1     Running   0          11m   10.30.160.100   esxi01   <none>           <none>            app=guestbook,pod-template-hash=6cbb49f8df,tier=frontend
redis-follower-7bd547b745-297jw   1/1     Running   0          15m   10.30.160.99    esxi01   <none>           <none>            app=redis,pod-template-hash=7bd547b745,role=follower,tier=backend
redis-follower-7bd547b745-ngk6s   1/1     Running   0          15m   10.30.160.98    esxi01   <none>           <none>            app=redis,pod-template-hash=7bd547b745,role=follower,tier=backend
redis-leader-7759fd599f-bfwdk     1/1     Running   0          21m   10.30.160.27    esxi01   <none>           <none>            app=redis,pod-template-hash=7759fd599f,role=leader,tier=backend

I created a new yaml called: redis-guestbook-networkpolicy-nsxt.yaml

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: guestbook-network-policy
spec:
  podSelector:
    matchLabels:
      app: guestbook
      tier: frontend
  policyTypes:
  - Ingress
  - Egress
  ingress:
    - ports:
      - protocol: TCP
        port: 80

Create a Network Policy for the Redis Leaders:

The frontend needs to be able to access the leader on port 6379 for reading and writing data, so we create ingress and egress rules for this port.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-leader-network-policy
spec:
  podSelector:
    matchLabels:
      app: redis
      role: leader
      tier: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
    - ports:
      - protocol: TCP
        port: 6379
  egress:
    - ports:
      - protocol: TCP
        port: 6379

Network Policy for the Redis Followers:

The frontend also needs to be able to access the followers on port 6379 (for reading data), so we create ingress and egress rules for this port here as well.

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: redis-follower-network-policy
spec:
  podSelector:
    matchLabels:
      app: redis
      role: follower
      tier: backend
  policyTypes:
  - Ingress
  - Egress
  ingress:
    - ports:
      - protocol: TCP
        port: 6379
  egress:
    - ports:
      - protocol: TCP
        port: 6379
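
With the three policies defined (here assuming they are all saved in the single file redis-guestbook-networkpolicy-nsxt.yaml mentioned earlier, separated by --- lines), apply them and verify that Kubernetes has registered them:

kubectl apply -f redis-guestbook-networkpolicy-nsxt.yaml
kubectl get networkpolicy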

Verifying in the NSX-T UI:

We see that rules have been created corresponding to the ingress and egress rules we defined earlier.

Digging into the greenbag-redis-follower-network-policy-whitelist Policy Section, we see that the rule TCP.6379-ingress-allow allows anyone to talk to the follower pods with IPs 10.30.160.99 and .98 on port 6379. The group definition is based on the tags tier=backend, role=follower, app=redis.
And so forth for the rest of the rules in the tiered application.

We also have Policy Sections for each part that drop everything else that is not explicitly allowed:

Lastly, with NSX-T we can run a Traceflow between one of the frontend pods and a follower pod and see that traffic is allowed on port 6379:

And denied on a different port:

We have now secured ingress and egress traffic into and between vSphere Pods in a Namespace using Network Policies.
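
As a complement to Traceflow, you can also verify connectivity from inside the cluster; a hedged example, assuming the follower Service is named redis-follower and using the redis-cli that ships in the redis image of the leader pod:

kubectl exec deploy/redis-leader -- redis-cli -h redis-follower ping

A reply of PONG indicates that the leader egress and follower ingress rules on TCP 6379 let the traffic through.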

Looking further, developers can also utilize Antrea for securing traffic flows between applications within Tanzu Kubernetes Grid clusters that run on top of the Supervisor layer. But more on that in a different post.

Happy securing your Modern Applications!

