
1. Admission Controller with Kyverno

Time to Complete

Planned time: ~40 minutes

Admission controllers are a critical component of Kubernetes security. They intercept requests to the Kubernetes API server before objects are persisted, allowing you to validate, mutate, or reject resources based on custom policies. In this lab, you’ll work with Kyverno, a Kubernetes-native policy engine that makes it easy to enforce security and governance policies without writing code.


What You’ll Learn

  • How Kyverno integrates with the Kubernetes Admission Controller mechanism
  • How to install and configure Kyverno using Helm
  • How to create validation policies that block insecure configurations
  • How to enforce required labels on workloads
  • How to use mutation policies to automatically add defaults to resources
  • The difference between enforce and audit validation modes
  • How to block the use of the :latest image tag
  • How to require resource requests and limits
Trainer Instructions

Tested versions:

  • Kyverno Helm chart: 3.7.0
  • Kyverno: v1.17.0
  • Kubernetes: 1.32.x
  • nginx image: 1.27.3

Ensure participants have cluster-admin permissions. The lab works on any Kubernetes cluster with Linux nodes.

No external integrations are required.


Info

We are in the AKS cluster: kx c<x>-s1

1. Install Kyverno

Before we can enforce policies, we need to install Kyverno into our cluster. Kyverno runs as a set of controllers that watch for policy resources and intercept admission requests.

Info

Kyverno integrates with the Kubernetes Dynamic Admission Control mechanism. It registers as a ValidatingWebhookConfiguration and MutatingWebhookConfiguration, allowing it to intercept and process API requests before resources are stored in etcd.

Install with Helm

Add the Kyverno Helm repository and install Kyverno into a dedicated namespace named kyverno in helm chart version 3.7.0:

Hint

Check the Kyverno installation docs for the example Helm commands.

Solution

kubectl create namespace kyverno

helm repo add kyverno https://kyverno.github.io/kyverno/
helm repo update

# if you want to know all versions
helm search repo kyverno

helm install kyverno kyverno/kyverno -n kyverno --version 3.7.0 --wait

Verify Installation

Check that the Kyverno pods are running:

Solution

kubectl get pods -n kyverno

You should see several pods running, including the admission controller, background controller, cleanup controller, and reports controller.
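To see how Kyverno hooks into admission control, you can also list the webhook configurations it registered with the API server (the exact resource names can vary between Kyverno versions, so the grep filter below is an assumption):

```shell
# Kyverno registers validating and mutating webhooks during installation.
kubectl get validatingwebhookconfigurations | grep -i kyverno
kubectl get mutatingwebhookconfigurations | grep -i kyverno
```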

Try to answer

  • Which Kubernetes component does Kyverno integrate with?
  • Is Kyverno enforcing or detecting issues at runtime?
Answers

  • Kyverno integrates with the Kubernetes Admission Controller mechanism via webhooks.
  • Kyverno enforces policies at admission time, before resources are stored in etcd. This is fundamentally different from runtime security tools like Falco, which detect issues only after workloads are running.

Create Test Namespace

Create a dedicated namespace for the exercises:

kubectl create namespace kyverno-test

All pods in this lab will be created in this namespace.


2. Block Privileged Containers

Privileged containers have full access to the host system and are a major security risk. They should be forbidden in production clusters. In this exercise, you’ll create a Kyverno policy that blocks any pod requesting privileged access.

Info

A privileged container can access all devices on the host, modify kernel parameters, and escape container isolation. This is why blocking privileged containers is often the first policy organizations implement.

Task

  1. Create a Kyverno ClusterPolicy that blocks privileged containers
  2. Try to create a pod that requests privileged access
  3. Observe what happens
Hint

Browse the Kyverno Policy Library for ready-to-use policies. You can search for “privileged” to find a matching policy. The policy should deny pods where any container sets securityContext.privileged: true.

Note: Since Kyverno 1.13, validationFailureAction at the spec level is deprecated. Use failureAction under each validate rule instead.

Solution

Create the policy (~/solution/kubernetes/Kyverno/policy-disallow-privileged.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-privileged-containers
  annotations:
    policies.kyverno.io/title: Disallow Privileged Containers
    policies.kyverno.io/category: Pod Security Standards (Baseline)
    policies.kyverno.io/severity: high
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/description: >-
      Privileged containers have full access to the host system and should
      be forbidden in production clusters. This policy blocks any pod that
      requests privileged access.
spec:
  background: true
  rules:
  - name: privileged-containers
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      failureAction: Enforce
      message: "Privileged containers are not allowed."
      deny:
        conditions:
          any:
          - key: "{{ request.object.spec.containers[?securityContext.privileged == `true`] | length(@) }}"
            operator: GreaterThan
            value: 0

Apply the policy:

kubectl apply -f ~/solution/kubernetes/Kyverno/policy-disallow-privileged.yaml
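As an aside, the Kyverno Policy Library version of this policy uses a validate pattern with conditional anchors instead of a deny condition. A sketch of the equivalent rule body (same match block as above; the library policy additionally covers initContainers and ephemeralContainers):

```yaml
# Alternative: pattern-based rule. The =() anchor means "if this field is
# present, it must match"; pods that omit securityContext or privileged pass.
validate:
  failureAction: Enforce
  message: "Privileged containers are not allowed."
  pattern:
    spec:
      containers:
      - =(securityContext):
          =(privileged): "false"
```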

Test the Policy

Now try to create a privileged pod (~/solution/kubernetes/Kyverno/pod-privileged.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: privileged-pod
spec:
  containers:
  - name: nginx
    image: nginx:1.27.3
    securityContext:
      privileged: true
Solution

Apply the pod:

kubectl apply -f ~/solution/kubernetes/Kyverno/pod-privileged.yaml -n kyverno-test

Expected result: The pod creation is **denied** by Kyverno with a message like:
Error from server: error when creating "pod-privileged.yaml": admission webhook "validate.kyverno.svc-fail" denied the request:
resource Pod/kyverno-test/privileged-pod was blocked due to the following policies

disallow-privileged-containers:
  privileged-containers: Privileged containers are not allowed.

Questions

  • At which point is the pod rejected?
  • Why is this control better enforced at admission time rather than at runtime?
Answers
  • The pod is rejected before it is created in etcd. The API server calls the Kyverno webhook, which validates the request and returns a denial.
  • Enforcing at admission time prevents the insecure configuration from ever existing. Runtime detection can only alert after the fact, and the privileged container may have already caused damage.

3. Enforce Required Labels

Many organizations require workloads to have ownership or classification labels for cost allocation, incident response, and compliance. In this exercise, you’ll create a policy that requires all pods to have an owner label.

Task

  1. Apply a Kyverno policy that requires the label owner on all pods
  2. Try to create a pod without this label
  3. Fix the pod so it passes admission
Hint

Use the pattern owner: "?*" to match any non-empty value. Browse the Kyverno Policy Library for inspiration and ready-to-use policies.

Remember to use failureAction: Enforce under the validate rule (not validationFailureAction at the spec level, which is deprecated).

Solution

Create the policy (~/solution/kubernetes/Kyverno/policy-require-owner.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-owner-label
spec:
  rules:
  - name: check-owner-label
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      failureAction: Enforce
      message: "Label 'owner' is required on all pods."
      pattern:
        metadata:
          labels:
            owner: "?*"

Apply the policy:

kubectl apply -f ~/solution/kubernetes/Kyverno/policy-require-owner.yaml

Test Without Label

Try to create a pod without the owner label (~/solution/kubernetes/Kyverno/pod-no-label.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: pod-no-label
spec:
  containers:
  - name: nginx
    image: nginx:1.27.3
Solution

kubectl apply -f ~/solution/kubernetes/Kyverno/pod-no-label.yaml -n kyverno-test

Expected result: The pod is denied with message: `Label 'owner' is required on all pods.`

Fix the Pod

Add the required label and try again (~/solution/kubernetes/Kyverno/pod-with-label.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-label
  labels:
    owner: team-a
spec:
  containers:
  - name: nginx
    image: nginx:1.27.3
Solution

kubectl apply -f ~/solution/kubernetes/Kyverno/pod-with-label.yaml -n kyverno-test

The pod should now be created successfully.

Questions

  • What feedback does Kubernetes provide when the label is missing?
  • How does this help platform teams?
Answers
  • Kubernetes returns a clear error message from Kyverno explaining exactly which label is missing and on which resource.
  • Platform teams can ensure all workloads are properly labeled for cost tracking, ownership, and incident response without manually reviewing every deployment.

4. Mutate Resources Automatically

Admission controllers can not only validate but also mutate resources. This is useful for adding sensible defaults, injecting sidecars, or ensuring consistent configurations without requiring developers to remember every setting.

Task

  1. Apply a Kyverno policy that automatically adds a label environment=dev to all pods
  2. Create a pod without this label
  3. Inspect the pod after creation to verify the label was added
Solution

Create the policy (~/exercise/kubernetes/Kyverno/policy-add-env.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: add-environment-label
spec:
  rules:
  - name: add-env
    match:
      any:
      - resources:
          kinds:
          - Pod
    mutate:
      patchStrategicMerge:
        metadata:
          labels:
            environment: dev

Apply the policy:

kubectl apply -f ~/exercise/kubernetes/Kyverno/policy-add-env.yaml
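Note that patchStrategicMerge as written above will overwrite an environment label that the pod already carries. If you only want to add the label when it is missing, Kyverno's +() add-if-not-present anchor can be used. A sketch of the alternative mutate block:

```yaml
# +() adds the label only if it does not already exist on the pod;
# an existing environment label is left untouched.
mutate:
  patchStrategicMerge:
    metadata:
      labels:
        +(environment): dev
```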

Test the Mutation

Create a basic pod (~/exercise/kubernetes/Kyverno/pod-basic.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: pod-basic
  labels:
    owner: team-a
spec:
  containers:
  - name: nginx
    image: nginx:1.27.3
Solution

kubectl apply -f ~/exercise/kubernetes/Kyverno/pod-basic.yaml -n kyverno-test

Inspect the pod to verify the label was added:

kubectl get pod pod-basic -n kyverno-test -o jsonpath='{.metadata.labels}' | jq .

Expected result:

  • Pod is created **successfully** (not rejected)
  • Label `environment: dev` is added automatically by Kyverno

Questions

  • Was the pod rejected or modified?
  • Why is mutation sometimes preferable to validation?
Answers
  • The pod was modified, not rejected. Kyverno added the missing label automatically.
  • Mutation is preferable when you want to enforce defaults without blocking developers. It reduces friction while ensuring consistency. Validation is better when you want to explicitly reject non-compliant resources.

5. Audit Mode vs Enforce Mode

Optional Task

This section explores the difference between Audit and Enforce modes.

Kyverno policies can run in two modes:

  • Enforce: Blocks non-compliant resources from being created
  • Audit: Allows resources but logs violations in policy reports

This is useful when rolling out new policies gradually.

Task

  1. Modify one of your existing policies to use failureAction: Audit
  2. Create a non-compliant resource
  3. Check the policy reports
Solution

Update the privileged container policy's validate rule:

    validate:
      failureAction: Audit

Create a privileged pod - it will now be allowed but reported:

kubectl apply -f ~/solution/kubernetes/Kyverno/pod-privileged.yaml -n kyverno-test

View the policy reports:

kubectl get policyreport -A
kubectl get clusterpolicyreport

Tip

Use Audit mode when first deploying policies to understand their impact before enforcing them. This helps avoid breaking existing workloads.


6. Bonus: Disallow Latest Tag

Bonus Exercise

This section is optional and provides an additional challenge.

The :latest tag is mutable and can lead to unexpected behavior when images are updated. A pod deployed today with nginx:latest may run a different version tomorrow. This makes deployments non-reproducible and can introduce security vulnerabilities or breaking changes.

Task

  1. Create a Kyverno policy that blocks pods using the :latest image tag
  2. Test with a pod using nginx:latest
  3. Fix the pod by using a specific tag like nginx:1.27.3
Hint

Use the pattern image: "!*:latest" to match images that do NOT end with :latest. Check the Kyverno Policy Library for a ready-to-use “Disallow Latest Tag” policy.

Solution

Create the policy (~/exercise/kubernetes/Kyverno/policy-disallow-latest-tag.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: disallow-latest-tag
  annotations:
    policies.kyverno.io/title: Disallow Latest Tag
    policies.kyverno.io/category: Best Practices
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/description: >-
      The ':latest' tag is mutable and can lead to unexpected behavior when
      images are updated. This policy requires that container images specify
      a tag and that the tag is not 'latest'.
spec:
  background: true
  rules:
  - name: validate-image-tag
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      failureAction: Enforce
      message: "Using ':latest' tag is not allowed. Please specify a specific image tag."
      pattern:
        spec:
          containers:
          - image: "!*:latest"

Apply the policy:

kubectl apply -f ~/exercise/kubernetes/Kyverno/policy-disallow-latest-tag.yaml

Test With Latest Tag

Try to create a pod using the :latest tag (~/exercise/kubernetes/Kyverno/pod-latest-tag.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: pod-latest-tag
  labels:
    owner: team-a
spec:
  containers:
  - name: nginx
    image: nginx:latest
Solution

kubectl apply -f ~/exercise/kubernetes/Kyverno/pod-latest-tag.yaml -n kyverno-test

Expected result: The pod is denied with message: `Using ':latest' tag is not allowed.`

Fix With Pinned Tag

Use a specific image tag (~/exercise/kubernetes/Kyverno/pod-pinned-tag.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: pod-pinned-tag
  labels:
    owner: team-a
spec:
  containers:
  - name: nginx
    image: nginx:1.27.3
Solution

kubectl apply -f ~/exercise/kubernetes/Kyverno/pod-pinned-tag.yaml -n kyverno-test

The pod should now be created successfully.

Tip

Consider also blocking images without any tag, as they default to :latest. You can extend the policy pattern to require an explicit tag.
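The Kyverno Policy Library's Disallow Latest Tag policy does exactly this with a second rule. A sketch of that additional rule (same structure as the rule above):

```yaml
# Requires every image reference to contain an explicit tag, since an
# untagged image implicitly resolves to :latest at pull time.
- name: require-image-tag
  match:
    any:
    - resources:
        kinds:
        - Pod
  validate:
    failureAction: Enforce
    message: "An image tag is required."
    pattern:
      spec:
        containers:
        - image: "*:*"
```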


7. Bonus: Require Resource Limits

Bonus Exercise

This section is optional and provides an additional challenge.

Resource requests and limits are essential for cluster stability. Without them, a single misbehaving pod can consume all available resources and cause node-wide outages. Requests ensure fair scheduling, while limits prevent resource exhaustion.

Task

  1. Create a Kyverno policy that requires all containers to have memory requests, memory limits, and CPU requests
  2. Test with a pod that has no resource specifications
  3. Fix the pod by adding appropriate resource constraints
Hint

Use a pattern with resources.requests.memory: "?*" to require at least one character (non-empty value). Search the Kyverno Policy Library for “require resources” for reference policies.

Solution

Create the policy (~/exercise/kubernetes/Kyverno/policy-require-requests-limits.yaml):

apiVersion: kyverno.io/v1
kind: ClusterPolicy
metadata:
  name: require-requests-limits
  annotations:
    policies.kyverno.io/title: Require Resource Requests and Limits
    policies.kyverno.io/category: Best Practices
    policies.kyverno.io/severity: medium
    policies.kyverno.io/subject: Pod
    policies.kyverno.io/description: >-
      Resource requests and limits are required to ensure fair scheduling
      and prevent resource exhaustion. This policy requires all containers
      to specify memory requests and limits, and CPU requests.
spec:
  background: true
  rules:
  - name: validate-resources
    match:
      any:
      - resources:
          kinds:
          - Pod
    validate:
      failureAction: Enforce
      message: "CPU and memory resource requests and limits are required."
      pattern:
        spec:
          containers:
          - name: "*"
            resources:
              requests:
                memory: "?*"
                cpu: "?*"
              limits:
                memory: "?*"

Apply the policy:

kubectl apply -f ~/exercise/kubernetes/Kyverno/policy-require-requests-limits.yaml

Test Without Resources

Try to create a pod without resource specifications (~/exercise/kubernetes/Kyverno/pod-no-resources.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: pod-no-resources
  labels:
    owner: team-a
spec:
  containers:
  - name: nginx
    image: nginx:1.27.3
Solution

kubectl apply -f ~/exercise/kubernetes/Kyverno/pod-no-resources.yaml -n kyverno-test

Expected result: The pod is denied with message: `CPU and memory resource requests and limits are required.`

Fix With Resources

Add resource specifications (~/exercise/kubernetes/Kyverno/pod-with-resources.yaml):

apiVersion: v1
kind: Pod
metadata:
  name: pod-with-resources
  labels:
    owner: team-a
spec:
  containers:
  - name: nginx
    image: nginx:1.27.3
    resources:
      requests:
        memory: "64Mi"
        cpu: "100m"
      limits:
        memory: "128Mi"
Solution

kubectl apply -f ~/exercise/kubernetes/Kyverno/pod-with-resources.yaml -n kyverno-test

The pod should now be created successfully. Verify the resources:

kubectl get pod pod-with-resources -n kyverno-test -o jsonpath='{.spec.containers[0].resources}' | jq .

Questions

  • Why do we require memory limits but not CPU limits?
  • What happens when a container exceeds its memory limit vs its CPU limit?
Answers
  • Memory limits are critical because exceeding them causes the container to be OOM-killed. CPU limits are optional because exceeding them only causes throttling, not termination.
  • When a container exceeds its memory limit, it is killed by the OOM killer. When it exceeds its CPU limit, it is throttled (slowed down) but continues running.

8. Clean Up

Remove the resources created during this lab:

kubectl delete clusterpolicy --all
kubectl delete namespace kyverno-test

Optional: Uninstall Kyverno

If you want to completely remove Kyverno:

helm uninstall kyverno -n kyverno
kubectl delete namespace kyverno

Recap

You have:

  • Installed Kyverno v1.17.0 using Helm and verified its operation
  • Created a validation policy to block privileged containers
  • Created a validation policy to enforce required labels
  • Created a mutation policy to automatically add default labels
  • Learned the difference between Enforce and Audit modes
  • (Bonus) Created a policy to block the :latest image tag
  • (Bonus) Created a policy to require resource requests and limits
  • Understood how Kyverno complements RBAC and runtime security tools

Wrap-Up Questions

Discussion

  • Which policies would you enforce vs only audit in a production cluster?
  • Which teams should be allowed to bypass policies (if any)?
  • Where does Kyverno fit compared to RBAC (who can do what) and Falco (runtime detection)?
Discussion Points
  • Enforce vs Audit: Start with audit mode for new policies. Enforce critical security policies (privileged containers, host namespaces) immediately. Use audit for organizational policies (labels, resource limits) until teams are ready.
  • Policy Exceptions: Kyverno supports Policy Exceptions for specific workloads that need to bypass policies (e.g., monitoring agents that need host access).
  • Defense in Depth: Kyverno (admission) + RBAC (authorization) + Falco (runtime) form a complete security strategy. RBAC controls who can create resources, Kyverno controls what resources can be created, and Falco detects runtime anomalies.
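
If you want to experiment with exceptions, here is a sketch of a PolicyException that would exempt pods in a hypothetical monitoring namespace from the privileged-containers rule. Note that, depending on your chart version, the feature may need to be enabled at install time (e.g. via the Helm value features.policyExceptions.enabled), and the namespace name here is purely illustrative:

```yaml
# Hypothetical exception: allow privileged pods in the "monitoring"
# namespace only, scoped to the disallow-privileged-containers policy.
apiVersion: kyverno.io/v2
kind: PolicyException
metadata:
  name: allow-privileged-monitoring
  namespace: monitoring
spec:
  exceptions:
  - policyName: disallow-privileged-containers
    ruleNames:
    - privileged-containers
  match:
    any:
    - resources:
        kinds:
        - Pod
        namespaces:
        - monitoring
```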

End of Lab